Dataset schema: title (string, length 7 to 239), abstract (string, length 7 to 2.76k), and six binary topic labels, each int64 taking values 0 or 1: cs, phy, math, stat, quantitative biology, quantitative finance. Each record below lists the title, the abstract, and its label values in that order.
Enumeration of Graphs and the Characteristic Polynomial of the Hyperplane Arrangements $\mathcal{J}_n$
We give a complete formula for the characteristic polynomial of the hyperplane arrangements $\mathcal{J}_n$ consisting of the hyperplanes $x_i+x_j=1$, $x_k=0$, $x_l=1$, $1\leq i, j, k, l\leq n$. The formula is obtained by associating hyperplane arrangements with graphs, and then enumerating central graphs via generating functions for the number of bipartite graphs of given order, size and number of connected components.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
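For the arrangement-counting paper above, a minimal illustration of the finite-field method for characteristic polynomials may help: for a suitable prime $q$, $\chi(q)$ equals the number of points of $\mathbb{F}_q^n$ lying on none of the hyperplanes, so evaluating at $n+1$ primes and interpolating recovers the polynomial. This is a standard technique, not the paper's graph-enumeration argument; the inclusion of the $i=j$ case below is an assumption about the index convention.

```python
# Brute-force point count over F_q for the arrangement J_n (illustration only;
# cost is q^n, so this is feasible only for very small n).
from itertools import product

def chi_Jn(n, q):
    """Count x in F_q^n with all x_k != 0, x_l != 1, and x_i + x_j != 1."""
    count = 0
    for x in product(range(q), repeat=n):
        if any(v in (0, 1) for v in x):
            continue
        # The range i <= j includes the diagonal hyperplane 2*x_i = 1;
        # drop the i == j case if the paper's convention excludes it.
        if any((x[i] + x[j]) % q == 1 for i in range(n) for j in range(i, n)):
            continue
        count += 1
    return count

print([chi_Jn(2, q) for q in (5, 7, 11)])  # chi evaluated at three primes
```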
On the Scientific Value of Large-scale Testbeds for Wireless Multi-hop Networks
Large-scale wireless testbeds have been set up in recent years with the goal of studying wireless multi-hop networks in more realistic environments. Since the setup and operation of such a testbed is expensive in terms of money, time, and labor, the crucial question arises whether this effort is justified by the scientific results the testbed generates. In this paper, we give an answer to this question based on our experience with the DES-Testbed, a large-scale wireless sensor network and wireless mesh network testbed. The DES-Testbed has been operated for almost 5 years. Our analysis comprises more than 1000 experiments that were run on the testbed in the years 2010 and 2011. We discuss the scientific value with respect to the effort of experimentation.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Brain Damage and Motor Cortex Impairment in Chronic Obstructive Pulmonary Disease: Implication of Nonrapid Eye Movement Sleep Desaturation
Nonrapid eye movement (NREM) sleep desaturation may cause neuronal damage due to the withdrawal of cerebrovascular reactivity. The current study (1) assessed the prevalence of NREM sleep desaturation in nonhypoxemic patients with chronic obstructive pulmonary disease (COPD) and (2) compared a biological marker of cerebral lesion and neuromuscular function in patients with and without NREM sleep desaturation.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
FPT-algorithms for The Shortest Lattice Vector and Integer Linear Programming Problems
In this paper, we present FPT-algorithms for special cases of the shortest vector problem (SVP) and the integer linear programming problem (ILP), when the matrices in the problems' formulations are nearly square. The main parameter is the maximal absolute value of the rank minors of the matrices in the problem formulation. Additionally, we present FPT-algorithms with respect to the same main parameter for these problems when the matrices have no singular rank sub-matrices.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A dynamic graph-cuts method with integrated multiple feature maps for segmenting kidneys in ultrasound images
Purpose: To improve kidney segmentation in clinical ultrasound (US) images, we develop a new graph-cuts-based method to segment kidney US images by integrating original image intensity information and texture feature maps extracted using Gabor filters. Methods: To handle large appearance variation within kidney images and improve computational efficiency, we build a graph of image pixels close to the kidney boundary instead of building a graph of the whole image. To make the kidney segmentation robust to weak boundaries, we adopt localized regional information to measure similarity between image pixels for computing edge weights to build the graph of image pixels. The localized graph is dynamically updated, and the graph-cuts-based segmentation iteratively progresses until convergence. The proposed method has been evaluated and compared with state-of-the-art image segmentation methods on clinical kidney US images of 85 subjects. We randomly selected US images of 20 subjects as training data for tuning the parameters, and validated the methods on US images of the remaining 65 subjects. The segmentation results have been quantitatively analyzed using 3 metrics: Dice index, Jaccard index, and mean distance. Results: Experimental results demonstrated that the proposed method obtained segmentation results for bilateral kidneys of 65 subjects with average Dice index of 0.9581, Jaccard index of 0.9204, and mean distance of 1.7166, better than other methods under comparison (p<10^{-19}, paired Wilcoxon rank sum tests). Conclusions: The proposed method achieved promising performance for segmenting kidneys in US images, better than segmentation methods built on any single channel of image information. This method will facilitate extraction of kidney characteristics that may predict important clinical outcomes such as progression of chronic kidney disease.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
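As a hedged sketch of the feature construction the kidney-segmentation abstract describes (intensity plus Gabor texture maps feeding the graph construction), the following uses scikit-image's Gabor filter; the frequency/orientation grid is an assumption, not the authors' configuration.

```python
# Build a multi-channel feature stack: raw intensity plus Gabor magnitude
# responses over a small bank of frequencies and orientations.
import numpy as np
from skimage.filters import gabor

def feature_maps(image, frequencies=(0.1, 0.2), n_orient=4):
    maps = [image.astype(float)]
    for f in frequencies:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            real, imag = gabor(image, frequency=f, theta=theta)
            maps.append(np.hypot(real, imag))  # magnitude of complex response
    return np.stack(maps, axis=0)  # shape: (channels, H, W)
```

Edge weights for the localized graph would then be computed from distances between these per-pixel feature vectors.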
Natural Scales in Geographical Patterns
Human mobility is known to be distributed across several orders of magnitude of physical distances, which makes it generally difficult to endogenously find or define typical and meaningful scales. Relevant analyses, from movements to geographical partitions, seem to be relative to some ad-hoc scale, or no scale at all. Relying on geotagged data collected from photo-sharing social media, we apply community detection to movement networks constrained by increasing percentiles of the distance distribution. Using a simple parameter-free discontinuity detection algorithm, we discover clear phase transitions in the community partition space. The detection of these phases constitutes the first objective method of characterising endogenous, natural scales of human movement. Our study covers nine regions, ranging from cities to countries of various sizes and a transnational area. For all regions, the number of natural scales is remarkably low (2 or 3). Further, our results hint at scale-related behaviours rather than scale-related users. The partitions of the natural scales allow us to draw discrete multi-scale geographical boundaries, potentially capable of providing key insights in fields such as epidemiology or cultural contagion, where the introduction of spatial boundaries is pivotal.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
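A minimal sketch of the percentile-constrained procedure the abstract outlines, under the assumption that movements are available as weighted edges with geographic distances; the community detector here (greedy modularity) is a stand-in for whatever method the paper uses.

```python
# Threshold a movement network at the r-th percentile of trip distances,
# then partition the remaining graph into communities.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def partition_at_percentile(edges, r):
    """edges: iterable of (u, v, distance_km); r: percentile in [0, 100]."""
    dmax = np.percentile([d for _, _, d in edges], r)
    G = nx.Graph((u, v) for u, v, d in edges if d <= dmax)
    return list(greedy_modularity_communities(G))

# Sweeping r and comparing successive partitions exposes the phase
# transitions ("natural scales") described above.
```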
Drawing materials studied by THz spectroscopy
THz time-domain spectroscopy in transmission mode was applied to study dry and wet drawing inks. Specifically, cochineal-, indigo- and iron-gall-based inks have been investigated, some prepared following ancient recipes and others using synthetic materials. The THz investigations were carried out on both pellet samples, made of dried inks blended with polyethylene powder, and layered inks, made by liquid deposition on polyethylene pellicles. We implemented an improved THz spectroscopic technique that enabled the measurement of the material optical parameters and thicknesses of the layered ink samples on an absolute scale. This experimental investigation shows that THz techniques have the potential to identify drawing inks by their spectroscopic features.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A feasibility study for predicting optimal radiation therapy dose distributions of prostate cancer patients from patient anatomy using deep learning
With the advancement of treatment modalities in radiation therapy for cancer patients, outcomes have improved, but at the cost of increased treatment plan complexity and planning time. The accurate prediction of dose distributions would alleviate this issue by guiding clinical plan optimization to save time and maintain high quality plans. We have modified a convolutional deep network model, U-net (originally designed for segmentation purposes), for predicting dose from patient image contours of the planning target volume (PTV) and organs at risk (OAR). We show that, as an example, we are able to accurately predict the dose of intensity-modulated radiation therapy (IMRT) for prostate cancer patients, where the average Dice similarity coefficient is 0.91 when comparing the predicted vs. true isodose volumes between 0% and 100% of the prescription dose. The average value of the absolute differences in [max, mean] dose is found to be under 5% of the prescription dose; specifically, for each structure it is [1.80%, 1.03%] (PTV), [1.94%, 4.22%] (Bladder), [1.80%, 0.48%] (Body), [3.87%, 1.79%] (L Femoral Head), [5.07%, 2.55%] (R Femoral Head), and [1.26%, 1.62%] (Rectum) of the prescription dose. We thus managed to map a desired radiation dose distribution from a patient's PTV and OAR contours. As an additional advantage, relatively little data was used in the techniques and models described in this paper.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
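A small sketch of the evaluation described above: Dice overlap between predicted and true isodose volumes at a fraction of the prescription dose (the thresholding convention is assumed, not taken from the paper).

```python
import numpy as np

def isodose_dice(pred_dose, true_dose, level, prescription):
    """Dice overlap of regions receiving >= level * prescription dose."""
    p = pred_dose >= level * prescription
    t = true_dose >= level * prescription
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# e.g. average isodose_dice over level in np.linspace(0.0, 1.0, 101)
```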
Single-molecule imaging of DNA gyrase activity in living Escherichia coli
Bacterial DNA gyrase introduces negative supercoils into chromosomal DNA and relaxes positive supercoils introduced by replication and transiently by transcription. Removal of these positive supercoils is essential for replication fork progression and for the overall unlinking of the two duplex DNA strands, as well as for ongoing transcription. To address how gyrase copes with these topological challenges, we used high-speed single-molecule fluorescence imaging in live Escherichia coli cells. We demonstrate that at least 300 gyrase molecules are stably bound to the chromosome at any time, with ~12 enzymes enriched near each replication fork. Trapping of reaction intermediates with ciprofloxacin revealed complexes undergoing catalysis. Dwell times of ~2 s were observed for the dispersed gyrase molecules, which we propose maintain steady-state levels of negative supercoiling of the chromosome. In contrast, the dwell time of replisome-proximal molecules was ~8 s, consistent with these catalyzing processive positive supercoil relaxation in front of the progressing replisome.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Shedding Light on Black Box Machine Learning Algorithms: Development of an Axiomatic Framework to Assess the Quality of Methods that Explain Individual Predictions
From self-driving vehicles and back-flipping robots to virtual assistants that book our next appointment at the hair salon or at that restaurant for dinner, machine learning systems are becoming increasingly ubiquitous. The main reason for this is that these methods boast remarkable predictive capabilities. However, most of these models remain black boxes, meaning that it is very challenging for humans to follow and understand their intricate inner workings. Consequently, interpretability has suffered under this ever-increasing complexity of machine learning models. Especially with regard to new regulations, such as the General Data Protection Regulation (GDPR), plausibility and verifiability of predictions made by these black boxes are indispensable. Driven by the needs of industry and practice, the research community has recognised this interpretability problem and focussed on developing a growing number of so-called explanation methods over the past few years. These methods explain individual predictions made by black box machine learning models and help to recover some of the lost interpretability. With the proliferation of these explanation methods, it is, however, often unclear which explanation method offers higher explanation quality, or is generally better suited for the situation at hand. In this thesis, we thus propose an axiomatic framework which allows comparing the quality of different explanation methods amongst each other. Through experimental validation, we find that the developed framework is useful to assess the explanation quality of different explanation methods and reach conclusions that are consistent with independent research.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Click-based porous cationic polymers for enhanced carbon dioxide capture
Imidazolium-based porous cationic polymers were synthesized using an innovative and facile approach, which takes advantage of the Debus-Radziszewski reaction to obtain meso- and microporous polymers following click chemistry principles. In the obtained set of materials, the click-based porous cationic polymers share the same cationic backbone, while bearing the anions commonly used in imidazolium poly(ionic liquid)s. These materials show hierarchical porosity and a good specific surface area. Furthermore, their chemical structure was extensively characterized using ATR-FTIR and SS-NMR spectroscopies, and HR-MS. These polymers show good performance towards carbon dioxide sorption, especially those possessing the acetate anion. This polymer has a CO2 uptake of 2 mmol per g at 1 bar and 273 K, a value which is among the highest recorded for imidazolium poly(ionic liquid)s. These polymers were also modified in order to introduce N-heterocyclic carbenes along the backbone. Carbon dioxide loading in the carbene-containing polymer is in the same range as that of the non-modified versions, but the nature of the interaction is substantially different. The combined use of in situ FTIR spectroscopy and microcalorimetry evidenced a chemisorption phenomenon that brings about the formation of an imidazolium carboxylate zwitterion.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Oscillons in the presence of external potential
We discuss the similarity between oscillons and the oscillational mode in the perturbed $\phi^4$ model. For small depths of the perturbing potential it is difficult to distinguish between oscillons and the mode in moderately long time evolution; moreover, one can transform one into the other by adiabatically switching the potential on and off. Basins of attraction are presented in the parameter space describing the potential and initial conditions.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Dynamical Tides in Highly Eccentric Binaries: Chaos, Dissipation and Quasi-Steady State
Highly eccentric binary systems appear in many astrophysical contexts, ranging from tidal capture in dense star clusters and precursors of stellar disruption by massive black holes to high-eccentricity migration of giant planets. In a highly eccentric binary, the tidal potential of one body can excite oscillatory modes in the other during a pericenter passage, resulting in energy exchange between the modes and the binary orbit. These modes exhibit one of three behaviors over multiple passages: low-amplitude oscillations, large-amplitude oscillations corresponding to a resonance between the orbital frequency and the mode frequency, and chaotic growth. We study these phenomena with an iterative map, fully exploring how the mode evolution depends on the pericenter distance and other parameters. In addition, we show that the dissipation of mode energy results in a quasi-steady state, with gradual orbital decay punctuated by resonances, even in systems where the mode amplitude would initially grow stochastically. A newly captured star around a black hole can experience significant orbital decay and heating due to the chaotic growth of the mode amplitude and dissipation. A giant planet pushed into a high-eccentricity orbit may experience a similar effect and become a hot or warm Jupiter.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Calabi-Yau threefolds fibred by high rank lattice polarized K3 surfaces
We study threefolds fibred by K3 surfaces admitting a lattice polarization by a certain class of rank 19 lattices. We begin by showing that any family of such K3 surfaces is completely determined by a map from the base of the family to the appropriate K3 moduli space, which we call the generalized functional invariant. Then we show that if the threefold total space is a smooth Calabi-Yau, there are only finitely many possibilities for the polarizing lattice and the form of the generalized functional invariant. Finally, we construct explicit examples of Calabi-Yau threefolds realizing each case and compute their Hodge numbers.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Lipschitz perturbations of Morse-Smale semigroups
In this paper we will deal with Lipschitz continuous perturbations of Morse-Smale semigroups with only equilibrium points as critical elements. We study the behavior of the structure of equilibrium points and their connections when subjected to non-differentiable perturbations. To this end we define more general notions of \emph{hyperbolicity} and \emph{transversality}, which do not require differentiability.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Semi-Supervised Learning for Detecting Human Trafficking
Human trafficking is one of the most atrocious crimes and among the challenging problems facing law enforcement, demanding attention of global magnitude. In this study, we leverage textual data from the website "Backpage", used for classified advertisements, to discern potential patterns of human trafficking activities which manifest online and to identify advertisements of high interest to law enforcement. Due to the lack of ground truth, we rely on a human analyst from law enforcement for hand-labeling a small portion of the crawled data. We extend the existing Laplacian SVM and present S3VM-R, by adding a regularization term to exploit exogenous information embedded in our feature space in favor of the task at hand. We train the proposed method using labeled and unlabeled data and evaluate it on a fraction of the unlabeled data, herein referred to as unseen data, with our expert's further verification. Results from comparisons between our method and other semi-supervised and supervised approaches on the labeled data demonstrate that our learner is effective in identifying advertisements of high interest to law enforcement.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Einstein's 1935 papers: EPR=ER?
In May 1935, Einstein published with two co-authors the famous EPR paper about entangled particles, which questioned the completeness of Quantum Mechanics by means of a gedankenexperiment. Only one month later, he published a work that at first seems unconnected to the EPR paper, the so-called Einstein-Rosen paper, which presented a solution of the field equations for particles in the framework of general relativity. Both papers engage the conception of completeness of a theory and, from a modern perspective, it is easy to believe that there is a connection between these topics. We ask whether Einstein might have considered that a correlation between nonlocal features of Quantum Mechanics and the Einstein-Rosen bridge can be used to explain entanglement. We analyse this question by discussing the conceptions of "completeness," "atomistic structure of matter," and "quantum phenomena" that were used. We discuss the historical embedding of the two works and their context in modern research. Recent approaches are presented that formulate an EPR=ER principle and claim an equivalence of the basic principles of these two papers.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
An overview of process model quality literature - The Comprehensive Process Model Quality Framework
The rising interest in the construction and the quality of (business) process models has resulted in an abundance of research studies and divergent findings about process model quality. The lack of overview and the lack of consensus hinder the development of the research field. The research objective is to collect, analyse, structure, and integrate the existing knowledge in a comprehensive framework that strives to find a balance between completeness and relevance without hindering the overview. The Systematic Literature Review methodology was applied to collect the relevant studies. Because several studies exist that each partially address this research objective, the review was performed at a tertiary level. Based on a critical analysis of the collected papers, a comprehensive but structured overview of the state of the art in the field was composed. The existing academic knowledge about process model quality was carefully integrated and structured into the Comprehensive Process Model Quality Framework (CPMQF). The framework summarizes 39 quality dimensions, 21 quality metrics, 28 quality (sub)drivers, 44 (sub)driver metrics, 64 realization initiatives and 15 concrete process model purposes related to 4 types of organizational benefits, as well as the relations between all of these. This overview is thus considered to form a valuable instrument for both researchers and practitioners concerned about process model quality. The framework is the first to address the concept of process model quality in such a comprehensive way.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Convergence rates of least squares regression estimators with heavy-tailed errors
We study the performance of the Least Squares Estimator (LSE) in a general nonparametric regression model, when the errors are independent of the covariates but may only have a $p$-th moment ($p\geq 1$). In such a heavy-tailed regression setting, we show that if the model satisfies a standard `entropy condition' with exponent $\alpha \in (0,2)$, then the $L_2$ loss of the LSE converges at a rate \begin{align*} \mathcal{O}_{\mathbf{P}}\big(n^{-\frac{1}{2+\alpha}} \vee n^{-\frac{1}{2}+\frac{1}{2p}}\big). \end{align*} Such a rate cannot be improved under the entropy condition alone. This rate quantifies both some positive and negative aspects of the LSE in a heavy-tailed regression setting. On the positive side, as long as the errors have $p\geq 1+2/\alpha$ moments, the $L_2$ loss of the LSE converges at the same rate as if the errors are Gaussian. On the negative side, if $p<1+2/\alpha$, there are (many) hard models at any entropy level $\alpha$ for which the $L_2$ loss of the LSE converges at a strictly slower rate than other robust estimators. The validity of the above rate relies crucially on the independence of the covariates and the errors. In fact, the $L_2$ loss of the LSE can converge arbitrarily slowly when the independence fails. The key technical ingredient is a new multiplier inequality that gives sharp bounds for the `multiplier empirical process' associated with the LSE. We further give an application to the sparse linear regression model with heavy-tailed covariates and errors to demonstrate the scope of this new inequality.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
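A one-line check of the moment threshold quoted above, obtained by equating the two terms of the rate:

```latex
\[
n^{-\frac{1}{2}+\frac{1}{2p}} \le n^{-\frac{1}{2+\alpha}}
\iff \frac{1}{2}-\frac{1}{2p} \ge \frac{1}{2+\alpha}
\iff \frac{1}{2p} \le \frac{\alpha}{2(2+\alpha)}
\iff p \ge \frac{2+\alpha}{\alpha} = 1+\frac{2}{\alpha},
\]
```

so the entropy term $n^{-1/(2+\alpha)}$ dominates, and the Gaussian rate is recovered, exactly when $p \ge 1 + 2/\alpha$.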
Some properties of nested Kriging predictors
Kriging is a widely employed technique, in particular for computer experiments, in machine learning or in geostatistics. An important challenge for Kriging is the computational burden when the data set is large. We focus on a class of methods aiming at decreasing this computational cost, consisting in aggregating Kriging predictors based on smaller data subsets. We prove that aggregations based solely on the conditional variances provided by the different Kriging predictors can yield an inconsistent final Kriging prediction. In contrast, we study theoretically the recent proposal by [Rullière et al., 2017] and obtain additional attractive properties for it. We prove that this predictor is consistent, we show that it can be interpreted as an exact conditional distribution for a modified process, and we provide error bounds for it.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Optimal Transport on Discrete Domains
Inspired by the matching of supply to demand in logistical problems, the optimal transport (or Monge--Kantorovich) problem involves the matching of probability distributions defined over a geometric domain such as a surface or manifold. In its most obvious discretization, optimal transport becomes a large-scale linear program, which typically is infeasible to solve efficiently on triangle meshes, graphs, point clouds, and other domains encountered in graphics and machine learning. Recent breakthroughs in numerical optimal transport, however, enable scalability to orders-of-magnitude larger problems, solvable in a fraction of a second. Here, we discuss advances in numerical optimal transport that leverage understanding of both discrete and smooth aspects of the problem. State-of-the-art techniques in discrete optimal transport combine insight from partial differential equations (PDE) with convex analysis to reformulate, discretize, and optimize transportation problems. The end result is a set of theoretically-justified models suitable for domains with thousands or millions of vertices. Since numerical optimal transport is a relatively new discipline, special emphasis is placed on identifying and explaining open problems in need of mathematical insight and additional research.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
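One of the scalable solvers the survey covers is entropic regularization; here is a minimal Sinkhorn sketch (illustrative, not tied to any specific section of the paper):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=1e-2, iters=500):
    """Approximate the OT plan between histograms mu, nu for cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    v = np.ones_like(nu)
    for _ in range(iters):
        u = mu / (K @ v)                 # match row marginals to mu
        v = nu / (K.T @ u)               # match column marginals to nu
    return u[:, None] * K * v[None, :]   # transport plan

mu = np.array([0.5, 0.5]); nu = np.array([0.25, 0.75])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
P = sinkhorn(mu, nu, C)
print(P.sum(axis=1), P.sum(axis=0))      # ~mu and ~nu
```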
High-Resolution Altitude Profiles of the Atmospheric Turbulence with PML at the Sutherland Observatory
With the prospect of the next generation of ground-based telescopes, the extremely large telescopes (ELTs), increasingly complex and demanding adaptive optics (AO) systems are needed to compensate for image distortion caused by atmospheric turbulence and to take full advantage of mirrors with diameters of 30 to 40 m. This requires a more precise characterization of the turbulence. The PML (Profiler of Moon Limb) was developed within this context. The PML aims to provide high-resolution altitude profiles of the turbulence using differential measurements of the Moon limb position to calculate the transverse spatio-angular covariance of the Angle of Arrival fluctuations. The covariance of differential image motion for different separation angles is sensitive to the altitude distribution of the seeing. The use of the continuous Moon limb provides a large number of separation angles, allowing for high altitude resolution in the profiles. The method is presented and tested with simulated data. Moreover, a PML instrument was deployed at the Sutherland Observatory in South Africa in August 2011. We present here the results of this measurement campaign.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Spice up Your Chat: The Intentions and Sentiment Effects of Using Emoji
Emojis, as a new way of conveying nonverbal cues, are widely adopted in computer-mediated communications. In this paper, first from a message sender perspective, we focus on people's motives in using four types of emojis -- positive, neutral, negative, and non-facial. We compare the willingness levels of using these emoji types for seven typical intentions that people usually apply nonverbal cues for in communication. The results of extensive statistical hypothesis tests not only report the popularities of the intentions, but also uncover the subtle differences between emoji types in terms of intended uses. Second, from a perspective of message recipients, we further study the sentiment effects of emojis, as well as their duplications, on verbal messages. Different from previous studies in emoji sentiment, we study the sentiments of emojis and their contexts as a whole. The experiment results indicate that the powers of conveying sentiment are different between four emoji types, and the sentiment effects of emojis vary in the contexts of different valences.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Self-shielding of hydrogen in the IGM during the epoch of reionization
We investigate self-shielding of intergalactic hydrogen against ionizing radiation in radiative transfer simulations of cosmic reionization carefully calibrated with Lyman alpha forest data. While self-shielded regions manifest as Lyman-limit systems in the post-reionization Universe, here we focus on their evolution during reionization (redshifts z=6-10). At these redshifts, the spatial distribution of hydrogen-ionizing radiation is highly inhomogeneous, and some regions of the Universe are still neutral. After masking the neutral regions and ionizing sources in the simulation, we find that the hydrogen photoionization rate depends on the local hydrogen density in a manner very similar to that in the post-reionization Universe. The characteristic physical hydrogen density above which self-shielding becomes important at these redshifts is about $\mathrm{n_H \sim 3 \times 10^{-3} cm^{-3}}$, or $\sim$ 20 times the mean hydrogen density, reflecting the fact that during reionization photoionization rates are typically low enough that the filaments in the cosmic web are often self-shielded. The value of the typical self-shielding density decreases by a factor of 3 between redshifts z=3 and 10, and follows the evolution of the average photoionization rate in ionized regions in a simple fashion. We provide a simple parameterization of the photoionization rate as a function of density in self-shielded regions during the epoch of reionization.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Universal Conditional Machine
We propose a single neural probabilistic model based on a variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in "one shot". The features may be both real-valued and categorical. Training of the model is performed by stochastic variational Bayes. The experimental evaluation on synthetic data, as well as on feature imputation and image inpainting problems, shows the effectiveness of the proposed approach and the diversity of the generated samples.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
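A sketch of the masking convention commonly used to condition such models on an arbitrary observed subset (an assumption about the setup, not the paper's code): the network receives the observed values together with a binary mask.

```python
import numpy as np

def make_conditioning_input(x, observed_idx):
    """Zero out unobserved entries and append the observation mask."""
    mask = np.zeros_like(x, dtype=float)
    mask[list(observed_idx)] = 1.0
    return np.concatenate([x * mask, mask])

x = np.array([0.3, -1.2, 0.7, 2.0])
inp = make_conditioning_input(x, observed_idx=[0, 2])  # condition on x_0, x_2
```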
A hybrid isogeometric approach on multi-patches with applications to Kirchhoff plates and eigenvalue problems
We present a systematic study on higher-order penalty techniques for isogeometric mortar methods. In addition to the weak continuity enforced by a mortar method, normal derivatives across the interface are penalized. The considered applications are fourth-order problems as well as eigenvalue problems for second- and fourth-order equations. The hybrid coupling enables the discretization of fourth-order problems in a multi-patch setting as well as a convenient implementation of natural boundary conditions. For second-order eigenvalue problems, the pollution of the discrete spectrum, typically referred to as 'outliers', can be avoided. Numerical results illustrate the good behaviour of the proposed method in simple systematic studies as well as in more complex multi-patch mapped geometries for linear elasticity and Kirchhoff plates.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Circumstellar discs: What will be next?
This prospective chapter gives our view on how the study of circumstellar discs will evolve within the next 20 years, from both the observational and theoretical sides. We first present the expected improvements in our knowledge of protoplanetary discs regarding their masses, sizes, chemistry, the presence of planets, and the evolutionary processes shaping these discs. We then explore the older debris disc stage and explain what will be learnt concerning their birth, the intrinsic links between these discs and planets, the hot dust and the gas detected around main sequence stars, as well as discs around white dwarfs.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Algorithmic Verification of Linearizability for Ordinary Differential Equations
For a nonlinear ordinary differential equation solved with respect to the highest order derivative and rational in the other derivatives and in the independent variable, we devise two algorithms to check whether the equation can be reduced to a linear one by a point transformation of the dependent and independent variables. The first algorithm is based on a construction of the Lie point symmetry algebra and on the computation of its derived algebra. The second algorithm exploits the differential Thomas decomposition and allows one not only to test linearizability, but also to generate a system of nonlinear partial differential equations that determines the point transformation and the coefficients of the linearized equation. Both algorithms have been implemented in Maple and their application is illustrated using several examples.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Space weather challenges of the polar cap ionosphere
This paper presents research on polar cap ionosphere space weather phenomena conducted during the European Cooperation in Science and Technology (COST) action ES0803 from 2008 to 2012. The main part of the work has been directed toward the study of plasma instabilities and scintillations in association with cusp flow channels and polar cap electron density structures/patches, which is considered critical knowledge for developing forecast models for scintillations in the polar cap. We have approached this problem with multi-instrument techniques that comprise the EISCAT Svalbard Radar, SuperDARN radars, in-situ rocket, and GPS scintillation measurements. The Discussion section aims to unify the bits and pieces of highly specialized information from several papers into a generalized picture. The cusp ionosphere appears as a hot region in GPS scintillation climatology maps. Our results are consistent with the existing view that scintillations in the cusp and the polar cap ionosphere are mainly due to multi-scale structures generated by instability processes associated with the cross-polar transport of polar cap patches. We have demonstrated that the SuperDARN convection model can be used to track these patches backward and forward in time. Hence, once a patch has been detected in the cusp inflow region, SuperDARN can be used to forecast its destination in the future. However, the high density gradient of polar cap patches is not the only prerequisite for high-latitude scintillations. Unprecedented high-resolution rocket measurements reveal that the cusp ionosphere is associated with filamentary precipitation giving rise to kilometer-scale gradients on which the gradient drift instability can operate very efficiently... (continued)
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Unconventional Large Linear Magnetoresistance in Cu$_{2-x}$Te
We report a large linear magnetoresistance in Cu$_{2-x}$Te, reaching $\Delta\rho/\rho(0)$ = 250\% at 2 K in a 9 T field. This is observed for samples with $x$ in the range 0.13 to 0.22, and the results are comparable to the effects observed in Ag$_2 X$ materials, although in this case the results appear for a much wider range of bulk carrier density. Examining the magnitude vs. crossover field from low-field quadratic to high-field linear behavior, we show that models based on classical transport behavior best explain the observed results. The effects are traced to misdirected currents due to topologically inverted behavior in this system, such that stable surface states provide the high mobility transport channels. The resistivity also crosses over to a $T^2$ dependence in the temperature range where the large linear MR appears, an indicator of electron-electron interaction effects within the surface states. Thus this is an example of a system in which these interactions dominate the low-temperature behavior of the surface states.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Which Distribution Distances are Sublinearly Testable?
Given samples from an unknown distribution $p$ and a description of a distribution $q$, are $p$ and $q$ close or far? This question of "identity testing" has received significant attention in the case of testing whether $p$ and $q$ are equal or far in total variation distance. However, in recent work, the following questions have been critical to solving problems at the frontiers of distribution testing:
- Alternative Distances: Can we test whether $p$ and $q$ are far in other distances, say Hellinger?
- Tolerance: Can we test when $p$ and $q$ are close, rather than equal? And if so, close in which distances?
Motivated by these questions, we characterize the complexity of distribution testing under a variety of distances, including total variation, $\ell_2$, Hellinger, Kullback-Leibler, and $\chi^2$. For each pair of distances $d_1$ and $d_2$, we study the complexity of testing if $p$ and $q$ are close in $d_1$ versus far in $d_2$, with a focus on identifying which problems allow strongly sublinear testers (i.e., those with complexity $O(n^{1 - \gamma})$ for some $\gamma > 0$, where $n$ is the size of the support of the distributions $p$ and $q$). We provide matching upper and lower bounds for each case. We also study these questions in the case where we only have samples from $q$ (equivalence testing), showing qualitative differences from identity testing in terms of when tolerance can be achieved. Our algorithms fall into the classical paradigm of $\chi^2$-statistics, but require crucial changes to handle the challenges introduced by each distance we consider. Finally, we survey other recent results in an attempt to serve as a reference for the complexity of various distribution testing problems.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
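For concreteness, a sketch of the classical $\chi^2$-type statistic the last sentence refers to, in its standard unbiased form for identity testing against a known $q$ (acceptance thresholds, which vary with the distance pair, are omitted):

```python
import numpy as np

def chi2_statistic(samples, q):
    """sum_i ((N_i - m q_i)^2 - N_i) / (m q_i), with N_i = empirical counts.

    samples: integers in {0, ..., len(q)-1}; q must have full support.
    """
    m = len(samples)
    N = np.bincount(samples, minlength=len(q)).astype(float)
    return float(np.sum(((N - m * q) ** 2 - N) / (m * q)))

# Under p = q the statistic concentrates near 0; large values witness
# a discrepancy between p and q.
```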
Hund's coupling driven photo-carrier relaxation in the two-band Mott insulator
We study the relaxation dynamics of photo-carriers in the paramagnetic Mott insulating phase of the half-filled two-band Hubbard model. Using nonequilibrium dynamical mean field theory, we excite charge carriers across the Mott gap by a short hopping modulation, and simulate the evolution of the photo-doped population within the Hubbard bands. We observe an ultrafast charge-carrier relaxation driven by emission of local spin excitations with an inverse relaxation time proportional to the Hund's coupling. The photo-doping generates additional side-bands in the spectral function, and for strong Hund's coupling, the photo-doped population also splits into several resonances. The dynamics of the local many-body states reveals two effects, thermal blocking and kinetic freezing, which manifest themselves when the Hund's coupling becomes of the order of the temperature or the bandwidth, respectively. These effects, which are absent in the single-band Hubbard model, should be relevant for the interpretation of experiments on correlated materials with multiple active orbitals. In particular, the features revealed in the non-equilibrium energy distribution of the photo-carriers are experimentally accessible, and provide information on the role of the Hund's coupling in these materials.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Connection between Feed-Forward Neural Networks and Probabilistic Graphical Models
Two of the most popular modelling paradigms in computer vision are feed-forward neural networks (FFNs) and probabilistic graphical models (GMs). Various connections between the two have been studied in recent works, such as expressing mean-field based inference in a GM as an FFN. This paper establishes a new connection between FFNs and GMs. Our key observation is that any FFN implements a certain approximation of a corresponding Bayesian network (BN). We characterize various benefits of having this connection. In particular, it results in a new learning algorithm for BNs. We validate the proposed methods for a classification problem on the CIFAR-10 dataset and for binary image segmentation on the Weizmann Horse dataset. We show that statistically learned BNs improve performance while at the same time having substantially better generalization capability than their FFN counterparts.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
On the digital representation of smooth numbers
Let $b \ge 2$ be an integer. Among other results, we establish, in a quantitative form, that any sufficiently large integer which is not a multiple of $b$ cannot have simultaneously only few distinct prime factors and only few nonzero digits in its representation in base $b$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
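A quick numerical illustration of the statement (pure illustration; the theorem itself is quantitative), using sympy to count odd integers below a bound that have both few distinct prime factors and few nonzero binary digits:

```python
from sympy import primefactors

def nonzero_digits(n, b):
    c = 0
    while n:
        c += (n % b) != 0
        n //= b
    return c

b = 2
rare = [n for n in range(3, 10_000, 2)          # odd, hence not multiples of b
        if len(primefactors(n)) <= 2 and nonzero_digits(n, b) <= 3]
print(len(rare), rare[-5:])                      # such integers thin out
```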
On Constraint Qualifications of a Nonconvex Inequality
In this paper, we study constraint qualifications for the nonconvex inequality defined by a proper lower semicontinuous function. These constraint qualifications involve generalized constructions of normal cones and subdifferentials. Several conditions for these constraint qualifications are also provided. When restricted to the convex inequality, these constraint qualifications reduce to the basic constraint qualification (BCQ) and the strong BCQ studied in [SIAM J. Optim., 14 (2004), 757-772] and [Math. Oper. Res., 30 (2005), 956-965].
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Framework for Dynamic Stability Analysis of Power Systems with Volatile Wind Power
We propose a framework employing stochastic differential equations to facilitate the long-term stability analysis of power grids with intermittent wind power generation. This framework takes into account the discrete dynamics, which play a critical role in long-term stability analysis, incorporates models of wind speed with different probability distributions, and also develops an approximation methodology (by a deterministic hybrid model) for the stochastic hybrid model to reduce the computational burden brought about by the uncertainty of wind power. The theoretical and numerical studies show that a deterministic hybrid model can provide an accurate trajectory approximation and stability assessments for the stochastic hybrid model under mild conditions. In addition, we discuss the critical cases in which the deterministic hybrid model fails and discover that these cases are caused by a violation of the proposed sufficient conditions. Such discussion complements the proposed framework and methodology and also reaffirms the importance of the stochastic hybrid model when the system operates close to its stability limit.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
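As an illustration of the SDE drivers such a framework couples to the grid dynamics, here is an Euler-Maruyama integration of a mean-reverting (Ornstein-Uhlenbeck-type) wind speed model; the model form and the parameters are assumptions for illustration, not the paper's:

```python
import numpy as np

def simulate_wind(v0, v_mean, kappa, sigma, dt, steps, rng):
    """Euler-Maruyama path of dv = kappa*(v_mean - v) dt + sigma dW, v >= 0."""
    v = np.empty(steps + 1)
    v[0] = v0
    for k in range(steps):
        dW = rng.normal(scale=np.sqrt(dt))
        v[k + 1] = max(v[k] + kappa * (v_mean - v[k]) * dt + sigma * dW, 0.0)
    return v

rng = np.random.default_rng(1)
path = simulate_wind(v0=8.0, v_mean=8.0, kappa=0.5, sigma=1.2,
                     dt=0.01, steps=1000, rng=rng)  # wind speed in m/s
```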
Confidence interval for correlation estimator between latent processes
Kimura and Yoshida treated a model in which the finite variation part of a two-dimensional semimartingale is expressed by time-integration of latent processes. They proposed a correlation estimator between the latent processes and proved its consistency and asymptotic mixed normality. In this paper, we discuss the confidence interval of the correlation estimator to detect the correlation between the latent processes. We propose two types of estimators for the asymptotic variance of the correlation estimator and prove their consistency in a high frequency setting. Our model includes doubly stochastic Poisson processes whose intensity processes are correlated Itô processes. We compare our estimators using simulations of the doubly stochastic Poisson processes.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Novel event classification based on spectral analysis of scintillation waveforms in Double Chooz
Liquid scintillators are a common choice for neutrino physics experiments, but their capability to perform background rejection by scintillation pulse shape discrimination is generally limited in large detectors. This paper describes a novel approach to pulse-shape-based event classification developed in the context of the Double Chooz reactor antineutrino experiment. Unlike previous implementations, this method uses the Fourier power spectra of the scintillation pulse shapes to obtain event-wise information. A classification variable built from spectral information was able to achieve an unprecedented performance, despite the lack of optimization at the detector design level. Several examples of event classification are provided, ranging from differentiation between the detector volumes and an efficient rejection of instrumental light noise, to some sensitivity to the particle type, such as stopping muons, ortho-positronium formation, alpha particles, as well as electrons and positrons. In combination with other techniques, the method is expected to allow for a versatile and more efficient background rejection in the future, especially if detector optimization is taken into account at the design level.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Symmetric Riemannian problem on the group of proper isometries of hyperbolic plane
We consider the Lie group PSL(2) (the group of orientation-preserving isometries of the hyperbolic plane) and a left-invariant Riemannian metric on this group with two equal eigenvalues that correspond to space-like eigenvectors (with respect to the Killing form). For such metrics we find a parametrization of geodesics, the conjugate time, the cut time and the cut locus. The injectivity radius is computed. We show that the cut time and the cut locus in this Riemannian problem converge to the cut time and the cut locus in the corresponding sub-Riemannian problem as the third eigenvalue of the metric tends to infinity. Similar results are also obtained for SL(2).
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Heterogeneous Supervision for Relation Extraction: A Representation Learning Approach
Relation extraction is a fundamental task in information extraction. Most existing methods rely heavily on annotations labeled by human experts, which are costly and time-consuming. To overcome this drawback, we propose a novel framework, REHession, to conduct relation extractor learning using annotations from heterogeneous information sources, e.g., knowledge bases and domain heuristics. These annotations, referred to as heterogeneous supervision, often conflict with each other, which brings a new challenge to the original relation extraction task: how to infer the true label from noisy labels for a given instance. Identifying context information as the backbone of both relation extraction and true label discovery, we adopt embedding techniques to learn the distributed representations of context, which bridges all components with mutual enhancement in an iterative fashion. Extensive experimental results demonstrate the superiority of REHession over the state-of-the-art.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Variational integrators for anelastic and pseudo-incompressible flows
The anelastic and pseudo-incompressible equations are two well-known soundproof approximations of compressible flows useful for both theoretical and numerical analysis in meteorology, atmospheric science, and ocean studies. In this paper, we derive and test structure-preserving numerical schemes for these two systems. The derivations are based on a discrete version of the Euler-Poincaré variational method. This approach relies on a finite dimensional approximation of the (Lie) group of diffeomorphisms that preserve weighted-volume forms. These weights describe the background stratification of the fluid and correspond to the weighted velocity fields for the anelastic and pseudo-incompressible approximations. In particular, we identify, for these discrete Lie group configurations, the associated Lie algebras, whose elements correspond to weighted velocity fields satisfying the divergence-free conditions for both systems. Defining discrete Lagrangians in terms of these Lie algebras, the discrete equations follow by means of variational principles. Descending from variational principles, the schemes further exhibit a discrete version of the Kelvin circulation theorem, are applicable to irregular meshes, and show excellent long term energy behavior. We illustrate the properties of the schemes by performing preliminary test cases.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
The reparameterization trick for acquisition functions
Bayesian optimization is a sample-efficient approach to solving global optimization problems. Along with a surrogate model, this approach relies on theoretically motivated value heuristics (acquisition functions) to guide the search process. Fully maximizing acquisition functions yields the best performance; unfortunately, this ideal is difficult to achieve since optimizing acquisition functions per se is frequently non-trivial. This is especially true in the parallel setting, where acquisition functions are routinely non-convex, high-dimensional, and intractable. Here, we demonstrate how many popular acquisition functions can be formulated as Gaussian integrals amenable to the reparameterization trick and, consequently, to gradient-based optimization. Further, we use this reparameterized representation to derive an efficient Monte Carlo estimator for the upper confidence bound acquisition function in the context of parallel selection.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
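A minimal sketch of the reparameterization idea for a parallel acquisition function, here Expected Improvement for minimization (function names and the jitter constant are illustrative): writing the GP posterior over a candidate batch as mean + L z with z ~ N(0, I) makes Monte Carlo estimates differentiable through (mean, L).

```python
import numpy as np

def qEI_reparam(mean, cov, best, n_mc=4096, rng=np.random.default_rng(0)):
    """MC estimate of E[max(best - min_j Y_j, 0)] with Y ~ N(mean, cov)."""
    L = np.linalg.cholesky(cov + 1e-9 * np.eye(len(mean)))  # jitter for PD
    z = rng.standard_normal((n_mc, len(mean)))
    Y = mean + z @ L.T              # reparameterized posterior samples
    return np.maximum(best - Y.min(axis=1), 0.0).mean()
```

With an autodiff framework in place of numpy, gradients with respect to the candidate locations flow through mean and L exactly as the abstract describes.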
Multi-model ensembles for ecosystem prediction
When making predictions about ecosystems, we often have available a number of different ecosystem models that attempt to represent their dynamics in a detailed mechanistic way. Each of these can be used as simulators of large-scale experiments and make forecasts about the fate of ecosystems under different scenarios in order to support the development of appropriate management strategies. However, structural differences, systematic discrepancies and uncertainties lead to different models giving different predictions under these scenarios. This is further complicated by the fact that the models may not be run with the same species or functional groups, spatial structure or time scale. Rather than simply trying to select a 'best' model, or taking some weighted average, it is important to exploit the strengths of each of the available models, while learning from the differences between them. To achieve this, we construct a flexible statistical model of the relationships between a collection or 'ensemble' of mechanistic models and their biases, allowing for structural and parameter uncertainty and for different ways of representing reality. Using this statistical meta-model, we can combine prior beliefs, model estimates and direct observations using Bayesian methods, and make coherent predictions of future outcomes under different scenarios with robust measures of uncertainty. In this paper we present the modelling framework and discuss results obtained using a diverse ensemble of models in scenarios involving future changes in fishing levels. These examples illustrate the value of our approach in predicting outcomes for possible strategies pertaining to climate and fisheries policy aimed at improving food security and maintaining ecosystem integrity.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Nonlinear atomic vibrations and structural phase transitions in strained carbon chains
We consider longitudinal nonlinear atomic vibrations in uniformly strained carbon chains with the cumulene structure $(=C=C=)_{n}$. With the aid of ab initio simulations based on density functional theory, we have revealed the phenomenon of $\pi$-mode softening in a certain range of its amplitude for strain above the critical value $\eta_{c}\approx 11\,{\%}$. Condensation of this soft mode induces a structural transformation of the carbon chain with doubling of its unit cell. This is the Peierls phase transition in the strained cumulene, which was previously revealed in [Nano Lett. 14, 4224 (2014)]. The Peierls transition leads to the appearance of an energy gap in the electron spectrum of the strained carbyne, and this material transforms from the conducting state to semiconducting or insulating states. The authors of the above paper emphasize that such a phenomenon can be used for the construction of various nanodevices. The $\pi$-mode softening occurs because the old equilibrium positions (EQPs), around which carbon atoms vibrate at small strains, lose their stability and the atoms begin to vibrate in new potential wells located near the old EQPs. We study the stability of the new EQPs, as well as the stability of vibrations in their vicinity. In a previous paper [Physica D 203, 121 (2005)], we proved that only three symmetry-determined Rosenberg nonlinear normal modes can exist in monoatomic chains with arbitrary interparticle interactions. They are the above-discussed $\pi$-mode and two other modes, which we call the $\sigma$-mode and the $\tau$-mode. These modes correspond to doubling, tripling, or quadrupling of the unit cell of the vibrational state compared to that of the equilibrium state. We study the properties of these modes in a chain model with an arbitrary pair potential of interparticle interactions.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Homotopy classes of gauge fields and the lattice
For a smooth manifold $M$, possibly with boundary and corners, and a Lie group $G$, we consider a suitable description of gauge fields in terms of parallel transport, as groupoid homomorphisms from a certain path groupoid in $M$ to $G$. Using a cotriangulation $\mathscr{C}$ of $M$, and collections of finite-dimensional families of paths relative to $\mathscr{C}$, we define a homotopical equivalence relation of parallel transport maps, leading to the concept of an extended lattice gauge (ELG) field. A lattice gauge field, as used in Lattice Gauge Theory, is part of the data contained in an ELG field, but the latter contains further local topological information sufficient to reconstruct a principal $G$-bundle on $M$ up to equivalence. The space of ELG fields of a given pair $(M,\mathscr{C})$ is a covering for the space of fields in Lattice Gauge Theory, whose connected components parametrize equivalence classes of principal $G$-bundles on $M$. We give a criterion to determine when ELG fields over different cotriangulations define equivalent bundles.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Classical System of Martin-Lof's Inductive Definitions is not Equivalent to Cyclic Proofs
A cyclic proof system, called CLKID-omega, gives us another way of representing inductive definitions and supports efficient proof search. The 2011 paper by Brotherston and Simpson showed that the provability of CLKID-omega includes the provability of Martin-Lof's system of inductive definitions, called LKID, and conjectured the equivalence. Since then, the equivalence has been left an open question. This paper shows that CLKID-omega and LKID are indeed not equivalent. It considers a statement called 2-Hydra in these two systems, with the first-order language formed by 0, the successor, the natural number predicate, and a binary predicate symbol used to express 2-Hydra. We show that the 2-Hydra statement is provable in CLKID-omega, but not provable in LKID, by constructing a Henkin model in which the statement is false.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Factorization and non-factorization theorems for pseudocontinuable functions
Let $\theta$ be an inner function on the unit disk, and let $K^p_\theta:=H^p\cap\theta\overline{H^p_0}$ be the associated star-invariant subspace of the Hardy space $H^p$, with $p\ge1$. While a nontrivial function $f\in K^p_\theta$ is never divisible by $\theta$, it may have a factor $h$ which is "not too different" from $\theta$ in the sense that the ratio $h/\theta$ (or just the anti-analytic part thereof) is smooth on the circle. In this case, $f$ is shown to have additional integrability and/or smoothness properties, much in the spirit of the Hardy--Littlewood--Sobolev embedding theorem. The appropriate norm estimates are established, and their sharpness is discussed.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Adaptively Detecting Malicious Queries in Web Attacks
Web request query strings (queries), which pass parameters to the referenced resource, are routinely manipulated by attackers to retrieve sensitive data and even take full control of victim web servers and web applications. However, existing malicious query detection approaches in the current literature cannot cope with changing web attacks using static detection models. In this paper, we propose AMODS, an adaptive system that periodically updates the detection model to detect the latest unknown attacks. We also propose an adaptive learning strategy, called SVM HYBRID, leveraged by our system to minimize manual work. In the evaluation, an up-to-date detection model is trained on a ten-day query dataset collected from an academic institute's web server logs. Our system outperforms existing web attack detection methods, with an F-value of 94.79% and an FP rate of 0.09%. The total number of malicious queries obtained by SVM HYBRID is 2.78 times that obtained by the popular Support Vector Machine Adaptive Learning (SVM AL) method. The malicious queries obtained can be used to update the Web Application Firewall (WAF) signature library.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
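A hedged baseline in the spirit of the paper (the actual AMODS/SVM HYBRID pipeline is more involved and adaptive): character n-gram TF-IDF features over query strings with a linear SVM, using toy data for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

queries = ["id=1", "id=1' OR '1'='1", "page=home",
           "q=<script>alert(1)</script>"]
labels = [0, 1, 0, 1]  # 1 = malicious; toy labels for illustration

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
clf.fit(queries, labels)
print(clf.predict(["name=admin'--"]))
```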
Derivatives pricing using signature payoffs
We introduce signature payoffs, a family of path-dependent derivatives that are given in terms of the signature of the price path of the underlying asset. We show that these derivatives are dense in the space of continuous payoffs, a result that is exploited to quickly price arbitrary continuous payoffs. This approach to pricing derivatives is then tested with European options, American options, Asian options, lookback options and variance swaps. As we show, signature payoffs can be used to price these derivatives with very high accuracy.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
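To make the central object concrete, a sketch of the first two signature levels of a piecewise-linear price path, computed from the iterated-integral formulas (for realistic use there are dedicated libraries such as iisignature or esig):

```python
import numpy as np

def signature_levels_1_2(path):
    """path: (T, d) array of prices. Returns level-1 (d,) and level-2 (d, d)."""
    inc = np.diff(path, axis=0)              # increments of the linear segments
    S1 = inc.sum(axis=0)                     # level 1: total increment
    prefix = np.cumsum(inc, axis=0) - inc    # sum of increments before step k
    S2 = prefix.T @ inc + 0.5 * inc.T @ inc  # level-2 iterated integrals
    return S1, S2

path = np.array([[1.0, 0.0], [1.2, 0.5], [0.9, 1.0]])
S1, S2 = signature_levels_1_2(path)
```

A linear functional of such signature terms is then a (truncated) signature payoff in the sense of the abstract.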
ALMA constraints on star-forming gas in a prototypical z=1.5 clumpy galaxy: the dearth of CO(5-4) emission from UV-bright clumps
We present deep ALMA CO(5-4) observations of a main sequence, clumpy galaxy at z=1.5 in the HUDF. Thanks to the ~0.5" resolution of the ALMA data, we can link stellar population properties to the CO(5-4) emission on scales of a few kpc. We detect strong CO(5-4) emission from the nuclear region of the galaxy, consistent with the observed $L_{\rm IR}$-$L^{\prime}_{\rm CO(5-4)}$ correlation and indicating on-going nuclear star formation. The CO(5-4) gas component appears more concentrated than other star formation tracers or the dust distribution in this galaxy. We discuss possible implications of this difference in terms of star formation efficiency and mass build-up at the galaxy centre. Conversely, we do not detect any CO(5-4) emission from the UV-bright clumps. This might imply that clumps have a high star formation efficiency (although they do not display unusually high specific star formation rates) and are not entirely gas dominated, with gas fractions no larger than that of their host galaxy (~50%). Stellar feedback and disk instability torques funnelling gas towards the galaxy centre could contribute to the relatively low gas content. Alternatively, clumps could fall in a more standard star formation efficiency regime if their actual star-formation rates are lower than generally assumed. We find that clump star-formation rates derived with several different, plausible methods can vary by up to an order of magnitude. The lowest estimates would be compatible with a CO(5-4) non-detection even for main-sequence like values of star formation efficiency and gas content.
0
1
0
0
0
0
Data-driven framework for real-time thermospheric density estimation
In this paper, we demonstrate a new data-driven framework for real-time neutral density estimation via model-data fusion in quasi-physical ionosphere-thermosphere models. The framework has two main components: (i) the development of a quasi-physical dynamic reduced-order model (ROM) that uses a linear approximation of the underlying dynamics and the effect of the drivers, and (ii) dynamic calibration of the ROM through estimation of the ROM coefficients that represent the model parameters. We have previously demonstrated the development of a quasi-physical ROM using simulation output from a physical model and assimilation of non-operational density estimates derived from accelerometer measurements along a single orbit. In this paper, we demonstrate the potential of the framework for use with operational measurements. We use simulated GPS-derived orbit ephemerides with 5-minute resolution as measurements. The framework is a first-of-its-kind, simple yet robust and accurate method with high potential for providing real-time operational updates to the state of the upper atmosphere using quasi-physical models with inherent forecasting/predictive capabilities.
1
0
0
0
0
0
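The "dynamic calibration of the ROM coefficients" step can be illustrated with a minimal Kalman filter, a standard tool for exactly this kind of model-data fusion. Everything below (the 2-D linear dynamics, observation operator, and noise levels) is invented for the sketch; the paper's ROM, space-weather drivers, and GPS-derived ephemerides are far richer.

```python
import numpy as np

rng = np.random.default_rng(10)

A = np.array([[0.95, 0.10],          # hypothetical linear ROM dynamics
              [0.00, 0.90]])
Hm = np.array([[1.0, 0.0]])          # we only observe the first coefficient
Q = 0.01 * np.eye(2)                 # process noise covariance
R = np.array([[0.04]])               # measurement noise covariance

z_true = np.array([1.0, -0.5])       # true ROM coefficients
z_hat, P = np.zeros(2), np.eye(2)    # filter state and covariance

for _ in range(200):
    # truth evolves, and a noisy partial measurement arrives
    z_true = A @ z_true + rng.multivariate_normal(np.zeros(2), Q)
    y = Hm @ z_true + rng.multivariate_normal(np.zeros(1), R)
    # predict
    z_hat = A @ z_hat
    P = A @ P @ A.T + Q
    # update (the dynamic calibration of the coefficients)
    K = P @ Hm.T @ np.linalg.inv(Hm @ P @ Hm.T + R)
    z_hat = z_hat + K @ (y - Hm @ z_hat)
    P = (np.eye(2) - K @ Hm) @ P

print("true:", z_true, " estimated:", z_hat)
```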
Synergies between Asteroseismology and Three-dimensional Simulations of Stellar Turbulence
Turbulent mixing of chemical elements by convection has fundamental effects on the evolution of stars. The standard algorithm at present, mixing-length theory (MLT), is intrinsically local, and must be supplemented by extensions with adjustable parameters. As a step toward reducing this arbitrariness, we compare asteroseismically inferred internal structures of two Kepler slowly pulsating B stars (SPBs; $M\sim 3.25 M_\odot$) to predictions of 321D turbulence theory, based upon well-resolved, truly turbulent three-dimensional simulations (Arnett et al. 2015; Christini et al. 2016) which include boundary physics absent from MLT. We find promising agreement between the steepness and shapes of the theoretically-predicted composition profile outside the convective region in 3D simulations and in asteroseismically constrained composition profiles in the best 1D models of the two SPBs. The structure and motion of the boundary layer, and the generation of waves, are discussed.
0
1
0
0
0
0
A polyharmonic Maass form of depth 3/2 for SL_2(Z)
Duke, Imamoglu, and Toth constructed a polyharmonic Maass form of level 4 whose Fourier coefficients encode real quadratic class numbers. A more general construction of such forms was subsequently given by Bruinier, Funke, and Imamoglu. Here we give a direct construction of such a form for the full modular group and study the properties of its coefficients. We give interpretations of the coefficients of the holomorphic parts of each of these polyharmonic Maass forms as inner products of certain weakly holomorphic modular forms and harmonic Maass forms. The coefficients of square index are particularly intractable; in order to address these, we develop various extensions of the usual normalized Petersson inner product using a strategy of Bringmann, Ehlen and Diamantis.
0
0
1
0
0
0
Systems of cubic forms in many variables
We consider a system of $R$ cubic forms in $n$ variables, with integer coefficients, which define a smooth complete intersection in projective space. Provided $n\geq 25R$, we prove an asymptotic formula for the number of integer points in an expanding box at which these forms simultaneously vanish. In particular we can handle systems of forms in $O(R)$ variables, previous work having required that $n \gg R^2$. One conjectures that $n \geq 6R+1$ should be sufficient. We reduce the problem to an upper bound for the number of solutions to a certain auxiliary inequality. To prove this bound we adapt a method of Davenport.
0
0
1
0
0
0
Nitrogen-doped Nanoporous Carbon Membranes Functionalized with Co/CoP Janus-type nanocrystals as Hydrogen Evolution Electrode in Both Acid and Alkaline Environment
Self-supported electrocatalysts that can be produced and employed directly as electrodes for energy conversion have been intensively pursued in the fields of materials chemistry and energy. Herein, we report a synthetic strategy to prepare freestanding, hierarchically structured, nitrogen-doped nanoporous graphitic carbon membranes functionalized with Janus-type Co/CoP nanocrystals (termed HNDCM-Co/CoP), which were successfully applied as a highly efficient, binder-free electrode for the hydrogen evolution reaction (HER). Benefiting from multiple structural merits, such as a high degree of graphitization, three-dimensionally interconnected micro-/meso-/macropores, uniform nitrogen doping, well-dispersed Co/CoP nanocrystals, as well as the confinement effect of the thin carbon layer on the nanocrystals, HNDCM-Co/CoP exhibited superior electrocatalytic activity and long-term operational stability for HER under both acidic and alkaline conditions. As a proof of concept of practical usage, a macroscopic piece of HNDCM-Co/CoP of 5.6 cm x 4 cm x 60 μm in size was prepared in our laboratory. Driven by a solar cell, electroreduction of water under alkaline conditions (pH 14) was performed, and H2 was produced at a rate of 16 ml/min, demonstrating its potential for real-life energy conversion systems.
0
1
0
0
0
0
Cyber-Physical Systems Security -- A Survey
With the exponential growth of cyber-physical systems (CPS), new security challenges have emerged. Various vulnerabilities, threats, attacks, and controls have been introduced for the new generation of CPS. However, a systematic study of CPS security issues is still lacking. In particular, the heterogeneity of CPS components and the diversity of CPS systems have made it very difficult to study the problem with one generalized model. In this paper, we capture and systematize existing research on CPS security under a unified framework. The framework consists of three orthogonal coordinates: (1) from the \emph{security} perspective, we follow the well-known taxonomy of threats, vulnerabilities, attacks and controls; (2) from the \emph{CPS components} perspective, we focus on cyber, physical, and cyber-physical components; and (3) from the \emph{CPS systems} perspective, we explore general CPS features as well as representative systems (e.g., smart grids, medical CPS and smart cars). The model can be both abstract to show general interactions of a CPS application and specific to capture any details when needed. By doing so, we aim to build a model that is abstract enough to be applicable to various heterogeneous CPS applications; and to gain a modular view of the tightly coupled CPS components. Such abstract decoupling makes it possible to gain a systematic understanding of CPS security, and to highlight the potential sources of attacks and ways of protection.
1
0
0
0
0
0
WebPol: Fine-grained Information Flow Policies for Web Browsers
In the standard web browser programming model, third-party scripts included in an application execute with the same privilege as the application's own code. This leaves the application's confidential data vulnerable to theft and leakage by malicious code and inadvertent bugs in the third-party scripts. Security mechanisms in modern browsers (the same-origin policy, cross-origin resource sharing and content security policies) are too coarse to suit this programming model. All these mechanisms (and their extensions) describe whether or not a script can access certain data, whereas the meaningful requirement is to allow untrusted scripts access to confidential data that they need and to prevent the scripts from leaking data on the side. Motivated by this gap, we propose WebPol, a policy mechanism that allows a website developer to include fine-grained policies on confidential application data in the familiar syntax of the JavaScript programming language. The policies can be associated with any webpage element, and specify what aspects of the element can be accessed by which third-party domains. A script can access data that the policy allows it to, but it cannot pass the data (or data derived from it) to other scripts or remote hosts in contravention of the policy. To specify the policies, we expose a small set of new native APIs in JavaScript. Our policies can be enforced using any of the numerous existing proposals for information flow tracking in web browsers. We have integrated our policies into one such proposal that we use to evaluate performance overheads and to test our examples.
1
0
0
0
0
0
Large Sample Asymptotics of the Pseudo-Marginal Method
The pseudo-marginal algorithm is a variant of the Metropolis-Hastings algorithm which samples asymptotically from a probability distribution when it is only possible to estimate unbiasedly an unnormalized version of its density. Practically, one has to trade-off the computational resources used to obtain this estimator against the asymptotic variances of the ergodic averages obtained by the pseudo-marginal algorithm. Recent works optimizing this trade-off rely on some strong assumptions which can cast doubts over their practical relevance. In particular, they all assume that the distribution of the additive error in the log-likelihood estimator is independent of the parameter value at which it is evaluated. Under weak regularity conditions we show here that, as the number of data points tends to infinity, a space-rescaled version of the pseudo-marginal chain converges weakly towards another pseudo-marginal chain for which this assumption indeed holds. A study of this limiting chain allows us to provide parameter dimension-dependent guidelines on how to optimally scale a normal random walk proposal and the number of Monte Carlo samples for the pseudo-marginal method in the large sample regime. This complements and validates currently available results.
0
0
0
1
0
0
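The pseudo-marginal mechanism analyzed in the abstract above fits in a few lines of code. This toy Python version assumes a Gaussian latent-variable model, a flat prior, and an importance-sampling likelihood estimator (all illustrative choices); the essential move is that the noisy likelihood estimate of the current state is recycled, never recomputed, which is what makes the chain target the exact posterior despite the estimation noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i = theta + x_i + eps_i with latent x_i ~ N(0, 1); the
# likelihood is "estimated" by Monte Carlo over the latents, which yields
# a non-negative unbiased estimator, all the pseudo-marginal method needs.
y = rng.normal(1.0, np.sqrt(2.0), size=50)

def log_lik_hat(theta, n_mc=30):
    x = rng.normal(size=(n_mc, y.size))                    # latent draws
    dens = np.exp(-0.5 * (y - theta - x) ** 2) / np.sqrt(2 * np.pi)
    return np.sum(np.log(dens.mean(axis=0)))               # log of MC averages

def pm_mh(n_iter=5000, step=0.3):
    theta, ll = 0.0, log_lik_hat(0.0)
    chain = np.empty(n_iter)
    for it in range(n_iter):
        prop = theta + step * rng.normal()                 # random-walk proposal
        ll_prop = log_lik_hat(prop)                        # fresh noisy estimate
        # accept with the *estimated* ratio; the current ll is reused as-is
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain[it] = theta
    return chain

print("posterior mean estimate:", pm_mh()[2500:].mean())
```

The trade-off the paper studies is visible here: a larger n_mc costs more per iteration but lowers the variance of log_lik_hat, and with it the stickiness of the chain.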
Large-time behavior of solutions to Vlasov-Poisson-Fokker-Planck equations: from evanescent collisions to diffusive limit
The present contribution investigates the dynamics generated by the two-dimensional Vlasov-Poisson-Fokker-Planck equation for charged particles in a steady inhomogeneous background of opposite charges. We provide global-in-time estimates that are uniform with respect to initial data taken in a bounded set of a weighted $L^2$ space, and where dependencies on the mean free path $\tau$ and the Debye length $\delta$ are made explicit. In our analysis the mean free path covers the full range of possible values: from the regime of evanescent collisions $\tau\to\infty$ to the strongly collisional regime $\tau\to0$. As a counterpart, the largeness of the Debye length, which enforces a weakly nonlinear regime, is used to close our nonlinear estimates. Accordingly we pay special attention to relaxing as much as possible the $\tau$-dependent constraint on $\delta$ ensuring exponential decay with explicit $\tau$-dependent rates towards the stationary solution. In the strongly collisional limit $\tau\to0$, we also examine all possible asymptotic regimes selected by a choice of observation time scale. Here also, our emphasis is on strong convergence, uniformity with respect to time and to initial data in bounded sets of a $L^2$ space. Our proofs rely on a detailed study of the nonlinear elliptic equation defining stationary solutions and a careful tracking and optimization of parameter dependencies of hypocoercive/hypoelliptic estimates.
0
0
1
0
0
0
Towards Adaptive Resilience in High Performance Computing
Failure rates in high performance computers rapidly increase due to the growth in system size and complexity. Hence, failures have become the norm rather than the exception. Different approaches for high performance computing (HPC) systems have been introduced, to prevent failures (e.g., redundancy) or at least minimize their impacts (e.g., checkpoint and restart). In most cases, when these approaches are employed to increase the resilience of certain parts of a system, energy consumption rapidly increases, or performance significantly degrades. To address this challenge, we propose on-demand resilience as an approach to achieve adaptive resilience in HPC systems. In this work, the HPC system is considered in its entirety and resilience mechanisms such as checkpointing, isolation, and migration are activated on demand. Using the proposed approach, the unavoidable increase in total energy consumption and system performance degradation is reduced compared to the typical checkpoint/restart and redundant resilience mechanisms. Our work aims to mitigate a large number of failures occurring at various layers in the system, to prevent their propagation, and to minimize their impact, all of this in an energy-saving manner. In the case of failures that are estimated to occur but cannot be mitigated using the proposed on-demand resilience approach, the system administrators will be notified in view of performing further investigations into the causes of these failures and their impacts.
1
0
0
0
0
0
Discrete Sequential Prediction of Continuous Actions for Deep RL
It has long been assumed that high dimensional continuous control problems cannot be solved effectively by discretizing individual dimensions of the action space due to the exponentially large number of bins over which policies would have to be learned. In this paper, we draw inspiration from the recent success of sequence-to-sequence models for structured prediction problems to develop policies over discretized spaces. Central to this method is the realization that complex functions over high dimensional spaces can be modeled by neural networks that predict one dimension at a time. Specifically, we show how Q-values and policies over continuous spaces can be modeled using a next step prediction model over discretized dimensions. With this parameterization, it is possible to both leverage the compositional structure of action spaces during learning, as well as compute maxima over action spaces (approximately). On a simple example task we demonstrate empirically that our method can perform global search, which effectively gets around the local optimization issues that plague DDPG. We apply the technique to off-policy (Q-learning) methods and show that our method can achieve the state-of-the-art for off-policy methods on several continuous control tasks.
1
0
0
1
0
0
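The core trick described above, treating the argmax over a discretized action space as sequential dimension-by-dimension decoding, can be sketched directly. The Q-function below is a hypothetical smooth stand-in for the learned next-step prediction model, and the greedy decoding is only an approximate maximization, consistent with the abstract's "(approximately)".

```python
import numpy as np

rng = np.random.default_rng(1)

def q_value(state, action):
    """Hypothetical smooth Q-function over a 4-D action in [-1, 1]^4."""
    return -np.sum((action - np.tanh(state[:4])) ** 2)

BINS = np.linspace(-1.0, 1.0, 11)    # 11 bins per action dimension
DIMS = 4

def sequential_argmax(state):
    """Choose the action one dimension at a time, conditioning on choices so far.

    Joint search would cost 11**4 Q evaluations; sequential decoding costs
    11 * 4, at the price of being greedy rather than exact.
    """
    action = np.zeros(DIMS)          # later dimensions held at a neutral default
    for d in range(DIMS):
        scores = []
        for b in BINS:
            action[d] = b
            scores.append(q_value(state, action))
        action[d] = BINS[int(np.argmax(scores))]   # commit dimension d
    return action

state = rng.normal(size=8)
print(sequential_argmax(state))
```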
Sampling Errors in Nested Sampling Parameter Estimation
Sampling errors in nested sampling parameter estimation differ from those in Bayesian evidence calculation, but have been little studied in the literature. This paper provides the first explanation of the two main sources of sampling errors in nested sampling parameter estimation, and presents a new diagrammatic representation for the process. We find no current method can accurately measure the parameter estimation errors of a single nested sampling run, and propose a method for doing so using a new algorithm for dividing nested sampling runs. We empirically verify our conclusions and the accuracy of our new method.
0
1
0
1
0
0
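For orientation, here is a deliberately naive one-dimensional nested sampling run showing where the dead points, shrinkage estimates, and posterior weights (the quantities whose sampling errors the paper dissects) come from. The uniform prior, Gaussian likelihood, rejection-sampling replacement step, and the omission of the final live-point correction are all simplifications assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(8)

logL = lambda th: -0.5 * th ** 2          # Gaussian log-likelihood
n_live = 100
live = rng.uniform(-5.0, 5.0, n_live)     # live points from the U(-5, 5) prior
dead_th, dead_logw = [], []
logX = 0.0                                # log of remaining prior volume

for _ in range(500):
    worst = int(np.argmin(logL(live)))
    logX_new = logX - 1.0 / n_live        # mean shrinkage per iteration
    dead_th.append(live[worst])           # weight = L * (X_old - X_new)
    dead_logw.append(logL(live[worst]) + np.log(np.exp(logX) - np.exp(logX_new)))
    bound = logL(live[worst])
    while True:                           # naive rejection draw above the bound
        cand = rng.uniform(-5.0, 5.0)
        if logL(cand) > bound:
            live[worst] = cand
            break
    logX = logX_new

w = np.exp(np.array(dead_logw) - max(dead_logw))
w /= w.sum()
theta = np.array(dead_th)
mean = np.sum(w * theta)
print("posterior mean:", mean,
      " posterior std:", np.sqrt(np.sum(w * (theta - mean) ** 2)))
```

Both error sources discussed in the paper are visible: the shrinkage factors are only known in distribution (here replaced by their means), and the dead points themselves form a finite, correlated sample.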
Towards a Generic Diver-Following Algorithm: Balancing Robustness and Efficiency in Deep Visual Detection
This paper explores the design and development of a class of robust diver-following algorithms for autonomous underwater robots. By considering the operational challenges for underwater visual tracking in diverse real-world settings, we formulate a set of desired features of a generic diver following algorithm. We attempt to accommodate these features and maximize general tracking performance by exploiting the state-of-the-art deep object detection models. We fine-tune the building blocks of these models with a goal of balancing the trade-off between robustness and efficiency in an onboard setting under real-time constraints. Subsequently, we design an architecturally simple Convolutional Neural Network (CNN)-based diver-detection model that is much faster than the state-of-the-art deep models yet provides comparable detection performances. In addition, we validate the performance and effectiveness of the proposed diver-following modules through a number of field experiments in closed-water and open-water environments.
1
0
0
0
0
0
Blockchain: A Graph Primer
Bitcoin and its underlying technology Blockchain have become popular in recent years. Designed to facilitate a secure distributed platform without central authorities, Blockchain is heralded as a paradigm that will be as powerful as Big Data, Cloud Computing and Machine Learning. Blockchain incorporates novel ideas from various fields such as public key encryption and distributed systems. As such, a reader often comes across resources that explain the Blockchain technology from a certain perspective only, leaving the reader with more questions than before. We will offer a holistic view of Blockchain. Starting with a brief history, we will give the building blocks of Blockchain, and explain their interactions. As graph mining has become a major part of its analysis, we will elaborate on graph theoretical aspects of the Blockchain technology. We also devote a section to the future of Blockchain and explain how extensions like Smart Contracts and De-centralized Autonomous Organizations will function. Without assuming any reader expertise, our aim is to provide a concise but complete description of the Blockchain technology.
1
0
0
0
0
0
Comparison of forcing functions in magnetohydrodynamic turbulence
Results are presented of direct numerical simulations of incompressible, homogeneous magnetohydrodynamic turbulence without a mean magnetic field, subject to different mechanical forcing functions commonly used in the literature. Specifically, the forces are negative damping (which uses the large-scale velocity field as a forcing function), a nonhelical random force, and a nonhelical static sinusoidal force (analogous to helical ABC forcing). The time evolution of the three ideal invariants (energy, magnetic helicity and cross helicity), the time-averaged energy spectra, the energy ratios and the dissipation ratios are examined. All three forcing functions produce qualitatively similar steady states with regards to the time evolution of the energy and magnetic helicity. However, differences in the cross helicity evolution are observed, particularly in the case of the static sinusoidal method of energy injection. Indeed, an ensemble of sinusoidally-forced simulations with identical parameters shows significant variations in the cross helicity over long time periods, casting some doubt on the validity of the principle of ergodicity in systems in which the injection of helicity cannot be controlled. Cross helicity can unexpectedly enter the system through the forcing function and must be carefully monitored.
0
1
0
0
0
0
Chance-Constrained AC Optimal Power Flow Integrating HVDC Lines and Controllability
The integration of large-scale renewable generation has major implications for the operation of power systems, two of which we address in this paper. First, system operators have to deal with higher degrees of uncertainty. Second, with abundant potential for renewable generation in remote locations, they need to incorporate the operation of High Voltage Direct Current (HVDC) lines. This paper introduces an optimization tool that addresses both challenges by incorporating the full AC power flow equations, chance constraints to address the uncertainty of renewable infeed, HVDC modeling for point-to-point lines, and the optimization of generator and HVDC corrective control policies in reaction to uncertainty. The main contributions are twofold. First, we introduce an HVDC line model and the corresponding HVDC participation factors in a chance-constrained AC-OPF framework. Second, we modify an existing algorithm for solving the chance-constrained AC optimal power flow to allow for optimization of the generation and HVDC participation factors. Using realistic wind forecast data, and a 10-bus system with one HVDC line and two wind farms, we demonstrate the performance of our algorithm and show the benefit of controllability.
1
0
0
0
0
0
A Scalable, Linear-Time Dynamic Cutoff Algorithm for Molecular Dynamics
Recent results on supercomputers show that beyond 65K cores, the efficiency of molecular dynamics simulations of interfacial systems decreases significantly. In this paper, we introduce a dynamic cutoff method (DCM) for interfacial systems of arbitrarily large size. The idea consists in adopting a cutoff-based method in which the cutoff is chosen on a particle-by-particle basis, according to the distance from the interface. Computationally, the challenge is shifted from the long-range solvers to the detection of the interfaces and to the computation of the particle-interface distances. For these tasks, we present linear-time algorithms that do not rely on global communication patterns. As a result, the DCM algorithm is suited for large systems of particles and massively parallel computers. To demonstrate its potential, we integrated DCM into the LAMMPS open-source molecular dynamics package, and simulated large liquid/vapor systems on two supercomputers: SuperMUC and JUQUEEN. In all cases, the accuracy of DCM is comparable to the traditional particle-particle particle-mesh (PPPM) algorithm, while the performance is considerably superior for large numbers of particles. For JUQUEEN, we provide timings for simulations running on the full system (458,752 cores), and show nearly perfect strong and weak scaling.
1
1
0
0
0
0
Spiral arms and disc stability in the Andromeda galaxy
Aims: Density waves are often considered as the triggering mechanism of star formation in spiral galaxies. Our aim is to study relations between different star formation tracers (stellar UV and near-IR radiation and emission from HI, CO and cold dust) in the spiral arms of M31, to calculate stability conditions in the galaxy disc and to draw conclusions about possible star formation triggering mechanisms. Methods: We select fourteen spiral arm segments from the de-projected data maps and compare emission distributions along the cross sections of the segments in different datasets to each other, in order to detect spatial offsets between young stellar populations and the star forming medium. By using the disc stability condition as a function of perturbation wavelength and distance from the galaxy centre we calculate the effective disc stability parameters and the least stable wavelengths at different distances. For this we utilise a mass distribution model of M31 with four disc components (old and young stellar discs, cold and warm gaseous discs) embedded within the external potential of the bulge, the stellar halo and the dark matter halo. Each component is considered to have a realistic finite thickness. Results: No systematic offsets between the observed UV and CO/far-IR emission across the spiral segments are detected. The calculated effective stability parameter has a minimal value Q_{eff} ~ 1.8 at galactocentric distances 12 - 13 kpc. The least stable wavelengths are rather long, with the minimal values starting from ~ 3 kpc at distances R > 11 kpc. Conclusions: The classical density wave theory is not a realistic explanation for the spiral structure of M31. Instead, external causes should be considered, e.g. interactions with massive gas clouds or dwarf companions of M31.
0
1
0
0
0
0
Anomalous Brownian motion via linear Fokker-Planck equations
According to a traditional point of view, Boltzmann entropy is intimately related to linear Fokker-Planck equations (the Smoluchowski, Klein-Kramers, and Rayleigh equations) that describe a well-known nonequilibrium phenomenon: (normal) Brownian motion of a particle immersed in a thermal bath. Nevertheless, recent research has claimed that non-Boltzmann entropies (Tsallis and Renyi entropies, for instance) may give rise to anomalous Brownian motion through nonlinear Fokker-Planck equations. The novelty of the present article is to show that anomalous diffusion can be investigated within the framework of non-Markovian linear Fokker-Planck equations. On the basis of this non-Markovian approach to Brownian motion, we identify anomalous diffusion characterized by the mean square displacement of a free particle and of a harmonic oscillator in the absence of inertial force, as well as the mean square momentum of a free particle in the presence of inertial force.
0
1
0
0
0
0
An RKHS model for variable selection in functional regression
A mathematical model for variable selection in functional regression models with scalar response is proposed. By "variable selection" we mean a procedure to replace the whole trajectories of the functional explanatory variables with their values at a finite number of carefully selected instants (or "impact points"). The basic idea of our approach is to use the Reproducing Kernel Hilbert Space (RKHS) associated with the underlying process, instead of the more usual L2[0,1] space, in the definition of the linear model. This turns out to be especially suitable for variable selection purposes, since the finite-dimensional linear model based on the selected "impact points" can be seen as a particular case of the RKHS-based linear functional model. In this framework, we address the consistent estimation of the optimal design of impact points and we check, via simulations and real data examples, the performance of the proposed method.
0
0
0
1
0
0
Dual-LED-based multichannel microscopy for whole-slide multiplane, multispectral, and phase imaging
We report the development of a multichannel microscopy platform for whole-slide multiplane, multispectral, and phase imaging. We use trinocular heads to split the beam path into 6 independent channels and employ a camera array for parallel data acquisition, achieving a maximum data throughput of ~1 gigapixel per second. To perform single-frame rapid autofocusing, we place two near-infrared LEDs at the back focal plane of the condenser lens to illuminate the sample from two different incident angles. A hot mirror is used to direct the near-infrared light to an autofocusing camera. For multiplane whole-slide imaging (WSI), we acquire 6 different focal planes of a thick specimen simultaneously. For multispectral WSI, we relay the 6 independent image planes to the same focal position and simultaneously acquire information at 6 spectral bands. For whole-slide phase imaging, we acquire images at 3 focal positions simultaneously and use the transport-of-intensity equation to recover the phase information. We also provide an open-source design to further increase the number of channels from 6 to 15. The reported platform provides a simple solution for multiplexed fluorescence imaging and multimodal WSI. Acquiring an instant focal stack without z-scanning may also enable fast 3D dynamic tracking of various biological samples.
0
1
0
0
0
0
Revisiting the quest for a universal log-law and the role of pressure gradient in "canonical" wall-bounded turbulent flows
The trinity of so-called "canonical" wall-bounded turbulent flows, comprising the zero pressure gradient turbulent boundary layer, abbreviated ZPG TBL, turbulent pipe flow and channel/duct flows has continued to receive intense attention as new and more reliable experimental data have become available. Nevertheless, the debate on whether the logarithmic part of the mean velocity profile, in particular the Kármán constant $\kappa$, is identical for these three canonical flows or flow-dependent is still ongoing. In this paper, which expands upon Monkewitz and Nagib (24th ICTAM Conf., Montreal, 2016), the asymptotic matching requirement of equal $\kappa$ in the log-law and in the expression for the centerline/free-stream velocity is reiterated and shown to preclude a single universal log-law in the three canonical flows or at least make it very unlikely. The current re-analysis of high quality mean velocity profiles in ZPG TBL's, the Princeton "Superpipe" and in channels and ducts leads to a coherent description of (almost) all seemingly contradictory data interpretations in terms of TWO logarithmic regions in pipes and channels: A universal interior, near-wall logarithmic region with the same parameters as in the ZPG TBL, in particular $\kappa_{\mathrm{wall}} \cong 0.384$, but only extending from around $150$ to around $10^3$ wall units, and shrinking with increasing pressure gradient, followed by an exterior logarithmic region with a flow specific $\kappa$ matching the logarithmic slope of the respective free-stream or centerline velocity. The log-law parameters of the exterior logarithmic region in channels and pipes are shown to depend monotonically on the pressure gradient.
0
1
0
0
0
0
Teaching robots to imitate a human with no on-teacher sensors. What are the key challenges?
In this paper, we consider the problem of learning object manipulation tasks from human demonstration using RGB or RGB-D cameras. We highlight the key challenges in capturing sufficiently good data with no tracking devices - starting from sensor selection and accurate 6DoF pose estimation to natural language processing. In particular, we focus on two showcases: gluing task with a glue gun and simple block-stacking with variable blocks. Furthermore, we discuss how a linguistic description of the task could help to improve the accuracy of task description. We also present the whole architecture of our transfer of the imitated task to the simulated and real robot environment.
1
0
0
0
0
0
Control and Observability Aspects of Phase Synchronization
This paper addresses important control and observability aspects of the phase synchronization of two oscillators. To this aim a feedback control framework is proposed based on which issues related to master-slave synchronization are analyzed. Comparing results using Cartesian and cylindrical coordinates in the context of the proposed framework it is argued that: i) observability does not play a significant role in phase synchronization, although it is granted that it might be relevant for complete synchronization; and ii) a practical difficulty is faced when phase synchronization is aimed at but the control action is not a direct function of the phase error. A procedure for overcoming such a problem is proposed. The only assumption made is that the phase can be estimated using the arctangent function. The main aspects of the paper are illustrated using the Poincaré equations, van der Pol and Rössler oscillators in dynamical regimes for which the phase is well defined.
0
1
0
0
0
0
Dynamic density structure factor of a unitary Fermi gas at finite temperature
We present a theoretical investigation of the dynamic density structure factor of a strongly interacting Fermi gas near a Feshbach resonance at finite temperature. The study is based on a gauge invariant linear response theory. The theory is consistent with a diagrammatic approach for the equilibrium state taking into account the pair fluctuation effects and respects some important restrictions like the $f$-sum rule. Our numerical results show that the dynamic density structure factor at large incoming momentum and at half recoil frequency has a qualitatively similar behavior as the order parameter, which can signify the appearance of the condensate. This qualitatively agrees with the recent Bragg spectroscopy experiment results. We also present the results at small incoming momentum.
0
1
0
0
0
0
Surjunctivity and topological rigidity of algebraic dynamical systems
Let $X$ be a compact metrizable group and $\Gamma$ a countable group acting on $X$ by continuous group automorphisms. We give sufficient conditions under which the dynamical system $(X,\Gamma)$ is surjunctive, i.e., every injective continuous map $\tau \colon X \to X$ commuting with the action of $\Gamma$ is surjective.
0
0
1
0
0
0
High Resilience Diverse Domain Multilevel Audio Watermarking with Adaptive Threshold
A novel diverse-domain (DCT-SVD & DWT-SVD) watermarking scheme is proposed in this paper. Here, the watermark is embedded simultaneously in both domains. It is shown that an audio signal watermarked using this scheme has better subjective and objective quality when compared with other watermarking schemes. Also proposed are two novel watermark detection algorithms, viz. AOT (Adaptively Optimised Threshold) and AOTx (AOT eXtended). The fundamental idea behind both is finding an optimum threshold for detecting a known character embedded along with the actual watermarks in a known location, with the constraint that the Bit Error Rate (BER) is minimized. This optimum threshold is then used for detecting the other characters in the watermarks. This approach is shown to make the watermarking scheme less susceptible to various signal processing attacks, thus making the watermarks more robust.
1
0
0
0
0
0
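The AOT idea (sweep candidate thresholds, keep the one minimizing the bit error rate on a known embedded reference character, then reuse it for the unknown watermark bits) can be sketched as follows; the detection statistics and noise model below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(11)

# Detection statistics for the bits of a *known* reference character embedded
# alongside the watermark: '1' bits push the statistic high, '0' bits low.
true_bits = rng.integers(0, 2, 64)
stats = true_bits.astype(float) + rng.normal(0.0, 0.35, 64)

def ber(threshold):
    decoded = (stats > threshold).astype(int)
    return np.mean(decoded != true_bits)

grid = np.linspace(stats.min(), stats.max(), 200)   # candidate thresholds
best = grid[int(np.argmin([ber(t) for t in grid]))]
print("adaptive threshold:", best, " BER on reference:", ber(best))
# 'best' would now be reused to decode the remaining watermark characters.
```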
Multiprocessor Approximate Message Passing with Column-Wise Partitioning
Solving a large-scale regularized linear inverse problem using multiple processors is important in various real-world applications due to the limitations of individual processors and constraints on data sharing policies. This paper focuses on the setting where the matrix is partitioned column-wise. We extend the algorithmic framework and the theoretical analysis of approximate message passing (AMP), an iterative algorithm for solving linear inverse problems, whose asymptotic dynamics are characterized by state evolution (SE). In particular, we show that column-wise multiprocessor AMP (C-MP-AMP) obeys an SE under the same assumptions when the SE for AMP holds. The SE results imply that (i) the SE of C-MP-AMP converges to a state that is no worse than that of AMP and (ii) the asymptotic dynamics of C-MP-AMP and AMP can be identical. Moreover, for a setting that is not covered by SE, numerical results show that damping can improve the convergence performance of C-MP-AMP.
1
0
1
0
0
0
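For orientation, the single-processor AMP iteration that C-MP-AMP extends looks like the sketch below for a sparse recovery problem (a soft-thresholding denoiser with the threshold tied to the estimated residual level; both are common defaults, not prescriptions from the paper). The Onsager correction term in the residual update is what makes the state evolution characterization possible; the column-wise variant partitions the columns of A across workers and exchanges low-dimensional quantities between them.

```python
import numpy as np

rng = np.random.default_rng(2)

n, N, k = 250, 500, 25                        # measurements, unknowns, sparsity
A = rng.normal(size=(n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=n)

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z = np.zeros(N), y.copy()
for _ in range(30):
    tau = np.sqrt(np.mean(z ** 2))            # effective noise level (as in SE)
    x_new = soft(x + A.T @ z, 2.0 * tau)      # denoise the pseudo-data
    onsager = z * (np.count_nonzero(x_new) / n)   # Onsager correction term
    z = y - A @ x_new + onsager
    x = x_new

print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```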
Qualitative robustness for bootstrap approximations
An important property of statistical estimators is qualitative robustness, that is, small changes in the distribution of the data only result in small changes in the distribution of the estimator. Moreover, in practice, the distribution of the data is commonly unknown, so bootstrap approximations can be used to approximate the distribution of the estimator. Hence qualitative robustness of the statistical estimator under the bootstrap approximation is a desirable property. Currently most theoretical investigations of qualitative robustness assume independent and identically distributed pairs of random variables. However, in practice this assumption is not always fulfilled. Therefore, we examine the qualitative robustness of bootstrap approximations for non-i.i.d. random variables, for example $\alpha$-mixing and weakly dependent processes. In the i.i.d. case qualitative robustness is ensured via the continuity of the statistical operator representing the estimator, see Hampel (1971) and Cuevas and Romo (1993). We show that qualitative robustness of the bootstrap approximation is still ensured under the assumption that the statistical operator is continuous and under an additional assumption on the stochastic process. In particular, we require a convergence condition on the empirical measure of the underlying process, the so-called Varadarajan property.
0
0
1
1
0
0
On Asymptotic Properties of Hyperparameter Estimators for Kernel-based Regularization Methods
The kernel-based regularization method has two core issues: kernel design and hyperparameter estimation. In this paper, we focus on the second issue and study the properties of several hyperparameter estimators, including the empirical Bayes (EB) estimator, two Stein's unbiased risk estimators (SURE) and their corresponding Oracle counterparts, with an emphasis on the asymptotic properties of these hyperparameter estimators. To this end, we first derive and then rewrite the first order optimality conditions of these hyperparameter estimators, leading to several insights into these hyperparameter estimators. Then we show that as the number of data points goes to infinity, the two SUREs converge to the best hyperparameter minimizing the corresponding mean square error, respectively, while the more widely used EB estimator converges to another best hyperparameter minimizing the expectation of the EB estimation criterion. This indicates that the two SUREs are asymptotically optimal but the EB estimator is not. Surprisingly, the convergence rate of the two SUREs is slower than that of the EB estimator, and moreover, unlike the two SUREs, the EB estimator is independent of the convergence rate of $\Phi^T\Phi/N$ to its limit, where $\Phi$ is the regression matrix and $N$ is the number of data points. A Monte Carlo simulation is provided to demonstrate the theoretical results.
1
0
0
0
0
0
SECS: Efficient Deep Stream Processing via Class Skew Dichotomy
Although accelerating convolutional neural networks (CNNs) has received increasing research attention, the savings in resource consumption usually come at the cost of accuracy. To increase accuracy and decrease resource consumption at the same time, we exploit a kind of environmental information, called class skew, which is easily available and exists widely in daily life. Since the class skew may switch over time, we introduce a probability layer that utilizes class skew without any overhead during runtime. Further, we observe a class skew dichotomy: some class skews appear frequently in the future, called hot class skews, while others never appear again or appear seldom, called cold class skews. Inspired by techniques from source code optimization, two modes, i.e., interpretation and compilation, are proposed. The interpretation mode pursues efficient adaptation at runtime for cold class skews, and the compilation mode aggressively optimizes hot ones for more efficient deployment in the future. Aggressive optimization is performed by class-specific pruning and provides extra benefit. Finally, we design a systematic framework, SECS, that dynamically detects class skew, performs interpretation and compilation, and selects the most accurate architecture under the runtime resource budget. Extensive evaluations show that SECS can realize end-to-end classification speedups by a factor of 3x to 11x relative to state-of-the-art convolutional neural networks, at a higher accuracy.
1
0
0
0
0
0
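One plausible reading of the zero-overhead "probability layer" is a Bayes prior-ratio adjustment of the trained classifier's softmax output, sketched below. The class priors and network output are made up for the example, and the paper's layer may differ in detail, but the point stands that exploiting class skew this way needs no retraining at all.

```python
import numpy as np

p_train = np.full(5, 0.2)                            # priors seen at training time
p_deploy = np.array([0.45, 0.45, 0.10, 0.0, 0.0])    # detected class skew

def probability_layer(softmax_out):
    """Reweight class probabilities by the prior ratio and renormalize."""
    w = softmax_out * (p_deploy / p_train)
    return w / w.sum()

out = np.array([0.30, 0.25, 0.15, 0.20, 0.10])       # raw network output
print(probability_layer(out))   # classes absent from the skew are zeroed out
```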
Implementation of infinite-range exterior complex scaling to the time-dependent complete-active-space self-consistent-field method
We present a numerical implementation of the infinite-range exterior complex scaling (irECS) [Phys. Rev. A 81, 053845 (2010)] as an efficient absorbing boundary to the time-dependent complete-active-space self-consistent field (TD-CASSCF) method [Phys. Rev. A 94, 023405 (2016)] for multielectron atoms subject to an intense laser pulse. We introduce Gauss-Laguerre-Radau quadrature points to construct discrete variable representation basis functions in the last radial finite element extending to infinity. This implementation is applied to strong-field ionization and high-harmonic generation in He, Be, and Ne atoms. It efficiently prevents unphysical reflection of photoelectron wave packets at the simulation boundary, enabling accurate simulations with substantially reduced computational cost, even under significant (~ 50%) double ionization. For the case of a simulation of high-harmonic generation from Ne, for example, 80% cost reduction is achieved, compared to a mask-function absorption boundary.
0
1
0
0
0
0
A consistent measure of the merger histories of massive galaxies using close-pair statistics I: Major mergers at $z < 3.5$
We use a large sample of $\sim 350,000$ galaxies constructed by combining the UKIDSS UDS, VIDEO/CFHT-LS, UltraVISTA/COSMOS and GAMA survey regions to probe the major merging histories of massive galaxies ($>10^{10}\ \mathrm{M}_\odot$) at $0.005 < z < 3.5$. We use a method adapted from that presented in Lopez-Sanjuan et al. (2014), based on the full photometric redshift probability distributions, to measure pair $\textit{fractions}$ of flux-limited, stellar-mass-selected galaxy samples using close-pair statistics. The pair fraction is found to weakly evolve as $\propto (1+z)^{0.8}$ with no dependence on stellar mass. We subsequently derive major merger $\textit{rates}$ for galaxies at $> 10^{10}\ \mathrm{M}_\odot$ and at a constant number density of $n > 10^{-4}$ Mpc$^{-3}$, and find rates a factor of 2-3 smaller than previous works, although this depends strongly on the assumed merger timescale and likelihood of a close pair merging. Galaxies undergo approximately 0.5 major mergers at $z < 3.5$, accruing an additional 1-4 $\times 10^{10}\ \mathrm{M}_\odot$ in the process. Major merger accretion rate densities of $\sim 2 \times 10^{-4}$ $\mathrm{M}_\odot$ yr$^{-1}$ Mpc$^{-3}$ are found for number density selected samples, indicating that direct progenitors of local massive ($>10^{11}\mathrm{M}_\odot$) galaxies have experienced a steady supply of stellar mass via major mergers throughout their evolution. While pair fractions are found to agree with those predicted by the Henriques et al. (2014) semi-analytic model, the Illustris hydrodynamical simulation fails to quantitatively reproduce derived merger rates. Furthermore, we find that major mergers become a source of stellar mass growth comparable to star formation at $z < 1$, but their contribution is 10-100 times smaller than the SFR density at higher redshifts.
0
1
0
0
0
0
EE-Grad: Exploration and Exploitation for Cost-Efficient Mini-Batch SGD
We present a generic framework for trading off fidelity and cost in computing stochastic gradients when the costs of acquiring stochastic gradients of different quality are not known a priori. We consider a mini-batch oracle that distributes a limited query budget over a number of stochastic gradients and aggregates them to estimate the true gradient. Since the optimal mini-batch size depends on the unknown cost-fidelity function, we propose an algorithm, {\it EE-Grad}, that sequentially explores the performance of mini-batch oracles and exploits the accumulated knowledge to estimate the one achieving the best performance in terms of cost-efficiency. We provide performance guarantees for EE-Grad with respect to the optimal mini-batch oracle, and illustrate these results in the case of strongly convex objectives. We also provide a simple numerical example that corroborates our theoretical findings.
1
0
0
1
0
0
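A stripped-down, two-phase version of the idea (first estimate each oracle's gradient-noise variance, then spend the remaining budget on the oracle with the best variance-per-cost) is sketched below; EE-Grad proper is sequential rather than explore-then-exploit, and the costs and noise levels here are invented. Averaging k = B/c_i queries from oracle i yields variance v_i c_i / B, so minimizing v_i c_i identifies the most cost-efficient oracle.

```python
import numpy as np

rng = np.random.default_rng(3)

COST = np.array([1.0, 2.0, 4.0])     # price per query of each oracle
NOISE = np.array([3.0, 1.5, 1.4])    # unknown noise std of each oracle
TRUE_GRAD = 2.0

def query(i):
    return TRUE_GRAD + NOISE[i] * rng.normal()

budget = 400.0
n, mean, m2 = np.zeros(3), np.zeros(3), np.zeros(3)   # Welford running stats

# Explore: a few queries per oracle to estimate its variance.
for i in range(3):
    for _ in range(10):
        g = query(i)
        n[i] += 1
        d = g - mean[i]
        mean[i] += d / n[i]
        m2[i] += d * (g - mean[i])
budget -= float((10 * COST).sum())

# Exploit: pick the oracle minimizing estimated variance x cost.
var_hat = m2 / (n - 1)
best = int(np.argmin(var_hat * COST))
k = int(budget // COST[best])
grads = [query(best) for _ in range(k)]

print("chose oracle", best, " gradient estimate:", np.mean(grads))
```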
Test of special relativity using a fiber network of optical clocks
Phase compensated optical fiber links enable high accuracy atomic clocks separated by thousands of kilometers to be compared with unprecedented statistical resolution. By searching for a daily variation of the frequency difference between four strontium optical lattice clocks in different locations throughout Europe connected by such links, we improve upon previous tests of time dilation predicted by special relativity. We obtain a constraint on the Robertson--Mansouri--Sexl parameter $|\alpha|\lesssim 1.1 \times10^{-8}$ quantifying a violation of time dilation, thus improving by a factor of around two the best known constraint obtained with Ives--Stilwell type experiments, and by two orders of magnitude the best constraint obtained by comparing atomic clocks. This work is the first of a new generation of tests of fundamental physics using optical clocks and fiber links. As clocks improve, and as fiber links are routinely operated, we expect that the tests initiated in this paper will improve by orders of magnitude in the near future.
0
1
0
0
0
0
Magnetic phases of spin-1 lattice gases with random interactions
A spin-1 atomic gas in an optical lattice, in the unit-filling Mott Insulator (MI) phase and in the presence of disordered spin-dependent interaction, is considered. In this regime, at zero temperature, the system is well described by a disordered rotationally-invariant spin-1 bilinear-biquadratic model. We study, via the density matrix renormalization group algorithm, a bounded disorder model such that the spin interactions can be locally either ferromagnetic or antiferromagnetic. Random interactions induce the appearance of a disordered ferromagnetic phase characterized by a non-vanishing value of spin-glass order parameter across the boundary between a ferromagnetic phase and a dimer phase exhibiting random singlet order. The study of the distribution of the block entanglement entropy reveals that in this region there is no random singlet order.
0
1
0
0
0
0
On the second boundary value problem for Monge-Ampere type equations and geometric optics
In this paper, we prove the existence of classical solutions to second boundary value problems for generated prescribed Jacobian equations, as recently developed by the second author, thereby obtaining extensions of classical solvability of optimal transportation problems to problems arising in near field geometric optics. Our results depend in particular on a priori second derivative estimates recently established by the authors under weak co-dimension one convexity hypotheses on the associated matrix functions with respect to the gradient variables, (A3w). We also avoid domain deformations by using the convexity theory of generating functions to construct unique initial solutions for our homotopy family, thereby enabling application of the degree theory for nonlinear oblique boundary value problems.
0
0
1
0
0
0
Is the kinetic equation for turbulent gas-particle flows ill-posed?
This paper is about well-posedness and realizability of the kinetic equation for gas-particle flows and its relationship to the Generalized Langevin Model (GLM) PDF equation. Previous analyses claim that this kinetic equation is ill-posed, that in particular it has the properties of a backward heat equation and as a consequence, its solutions will in the course of time exhibit finite-time singularities. We show that the analysis leading to this conclusion is fundamentally incorrect because it ignores the coupling between the phase space variables in the kinetic equation and the time and particle inertia dependence of the phase space diffusion tensor. This contributes an extra $+ve$ diffusion that always outweighs the contribution from the $-ve$ diffusion associated with the dispersion along one of the principal axes of the phase space diffusion tensor. This is confirmed by a numerical evaluation of analytic solutions of these $+ve$ and $-ve$ contributions to the particle diffusion coefficient along this principal axis. We also examine other erroneous claims and assumptions made in previous studies that demonstrate the apparent superiority of the GLM PDF approach over the kinetic approach. In so doing we have drawn attention to the limitations of the GLM approach which these studies have ignored or not properly considered, to give a more balanced appraisal of the benefits of both PDF approaches.
0
1
0
0
0
0
On the construction of small subsets containing special elements in a finite field
In this note we construct a series of small subsets containing a non-d-th power element in a finite field by applying certain bounds on incomplete character sums. Precisely, let $h=\lfloor q^{\delta}\rfloor>1$ and $d\mid q^h-1$. Let $r$ be a prime divisor of $q-1$ such that the largest prime power part of $q-1$ has the form $r^s$. Then there is a constant $0<\epsilon<1$ such that for a ratio at least $ {q^{-\epsilon h}}$ of $\alpha\in \mathbb{F}_{q^{h}} \backslash\mathbb{F}_{q}$, the set $S=\{ \alpha-x^t, x\in\mathbb{F}_{q}\}$ of cardinality $1+\frac {q-1} {M(h)}$ contains a non-d-th power in $\mathbb{F}_{q^{\lfloor q^\delta\rfloor}}$, where $t$ is the largest power of $r$ such that $t<\sqrt{q}/h$ and $M(h)$ is defined as $$M(h)=\max_{r \mid (q-1)} r^{\min\{v_r(q-1), \lfloor\log_r{q}/2-\log_r h\rfloor\}}.$$ Here $r$ runs through prime divisors and $v_r(x)$ is the $r$-adic order of $x$. For odd $q$, the choice of $\delta=\frac 12-d, d=o(1)>0$ shows that there exists an explicit subset of cardinality $q^{1-d}=O(\log^{2+\epsilon'}(q^h))$ containing a non-quadratic element in the field $\mathbb{F}_{q^h}$. On the other hand, the choice of $h=2$ shows that for any odd prime power $q$, there is an explicit subset of cardinality $1+\frac {q-1}{M(2)}$ containing a non-quadratic element in $\mathbb{F}_{q^2}$. This improves a $q-1$ construction by Coulter and Kosick \cite{CK} since $\lfloor \log_2{(q-1)}\rfloor\leq M(2) < \sqrt{q}$. In addition, we obtain a similar construction for small sets containing a primitive element. The construction works well provided $\phi(q^h-1)$ is very small, where $\phi$ is the Euler totient function.
1
0
1
0
0
0
Information Potential Auto-Encoders
In this paper, we suggest a framework to make use of mutual information as a regularization criterion to train Auto-Encoders (AEs). In the proposed framework, AEs are regularized by minimization of the mutual information between input and encoding variables of AEs during the training phase. In order to estimate the entropy of the encoding variables and the mutual information, we propose a non-parametric method. We also give an information theoretic view of Variational AEs (VAEs), which suggests that VAEs can be considered as parametric methods that estimate entropy. Experimental results show that the proposed non-parametric models have more degrees of freedom in terms of representation learning of features drawn from complex distributions such as Mixtures of Gaussians, compared to methods which estimate entropy using parametric approaches, such as Variational AEs.
1
0
0
1
0
0
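The title's "information potential" points to Parzen-window Rényi entropy machinery. The sketch below computes the quadratic information potential V of a batch of encodings, whose negative log is Rényi's quadratic entropy, under assumed choices (Gaussian kernel, fixed bandwidth, 2-D codes); a training-time regularizer of the kind the abstract describes would differentiate through a quantity of this form.

```python
import numpy as np

rng = np.random.default_rng(4)

def information_potential(z, sigma=1.0):
    """V = average pairwise kernel; H2 = -log V is Renyi's quadratic entropy."""
    d = z.shape[1]
    sq = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    # exact convolution of two Gaussian windows of variance sigma^2
    g = np.exp(-sq / (4.0 * sigma ** 2)) / (4.0 * np.pi * sigma ** 2) ** (d / 2)
    return g.mean()

codes_spread = rng.normal(0.0, 2.0, size=(200, 2))   # high-entropy encodings
codes_tight = rng.normal(0.0, 0.2, size=(200, 2))    # low-entropy encodings

for name, z in [("spread", codes_spread), ("tight", codes_tight)]:
    print(name, "H2 =", -np.log(information_potential(z)))
```

Minimizing the mutual information between inputs and codes pushes the code entropy down, i.e., pushes V up, which the "tight" batch illustrates.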
Irreducible compositions of degree two polynomials over finite fields have regular structure
Let $q$ be an odd prime power and $D$ be the set of monic irreducible polynomials in $\mathbb F_q[x]$ which can be written as a composition of monic degree two polynomials. In this paper we prove that $D$ has a natural regular structure by showing that there exists a finite automaton having $D$ as accepted language. Our method is constructive.
1
0
1
0
0
0
Quantum interferometry in multi-mode systems
We consider the situation when the signal propagating through each arm of an interferometer has a complicated multi-mode structure. We find the relation between the particle-entanglement and the possibility to surpass the shot-noise limit of the phase estimation. Our results are general---they apply to pure and mixed states of identical and distinguishable particles (or combinations of both), for a fixed and fluctuating number of particles. We also show that the method for detecting the entanglement often used in two-mode system can give misleading results when applied to the multi-mode case.
0
1
0
0
0
0
Glider representations of chains of semisimple Lie algebras
We start the study of glider representations in the setting of semisimple Lie algebras. A glider representation is defined for some positively filtered ring $FR$, and here we consider the right bounded algebra filtration $FU(\mathfrak{g})$ on the universal enveloping algebra $U(\mathfrak{g})$ of some semisimple Lie algebra $\mathfrak{g}$ given by a fixed chain of semisimple Lie subalgebras $\mathfrak{g}_1 \subset \mathfrak{g}_2 \subset \ldots \subset \mathfrak{g}_n = \mathfrak{g}$. Inspired by classical representation theory, we introduce so-called Verma glider representations. Their existence is related to the relations between the root systems of the appearing Lie algebras $\mathfrak{g}_i$. In particular, we consider chains of simple Lie algebras of the same type $A,B,C$ and $D$.
0
0
1
0
0
0
Equilibrium configurations of large nanostructures using the embedded saturated-fragments stochastic density functional theory
An \emph{ab initio} Langevin dynamics approach is developed based on stochastic density functional theory (sDFT) within a new \emph{embedded saturated fragment} formalism, applicable to covalently bonded systems. The forces on the nuclei generated by sDFT contain a random component natural to Langevin dynamics, and its standard deviation is used to estimate the friction term on each atom by satisfying the fluctuation--dissipation relation. The overall approach scales linearly with system size even if the density matrix is not local and is thus applicable to ordered as well as disordered extended systems. We implement the approach for a series of silicon nanocrystals (NCs) of varying size with a diameter of up to $3$ nm corresponding to $N_{e}=3000$ electrons and generate a set of configurations that are distributed canonically at a fixed temperature, ranging from cryogenic to room temperature. We also analyze the structural properties of the NCs and discuss the reconstruction of the surface geometry.
0
1
0
0
0
0
Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks
Biological plastic neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in-silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms, and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and developments are presented.
1
0
0
0
0
0
Hypergraph Convolution and Hypergraph Attention
Recently, graph neural networks have attracted great attention and achieved prominent performance in various research fields. Most of those algorithms have assumed pairwise relationships of objects of interest. However, in many real applications, the relationships between objects are in higher-order, beyond a pairwise formulation. To efficiently learn deep embeddings on the high-order graph-structured data, we introduce two end-to-end trainable operators to the family of graph neural networks, i.e., hypergraph convolution and hypergraph attention. Whilst hypergraph convolution defines the basic formulation of performing convolution on a hypergraph, hypergraph attention further enhances the capacity of representation learning by leveraging an attention module. With the two operators, a graph neural network is readily extended to a more flexible model and applied to diverse applications where non-pairwise relationships are observed. Extensive experimental results with semi-supervised node classification demonstrate the effectiveness of hypergraph convolution and hypergraph attention.
1
0
0
1
0
0
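A standard normalized form of hypergraph convolution propagates node features through the incidence matrix $H$ as $X' = \sigma(D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X \Theta)$; treating this as the paper's exact operator is an assumption, but it shows how a single hyperedge lets more than two nodes exchange information in one step. A toy numpy version:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy hypergraph: 6 nodes, 3 hyperedges (columns of the incidence matrix H).
H = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1],
              [0, 0, 1]], dtype=float)
w = np.array([1.0, 1.0, 1.0])            # hyperedge weights
X = rng.normal(size=(6, 4))              # node features
Theta = 0.5 * rng.normal(size=(4, 2))    # "learnable" projection (random here)

dv = (H * w).sum(axis=1)                 # weighted node degrees
de = H.sum(axis=0)                       # hyperedge degrees

def hypergraph_conv(X, H, w, Theta):
    """One normalized hypergraph convolution layer with ReLU activation."""
    Dv_is = np.diag(1.0 / np.sqrt(dv))
    A = Dv_is @ H @ np.diag(w) @ np.diag(1.0 / de) @ H.T @ Dv_is
    return np.maximum(A @ X @ Theta, 0.0)

print(hypergraph_conv(X, H, w, Theta))
```

Hypergraph attention would additionally make the entries of $H$ (or the weights $w$) data-dependent instead of fixed.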
Network Capacity Bound for Personalized PageRank in Multimodal Networks
In a previous paper the concept of Bipartite PageRank was introduced and a theorem on the limit of authority flowing between nodes for personalized PageRank was generalized. In this paper we extend those results to multimodal networks. In particular we introduce a hypergraph type that may be used for describing a multimodal network in which a hyperlink connects nodes from each of the modalities. We introduce a generalisation of PageRank for such graphs and define the respective random walk model that can be used for computations. We finally state and prove theorems on the limit of the outflow of authority for cases where individual modalities have identical or distinct damping factors.
1
1
0
0
0
0
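For reference, single-modality personalized PageRank, the object being generalized above, is the fixed point of $r = d\,P r + (1-d)\,v$, computable by power iteration; the small graph, damping factor $d$, and personalization vector $v$ below are arbitrary choices for illustration.

```python
import numpy as np

# Column-stochastic transition matrix of a 4-node directed graph.
P = np.array([[0.0, 0.5, 0.0, 0.0],
              [0.5, 0.0, 1.0, 0.0],
              [0.5, 0.5, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])
d = 0.85                                # damping factor
v = np.array([1.0, 0.0, 0.0, 0.0])      # personalization: teleport to node 0

r = np.full(4, 0.25)
for _ in range(100):
    r = d * (P @ r) + (1 - d) * v       # authority flow plus teleportation
print("personalized PageRank:", r, " sum:", r.sum())
```

The paper's theorems concern the analogous limits of authority outflow when the walk instead runs over hyperlinks spanning several modalities, possibly with a different damping factor per modality.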
Hessian corrections to Hybrid Monte Carlo
A method is presented for introducing second-order derivatives of the log likelihood into HMC algorithms; it does not require the Hessian to be evaluated at each leapfrog step, but only at the start and end of each trajectory.
0
0
0
1
0
0
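For context, here is the baseline HMC algorithm that the proposed Hessian corrections modify; this sketch is gradient-only (no Hessian terms, which are the paper's contribution) and targets a standard normal, so the sample variances should come out near 1.

```python
import numpy as np

rng = np.random.default_rng(7)

U = lambda x: 0.5 * x @ x          # negative log density of N(0, I)
grad_U = lambda x: x

def hmc_step(x, eps=0.1, L=20):
    p = rng.normal(size=x.shape)                 # fresh momentum
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(x_new)           # leapfrog: half momentum step
    for _ in range(L - 1):
        x_new += eps * p_new                     # full position step
        p_new -= eps * grad_U(x_new)             # full momentum step
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(x_new)           # final half momentum step
    dH = U(x_new) + 0.5 * p_new @ p_new - U(x) - 0.5 * p @ p
    return x_new if np.log(rng.uniform()) < -dH else x

x, samples = np.zeros(3), []
for _ in range(2000):
    x = hmc_step(x)
    samples.append(x.copy())
print("sample variances:", np.var(np.array(samples)[500:], axis=0))
```

The appeal of evaluating the Hessian only at trajectory endpoints is plain from the loop: the leapfrog integrator calls the gradient L times per proposal, so per-step Hessian evaluations would dominate the cost.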
Are Over-massive Haloes of Ultra Diffuse Galaxies Consistent with Extended MOND?
A sample of Coma cluster ultra-diffuse galaxies (UDGs) is modelled in the context of Extended Modified Newtonian Dynamics (EMOND) with the aim of explaining the large dark-matter-like effect observed in these cluster galaxies. We first build a model of the Coma cluster in the context of EMOND using gas and galaxy mass profiles from the literature. Then, assuming the dynamical mass of the UDGs satisfies the fundamental manifold of other ellipticals, and that the UDG stellar mass-to-light ratio matches their colour, we can verify the EMOND formulation by comparing two predictions of the baryonic mass of UDGs. We find that EMOND can explain the UDG mass, within the expected modelling errors, if they lie on the fundamental manifold of ellipsoids; however, given that measurements show one UDG lying off the fundamental manifold, observations of more UDGs are needed to confirm this assumption.
0
1
0
0
0
0
Magnetic-Visual Sensor Fusion-based Dense 3D Reconstruction and Localization for Endoscopic Capsule Robots
Reliable and real-time 3D reconstruction and localization functionality is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots as an emerging, minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications which combines magnetic and vision-based localization with non-rigid-deformation-based frame-to-model map fusion. The performance of the proposed method is demonstrated using four different ex-vivo porcine stomach models. Across different trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction error ranges from 1.58 to 2.17 cm.
1
0
0
0
0
0