Dataset schema (one record per title/abstract pair, followed by six 0/1 topic labels):
- title: string, length 7 to 239 characters
- abstract: string, length 7 to 2.76k characters
- cs, phy, math, stat, quantitative biology, quantitative finance: int64 flags (0 or 1)
Recommendation with k-anonymized Ratings
Recommender systems are widely used to predict personalized preferences for goods or services based on users' past activities, such as item ratings or purchase histories. If collections of such personal activities were made publicly available, they could be used to personalize a diverse range of services, including targeted advertisement or recommendations. However, there would be an accompanying risk of privacy violations. The pioneering work of Narayanan et al. demonstrated that even if identifiers are eliminated, the public release of user ratings can allow users to be identified by adversaries who hold only a small amount of data on the users' past ratings. In this paper, we assume the following setting: a collector collects user ratings, then anonymizes and distributes them; a recommender constructs a recommender system based on the anonymized ratings provided by the collector. Based on this setting, we exhaustively list the models of recommender systems that use anonymized ratings. For each model, we then present an item-based collaborative filtering algorithm for making recommendations based on anonymized ratings. Our experimental results show that item-based collaborative filtering based on anonymized ratings can perform better than collaborative filtering based on 5--10 non-anonymized ratings. This surprising result indicates that, in some settings, privacy protection does not necessarily reduce the usefulness of recommendations. From the experimental analysis of this counterintuitive result, we observed that the sparsity of the ratings can be reduced by anonymization, and the variance of the prediction can be reduced if $k$, the anonymization parameter, is appropriately tuned. In this way, the predictive performance of recommendations based on anonymized ratings can be improved in some settings.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
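The item-based collaborative filtering that this abstract builds on can be illustrated with a generic textbook sketch: cosine similarity between items over co-rating users, followed by a similarity-weighted prediction. The toy ratings and function names below are illustrative assumptions, not the paper's anonymization-aware algorithm.

```python
from math import sqrt

def item_similarity(ratings, i, j):
    """Cosine similarity between items i and j over users who rated both."""
    co = [u for u in ratings if i in ratings[u] and j in ratings[u]]
    if not co:
        return 0.0
    dot = sum(ratings[u][i] * ratings[u][j] for u in co)
    ni = sqrt(sum(ratings[u][i] ** 2 for u in co))
    nj = sqrt(sum(ratings[u][j] ** 2 for u in co))
    return dot / (ni * nj)

def predict(ratings, user, item):
    """Similarity-weighted average of the user's other ratings."""
    sims = [(item_similarity(ratings, item, j), r)
            for j, r in ratings[user].items() if j != item]
    num = sum(s * r for s, r in sims)
    den = sum(abs(s) for s, _ in sims)
    return num / den if den else 0.0

ratings = {  # toy data: user -> {item: rating}
    "u1": {"a": 5, "b": 3},
    "u2": {"a": 4, "b": 2, "c": 4},
    "u3": {"a": 1, "c": 5},
}
pred = predict(ratings, "u1", "c")
```

The paper's contribution concerns running such a predictor on $k$-anonymized rating tables; the prediction step itself is unchanged.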
Ro-vibrational states of H$_2^+$. Variational calculations
The nonrelativistic variational calculation of a complete set of ro-vibrational states in the H$_2^+$ molecular ion supported by the ground $1s\sigma$ adiabatic potential is presented. It includes both bound states and resonances located above the $n=1$ threshold. In the latter case we also evaluate a predissociation width of a state wherever it is significant. Relativistic and radiative corrections are discussed and effective adiabatic potentials of these corrections are included as supplementary files.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Control of automated guided vehicles without collision by quantum annealer and digital devices
We formulate an optimization problem to control a large number of automated guided vehicles in a plant without collision. The formulation consists of binary variables. A quadratic cost function over these variables enables us to utilize certain solvers on digital computers and recently developed purpose-specific hardware such as the D-Wave 2000Q and the Fujitsu digital annealer. In the present study, we consider an actual plant in Japan, in which the vehicles run, and assess the efficiency of our formulation for optimizing the vehicles via several solvers. We confirm that our formulation can be a powerful approach for performing smooth control while avoiding collisions between vehicles, as compared to a conventional method. In addition, comparative experiments performed using several solvers reveal that the D-Wave 2000Q can be useful as a rapid solver for generating control plans in a short time, although it handles only a small number of vehicles, while a digital computer can rapidly solve the corresponding optimization problem even with a large number of binary variables.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
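The formulation described above, a quadratic cost over binary variables, is a QUBO, the problem class that both the D-Wave 2000Q and digital annealers minimize. A minimal sketch of that class, with a made-up 3-variable cost matrix standing in for the plant model (diagonal rewards, off-diagonal "collision" penalties), can be solved by brute force for small sizes:

```python
from itertools import product

def qubo_energy(Q, x):
    """Energy E(x) = sum_ij Q[i][j] * x_i * x_j for a binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_min(Q):
    """Exhaustively minimize the QUBO (feasible only for small n;
    annealers target exactly this objective at much larger scale)."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

# Toy cost: diagonal terms reward selecting a variable, off-diagonal
# penalties forbid "collisions" between conflicting selections.
Q = [[-1, 2, 0],
     [0, -1, 2],
     [0, 0, -1]]
best = brute_force_min(Q)
```

With this matrix the minimizer selects variables 0 and 2 and skips 1, since the penalties couple only adjacent pairs.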
Meta-learning: searching in the model space
There is no free lunch, no single learning algorithm that will outperform all other algorithms on all data. In practice, different approaches are tried and the best algorithm is selected. An alternative solution is to build new algorithms on demand by creating a framework that accommodates many algorithms. Here, the best combination of parameters and procedures is searched for in the space of all possible models belonging to the framework of Similarity-Based Methods (SBMs). Such a meta-learning approach gives a chance to find the best method in all cases. Issues related to meta-learning and the first tests of this approach are presented.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
GIFT: Guided and Interpretable Factorization for Tensors - An Application to Large-Scale Multi-platform Cancer Analysis
Given multi-platform genome data with prior knowledge of functional gene sets, how can we extract interpretable latent relationships between patients and genes? More specifically, how can we devise a tensor factorization method which produces an interpretable gene factor matrix based on gene set information while maintaining the decomposition quality and speed? We propose GIFT, a Guided and Interpretable Factorization for Tensors. GIFT provides interpretable factor matrices by encoding prior knowledge as a regularization term in its objective function. Experiment results demonstrate that GIFT produces interpretable factorizations with high scalability and accuracy, while other methods lack interpretability. We apply GIFT to the PanCan12 dataset, and GIFT reveals significant relations between cancers, gene sets, and genes, such as influential gene sets for specific cancer (e.g., interferon-gamma response gene set for ovarian cancer) or relations between cancers and genes (e.g., BRCA cancer - APOA1 gene and OV, UCEC cancers - BST2 gene).
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Synchronization of spin torque oscillators through spin Hall magnetoresistance
Spin torque oscillators placed on a nonmagnetic heavy metal show synchronized auto-oscillations due to the coupling originating from the spin Hall magnetoresistance effect. Here, we study a system of two spin torque oscillators under the effect of the spin Hall torque, and show that switching the direction of the external current enables us to control the phase difference of the synchronization between in-phase and antiphase.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Long quasi-polycyclic $t$-CIS codes
We study complementary information set codes of length $tn$ and dimension $n$ of order $t$ (called $t$-CIS codes for short). Quasi-cyclic and quasi-twisted $t$-CIS codes are enumerated by using their concatenated structure. Asymptotic existence results are derived for one-generator codes of co-index $n$, by Artin's conjecture in the quasi-cyclic case and a special case of it in the quasi-twisted case. This shows that there are infinite families of long QC and QT $t$-CIS codes with relative distance satisfying a modified Varshamov-Gilbert bound for rate $1/t$ codes. Similar results are obtained for the new and more general class of quasi-polycyclic codes introduced recently by Berger and Amrani.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fairness in Criminal Justice Risk Assessments: The State of the Art
Objectives: Discussions of fairness in criminal justice risk assessments typically lack conceptual precision. Rhetoric too often substitutes for careful analysis. In this paper, we seek to clarify the tradeoffs between different kinds of fairness and between fairness and accuracy. Methods: We draw on the existing literatures in criminology, computer science and statistics to provide an integrated examination of fairness and accuracy in criminal justice risk assessments. We also provide an empirical illustration using data from arraignments. Results: We show that there are at least six kinds of fairness, some of which are incompatible with one another and with accuracy. Conclusions: Except in trivial cases, it is impossible to maximize accuracy and fairness at the same time, and impossible simultaneously to satisfy all kinds of fairness. In practice, a major complication is different base rates across different legally protected groups. There is a need to consider challenging tradeoffs.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
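The base-rate complication in the conclusions can be made concrete with the standard algebraic identity FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR), which relates a classifier's false positive rate to the group base rate p, positive predictive value, and false negative rate. The numbers below are illustrative, not the paper's arraignment data; they show that two groups with equal PPV and FNR but different base rates necessarily have different FPRs:

```python
from fractions import Fraction as F

def fpr(p, ppv, fnr):
    """False positive rate implied by base rate p, PPV, and FNR;
    follows from the confusion-matrix definition of PPV."""
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

ppv, fnr = F(4, 5), F(1, 10)        # equal across both groups
fpr_a = fpr(F(3, 10), ppv, fnr)     # group A: base rate 0.30
fpr_b = fpr(F(1, 10), ppv, fnr)     # group B: base rate 0.10
```

Equalizing predictive value and false negative rates across groups with unequal base rates therefore forces unequal false positive rates, one instance of the incompatibilities the paper catalogues.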
Remarks to the article: New Light on the Invention of the Achromatic Telescope Objective
The analysis of the article was carried out within the confines of a project to replicate the telescope used by Mikhail Lomonosov during his observation of the transit of Venus in 1761, at which time he discovered the Venusian atmosphere. It is known that Lomonosov used a 4.5-foot-long Dollond achromatic telescope. The investigation revealed significant faults in the description of the approximation method that was most likely used by J. Dollond & Son during the manufacturing of the early achromatic lenses.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The spin-Brauer diagram algebra
We investigate the spin-Brauer diagram algebra, denoted ${\bf SB}_n(\delta)$, that arises from studying an analogous form of Schur-Weyl duality for the action of the pin group on ${\bf V}^{\otimes n} \otimes \Delta$. Here ${\bf V}$ is the standard $N$-dimensional complex representation of ${\bf Pin}(N)$ and $\Delta$ is the spin representation. When $\delta = N$ is a positive integer, we define a surjective map ${\bf SB}_n(N) \twoheadrightarrow {\rm End}_{{\bf Pin}(N)}({\bf V}^{\otimes n} \otimes \Delta)$ and show it is an isomorphism for $N \geq 2n$. We show ${\bf SB}_n(\delta)$ is a cellular algebra and use cellularity to characterize its irreducible representations.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Superheating in coated niobium
Using muon spin rotation, it is shown that the field of first flux penetration H_entry in Nb is enhanced by about 30% if coated with an overlayer of Nb_3Sn or MgB_2. This is consistent with an increase from the lower critical magnetic field H_c1 up to the superheating field H_sh of the Nb substrate. In the experiments presented here, coatings of Nb_3Sn and MgB_2 with thicknesses between 50 and 2000 nm have been tested. H_entry does not depend on material or thickness. This suggests that the energy barrier at the boundary between the two materials prevents flux entry up to H_sh of the substrate. A mechanism consistent with these findings is that the proximity effect recovers the stability of the energy barrier for flux penetration, which is suppressed by defects for uncoated samples. Additionally, a low-temperature baked Nb sample has been tested. Here a 6% increase of H_entry was found, also pushing H_entry beyond H_c1.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Contraction Analysis of Nonlinear DAE Systems
This paper studies the contraction properties of nonlinear differential-algebraic equation (DAE) systems. Specifically we develop scalable techniques for constructing the attraction regions associated with a particular stable equilibrium, by establishing the relation between the contraction rates of the original systems and the corresponding virtual extended systems. We show that for a contracting DAE system, the reduced system always contracts faster than the extended ones; furthermore, there always exists an extension with contraction rate arbitrarily close to that of the original system. The proposed construction technique is illustrated with a power system example in the context of transient stability assessment.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
New Braided $T$-Categories over Hopf (co)quasigroups
Let $H$ be a Hopf quasigroup with bijective antipode and let $Aut_{HQG}(H)$ be the set of all Hopf quasigroup automorphisms of $H$. We introduce a category ${_{H}\mathcal{YDQ}^{H}}(\alpha,\beta)$ with $\alpha,\beta\in Aut_{HQG}(H)$ and construct a braided $T$-category $\mathcal{YDQ}(H)$ having all the categories ${_{H}\mathcal{YDQ}^{H}}(\alpha,\beta)$ as components.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On the Reconstruction Risk of Convolutional Sparse Dictionary Learning
Sparse dictionary learning (SDL) has become a popular method for adaptively identifying parsimonious representations of a dataset, a fundamental problem in machine learning and signal processing. While most work on SDL assumes a training dataset of independent and identically distributed samples, a variant known as convolutional sparse dictionary learning (CSDL) relaxes this assumption, allowing more general sequential data sources, such as time series or other dependent data. Although recent work has explored the statistical properties of classical SDL, the statistical properties of CSDL remain unstudied. This paper begins to fill this gap by identifying the minimax convergence rate of CSDL in terms of reconstruction risk, both upper bounding the risk of an established CSDL estimator and proving a matching information-theoretic lower bound. Our results indicate that consistency in reconstruction risk is possible precisely in the `ultra-sparse' setting, in which the sparsity (i.e., the number of feature occurrences) is in $o(N)$ in terms of the length $N$ of the training sequence. Notably, our results make very weak assumptions, allowing arbitrary dictionaries and dependent measurement noise. Finally, we verify our theoretical results with numerical experiments on synthetic data.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Adaptive Questionnaires for Direct Identification of Optimal Product Design
We consider the problem of identifying the most profitable product design from a finite set of candidates under unknown consumer preference. A standard approach to this problem follows a two-step strategy: First, estimate the preference of the consumer population, represented as a point in part-worth space, using an adaptive discrete-choice questionnaire. Second, integrate the estimated part-worth vector with engineering feasibility and cost models to determine the optimal design. In this work, we (1) demonstrate that accurate preference estimation is neither necessary nor sufficient for identifying the optimal design, (2) introduce a novel adaptive questionnaire that leverages knowledge about engineering feasibility and manufacturing costs to directly determine the optimal design, and (3) interpret product design in terms of a nonlinear segmentation of part-worth space, and use this interpretation to illuminate the intrinsic difficulty of optimal design in the presence of noisy questionnaire responses. We establish the superiority of the proposed approach using a well-documented optimal product design task. This study demonstrates how the identification of optimal product design can be accelerated by integrating marketing and manufacturing knowledge into the adaptive questionnaire.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Identities involving Bernoulli and Euler polynomials
We present various identities involving the classical Bernoulli and Euler polynomials. Among others, we prove that $$ \sum_{k=0}^{[n/4]}(-1)^k {n\choose 4k}\frac{B_{n-4k}(z) }{2^{6k}} =\frac{1}{2^{n+1}}\sum_{k=0}^{n} (-1)^k \frac{1+i^k}{(1+i)^k} {n\choose k}{B_{n-k}(2z)} $$ and $$ \sum_{k=1}^{n} 2^{2k-1} {2n\choose 2k-1} B_{2k-1}(z) = \sum_{k=1}^n k \, 2^{2k} {2n\choose 2k} E_{2k-1}(z). $$ Applications of our results lead to formulas for Bernoulli and Euler numbers, like, for instance, $$ n E_{n-1} =\sum_{k=1}^{[n/2]} \frac{2^{2k}-1}{k} (2^{2k}-2^n){n\choose 2k-1} B_{2k}B_{n-2k}. $$
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
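The second displayed identity can be checked symbolically for small $n$ with exact rational arithmetic. The sketch below builds the Bernoulli and Euler polynomials from their standard recurrences (coefficient lists in ascending powers of $z$) and compares both sides as polynomials; it is a verification aid, not part of the paper's proof.

```python
from fractions import Fraction
from math import comb

def bernoulli_polys(nmax):
    """B_0..B_nmax via B_n(x) = x^n - 1/(n+1) * sum_{k<n} C(n+1,k) B_k(x)."""
    polys = [[Fraction(1)]]
    for n in range(1, nmax + 1):
        p = [Fraction(0)] * (n + 1)
        p[n] = Fraction(1)
        for k in range(n):
            c = Fraction(comb(n + 1, k), n + 1)
            for i, ci in enumerate(polys[k]):
                p[i] -= c * ci
        polys.append(p)
    return polys

def euler_polys(nmax):
    """E_0..E_nmax via E_n(x) = x^n - 1/2 * sum_{k<n} C(n,k) E_k(x)."""
    polys = [[Fraction(1)]]
    for n in range(1, nmax + 1):
        p = [Fraction(0)] * (n + 1)
        p[n] = Fraction(1)
        for k in range(n):
            c = Fraction(comb(n, k), 2)
            for i, ci in enumerate(polys[k]):
                p[i] -= c * ci
        polys.append(p)
    return polys

def add_scaled(acc, poly, s):
    for i, ci in enumerate(poly):
        acc[i] += s * ci

def identity_sides(n):
    """Coefficient lists of both sides of
    sum_k 2^(2k-1) C(2n,2k-1) B_{2k-1}(z) = sum_k k 2^(2k) C(2n,2k) E_{2k-1}(z)."""
    B, E = bernoulli_polys(2 * n), euler_polys(2 * n)
    lhs = [Fraction(0)] * (2 * n)
    rhs = [Fraction(0)] * (2 * n)
    for k in range(1, n + 1):
        add_scaled(lhs, B[2 * k - 1],
                   Fraction(2 ** (2 * k - 1) * comb(2 * n, 2 * k - 1)))
        add_scaled(rhs, E[2 * k - 1],
                   Fraction(k * 2 ** (2 * k) * comb(2 * n, 2 * k)))
    return lhs, rhs
```

Comparing coefficient lists (rather than sampled values) confirms the identity as an equality of polynomials for the tested $n$.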
Gate Tunable Magneto-resistance of Ultra-Thin WTe2 Devices
In this work, the magneto-resistance (MR) of ultra-thin WTe2/BN heterostructures far away from electron-hole equilibrium is measured. The change of MR of such devices is found to be determined largely by a single tunable parameter, i.e. the amount of imbalance between electrons and holes. We also found that the magnetoresistive behavior of ultra-thin WTe2 devices is well-captured by a two-fluid model. According to the model, the change of MR could be as large as 400,000%, the largest potential change of MR among all materials known, if the ultra-thin samples are tuned to neutrality while preserving the mobility of 167,000 cm$^2$V$^{-1}$s$^{-1}$ observed in bulk samples. Our findings show the prospects of ultra-thin WTe2 as a variable magnetoresistance material in future applications such as magnetic field sensors, information storage and extraction devices, and galvanic isolators. The results also provide important insight into the electronic structure and the origin of the large MR in ultra-thin WTe2 samples.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Morpheo: Traceable Machine Learning on Hidden data
Morpheo is a transparent and secure machine learning platform for collecting and analysing large datasets. It aims at building state-of-the-art prediction models in various fields where data are sensitive. Indeed, it offers strong privacy of data and algorithms, by preventing anyone from reading the data, apart from the owner and the chosen algorithms. Computations in Morpheo are orchestrated by a blockchain infrastructure, thus offering total traceability of operations. Morpheo aims at building an attractive economic ecosystem around data prediction by channelling crypto-currency from prediction requests to providers of useful data and algorithms. Morpheo is designed to handle multiple data sources in a transfer learning approach in order to mutualize knowledge acquired from large datasets for applications with smaller but similar datasets.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Hölder regularity of the 2D dual semigeostrophic equations via analysis of linearized Monge-Ampère equations
We obtain the Hölder regularity of time derivative of solutions to the dual semigeostrophic equations in two dimensions when the initial potential density is bounded away from zero and infinity. Our main tool is an interior Hölder estimate in two dimensions for an inhomogeneous linearized Monge-Ampère equation with right hand side being the divergence of a bounded vector field. As a further application of our Hölder estimate, we prove the Hölder regularity of the polar factorization for time-dependent maps in two dimensions with densities bounded away from zero and infinity. Our applications improve previous work by G. Loeper who considered the cases of densities sufficiently close to a positive constant.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
New nanostructures of carbon: Quasifullerenes Cn-q (n=20,42,48,60)
Based on the third allotropic form of carbon (fullerenes), we have theoretically predicted structures described as non-classical fullerenes. We have studied novel allotropic carbon structures with a closed-cage configuration, predicted here for the first time, by using DFT at the B3LYP level. Such carbon Cn-q structures (where n=20, 42, 48 and 60) combine sp1 and sp2 hybridization states in the formation of bonds. A comparative analysis of the quasi-fullerenes with respect to their most stable isomers was also performed. Chemical stability was evaluated with aromaticity criteria applied to the different rings that build the systems. The results show new isomers of carbon nanostructures with interesting chemical properties such as hardness, chemical potential and HOMO-LUMO gaps. We also studied thermal stability with the Lagrangian molecular dynamics method known as atom-centered density matrix propagation (ADMP).
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Top 10 Topics in Machine Learning Revisited: A Quantitative Meta-Study
Which topics of machine learning are most commonly addressed in research? This question was initially answered in 2007 by doing a qualitative survey among distinguished researchers. In our study, we revisit this question from a quantitative perspective. Concretely, we collect 54K abstracts of papers published between 2007 and 2016 in leading machine learning journals and conferences. We then use machine learning in order to determine the top 10 topics in machine learning. We not only include models, but provide a holistic view across optimization, data, features, etc. This quantitative approach allows reducing the bias of surveys. It reveals new and up-to-date insights into what the 10 most prolific topics in machine learning research are. This allows researchers to identify popular topics as well as new and rising topics for their research.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Random taste heterogeneity in discrete choice models: Flexible nonparametric finite mixture distributions
This study proposes a mixed logit model with multivariate nonparametric finite mixture distributions. The support of the distribution is specified as a high-dimensional grid over the coefficient space, with equal or unequal intervals between successive points along the same dimension; the location of each point on the grid and the probability mass at that point are model parameters that need to be estimated. The framework does not require the analyst to specify the shape of the distribution prior to model estimation, but can approximate any multivariate probability distribution function to any arbitrary degree of accuracy. The grid with unequal intervals, in particular, offers greater flexibility than existing multivariate nonparametric specifications, while requiring the estimation of a small number of additional parameters. An expectation maximization algorithm is developed for the estimation of these models. Multiple synthetic datasets and a case study on travel mode choice behavior are used to demonstrate the value of the model framework and estimation algorithm. Compared to extant models that incorporate random taste heterogeneity through continuous mixture distributions, the proposed model provides better out-of-sample predictive ability. Findings reveal significant differences in willingness to pay measures between the proposed model and extant specifications. The case study further demonstrates the ability of the proposed model to endogenously recover patterns of attribute non-attendance and choice set formation.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Microscopic mechanism of tunable band gap in potassium doped few-layer black phosphorus
Tuning band gaps in two-dimensional (2D) materials is of great interest in both the fundamental and practical aspects of contemporary materials science. Recently, black phosphorus (BP), consisting of stacked layers of phosphorene, was experimentally observed to show a widely tunable band gap by means of the deposition of potassium (K) atoms on the surface, thereby allowing great flexibility in the design and optimization of electronic and optoelectronic devices. Here, based on density-functional theory calculations, we demonstrate that the donated electrons from K dopants are mostly localized at the topmost BP layer and that such surface charging efficiently screens the K ion potential. It is found that, as the K doping increases, the extreme surface charging and its screening of K atoms shift the conduction bands down in energy, i.e., towards higher binding energy, because they have more charge near the surface, while it has little influence on the valence bands, which have more charge in the deeper layers. This result provides a different explanation for the observed tunable band gap compared to the previously proposed giant Stark effect, where a vertical electric field from the positively ionized K overlayer to the negatively charged BP layers shifts the conduction band minimum ${\Gamma}_{\rm 1c}$ (valence band maximum ${\Gamma}_{\rm 8v}$) downwards (upwards). The present prediction of ${\Gamma}_{\rm 1c}$ and ${\Gamma}_{\rm 8v}$ as a function of the K doping reproduces well the widely tunable band gap, anisotropic Dirac semimetal state, and band-inverted semimetal state, as observed by angle-resolved photoemission spectroscopy experiments. Our findings shed new light on a route for tunable band gap engineering of 2D materials through the surface doping of alkali metals.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quasi-steady state reduction for the Michaelis-Menten reaction-diffusion system
The Michaelis-Menten mechanism is probably the best known model for an enzyme-catalyzed reaction. For spatially homogeneous concentrations, quasi-steady state (QSS) reductions are well known, but this is not the case when the chemical species are allowed to diffuse. We discuss QSS reductions for both the irreversible and the reversible Michaelis-Menten reaction in this spatially inhomogeneous setting, given small initial enzyme concentration and slow diffusion. Our work is based on a heuristic method to obtain an ordinary differential equation which admits reduction by Tikhonov-Fenichel theory. We do not give convergence proofs, but we provide numerical results that support the accuracy of the reductions.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
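For the spatially homogeneous irreversible mechanism, the classical QSS reduction replaces the full mass-action system by $\dot s = -k_2 e_0 s/(K_M + s)$ with $K_M = (k_{-1}+k_2)/k_1$. The sketch below uses illustrative rate constants and a plain forward-Euler integrator (not the paper's Tikhonov-Fenichel machinery) to compare the full system against the reduction for a small initial enzyme concentration $e_0$:

```python
def simulate(k1=1.0, km1=1.0, k2=1.0, e0=0.01, s0=1.0, t_end=50.0, dt=0.001):
    """Integrate the full mass-action system (s = substrate, c = complex,
    free enzyme e = e0 - c) and its QSS reduction side by side."""
    km = (km1 + k2) / k1                  # Michaelis constant K_M
    s_full, c, s_qss = s0, 0.0, s0
    for _ in range(int(t_end / dt)):
        e = e0 - c
        ds = -k1 * e * s_full + km1 * c
        dc = k1 * e * s_full - (km1 + k2) * c
        s_full += dt * ds
        c += dt * dc
        # reduced dynamics: ds/dt = -k2 * e0 * s / (K_M + s)
        s_qss += dt * (-k2 * e0 * s_qss / (km + s_qss))
    return s_full, s_qss

s_full, s_qss = simulate()
```

With $e_0/s_0 = 0.01$ the two substrate trajectories stay close over the whole run, consistent with the small-enzyme assumption under which the reduction is derived.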
All the people around me: face discovery in egocentric photo-streams
Given an unconstrained stream of images captured by a wearable photo-camera (2 fpm), we propose an unsupervised bottom-up approach for automatically clustering the appearing faces into the individual identities present in the data. The problem is challenging since the images are acquired under real-world conditions; hence, the visible appearance of the people in the images undergoes intensive variations. Our proposed pipeline consists of first arranging the photo-stream into events, then localizing the appearance of multiple people in them, and finally grouping the various appearances of the same person across different events. Experimental results on a dataset acquired by wearing a photo-camera for one month demonstrate the effectiveness of the proposed approach.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Graphene oxide nanosheets disrupt lipid composition, Ca2+ homeostasis and synaptic transmission in primary cortical neurons
Graphene has the potential to make a very significant impact on society, with important applications in the biomedical field. The possibility to engineer graphene-based medical devices at the neuronal interface is of particular interest, making it imperative to determine the biocompatibility of graphene materials with neuronal cells. Here we conducted a comprehensive analysis of the effects of chronic and acute exposure of rat primary cortical neurons to few-layer pristine graphene (GR) and monolayer graphene oxide (GO) flakes. By combining a range of cell biology, microscopy, electrophysiology and omics approaches, we characterized the graphene-neuron interaction from the first steps of membrane contact and internalization to the long-term effects on cell viability, synaptic transmission and cell metabolism. GR/GO flakes are found in contact with the neuronal membrane, free in the cytoplasm and internalized through the endolysosomal pathway, with no significant impact on neuron viability. However, GO exposure selectively caused the inhibition of excitatory transmission, paralleled by a reduction in the number of excitatory synaptic contacts, and a concomitant enhancement of the inhibitory activity. This was accompanied by the induction of autophagy, altered Ca2+ dynamics and a downregulation of some of the main players in the regulation of Ca2+ homeostasis in both excitatory and inhibitory neurons. Our results show that, although graphene exposure does not impact neuron viability, it does nevertheless have important effects on neuronal transmission and network functionality, thus warranting caution when planning to employ this material for neuro-biological applications.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Physical insight into the thermodynamic uncertainty relation using Brownian motion in tilted periodic potentials
Using Brownian motion in periodic potentials $V(x)$ tilted by a force $f$, we provide physical insight into the thermodynamic uncertainty relation, a recently conjectured principle for statistical errors and irreversible heat dissipation in nonequilibrium steady states. According to the relation, nonequilibrium output generated from dissipative processes necessarily incurs an energetic cost or heat dissipation $q$, and in order to limit the output fluctuation within a relative uncertainty $\epsilon$, at least $2k_BT/\epsilon^2$ of heat must be dissipated. Our model shows that this bound is attained not only at near-equilibrium ($f\ll V'(x)$) but also at far-from-equilibrium ($f\gg V'(x)$), and more generally whenever the dissipated heat is normally distributed. Furthermore, the energetic cost is maximized near the critical force, when the barrier separating the potential wells is about to vanish and the fluctuation of the Brownian particle is maximized. These findings indicate that the deviation of the heat distribution from Gaussianity gives rise to the inequality of the uncertainty relation, further clarifying its meaning. Our derivation of the uncertainty relation also reveals a new bound on nonequilibrium fluctuations: the variance of the dissipated heat ($\sigma_q^2$) increases with its mean ($\mu_q$) and cannot be smaller than $2k_BT\mu_q$.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Observation of a Modulational Instability in Bose-Einstein condensates
We observe the breakup dynamics of an elongated cloud of condensed $^{85}$Rb atoms placed in an optical waveguide. The number of localized spatial components observed in the breakup is compared with the number of solitons predicted by a plane-wave stability analysis of the nonpolynomial nonlinear Schrödinger equation, an effective one-dimensional approximation of the Gross-Pitaevskii equation for cigar-shaped condensates. It is shown that the numbers predicted from the fastest growing sidebands are consistent with the experimental data, suggesting that modulational instability is the key underlying physical mechanism driving the breakup.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Dynamics of the scenery flow and conical density theorems
Conical density theorems are used in geometric measure theory to derive geometric information from given metric information. The idea is to examine how a measure is distributed in small balls. Finding conditions that guarantee the measure to be effectively spread out in different directions is a classical question going back to Besicovitch (1938) and Marstrand (1954). Classically, conical density theorems deal with the distribution of the Hausdorff measure. The process of taking blow-ups of a measure around a point induces a natural dynamical system called the scenery flow. Relying on these dynamics makes it possible to apply ergodic-theoretical methods to understand the statistical behavior of tangent measures. This approach was initiated by Furstenberg (1970, 2008) and greatly developed by Hochman (2010). The scenery flow is a well-suited tool for addressing problems concerning conical densities. In this survey, we demonstrate how to develop the ergodic-theoretical machinery around the scenery flow and use it to study conical density theorems.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski Functionals
Despite the wealth of $Planck$ results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependencies. Aiming at detecting the NGs of the CMB temperature anisotropy $\delta T$, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function $P(\delta T)$, related to ${\mathrm v}_{0}$, the first Minkowski Functional (MF), and the two other MFs, ${\mathrm v}_{1}$ and ${\mathrm v}_{2}$. From their analytical Gaussian predictions we build the discrepancy functions $\Delta_{k}$ ($k=P,0,1,2$) which are applied to an ensemble of $10^{5}$ CMB realization maps of the $\Lambda$CDM model and to the $Planck$ CMB maps. In our analysis we use general Hermite expansions of the $\Delta_{k}$ up to the $12^{th}$ order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the $2^{nd}$ order expansions of Matsubara to arbitrary order in the standard deviation $\sigma_0$ for $P(\delta T)$ and ${\mathrm v}_0$, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the $\Lambda$CDM map sample and the $Planck$ data. We confirm the weak level of non-Gaussianity ($1$-$2$)$\sigma$ of the foreground corrected masked $Planck$ $2015$ maps.
0
1
0
0
0
0
Enhanced mixing in giant impact simulations with a new Lagrangian method
Giant impacts (GIs) are common in the late stage of planet formation. The Smoothed Particle Hydrodynamics (SPH) method is widely used for simulating the outcome of such violent collisions, one prominent example being the formation of the Moon. However, a decade of numerical studies in various areas of computational astrophysics has shown that the standard formulation of SPH suffers from several shortcomings such as artificial surface tension and its tendency to promptly damp turbulent motions on scales much larger than the physical dissipation scale, both resulting in the suppression of mixing. In order to quantify how severe these limitations are when modeling GIs we carried out a comparison of simulations with identical initial conditions performed with the standard SPH as well as with the novel Lagrangian Meshless Finite Mass (MFM) method in the GIZMO code. We confirm the lack of mixing between the impactor and target when SPH is employed, while MFM is capable of driving vigorous sub-sonic turbulence and leads to significant mixing between the two bodies. Modern SPH variants with artificial conductivity, a different formulation of the hydro force or reduced artificial viscosity, do not improve mixing as significantly. Angular momentum is conserved similarly well in both methods, but MFM does not suffer from spurious transport induced by artificial viscosity, resulting in a slightly higher angular momentum of the proto-lunar disk. Furthermore, SPH initial conditions exhibit an unphysical density discontinuity at the core-mantle boundary which is easily removed in MFM.
0
1
0
0
0
0
New Reinforcement Learning Using a Chaotic Neural Network for Emergence of "Thinking" - "Exploration" Grows into "Thinking" through Learning -
Expectations for the emergence of higher functions are growing within the framework of end-to-end reinforcement learning using a recurrent neural network. However, the emergence of "thinking", a typical higher function, is difficult to realize because "thinking" needs non-fixed-point, flow-type attractors with both convergence and transition dynamics. Furthermore, in order to introduce "inspiration" or "discovery" into "thinking", transitions that are not completely random but nevertheless unexpected are also required. By analogy to "chaotic itinerancy", we have hypothesized that "exploration" grows into "thinking" through learning by forming flow-type attractors on chaotic random-like dynamics. It is expected that if rational dynamics are learned in a chaotic neural network (ChNN), the coexistence of rational state transitions, inspiration-like state transitions, and random-like exploration for unknown situations can be realized. Based on the above idea, we have proposed a new reinforcement learning method using a ChNN as an actor. The positioning of exploration is completely different from the conventional one: the chaotic dynamics inside the ChNN produce exploration factors by themselves. Since external random numbers for stochastic action selection are not used, exploration factors cannot be isolated from the output. Therefore, the learning method is also completely different from the conventional one. At each non-feedback connection, one variable named a causality trace takes in and maintains the input through the connection according to the change in its output. Using the trace and the TD error, the weight is updated. In this paper, as the result of a recent simple task to see whether the new learning method works or not, it is shown that a robot with two wheels and two visual sensors reaches a target while avoiding an obstacle after learning, though there is still much room for improvement.
1
0
0
0
0
0
Tensorizing Generative Adversarial Nets
Generative Adversarial Network (GAN) and its variants exhibit state-of-the-art performance in the class of generative models. To capture higher-dimensional distributions, the common learning procedure requires high computational complexity and a large number of parameters. The problem of employing such a massive framework arises when deploying it on a platform with limited computational power such as mobile phones. In this paper, we present a new generative adversarial framework by representing each layer as a tensor structure connected by multilinear operations, aiming to reduce the number of model parameters by a large factor while preserving the generative performance and sample quality. To learn the model, we employ an efficient algorithm which alternately optimizes the discriminator and the generator. Experimental outcomes demonstrate that our model can achieve a high compression rate for model parameters, up to $35$ times compared to the original GAN on the MNIST dataset.
1
0
0
1
0
0
Pretending Fair Decisions via Stealthily Biased Sampling
Fairness of decision-makers is believed to be auditable by third parties. In this study, we show that this is not always true. We consider the following scenario. Imagine a decision-maker who discloses a subset of his dataset with decisions to make his decisions auditable. If he is corrupt and deliberately selects a subset that looks fair even though the overall decisions are unfair, can we identify this decision-maker's fraud? We answer this question negatively. We first propose a sampling method that produces a subset whose distribution is biased from the original (to pretend to be fair) yet is difficult to distinguish from uniform sampling. We call such a sampling method stealthily biased sampling, which is formulated as a Wasserstein distance minimization problem and solved through a minimum-cost flow computation. We prove that stealthily biased sampling minimizes an upper bound of the distinguishability. Our experiments confirm that stealthily biased sampling is, in fact, difficult to detect.
1
0
0
1
0
0
Design of a Time Delay Reservoir Using Stochastic Logic: A Feasibility Study
This paper presents a stochastic logic time delay reservoir design. The reservoir is analyzed using a number of metrics, such as kernel quality, generalization rank, performance on simple benchmarks, and is also compared to a deterministic design. A novel re-seeding method is introduced to reduce the adverse effects of stochastic noise, which may also be implemented in other stochastic logic reservoir computing designs, such as echo state networks. Benchmark results indicate that the proposed design performs well on noise-tolerant classification problems, but more work needs to be done to improve the stochastic logic time delay reservoir's robustness for regression problems.
1
0
0
1
0
0
Polaritons in Living Systems: Modifying Energy Landscapes in Photosynthetic Organisms Using a Photonic Structure
Photosynthetic organisms rely on a series of self-assembled nanostructures with tuned electronic energy levels in order to transport energy from where it is collected by photon absorption, to reaction centers where the energy is used to drive chemical reactions. In the photosynthetic bacterium Chlorobaculum tepidum (Cba. tepidum), a member of the green sulphur bacteria (GSB) family, light is absorbed by large antenna complexes called chlorosomes. The exciton generated is transferred to a protein baseplate attached to the chlorosome, before traveling through the Fenna-Matthews-Olson (FMO) complex to the reaction center. The energy levels of these systems are generally defined by their chemical structure. Here we show that by placing bacteria within a photonic microcavity, we can access the strong exciton-photon coupling regime between a confined cavity mode and exciton states of the chlorosome, whereby a coherent exchange of energy between the bacteria and cavity mode results in the formation of polariton states. The polaritons have an energy distinct from that of the exciton and photon, and can be tuned in situ via the microcavity length. This results in real-time, non-invasive control over the relative energy levels within the bacteria. This demonstrates the ability to strongly influence living biological systems with photonic structures such as microcavities. We believe that by creating polariton states, which are in this case a superposition of a photon and excitons within living bacteria, we can modify energy transfer pathways and therefore study the importance of energy level alignment on the efficiency of photosynthetic systems.
0
1
0
0
0
0
A Matrix Expander Chernoff Bound
We prove a Chernoff-type bound for sums of matrix-valued random variables sampled via a random walk on an expander, confirming a conjecture due to Wigderson and Xiao. Our proof is based on a new multi-matrix extension of the Golden-Thompson inequality which improves in some ways the inequality of Sutter, Berta, and Tomamichel, and may be of independent interest, as well as an adaptation of an argument for the scalar case due to Healy. Secondarily, we also provide a generic reduction showing that any concentration inequality for vector-valued martingales implies a concentration inequality for the corresponding expander walk, with a weakening of parameters proportional to the squared mixing time.
1
0
0
0
0
0
Learning Neural Representations of Human Cognition across Many fMRI Studies
Cognitive neuroscience is enjoying a rapid increase in extensive public brain-imaging datasets. This opens the door to large-scale statistical models. Finding a unified perspective for all available data calls for scalable and automated solutions to an old challenge: how to aggregate heterogeneous information on brain function into a universal cognitive system that relates mental operations, cognitive processes, and psychological tasks to brain networks? We cast this challenge as a machine-learning problem: predicting conditions from statistical brain maps across different studies. For this, we leverage multi-task learning and multi-scale dimension reduction to learn low-dimensional representations of brain images that carry cognitive information and can be robustly associated with psychological stimuli. Our multi-dataset classification model achieves the best prediction performance on several large reference datasets compared to models without cognition-aware low-dimensional representations; it brings a substantial performance boost to the analysis of small datasets, and can be introspected to identify universal template cognitive concepts.
1
0
0
1
0
0
Unsupervised Body Part Regression via Spatially Self-ordering Convolutional Neural Networks
Automatic body part recognition for CT slices can benefit various medical image applications. Recent deep learning methods demonstrate promising performance, with the requirement of large amounts of labeled images for training. The intrinsic structural or superior-inferior slice ordering information in CT volumes is not fully exploited. In this paper, we propose a convolutional neural network (CNN) based Unsupervised Body part Regression (UBR) algorithm to address this problem. A novel unsupervised learning method and two inter-sample CNN loss functions are presented. Distinct from previous work, UBR builds a coordinate system for the human body and outputs a continuous score for each axial slice, representing the normalized position of the body part in the slice. The training process of UBR resembles a self-organization process: slice scores are learned from inter-slice relationships. The training samples are unlabeled CT volumes that are abundant, thus no extra annotation effort is needed. UBR is simple, fast, and accurate. Quantitative and qualitative experiments validate its effectiveness. In addition, we show two applications of UBR in network initialization and anomaly detection.
1
0
0
0
0
0
Investigating the Application of Common-Sense Knowledge-Base for Identifying Term Obfuscation in Adversarial Communication
Word obfuscation or substitution means replacing one word with another word in a sentence to conceal the textual content or communication. Word obfuscation is used in adversarial communication by terrorists or criminals for conveying their messages without getting red-flagged by security and intelligence agencies intercepting or scanning messages (such as emails and telephone conversations). ConceptNet is a freely available semantic network represented as a directed graph consisting of nodes as concepts and edges as assertions of common sense about these concepts. We present a solution approach exploiting the vast amount of semantic knowledge in ConceptNet for addressing the technically challenging problem of word substitution in adversarial communication. We frame the given problem as a textual reasoning and context inference task and utilize ConceptNet's natural-language-processing tool-kit for determining word substitution. We use ConceptNet to compute the conceptual similarity between any two given terms and define a Mean Average Conceptual Similarity (MACS) metric to identify out-of-context terms. The test-bed to evaluate our proposed approach consists of the Enron email dataset (having over 600000 emails generated by 158 employees of Enron Corporation) and the Brown corpus (totaling about a million words drawn from a wide variety of sources). We generate a test dataset using word substitution techniques from previous research and conduct a series of experiments to evaluate our approach. Experimental results reveal that the proposed approach is effective.
1
0
0
0
0
0
Titanium dioxide hole-blocking layer in ultra-thin-film crystalline silicon solar cells
One of the remaining obstacles to approaching the theoretical efficiency limit of crystalline silicon (c-Si) solar cells is the exceedingly high interface recombination loss for minority carriers at the Ohmic contacts. In ultra-thin-film c-Si solar cells, this contact recombination loss is far more severe than for traditional thick cells due to the smaller volume and higher minority carrier concentration of the former. This paper presents a novel design of an electron-passing (Ohmic) contact to n-type Si that is hole-blocking, with significantly reduced hole recombination. This contact is formed by depositing a thin titanium dioxide (TiO2) layer to form a silicon metal-insulator-semiconductor (MIS) contact. A 2 {\mu}m thick Si cell with this TiO2 MIS contact achieved an open-circuit voltage (Voc) of 645 mV, which is 10 mV higher than that of an ultra-thin cell with a metal contact. This MIS contact demonstrates a new path for ultra-thin-film c-Si solar cells to achieve efficiencies as high as those of traditional thick cells, and enables the fabrication of high-efficiency c-Si solar cells at lower cost.
0
1
0
0
0
0
Phase Space Sketching for Crystal Image Analysis based on Synchrosqueezed Transforms
Recent developments of imaging techniques enable researchers to visualize materials at the atomic resolution to better understand the microscopic structures of materials. This paper aims at automatic and quantitative characterization of potentially complicated microscopic crystal images, providing feedback to tweak theories and improve synthesis in materials science. As such, an efficient phase-space sketching method is proposed to encode microscopic crystal images in a translation, rotation, illumination, and scale invariant representation, which is also stable with respect to small deformations. Based on the phase-space sketching, we generalize our previous analysis framework for crystal images with simple structures to those with complicated geometry.
0
1
0
0
0
0
Specification tests in semiparametric transformation models - a multiplier bootstrap approach
We consider semiparametric transformation models, where after pre-estimation of a parametric transformation of the response the data are modeled by means of nonparametric regression. We suggest subsequent procedures for testing lack-of-fit of the regression function and for significance of covariables, which - in contrast to procedures from the literature - are asymptotically not influenced by the pre-estimation of the transformation. The test statistics are asymptotically pivotal and have the same asymptotic distribution as in regression models without transformation. We show the validity of a multiplier bootstrap procedure which is easier to implement and much less computationally demanding than bootstrap procedures based on the transformation model. In a simulation study we demonstrate the superior performance of the procedure in comparison with competitors from the literature.
0
0
1
1
0
0
Collusions in Teichmüller expansions
If $\mathfrak{p} \subseteq \mathbb{Z}[\zeta]$ is a prime ideal over $p$ in the $(p^d - 1)$th cyclotomic extension of $\mathbb{Z}$, then every element $\alpha$ of the completion $\mathbb{Z}[\zeta]_\mathfrak{p}$ has a unique expansion as a power series in $p$ with coefficients in $\mu_{p^d -1} \cup \{0\}$ called the Teichmüller expansion of $\alpha$ at $\mathfrak{p}$. We observe three peculiar and seemingly unrelated patterns that frequently appear in the computation of Teichmüller expansions, then develop a unifying theory to explain these patterns in terms of the dynamics of an affine group action on $\mathbb{Z}[\zeta]$.
0
0
1
0
0
0
Driver Drowsiness Estimation from EEG Signals Using Online Weighted Adaptation Regularization for Regression (OwARR)
One big challenge that hinders the transition of brain-computer interfaces (BCIs) from laboratory settings to real-life applications is the availability of high-performance and robust learning algorithms that can effectively handle individual differences, i.e., algorithms that can be applied to a new subject with zero or very little subject-specific calibration data. Transfer learning and domain adaptation have been extensively used for this purpose. However, most previous works focused on classification problems. This paper considers an important regression problem in BCI, namely, online driver drowsiness estimation from EEG signals. By integrating fuzzy sets with domain adaptation, we propose a novel online weighted adaptation regularization for regression (OwARR) algorithm to reduce the amount of subject-specific calibration data, and also a source domain selection (SDS) approach to save about half of the computational cost of OwARR. Using a simulated driving dataset with 15 subjects, we show that OwARR and OwARR-SDS can achieve significantly smaller estimation errors than several other approaches. We also provide comprehensive analyses on the robustness of OwARR and OwARR-SDS.
1
0
0
0
0
0
The complex social network of surnames: A comparison between Brazil and Portugal
We present a study of social networks based on the analysis of Brazilian and Portuguese family names (surnames). We construct networks whose nodes are names of families and whose edges represent parental relations between two families. From these networks we extract the connectivity distribution, clustering coefficient, shortest path and centrality. We find that the connectivity distribution follows an approximate power law. We associate the number of hubs, centrality and entropy to the degree of miscegenation in the societies in both countries. Our results show that Portuguese society has a higher miscegenation degree than Brazilian society. All networks analyzed lead to approximate inverse square power laws in the degree distribution. We conclude that the thermodynamic limit is reached for small networks (3 or 4 thousand nodes). The assortative mixing of all networks is negative, showing that the more connected vertices are connected to vertices with lower connectivity. Finally, the network of surnames presents some small world characteristics.
1
1
0
0
0
0
Electrical characterization of structured platinum diselenide devices
Platinum diselenide (PtSe2) is an exciting new member of the two-dimensional (2D) transition metal dichalcogenide (TMD) family. It exhibits a semimetal-to-semiconductor transition when approaching monolayer thickness and has already shown significant potential for use in device applications. Notably, PtSe2 can be grown at low temperature, making it potentially suitable for industrial usage. Here, we address thickness-dependent transport properties and investigate electrical contacts to PtSe2, a crucial and universal element of TMD-based electronic devices. PtSe2 films have been synthesized at various thicknesses and structured to allow contact engineering and the accurate extraction of electrical properties. Contact resistivity and sheet resistance extracted from transmission line method (TLM) measurements are compared for different contact metals and different PtSe2 film thicknesses. Furthermore, the transition from semimetal to semiconductor in PtSe2 has been indirectly verified by electrical characterization of field-effect devices. Finally, the influence of edge contacts at the metal-PtSe2 interface has been studied by nanostructuring the contact area using electron beam lithography. By increasing the edge contact length, the contact resistivity was improved by up to 70% compared to devices with conventional top contacts. The results presented here represent crucial steps towards realizing high-performance nanoelectronic devices based on group-10 TMDs.
0
1
0
0
0
0
Topological $\mathbb{Z}_2$ Resonating-Valence-Bond Spin Liquid on the Square Lattice
A one-parameter family of long-range resonating valence bond (RVB) states on the square lattice was previously proposed to describe a critical spin liquid (SL) phase of the spin-$1/2$ frustrated Heisenberg model. We provide evidence that this RVB state in fact also realises a topological (long-range entangled) $\mathbb{Z}_2$ SL, limited by two transitions to critical SL phases. The topological phase is naturally connected to the $\mathbb{Z}_2$ gauge symmetry of the local tensor. This work shows that, on one hand, a spin-$1/2$ topological SL with $C_{4v}$ point group symmetry and $SU(2)$ spin rotation symmetry exists on the square lattice and, on the other hand, criticality and nonbipartiteness are compatible. We also point out that strong similarities between our phase diagram and those of classical interacting dimer models suggest both can be described by similar Kosterlitz-Thouless transitions. This scenario is further supported by the analysis of the one-dimensional boundary state.
0
1
0
0
0
0
Scalable Inference for Space-Time Gaussian Cox Processes
The log-Gaussian Cox process is a flexible and popular class of point pattern models for capturing spatial and space-time dependence for point patterns. Model fitting requires approximation of stochastic integrals which is implemented through discretization over the domain of interest. With fine scale discretization, inference based on Markov chain Monte Carlo is computationally burdensome because of the cost of matrix decompositions and storage, such as the Cholesky, for high dimensional covariance matrices associated with latent Gaussian variables. This article addresses these computational bottlenecks by combining two recent developments: (i) a data augmentation strategy that has been proposed for space-time Gaussian Cox processes that is based on exact Bayesian inference and does not require fine grid approximations for infinite dimensional integrals, and (ii) a recently developed family of sparsity-inducing Gaussian processes, called nearest-neighbor Gaussian processes, to avoid expensive matrix computations. Our inference is delivered within the fully model-based Bayesian paradigm and does not sacrifice the richness of traditional log-Gaussian Cox processes. We apply our method to crime event data in San Francisco and investigate the recovery of the intensity surface.
0
0
0
1
0
0
The MISRA C Coding Standard and its Role in the Development and Analysis of Safety- and Security-Critical Embedded Software
The MISRA project started in 1990 with the mission of providing world-leading best practice guidelines for the safe and secure application of both embedded control systems and standalone software. MISRA C is a coding standard defining a subset of the C language, initially targeted at the automotive sector, but now adopted across all industry sectors that develop C software in safety- and/or security-critical contexts. In this paper, we introduce MISRA C, its role in the development of critical software, especially in embedded systems, its relevance to industry safety standards, as well as the challenges of working with a general-purpose programming language standard that is written in natural language with a slow evolution over the last 40+ years. We also outline the role of static analysis in the automatic checking of compliance with respect to MISRA C, and the role of the MISRA C language subset in enabling a wider application of formal methods to industrial software written in C.
1
0
0
0
0
0
Finger Grip Force Estimation from Video using Two Stream Approach
Estimation of hand grip force is essential for understanding the force pattern during the execution of assembly or disassembly operations. Human demonstration of the correct way of performing an operation is a powerful source of information which can be used for guided robot teaching. Typically, an instrumented approach is used to address this problem, which requires hand- or object-mounted devices and poses an inconvenience for the operator or limits the scope of addressable objects. This work demonstrates that contact force may be estimated using a noninvasive, contactless method with the help of a vision system alone. We propose a two-stream approach for video processing, which utilizes both the spatial information of each frame and the dynamic information of frame changes. In this work, image processing and machine learning techniques are used along with dense optical flow for frame change tracking, and a Kalman filter is used for stream fusion. Our studies show that the proposed method can successfully estimate contact grip force with RMSE < 10% of the sensor range (RMSE $\approx 0.2$ N); the performance of each stream and the overall method performance are reported. The proposed method has a wide range of applications, including robot teaching through demonstration, haptic force feedback, and validation of human-performed operations.
1
0
0
0
0
0
Markov Properties for Graphical Models with Cycles and Latent Variables
We investigate probabilistic graphical models that allow for both cycles and latent variables. For this we introduce directed graphs with hyperedges (HEDGes), generalizing and combining both marginalized directed acyclic graphs (mDAGs) that can model latent (dependent) variables, and directed mixed graphs (DMGs) that can model cycles. We define and analyse several different Markov properties that relate the graphical structure of a HEDG with a probability distribution on a corresponding product space over the set of nodes, for example factorization properties, structural equations properties, ordered/local/global Markov properties, and marginal versions of these. The various Markov properties for HEDGes are in general not equivalent to each other when cycles or hyperedges are present, in contrast with the simpler case of directed acyclic graphical (DAG) models (also known as Bayesian networks). We show how the Markov properties for HEDGes - and thus the corresponding graphical Markov models - are logically related to each other.
0
0
1
1
0
0
A Fast Algorithm for Solving Henderson's Mixed Model Equation
This article investigates a fast and stable method to solve Henderson's mixed model equation. The proposed algorithm is stable in that it avoids inverting a matrix of a large dimension and hence is free from the curse of dimensionality. This tactic is enabled through row operations performed on the design matrix.
0
0
0
1
0
0
Regularity and stability results for the level set flow via the mean curvature flow with surgery
In this article we use the mean curvature flow with surgery to derive regularity estimates going past Brakke regularity for the level set flow. We also show a stability result for the plane under the level set flow.
0
0
1
0
0
0
Non-canonical Conformal Attractors for Single Field Inflation
We extend the idea of conformal attractors in inflation to non-canonical sectors by developing a non-canonical conformally invariant theory from two different approaches. In the first approach, namely, ${\cal N}=1$ supergravity, the construction is more or less phenomenological, where the non-canonical kinetic sector is derived from a particular form of the Kähler potential respecting shift symmetry. In the second approach, i.e., superconformal theory, we derive the form of the Lagrangian from a superconformal action and it turns out to be exactly of the same form as in the first approach. Conformal breaking of these theories results in a new class of non-canonical models which can govern inflation with modulated shape of the T-models. We further employ this framework to explore inflationary phenomenology with a representative example and show how the form of the Kähler potential can possibly be constrained in non-canonical models using the latest confidence contour in the $n_s-r$ plane given by Planck.
0
1
0
0
0
0
Learning model-based planning from scratch
Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce the "Imagination-based Planner", the first model-based, sequential decision-making agent that can learn to construct, evaluate, and execute plans. Before any action, it can perform a variable number of imagination steps, which involve proposing an imagined action and evaluating it with its model-based imagination. All imagined actions and outcomes are aggregated, iteratively, into a "plan context" which conditions future real and imagined actions. The agent can even decide how to imagine: testing out alternative imagined actions, chaining sequences of actions together, or building a more complex "imagination tree" by navigating flexibly among the previously imagined states using a learned policy. And our agent can learn to plan economically, jointly optimizing for external rewards and computational costs associated with using its imagination. We show that our architecture can learn to solve a challenging continuous control problem, and also learn elaborate planning strategies in a discrete maze-solving task. Our work opens a new direction toward learning the components of a model-based planning system and how to use them.
1
0
0
1
0
0
A Simple Reservoir Model of Working Memory with Real Values
The prefrontal cortex is known to be involved in many high-level cognitive functions, in particular, working memory. Here, we study to what extent a group of randomly connected units (namely an Echo State Network, ESN) can store and maintain (as output) an arbitrary real value from a streamed input, i.e. can act as a sustained working memory unit. Furthermore, we explore to what extent such an architecture can take advantage of the stored value in order to produce non-linear computations. Comparison between different architectures (with and without feedback, with and without a working memory unit) shows that an explicit memory improves the performances.
0
0
0
0
1
0
Tailoring spin defects in diamond
Atomic-size spin defects in solids are unique quantum systems. Most applications require nanometer positioning accuracy, which is typically achieved by low energy ion implantation. So far, a drawback of this technique is the significant residual implantation-induced damage to the lattice, which strongly degrades the performance of spins in quantum applications. In this letter we show that the charge state of implantation-induced defects drastically influences the formation of lattice defects during thermal annealing. We demonstrate that charging of vacancies localized at e.g. individual nitrogen implantation sites suppresses the formation of vacancy complexes, resulting in a tenfold-improved spin coherence time of single nitrogen-vacancy (NV) centers in diamond. This has been achieved by confining implantation defects into the space charge layer of free carriers generated by a nanometer-thin boron-doped diamond structure. Besides, a twofold-improved yield of formation of NV centers is observed. By combining these results with numerical calculations, we arrive at a quantitative understanding of the formation and dynamics of the implanted spin defects. The presented results pave the way for improved engineering of diamond spin defect quantum devices and other solid-state quantum systems.
0
1
0
0
0
0
Relaxed Oracles for Semi-Supervised Clustering
Pairwise "same-cluster" queries are one of the most widely used forms of supervision in semi-supervised clustering. However, it is impractical to ask human oracles to answer every query correctly. In this paper, we study the influence of allowing "not-sure" answers from a weak oracle and propose an effective algorithm to handle such uncertainties in query responses. Two realistic weak oracle models are considered where ambiguity in answering depends on the distance between two points. We show that a small query complexity is adequate for effective clustering with high probability by providing better pairs to the weak oracle. Experimental results on synthetic and real data show the effectiveness of our approach in overcoming supervision uncertainties and yielding high quality clusters.
1
0
0
1
0
0
Searching for axion stars and Q-balls with a terrestrial magnetometer network
Light (pseudo-)scalar fields are promising candidates to be the dark matter in the Universe. Under certain initial conditions in the early Universe and/or with certain types of self-interactions, they can form compact dark-matter objects such as axion stars or Q-balls. Direct encounters with such objects can be searched for by using a global network of atomic magnetometers. It is shown that for a range of masses and radii not ruled out by existing observations, the terrestrial encounter rate with axion stars or Q-balls can be sufficiently high (at least once per year) for a detection. Furthermore, it is shown that a global network of atomic magnetometers is sufficiently sensitive to pseudoscalar couplings to atomic spins so that a transit through an axion star or Q-ball could be detected over a broad range of unexplored parameter space.
0
1
0
0
0
0
Measuring Information Leakage in Website Fingerprinting Attacks
Tor is an anonymity system intended to provide low-latency anonymous and uncensored network access against a local or network adversary. Because of the design choice to minimize traffic overhead (and increase the pool of potential users), Tor allows some information about the client's connections to leak in the form of packet timing. Attacks that use (features extracted from) this information to infer the website a user visits are referred to as Website Fingerprinting (WF) attacks. We develop a methodology and tools to directly measure the amount of information about a website leaked by a given set of features. We apply this tool to a comprehensive set of features extracted from a large set of websites and WF defense mechanisms, allowing us to make more fine-grained observations about WF attack and defense mechanisms.
1
0
0
0
0
0
Improving galaxy morphology with machine learning
This paper presents machine learning experiments performed over results of galaxy classification into elliptical (E) and spiral (S) with morphological parameters: concentration (CN), asymmetry metric (A3), smoothness metric (S3), entropy (H) and gradient pattern analysis parameter (GA). Except for concentration, all parameters required an image segmentation pre-processing step. For supervision and to compute confusion matrices, we used the galaxy classification from GalaxyZoo as the true label. With a dataset of 48145 objects after preprocessing (44760 galaxies labeled as S and 3385 as E), we performed experiments with Support Vector Machine (SVM) and Decision Tree (DT). With a balanced dataset of 1962 objects, we applied K-means and Agglomerative Hierarchical Clustering. All supervised experiments reached an Overall Accuracy OA >= 97%.
0
1
0
0
0
0
Magnetized strange quark model with Big Rip singularity in $f(R,T)$ gravity
An LRS (Locally Rotationally Symmetric) Bianchi type-I magnetized strange quark matter cosmological model has been studied in $f(R,T)$ gravity. The exact solutions of the field equations are derived with a linearly time-varying deceleration parameter, which is consistent with observational data (from SNIa, BAO and CMB) of standard cosmology. It is observed that the model starts with a big bang and ends with a Big Rip. The transition of the deceleration parameter from a decelerating phase to an accelerating phase with respect to redshift obtained in our model fits the recent observational data obtained by Farook et al. in 2017. The well-known Hubble parameter $H(z)$ and distance modulus $\mu(z)$ are discussed with redshift.
0
1
0
0
0
0
NPC: Neighbors Progressive Competition Algorithm for Classification of Imbalanced Data Sets
Learning from many real-world datasets is limited by a problem called the class imbalance problem. A dataset is imbalanced when one class (the majority class) has significantly more samples than the other class (the minority class). Such datasets cause typical machine learning algorithms to perform poorly on the classification task. To overcome this issue, this paper proposes a new approach, Neighbors Progressive Competition (NPC), for the classification of imbalanced datasets. Whilst the proposed algorithm is inspired by weighted k-Nearest Neighbor (k-NN) algorithms, it has major differences from them. Unlike k-NN, NPC does not limit its decision criteria to a preset number of nearest neighbors. In contrast, NPC considers progressively more neighbors of the query sample in its decision making until the sum of grades for one class is much higher than that of the other classes. Furthermore, NPC uses a novel method for grading the training samples to compensate for the imbalance issue. The grades are calculated using both local and global information. In brief, the contribution of this paper is an entirely new classifier for handling the imbalance issue effectively without any manually-set parameters or any need for expert knowledge. Experimental results compare the proposed approach with five representative algorithms applied to fifteen imbalanced datasets and illustrate the algorithm's effectiveness.
1
0
0
1
0
0
On symmetric intersecting families
A family of sets is said to be \emph{symmetric} if its automorphism group is transitive, and \emph{intersecting} if any two sets in the family have nonempty intersection. Our purpose here is to study the following question: for $n, k\in \mathbb{N}$ with $k \le n/2$, how large can a symmetric intersecting family of $k$-element subsets of $\{1,2,\ldots,n\}$ be? As a first step towards a complete answer, we prove that such a family has size at most \[\exp\left(-\frac{c(n-2k)\log n}{k( \log n - \log k)} \right) \binom{n}{k},\] where $c > 0$ is a universal constant. We also describe various combinatorial and algebraic approaches to constructing such families.
0
0
1
0
0
0
The Geometry of Limit State Function Graphs and Subset Simulation
In the last fifteen years, the subset sampling method has often been used in reliability problems as a tool for calculating small probabilities. The method extrapolates from an initial Monte Carlo estimate of the probability content of a failure domain defined by a suitably higher level of the original limit state function. Conditional probabilities are then estimated iteratively for failure domains shrinking toward the original failure domain. However, there are assumptions, not immediately obvious, about the structure of the failure domains which must be fulfilled for the method to work properly. Here, examples are studied which show that, at least in some cases, inaccurate results may be obtained if these premises are not fulfilled. For the further development of the subset sampling method it is certainly desirable to find approaches where it is possible to check that these implicit assumptions are not violated. It would probably also be important to develop further improvements of the concept to remove these limitations.
0
0
0
1
0
0
Decay of Solutions to the Maxwell Equations on Schwarzschild-de Sitter Spacetimes
In this work, we consider solutions of the Maxwell equations on the Schwarzschild-de Sitter family of black hole spacetimes. We prove that, in the static region bounded by black hole and cosmological horizons, solutions of the Maxwell equations decay to stationary Coulomb solutions at a super-polynomial rate, with decay measured according to ingoing and outgoing null coordinates. Our method employs a differential transformation of Maxwell tensor components to obtain higher-order quantities satisfying a Fackerell-Ipser equation, in the style of Chandrasekhar and the more recent work of Pasqualotto. The analysis of the Fackerell-Ipser equation is accomplished by means of the vector field method, with decay estimates for the higher-order quantities leading to decay estimates for components of the Maxwell tensor.
0
0
1
0
0
0
Machine Learning pipeline for discovering neuroimaging-based biomarkers in neurology and psychiatry
We consider a problem of diagnostic pattern recognition/classification from neuroimaging data. We propose a common data analysis pipeline for neuroimaging-based diagnostic classification problems using various ML algorithms and processing toolboxes for brain imaging. We illustrate the pipeline application by discovering new biomarkers for diagnostics of epilepsy and depression based on clinical and MRI/fMRI data for patients and healthy volunteers.
0
0
0
1
0
0
Covering and separation of Chebyshev points for non-integrable Riesz potentials
For Riesz $s$-potentials $K(x,y)=|x-y|^{-s}$, $s>0$, we investigate separation and covering properties of $N$-point configurations $\omega^*_N=\{x_1, \ldots, x_N\}$ on a $d$-dimensional compact set $A\subset \mathbb{R}^\ell$ for which the minimum of $\sum_{j=1}^N K(x, x_j)$ is maximal. Such configurations are called $N$-point optimal Riesz $s$-polarization (or Chebyshev) configurations. For a large class of $d$-dimensional sets $A$ we show that for $s>d$ the configurations $\omega^*_N$ have the optimal order of covering. Furthermore, for these sets we investigate the asymptotics as $N\to \infty$ of the best covering constant. For these purposes we compare best-covering configurations with optimal Riesz $s$-polarization configurations and determine the $s$-th root asymptotic behavior (as $s\to \infty$) of the maximal $s$-polarization constants. In addition, we introduce the notion of "weak separation" for point configurations and prove this property for optimal Riesz $s$-polarization configurations on $A$ for $s>\text{dim}(A)$, and for $d-1\leqslant s < d$ on the sphere $\mathbb{S}^d$.
0
0
1
0
0
0
The Strong Small Index Property for Free Homogeneous Structures
We show that in algebraically locally finite countable homogeneous structures with a free stationary independence relation the small index property implies the strong small index property. We use this and the main result of [15] to deduce that countable free homogeneous structures in a locally finite relational language have the strong small index property. We also exhibit new continuum sized classes of $\aleph_0$-categorical structures with the strong small index property whose automorphism groups are pairwise non-isomorphic.
0
0
1
0
0
0
The Unheralded Value of the Multiway Rendezvous: Illustration with the Production Cell Benchmark
The multiway rendezvous introduced in Theoretical CSP is a powerful paradigm to achieve synchronization and communication among a group of (possibly more than two) processes. We illustrate the advantages of this paradigm on the production cell benchmark, a model of a real metal processing plant, for which we propose a compositional software controller, which is written in LNT and LOTOS, and makes intensive use of the multiway rendezvous.
1
0
0
0
0
0
Sequential Multiple Testing
We study an online multiple testing problem where the hypotheses arrive sequentially in a stream. The test statistics are independent and assumed to have the same distribution under their respective null hypotheses. We investigate two procedures, LORD and LOND, proposed by (Javanmard and Montanari, 2015), which are proved to control the FDR in an online manner. In a certain (static) model, we show that LORD is optimal in an asymptotic sense, in particular as powerful as the (static) Benjamini-Hochberg procedure to first asymptotic order. We also quantify the performance of LOND. Some numerical experiments complement our theory.
0
0
1
1
0
0
On the selection of polynomials for the DLP algorithm
In this paper we characterize the set of polynomials $f\in\mathbb F_q[X]$ satisfying the following property: there exists a positive integer $d$ such that for any positive integer $\ell$ less than or equal to the degree of $f$, there exists $t_0$ in $\mathbb F_{q^d}$ such that the polynomial $f-t_0$ has an irreducible factor of degree $\ell$ over $\mathbb F_{q^d}[X]$. This result is then used to progress in the last step which is needed to remove the heuristic from one of the quasi-polynomial time algorithms for discrete logarithm problems (DLP) in small characteristic. Our characterization allows a construction of polynomials satisfying the desired property.
1
0
1
0
0
0
Existence of global weak solutions to the kinetic Peterlin model
We consider a class of kinetic models for polymeric fluids motivated by the Peterlin dumbbell theories for dilute polymer solutions with a nonlinear spring law for an infinitely extensible spring. The polymer molecules are suspended in an incompressible viscous Newtonian fluid confined to a bounded domain in two or three space dimensions. The unsteady motion of the solvent is described by the incompressible Navier-Stokes equations with the elastic extra stress tensor appearing as a forcing term in the momentum equation. The elastic stress tensor is defined by the Kramers expression through the probability density function that satisfies the corresponding Fokker-Planck equation. In this case, a coefficient depending on the average length of polymer molecules appears in the latter equation. Following the recent work of Barrett and Süli we prove the existence of global-in-time weak solutions to the kinetic Peterlin model in two space dimensions.
0
0
1
0
0
0
Passivation and Cooperative Control of Equilibrium-Independent Passivity-Short Systems
Maximal equilibrium-independent passivity (MEIP) is a recently introduced system property which has acquired special attention in the study of networked dynamical systems. MEIP requires a system to be passive with respect to any forced equilibrium configuration, and the associated steady-state input-output map must be maximally monotone. In practice, however, many systems are not well behaved and exhibit a shortage of passivity, or non-passiveness, in their operation. In this paper, we consider a class of passivity-short systems, namely equilibrium-independent passivity-short (EIPS) systems, and present an input-output transformation based generalized passivation approach to ensure their MEIP properties. We characterize the steady-state input-output relations of the EIPS systems and establish their connection with those of the transformed MEIP systems. We further study the diffusively-coupled networked interactions of such EIPS systems and explore their connection to a pair of dual network optimization problems, under the proposed matrix transformation. A simulation example is given to illustrate the theoretical results.
1
0
0
0
0
0
OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing
The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.
1
1
0
0
0
0
Magnetic order and spin dynamics across a ferromagnetic quantum critical point: $μ$SR investigations of YbNi$_4$(P$_{1-x}$As$_x$)$_2$
In the quasi-1D heavy-fermion system YbNi$_4$(P$_{1-x}$As$_x$)$_2$ the presence of a ferromagnetic (FM) quantum critical point (QCP) at $x_c$ $\approx 0.1$ with unconventional quantum critical exponents in the thermodynamic properties has been recently reported. Here, we present muon-spin relaxation ($\mu$SR) experiments on polycrystals of this series to study the magnetic order and the low energy 4$f$-electronic spin dynamics across the FM QCP. The zero field $\mu$SR measurements on pure YbNi$_4$P$_2$ proved static long range magnetic order and suggested a strongly reduced ordered Yb moment of about 0.04$\mu_B$. With increasing As substitution the ordered moment is reduced by half at $x = 0.04$ and to less than 0.005 $\mu_B$ at $x=0.08$. The dynamic behavior of the $\mu$SR response shows that the magnetism remains homogeneous upon As substitution, without evidence of disorder effects. In the paramagnetic state across the FM QCP the dynamic muon-spin relaxation rate follows 1/$T_{1}T\propto T^{-n}$ with $1.01 \pm 0.04 \leq n \leq 1.13 \pm 0.06$. The critical fluctuations are very slow and become even slower when approaching the QCP.
0
1
0
0
0
0
Pixel-Level Statistical Analyses of Prescribed Fire Spread
Wildland fire dynamics is a complex, turbulent process. Cellular automata (CA) is an efficient tool to predict fire dynamics, but the main parameters of the method are challenging to estimate. To overcome this challenge, we compute statistical distributions of the key parameters of a CA model using infrared images from controlled burns. Moreover, we apply this analysis to different spatial scales and compare the experimental results to a simple statistical model. By performing this analysis and making this comparison, several capabilities and limitations of CA are revealed.
0
1
0
0
0
0
Deep Reinforcement Learning for Swarm Systems
Recently, deep reinforcement learning (RL) methods have been applied successfully to multi-agent scenarios. Typically, these methods rely on a concatenation of agent states to represent the information content required for decentralized decision making. However, concatenation scales poorly to swarm systems with a large number of homogeneous agents as it does not exploit the fundamental properties inherent to these systems: (i) the agents in the swarm are interchangeable and (ii) the exact number of agents in the swarm is irrelevant. Therefore, we propose a new state representation for deep multi-agent RL based on mean embeddings of distributions. We treat the agents as samples of a distribution and use the empirical mean embedding as input for a decentralized policy. We define different feature spaces of the mean embedding using histograms, radial basis functions and a neural network learned end-to-end. We evaluate the representation on two well known problems from the swarm literature (rendezvous and pursuit evasion), in a globally and locally observable setup. For the local setup we furthermore introduce simple communication protocols. Of all approaches, the mean embedding representation using neural network features enables the richest information exchange between neighboring agents facilitating the development of more complex collective strategies.
1
0
0
1
0
0
Predicting computational reproducibility of data analysis pipelines in large population studies using collaborative filtering
Evaluating the computational reproducibility of data analysis pipelines has become a critical issue. It is, however, a cumbersome process for analyses that involve data from large populations of subjects, due to their computational and storage requirements. We present a method to predict the computational reproducibility of data analysis pipelines in large population studies. We formulate the problem as a collaborative filtering process, with constraints on the construction of the training set. We propose 6 different strategies to build the training set, which we evaluate on 2 datasets: a synthetic one modeling a population with a growing number of subject types, and a real one obtained with neuroinformatics pipelines. Results show that one sampling method, "Random File Numbers (Uniform)", is able to predict computational reproducibility with good accuracy. We also analyze the relevance of including file and subject biases in the collaborative filtering model. We conclude that the proposed method is able to speed up reproducibility evaluations substantially, with limited accuracy loss.
0
0
0
1
0
0
Experiment Segmentation in Scientific Discourse as Clause-level Structured Prediction using Recurrent Neural Networks
We propose a deep learning model for identifying structure within experiment narratives in scientific literature. We take a sequence labeling approach to this problem, and label clauses within experiment narratives to identify the different parts of the experiment. Our dataset consists of paragraphs taken from open access PubMed papers labeled with rhetorical information as a result of our pilot annotation. Our model is a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) cells that labels clauses. The clause representations are computed by combining word representations using a novel attention mechanism that involves a separate RNN. We compare this model against LSTMs where the input layer has simple or no attention and a feature rich CRF model. Furthermore, we describe how our work could be useful for information extraction from scientific literature.
1
0
0
0
0
0
Information-Theoretic Analysis of Refractory Effects in the P300 Speller
The P300 speller is a brain-computer interface that enables people with neuromuscular disorders to communicate based on eliciting event-related potentials (ERP) in electroencephalography (EEG) measurements. One challenge to reliable communication is the presence of refractory effects in the P300 ERP that induces temporal dependence in the user's EEG responses. We propose a model for the P300 speller as a communication channel with memory. By studying the maximum information rate on this channel, we gain insight into the fundamental constraints imposed by refractory effects. We construct codebooks based on the optimal input distribution, and compare them to existing codebooks in literature.
1
0
1
0
0
0
Growth, Industrial Externality, Prospect Dynamics and Well-being on Markets
Functions, or 'functionings', make it possible to give a structure to any economic activity, whether they are used to describe a good or a service exchanged on a market or they constitute the capability of an agent to provide the labor market with specific work and skills. That structure encompasses the basic law of supply and demand, the conditions of growth within a transaction, and the control of inflation. Functional requirements can be followed from the design of a product to the delivery of a solution to customer needs, with different levels of externalities, while value is created by integrating organizational and technical constraints and a budget is allocated to the various entities of the firm involved in the production. Entering the market through that structure leads to designing basic equations of its dynamics and to finding canonical solutions out of particular equilibria. This approach makes it possible to tackle the behavioral foundations of Prospect Theory within a generalization of its probability weighting function, turned into an operator which applies to Western, Educated, Industrialized, Rich, and Democratic societies as well as to the poorest ones. The nature of reality and well-being then appears closely related to the relative satisfaction reached on the market, as it can be conceived by an agent, according to business cycles. This reality is the result of the complementary systems that govern the human mind, as structured by rational psychologists.
0
0
0
0
0
1
Structure Preserving Model Reduction of Parametric Hamiltonian Systems
While reduced-order models (ROMs) have been popular for efficiently solving large systems of differential equations, the stability of reduced models over long-time integration remains a present challenge. We present a greedy approach for ROM generation of parametric Hamiltonian systems that captures the symplectic structure of Hamiltonian systems to ensure stability of the reduced model. Through the greedy selection of basis vectors, two new vectors are added at each iteration to the linear vector space to increase the accuracy of the reduced basis. We use the error in the Hamiltonian due to model reduction as an error indicator to search the parameter space and identify the next best basis vectors. Under natural assumptions on the set of all solutions of the Hamiltonian system under variation of the parameters, we show that the greedy algorithm converges with exponential rate. Moreover, we demonstrate that combining the greedy basis with the discrete empirical interpolation method also preserves the symplectic structure. This enables the reduction of the computational cost for nonlinear Hamiltonian systems. The efficiency, accuracy, and stability of this model reduction technique are illustrated through simulations of the parametric wave equation and the parametric Schrodinger equation.
0
0
1
0
0
0
Consensus measure of rankings
A ranking is an ordered sequence of items, in which an item with a higher ranking score is preferred to items with lower ranking scores. In many information systems, rankings are widely used to represent the preferences over a set of items or candidates. The consensus measure of rankings is the problem of how to evaluate the degree to which the rankings agree. The consensus measure can be used to evaluate rankings in many information systems, as quite often no ground truth is available for evaluation. This paper introduces a novel approach for the consensus measure of rankings by using a graph representation, in which the vertices or nodes are the items and the edges are the relationships of items in the rankings. Such a representation leads to various algorithms for consensus measure in terms of different aspects of rankings, including the number of common patterns, the number of common patterns with fixed length and the length of the longest common patterns. The proposed measure can be adopted for various types of rankings, such as full rankings, partial rankings and rankings with ties. This paper demonstrates how the proposed approaches can be used to evaluate the quality of rank aggregation and the quality of top-$k$ rankings from Google and Bing search engines.
1
0
0
0
0
0
Possible resonance effect of dark matter axions in SNS Josephson junctions
Dark matter axions can generate peculiar effects in special types of Josephson junctions, so-called SNS junctions. One can show that the axion field equations in a Josephson environment allow for very small oscillating supercurrents, which manifest themselves as a tiny wiggle in the I-V curve, a so-called Shapiro step, which occurs at a frequency given by the axion mass. The effect is very small but perfectly measurable in modern nanotechnological devices. In this paper I will summarize the theory and then present evidence that candidate Shapiro steps of this type have indeed been seen in several independent condensed matter experiments. Assuming the observed tiny Shapiro steps are due to axion flow then these data point to an axion mass of $(106 \pm 6)\mu$eV, consistent with what is expected for the QCD axion. In addition to the above small Shapiro resonance effects at frequencies in the GHz region one also expects to see broad-band noise effects at much lower frequencies. Overall this approach provides a novel pathway for the future design of new types of axionic dark matter detectors. The resonant Josephson data summarized in this paper are consistent with a 'vanilla' axion with a coupling constant $f_a=\sqrt{v_{EW}m_{Pl}}=5.48 \cdot 10^{10}$GeV given by the geometric average of the electroweak symmetry breaking scale $v_{EW}$ and the Planck mass $m_{Pl}$.
0
1
0
0
0
0
A Composition Theorem for Randomized Query Complexity
Let the randomized query complexity of a relation for error probability $\epsilon$ be denoted by $R_\epsilon(\cdot)$. We prove that for any relation $f \subseteq \{0,1\}^n \times \mathcal{R}$ and Boolean function $g:\{0,1\}^m \rightarrow \{0,1\}$, $R_{1/3}(f\circ g^n) = \Omega(R_{4/9}(f)\cdot R_{1/2-1/n^4}(g))$, where $f \circ g^n$ is the relation obtained by composing $f$ and $g$. We also show that $R_{1/3}\left(f \circ \left(g^\oplus_{O(\log n)}\right)^n\right)=\Omega(\log n \cdot R_{4/9}(f) \cdot R_{1/3}(g))$, where $g^\oplus_{O(\log n)}$ is the function obtained by composing the xor function on $O(\log n)$ bits and $g^t$.
1
0
0
0
0
0
Soliton solutions for the elastic metric on spaces of curves
In this article we investigate a first order reparametrization-invariant Sobolev metric on the space of immersed curves. Motivated by applications in shape analysis where discretizations of this infinite-dimensional space are needed, we extend this metric to the space of Lipschitz curves, establish the wellposedness of the geodesic equation thereon, and show that the space of piecewise linear curves is a totally geodesic submanifold. Thus, piecewise linear curves are natural finite elements for the discretization of the geodesic equation. Interestingly, geodesics in this space can be seen as soliton solutions of the geodesic equation, which were not known to exist for reparametrization-invariant Sobolev metrics on spaces of curves.
0
0
1
0
0
0
Influence of Personal Preferences on Link Dynamics in Social Networks
We study a unique network dataset including periodic surveys and electronic logs of dyadic contacts via smartphones. The participants were a sample of freshmen entering university in the Fall 2011. Their opinions on a variety of political and social issues and lists of activities on campus were regularly recorded at the beginning and end of each semester for the first three years of study. We identify a behavioral network defined by call and text data, and a cognitive network based on friendship nominations in ego-network surveys. Both networks are limited to study participants. Since a wide range of attributes on each node were collected in self-reports, we refer to these networks as attribute-rich networks. We study whether student preferences for certain attributes of friends can predict formation and dissolution of edges in both networks. We introduce a method for computing student preferences for different attributes which we use to predict link formation and dissolution. We then rank these attributes according to their importance for making predictions. We find that personal preferences, in particular political views, and preferences for common activities help predict link formation and dissolution in both the behavioral and cognitive networks.
1
0
0
0
0
0
PBW bases and marginally large tableaux in types B and C
We explicitly describe the isomorphism between two combinatorial realizations of Kashiwara's infinity crystal in types B and C. The first realization is in terms of marginally large tableaux and the other is in terms of Kostant partitions coming from PBW bases. We also discuss a stack notation for Kostant partitions which simplifies that realization.
0
0
1
0
0
0
Ensemble Clustering for Graphs
We propose an ensemble clustering algorithm for graphs (ECG), which is based on the Louvain algorithm and the concept of consensus clustering. We validate our approach by replicating a recently published study comparing graph clustering algorithms over artificial networks, showing that ECG outperforms the leading algorithms from that study. We also illustrate how the ensemble obtained with ECG can be used to quantify the presence of community structure in the graph.
1
0
0
1
0
0
Testing for Feature Relevance: The HARVEST Algorithm
Feature selection with high-dimensional data and a very small proportion of relevant features poses a severe challenge to standard statistical methods. We have developed a new approach (HARVEST) that is straightforward to apply, albeit somewhat computer-intensive. This algorithm can be used to pre-screen a large number of features to identify those that are potentially useful. The basic idea is to evaluate each feature in the context of many random subsets of other features. HARVEST is predicated on the assumption that an irrelevant feature can add no real predictive value, regardless of which other features are included in the subset. Motivated by this idea, we have derived a simple statistical test for feature relevance. Empirical analyses and simulations produced so far indicate that the HARVEST algorithm is highly effective in predictive analytics, both in science and business.
0
0
0
1
0
0
Energy Harvesting Enabled MIMO Relaying through Power Splitting
This paper considers a multiple-input multiple-output (MIMO) relay system with an energy harvesting relay node. All nodes are equipped with multiple antennas, and the relay node depends on the energy harvested from the received signal to support information forwarding. In particular, the relay node deploys a power-splitting-based energy harvesting scheme. The capacity maximization problem subject to power constraints at both the source and relay nodes is considered for both the fixed source covariance matrix and optimal source covariance matrix cases. Instead of using existing software solvers, iterative approaches using the dual decomposition technique are developed based on the structures of the optimal relay precoding and source covariance matrices. Simulation results demonstrate the performance gain of the joint optimization over the fixed source covariance matrix case.
1
0
0
0
0
0
X-ray diagnostics of massive star winds
Observations with powerful X-ray telescopes, such as XMM-Newton and Chandra, significantly advance our understanding of massive stars. Nearly all early-type stars are X-ray sources. Studies of their X-ray emission provide important diagnostics of stellar winds. High-resolution X-ray spectra of O-type stars are well explained when stellar wind clumping is taken into account, providing further support to a modern picture of stellar winds as non-stationary, inhomogeneous outflows. X-ray variability is detected from such winds, on time scales likely associated with stellar rotation. High-resolution X-ray spectroscopy indicates that the winds of late O-type stars are predominantly in a hot phase. Consequently, X-rays provide the best observational window to study these winds. X-ray spectroscopy of evolved, Wolf-Rayet type, stars allows us to probe their powerful metal-enhanced winds, while the mechanisms responsible for the X-ray emission of these stars are not yet understood.
0
1
0
0
0
0
Local asymptotic properties for Cox-Ingersoll-Ross process with discrete observations
In this paper, we consider a one-dimensional Cox-Ingersoll-Ross (CIR) process whose drift coefficient depends on unknown parameters. Considering the process discretely observed at high frequency, we prove the local asymptotic normality property in the subcritical case, the local asymptotic quadraticity in the critical case, and the local asymptotic mixed normality property in the supercritical case. To obtain these results, we use the Malliavin calculus techniques developed recently for the CIR process together with the $L^p$-norm estimation for positive and negative moments of the CIR process. In this study, we require the same conditions of high frequency $\Delta_n\rightarrow 0$ and infinite horizon $n\Delta_n\rightarrow\infty$ as in the case of ergodic diffusions with globally Lipschitz coefficients studied earlier by Gobet \cite{G02}. However, in the non-ergodic cases, additional assumptions on the decreasing rate of $\Delta_n$ are required due to the fact that the square root diffusion coefficient of the CIR process is not regular enough. Indeed, we assume $\frac{n\Delta_n^{\frac{3}{2}}}{\log(n\Delta_n)}\to 0$ for the critical case and $n\Delta_n^2\to 0$ for the supercritical case.
0
0
1
1
0
0
JHelioviewer - Time-dependent 3D visualisation of solar and heliospheric data
Context. Solar observatories are providing the world-wide community with a wealth of data, covering large time ranges, multiple viewpoints, and returning large amounts of data. In particular, the large volume of SDO data presents challenges: it is available only from a few repositories, and full-disk, full-cadence data for reasonable durations of scientific interest are difficult to download in practice due to their size and the data rates available to most users. From a scientist's perspective this poses three problems: accessing, browsing and finding interesting data as efficiently as possible. Aims. To address these challenges, we have developed JHelioviewer, a visualisation tool for solar data based on the JPEG2000 compression standard and part of the open source ESA/NASA Helioviewer Project. Since the first release of JHelioviewer, the scientific functionality of the software has been extended significantly, and the objective of this paper is to highlight these improvements. Methods. The JPEG2000 standard offers useful new features that facilitate the dissemination and analysis of high-resolution image data and offers a solution to the challenge of efficiently browsing petabyte-scale image archives. The JHelioviewer software is open source, platform independent and extendable via a plug-in architecture. Results. With JHelioviewer, users can visualise the Sun for any time period between September 1991 and today. They can perform basic image processing in real time, track features on the Sun and interactively overlay magnetic field extrapolations. The software integrates solar event data and a time line display. As a first step towards supporting science planning of the upcoming Solar Orbiter mission, JHelioviewer offers a virtual camera model that enables users to set the vantage point to the location of a spacecraft or celestial body at any given time.
0
1
0
0
0
0
Topological Dirac Nodal-net Fermions in AlB$_2$-type TiB$_2$ and ZrB$_2$
Based on first-principles calculations and effective model analysis, a Dirac nodal-net semimetal state is recognized in AlB$_2$-type TiB$_2$ and ZrB$_2$ when spin-orbit coupling (SOC) is ignored. Taking TiB$_2$ as an example, there are several topological excitations in this nodal-net structure, including triple points, nexuses, and nodal links, which are protected by the coexistence of spatial-inversion symmetry and time-reversal symmetry. This nodal-net state is remarkably different from that of IrF$_4$, which requires sublattice chiral symmetry. In addition, linearly and quadratically dispersed two-dimensional surface Dirac points are identified as having emerged on the B-terminated and Ti-terminated (001) surfaces of TiB$_2$, respectively, which are analogous to those of monolayer and bilayer graphene.
0
1
0
0
0
0
Model-based Design Evaluation of a Compact, High-Efficiency Neutron Scatter Camera
This paper presents the model-based design and evaluation of an instrument that estimates incident neutron direction using the kinematics of neutron scattering by hydrogen-1 nuclei in an organic scintillator. The instrument design uses a single, nearly contiguous volume of organic scintillator that is internally subdivided only as necessary to create optically isolated pillars. Scintillation light emitted in a given pillar is confined to that pillar by a combination of total internal reflection and a specular reflector applied to the four sides of the pillar transverse to its long axis. The scintillation light is collected at each end of the pillar using a photodetector. In this optically segmented design, the (x, y) position of scintillation light emission (where the x and y coordinates are transverse to the long axis of the pillars) is estimated as the pillar's (x, y) position in the scintillator "block", and the z-position (the position along the pillar's long axis) is estimated from the amplitude and relative timing of the signals produced by the photodetectors at each end of the pillar. For proton recoils greater than 1 MeV, we show that the (x, y, z)-position of neutron-proton scattering can be estimated with < 1 cm root-mean-squared [RMS] error and the proton recoil energy can be estimated with < 50 keV RMS error by fitting the photodetectors' response time history to models of optical photon transport within the scintillator pillars. Finally, we evaluate several alternative designs of this proposed single-volume scatter camera made of pillars of plastic scintillator (SVSC-PiPS), studying the effect of pillar dimensions, scintillator material, and photodetector response vs. time. Specifically, we conclude that an SVSC-PiPS constructed using EJ-204 and an MCP-PM will produce the most precise estimates of incident neutron direction and energy.
0
1
0
0
0
0
Galactic Pal-eontology: Abundance Analysis of the Disrupting Globular Cluster Palomar 5
We present a chemical abundance analysis of the tidally disrupted globular cluster (GC) Palomar 5. By co-adding high-resolution spectra of 15 member stars from the cluster's main body, taken at low signal-to-noise with the Keck/HIRES spectrograph, we were able to measure integrated abundance ratios of 24 species of 20 elements including all major nucleosynthetic channels (namely the light element Na; $\alpha$-elements Mg, Si, Ca, Ti; Fe-peak and heavy elements Sc, V, Cr, Mn, Co, Ni, Cu, Zn; and the neutron-capture elements Y, Zr, Ba, La, Nd, Sm, Eu). The mean metallicity of $-1.56\pm0.02\pm0.06$ dex (statistical and systematic errors) agrees well with the values from low-resolution measurements of individual stars, but it is lower than previous high-resolution results for a small number of stars in the literature. Comparison with Galactic halo stars and other disrupted and unperturbed GCs renders Pal~5 a typical representative of the Milky Way halo population, as has been noted before, emphasizing that the early chemical evolution of such clusters is decoupled from their later dynamical history. We also performed a test of the detectability of light element variations with this co-added abundance analysis technique and found that this approach is not sensitive even in the presence of a broad range in sodium of $\sim$0.6 dex, a value typically found in old halo GCs. Thus, while methods of determining the global abundance patterns of such objects are well suited to study their overall enrichment histories, the chemical distinction of their multiple stellar populations is still best obtained from measurements of individual stars.
0
1
0
0
0
0
Anisotropic mechanical and optical response and negative Poisson's ratio in Mo2C nanomembranes revealed by first-principles simulations
Transition metal carbides include a wide variety of materials with attractive properties that are suitable for numerous and diverse applications. A most recent experimental advance could provide a path toward the successful synthesis of large-area and high-quality ultrathin Mo2C membranes with superconducting properties. In the present study, we used first-principles density functional theory calculations to explore the mechanical and optical response of single-layer and free-standing Mo2C. Uniaxial tensile simulations along the armchair and zigzag directions were conducted, and we found that while the elastic properties are close along the various loading directions, the nonlinear regimes in the stress-strain curves are considerably different. We found that Mo2C sheets present a negative Poisson's ratio and thus can be categorized as an auxetic material. Our simulations also reveal that Mo2C films retain their metallic electronic character upon uniaxial loading. We found that for Mo2C nanomembranes the dielectric function becomes anisotropic along the in-plane and out-of-plane directions. Our findings can be useful for the practical application of Mo2C sheets in nanodevices.
0
1
0
0
0
0