Dataset schema: title (string, length 7-239); abstract (string, length 7-2.76k); cs, phy, math, stat, quantitative biology, quantitative finance (int64 labels, values 0-1).
Alliance formation with exclusion in the spatial public goods game
Detecting defection and alerting partners to possible danger could be essential to avoid being exploited. This act, however, may require a huge individual effort from those who take on this job, hence such a strategy seems unfavorable. Structured populations, however, can provide an opportunity for a largely unselfish excluder strategy to form an effective alliance with other cooperative strategies and thereby sweep out defection. Interestingly, this alliance functions even at an extremely high cost of exclusion, where the sole application of an exclusion strategy would otherwise be harmful. These results may explain why the emergence of extreme selflessness is not necessarily against individual selection but could be the result of an evolutionary process.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fast and In Sync: Periodic Swarm Patterns for Quadrotors
This paper aims to design quadrotor swarm performances, where the swarm acts as an integrated, coordinated unit embodying moving and deforming objects. We divide the task of creating a choreography into three basic steps: designing swarm motion primitives, transitioning between those movements, and synchronizing the motion of the drones. The result is a flexible framework for designing choreographies comprised of a wide variety of motions. The motion primitives can be intuitively designed using few parameters, providing a rich library for choreography design. Moreover, we combine and adapt existing goal assignment and trajectory generation algorithms to maximize the smoothness of the transitions between motion primitives. Finally, we propose a correction algorithm to compensate for motion delays and synchronize the motion of the drones to a desired periodic motion pattern. The proposed methodology was validated experimentally by generating and executing choreographies on a swarm of 25 quadrotors.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Nonparametric estimation of locally stationary Hawkes processes
In this paper we consider multivariate Hawkes processes with baseline hazard and kernel functions that depend on time. This defines a class of locally stationary processes. We discuss estimation of the time-dependent baseline hazard and kernel functions based on a localized criterion. Theory on stationary Hawkes processes is extended to develop asymptotic theory for the estimator in the locally stationary model.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Comparison of Flow Scheduling Policies for Mix of Regular and Deadline Traffic in Datacenter Environments
Datacenters are the main infrastructure on top of which cloud computing services are offered. Such infrastructure may be shared by a large number of tenants and applications generating a spectrum of datacenter traffic. Delay-sensitive applications and applications with specific Service Level Agreements (SLAs) generate deadline-constrained flows, while other applications initiate flows that are desired to be delivered as early as possible. As a result, datacenter traffic is a mix of two types of flows: deadline and regular. There are several scheduling policies for either traffic type, focused on minimizing completion times or deadline miss rate. In this report, we apply several scheduling policies to the mixed traffic scenario while varying the ratio of regular to deadline traffic. We consider FCFS (First Come First Serve), SRPT (Shortest Remaining Processing Time) and Fair Sharing as deadline-agnostic approaches, and a combination of Earliest Deadline First (EDF) with either FCFS or SRPT as deadline-aware schemes. In addition, for the latter, we consider both cases of prioritizing deadline traffic (Deadline First) and prioritizing regular traffic (Deadline Last). We study both light-tailed and heavy-tailed flow size distributions and measure mean, median and tail flow completion times (FCT) for regular flows, along with Deadline Miss Rate (DMR) and average lateness for deadline flows. We also consider two operating regimes: lightly-loaded (low utilization) and heavily-loaded (high utilization). We find that the performance of deadline-aware schemes is highly dependent on the fraction of deadline traffic. With light-tailed flow sizes, we find that FCFS performs better in terms of tail times and average lateness, while SRPT performs better in terms of average times and deadline miss rate. For heavy-tailed flow sizes, except for tail times, SRPT performs better in all other metrics.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
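The FCFS/SRPT comparison in the abstract above lends itself to a toy illustration. Below is a minimal single-link sketch (ours, not the report's simulator), assuming unit link capacity, flows given as (arrival, size) pairs, and flow completion time (FCT) measured as completion minus arrival; FCFS is non-preemptive, SRPT preemptive.

```python
import heapq

def fcfs(flows):
    """Non-preemptive FCFS on a unit-capacity link; returns per-flow FCTs."""
    t, fct = 0.0, []
    for arrival, size in sorted(flows):
        t = max(t, arrival) + size
        fct.append(t - arrival)
    return fct

def srpt(flows):
    """Preemptive shortest-remaining-processing-time on the same link."""
    flows = sorted(flows)
    n, i, t, fct = len(flows), 0, 0.0, []
    active = []  # heap of [remaining, arrival]
    while len(fct) < n:
        if not active:
            t = max(t, flows[i][0])          # idle until the next arrival
        while i < n and flows[i][0] <= t:    # admit all arrivals up to t
            heapq.heappush(active, [flows[i][1], flows[i][0]])
            i += 1
        horizon = flows[i][0] if i < n else float("inf")
        rem, arr = heapq.heappop(active)     # serve the shortest remaining flow
        run = min(rem, horizon - t)          # until it finishes or a flow arrives
        t, rem = t + run, rem - run
        if rem <= 1e-12:
            fct.append(t - arr)
        else:
            heapq.heappush(active, [rem, arr])
    return fct

flows = [(0, 5), (1, 1), (1.5, 1)]
print(sorted(fcfs(flows)), sorted(srpt(flows)))  # SRPT lowers the mean FCT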
Getting around the Halting Problem
The Halting Theorem establishes that there is no program (or Turing machine) H that can decide in all cases whether an arbitrary program n halts on input m. The conjecture of this paper is that there nevertheless exists a sound program H such that, if it halts, it answers either yes or no, and can also, in a certain sense, identify all the cases it is unable to decide. The Halting Theorem can be proved by constructing a counterexample, i.e. a program that attempts to assert that it itself does not halt. The thesis is that there exists a program that proves about itself that its own attempt to prove that the counterexample does not halt, does not halt. This outcome can be interpreted as: it is NOT TRUE that the counterexample does not halt, as opposed to: it is FALSE that the counterexample does not halt. This becomes possible when the Recursion Theorem is reinterpreted as mutual necessitation rather than equivalence.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Molecular Beam Epitaxy Growth of [CrGe/MnGe/FeGe] Superlattices: Toward Artificial B20 Skyrmion Materials with Tunable Interactions
Skyrmions are localized magnetic spin textures whose stability has been shown theoretically to depend on material parameters including bulk Dresselhaus spin orbit coupling (SOC), interfacial Rashba SOC, and magnetic anisotropy. Here, we establish the growth of a new class of artificial skyrmion materials, namely B20 superlattices, where these parameters could be systematically tuned. Specifically, we report the successful growth of B20 superlattices comprised of single crystal thin films of FeGe, MnGe, and CrGe on Si(111) substrates. Thin films and superlattices are grown by molecular beam epitaxy and are characterized through a combination of reflection high energy electron diffraction, x-ray diffraction, and cross-sectional scanning transmission electron microscopy (STEM). X-ray energy dispersive spectroscopy (XEDS) distinguishes layers by elemental mapping and indicates good interface quality with relatively low levels of intermixing in the [CrGe/MnGe/FeGe] superlattice. This demonstration of epitaxial, single-crystalline B20 superlattices is a significant advance toward tunable skyrmion systems for fundamental scientific studies and applications in magnetic storage and logic.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A comparative study of different exchange-correlation functionals in understanding structural, electronic and thermoelectric properties of Fe$_{2}$VAl and Fe$_{2}$TiSn compounds
Fe$_{2}$VAl and Fe$_{2}$TiSn are full Heusler compounds with a non-magnetic ground state. The two compounds are good thermoelectric materials. PBE and LDA (PW92) are the two most commonly used density functionals for studying Heusler compounds. Along with these two well-studied exchange-correlation functionals, the recently developed PBEsol, mBJ and SCAN functionals are employed to study the two compounds. Using the five functionals, the equilibrium lattice parameter and bulk modulus are calculated. The obtained values are compared with experimental reports wherever available. Electronic structure properties are studied by calculating dispersion curves and total and partial densities of states. For Fe$_{2}$VAl, a band gap of 0.22 eV is obtained from the mBJ potential, in reasonable agreement with the experimental value, while for Fe$_{2}$TiSn a band gap of 0.68 eV is obtained. Fe$_{2}$VAl is predicted to be semimetallic, with different values of the negative gap, by the LDA, PBEsol, PBE and SCAN functionals, whereas Fe$_{2}$TiSn is found to be semimetallic (semiconducting) in calculations employing the LDA and PBEsol (PBE and SCAN) functionals. From the dispersion curves, effective mass values are also computed to assess the contribution to the Seebeck coefficient. In Fe$_{2}$TiSn, a flat band is present along the $\Gamma$-X direction, with a calculated effective mass $\sim$36 times the electron mass. The improvements or inadequacies of the functionals in explaining the properties of full Heusler alloys for thermoelectric applications are thus observed through this study.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
On a problem of Pillai with Fibonacci numbers and powers of 2
In this paper, we find all integers c having at least two representations as a difference between a Fibonacci number and a power of 2.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
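The statement above invites a quick computational companion: a bounded brute-force search for integers c with at least two representations c = F_n - 2^m. This only suggests candidates within the search bounds; the paper's result is a proof covering all integers, which no finite search can replace.

```python
from collections import defaultdict

def pillai_candidates(num_fibs=60, max_pow=60):
    """Find integers c with >= 2 representations c = F - 2^m within bounds.
    Uses the distinct Fibonacci numbers 1, 2, 3, 5, ... (one copy of 1),
    so the duplicated F_1 = F_2 = 1 does not create spurious repetitions."""
    fibs = [1, 2]
    while len(fibs) < num_fibs:
        fibs.append(fibs[-1] + fibs[-2])
    reps = defaultdict(set)
    for f in fibs:
        for m in range(max_pow):
            reps[f - 2 ** m].add((f, m))
    return {c: sorted(r) for c, r in reps.items() if len(r) >= 2}

for c, r in sorted(pillai_candidates().items())[:5]:
    print(c, r)
```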
Evidence from web-based dietary search patterns for the role of B12 deficiency in chronic pain
Profound vitamin B12 deficiency is a known cause of disease, but the role of low or intermediate levels of B12 in the development of neuropathy and other neuropsychiatric symptoms, as well as the relationship between eating meat and B12 levels, is unclear. Here we use food-related internet search patterns from a sample of 8.5 million US-based people as a proxy for B12 intake and correlate these searches with internet searches related to possible effects of B12 deficiency. Food-related search patterns are highly correlated with known consumption and food-related searches (Spearman 0.69). Awareness of B12 deficiency was associated with a higher consumption of B12-rich foods and with queries for B12 supplements. Searches for terms related to neurological disorders were correlated with searches for B12-poor foods, in contrast with control terms. Popular medicines, those having fewer indications, and those which are predominantly used to treat pain are more strongly correlated with the ability to predict neuropathic pain queries using the B12 contents of food. Our findings provide evidence for the utility of using Internet search patterns to investigate health questions in large populations and suggest that low B12 intake may be associated with a broader spectrum of neurological disorders than currently appreciated.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Non-Uniform Attacks Against Pseudoentropy
De, Trevisan and Tulsiani [CRYPTO 2010] show that every distribution over $n$-bit strings which has constant statistical distance to uniform (e.g., the output of a pseudorandom generator mapping $n-1$ to $n$ bit strings), can be distinguished from the uniform distribution with advantage $\epsilon$ by a circuit of size $O( 2^n\epsilon^2)$. We generalize this result, showing that a distribution which has less than $k$ bits of min-entropy, can be distinguished from any distribution with $k$ bits of $\delta$-smooth min-entropy with advantage $\epsilon$ by a circuit of size $O(2^k\epsilon^2/\delta^2)$. As a special case, this implies that any distribution with support at most $2^k$ (e.g., the output of a pseudoentropy generator mapping $k$ to $n$ bit strings) can be distinguished from any given distribution with min-entropy $k+1$ with advantage $\epsilon$ by a circuit of size $O(2^k\epsilon^2)$. Our result thus shows that pseudoentropy distributions face basically the same non-uniform attacks as pseudorandom distributions.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Swift/BAT AGN Spectroscopic Survey (BASS) -- VI. The Gamma_X - L/L_Edd relation
We study the observed relation between accretion rate (in terms of L/L_Edd) and shape of the hard X-ray spectral energy distribution (namely the photon index Gamma_X) for a large sample of 228 hard X-ray selected, low-redshift active galactic nuclei (AGN), drawn from the Swift/BAT AGN Spectroscopic Survey (BASS). This includes 30 AGN for which black hole mass (and therefore L/L_Edd) is measured directly through masers, spatially resolved gas or stellar dynamics, or reverberation mapping. The high quality and broad energy coverage of the data provided through BASS allow us to examine several alternative determinations of both Gamma_X and L/L_Edd. For the BASS sample as a whole, we find a statistically significant, albeit very weak correlation between Gamma_X and L/L_Edd. The best-fitting relations we find, Gamma_X=0.15 log(L/L_Edd)+const., are considerably shallower than those reported in previous studies. Moreover, we find no corresponding correlations among the subsets of AGN with different M_BH determination methodology. In particular, we find no robust evidence for a correlation when considering only those AGN with direct or single-epoch M_BH estimates. This latter finding is in contrast to several previous studies which focused on z>0.5 broad-line AGN. We discuss this tension and conclude that it can be partially accounted for if one adopts a simplified, power-law X-ray spectral model, combined with L/L_Edd estimates that are based on the continuum emission and on single-epoch broad line spectroscopy in the optical regime. We finally highlight the limitations on using Gamma_X as a probe of supermassive black hole evolution in deep extragalactic X-ray surveys.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Hashing over Predicted Future Frames for Informed Exploration of Deep Reinforcement Learning
In deep reinforcement learning (RL) tasks, an efficient exploration mechanism should be able to encourage an agent to take actions that lead to less frequent states, which may yield a higher cumulative future return. However, both knowing about the future and evaluating the frequentness of states are non-trivial tasks, especially for deep RL domains, where a state is represented by high-dimensional image frames. In this paper, we propose a novel informed exploration framework for deep RL, where we build the capability for an RL agent to predict future transitions and evaluate the frequentness of the predicted future frames in a meaningful manner. To this end, we train a deep prediction model to predict future frames given a state-action pair, and a convolutional autoencoder model to hash over the seen frames. In addition, to utilize the counts derived from the seen frames to evaluate the frequentness of the predicted frames, we tackle the challenge of matching the predicted future frames and their corresponding seen frames at the latent feature level. In this way, we derive a reliable metric for evaluating the novelty of the future direction pointed to by each action, and hence inform the agent to explore the least frequent one.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
The infrared to X-ray correlation spectra of unobscured type 1 active galactic nuclei
We use new X-ray data obtained with the Nuclear Spectroscopic Telescope Array (NuSTAR), near-infrared (NIR) fluxes, and mid-infrared (MIR) spectra of a sample of 24 unobscured type 1 active galactic nuclei (AGN) to study the correlation between various hard X-ray bands between 3 and 80 keV and the infrared (IR) emission. The IR to X-ray correlation spectrum (IRXCS) shows a maximum at ~15-20 micron, coincident with the peak of the AGN contribution to the MIR spectra of the majority of the sample. There is also a NIR correlation peak at ~2 micron, which we associate with the NIR bump observed in some type 1 AGN at ~1-5 micron and is likely produced by nuclear hot dust emission. The IRXCS shows practically the same behaviour in all the X-ray bands considered, indicating a common origin for all of them. We finally evaluated correlations between the X-ray luminosities and various MIR emission lines. All the lines show a good correlation with the hard X-rays (rho>0.7), but we do not find the expected correlation between their ionization potentials and the strength of the IRXCS.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quantum ensembles of quantum classifiers
Quantum machine learning witnesses an increasing number of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which -- similar to Bayesian learning -- the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighted according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Recurrent Additive Networks
We introduce recurrent additive networks (RANs), a new gated RNN which is distinguished by the use of purely additive latent state updates. At every time step, the new state is computed as a gated component-wise sum of the input and the previous state, without any of the non-linearities commonly used in RNN transition dynamics. We formally show that RAN states are weighted sums of the input vectors, and that the gates only contribute to computing the weights of these sums. Despite this relatively simple functional form, experiments demonstrate that RANs perform on par with LSTMs on benchmark language modeling problems. This result shows that many of the non-linear computations in LSTMs and related networks are not essential, at least for the problems we consider, and suggests that the gates are doing more of the computational work than previously understood.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Nematic phase with colossal magnetoresistance and orbital polarons in manganite La$_{1-x}$Sr$_x$MnO$_3$
The origin of colossal magnetoresistance (CMR) is still controversial. The spin dynamics of La$_{1-x}$Sr$_x$MnO$_3$ is revisited along the Mn-O-Mn direction at $x\leq 0.5$, $T\leq T_C$, with a new study at $x$=0.4. A new lattice dynamics study is also reported at $x_0$=0.2, representative of the optimal doping for CMR. In the large-$q$ wavevector range, typical of the scale of polarons, the spin dynamics exhibits a discrete spectrum $E^n_{\rm mag}$, with $n$ equal to the degeneracy of orbital-pseudospin transitions and energy values coinciding with the phonon ones. It corresponds to the spin-orbital excitation spectrum of short-lifetime polarons, in which the orbital pseudospin degeneracy is lifted by phonons. For $x\neq x_0$, its $q$-range reveals a polaron size $\ell \approx 1.7a$, with dimension $2d$ at $x=1/8$ partly increasing to $\approx 3d$ at $x=0.3$. At $x_0=0.2$ ($T<T_C$), two distinct $q$ and energy ranges appear, separated by $\Delta E(q_0\approx 0.35)=3$ meV. The same $\Delta E(q_0)$ value separates two unusual transverse acoustic branches ($T>T_C$). Both characterize a nematic phase defined by chains of "orbital polarons" of size $2a$, distant by $3a$, typical of $x_0=1/6$. This could explain CMR.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The generalized optical memory effect
The optical memory effect is a well-known type of wave correlation that is observed in coherent fields that scatter through thin and diffusive materials, like biological tissue. It is a fundamental physical property of scattering media that can be harnessed for deep-tissue microscopy or 'through-the-wall' imaging applications. Here we show that the optical memory effect is a special case of a far more general class of wave correlation. Our new theoretical framework explains how waves remain correlated over both space and angle when they are jointly shifted and tilted inside scattering media of arbitrary geometry. We experimentally demonstrate the existence of such coupled correlations and describe how they can be used to optimize the scanning range in adaptive optics microscopes.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Geometry of Policy Improvement
We investigate the geometry of optimal memoryless time independent decision making in relation to the amount of information that the acting agent has about the state of the system. We show that the expected long term reward, discounted or per time step, is maximized by policies that randomize among at most $k$ actions whenever at most $k$ world states are consistent with the agent's observation. Moreover, we show that the expected reward per time step can be studied in terms of the expected discounted reward. Our main tool is a geometric version of the policy improvement lemma, which identifies a polyhedral cone of policy changes in which the state value function increases for all states.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Modelling and characterization of a pneumatically actuated peristaltic micropump
There is an emerging class of microfluidic bioreactors which possess long-term, closed circuit perfusion under sterile conditions with in vivo-like flow parameters. Integrated into microfluidics, peristaltic-like pneumatically actuated displacement micropumps are able to meet these requirements. We present both a theoretical and experimental characterization of such pumps. In order to examine volume flow rate, we have developed a mathematical model describing membrane motion under external pressure. The viscoelasticity of the membrane and hydrodynamic resistance of the microfluidic channel have been taken into account. Unlike other models, the developed model includes only the physical parameters of the pump and allows the estimation of their impact on the resulting flow. The model has been validated experimentally.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Semi-algebraic triangulation over p-adically closed fields
We prove a triangulation theorem for semi-algebraic sets over a p-adically closed field, quite similar to its real counterpart. We derive from it several applications, such as the existence of flexible retractions and splitting for semi-algebraic sets.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Improving the Performance of OTDOA based Positioning in NB-IoT Systems
In this paper, we consider positioning with observed-time-difference-of-arrival (OTDOA) for a device deployed in long-term-evolution (LTE) based narrow-band Internet-of-things (NB-IoT) systems. We propose an iterative expectation-maximization based successive interference cancellation (EM-SIC) algorithm to jointly consider estimation of the residual frequency-offset (FO), fading-channel taps and time-of-arrival (ToA) of the first arrival-path for each of the detected cells. In order to design a low complexity ToA detector, and also due to the limits of low-cost analog circuits, we assume an NB-IoT device working at a low sampling rate such as 1.92 MHz or lower. The proposed EM-SIC algorithm comprises two stages to detect ToA, based on which OTDOA can be calculated. In the first stage, after running the EM-SIC block for a predefined number of iterations, a coarse ToA is estimated for each of the detected cells. Then, in the second stage, to improve the ToA resolution, a low-pass filter is utilized to interpolate the correlations of the time-domain PRS signal evaluated at a low sampling rate to a high sampling rate such as 30.72 MHz. To keep complexity low, only the correlations inside a small search window centered at the coarse ToA estimates are upsampled. The refined ToAs are then estimated based on the upsampled correlations. If at least three cells are detected, the position of the NB-IoT device can be estimated from the OTDOA and the locations of the detected cell sites. We show through numerical simulations that the proposed EM-SIC based ToA detector is robust against impairments introduced by inter-cell interference, fading channels and residual FO. Significant signal-to-noise ratio (SNR) gains are thus obtained over traditional ToA detectors that do not consider these impairments when positioning a device.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing
We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses and one source (3L1S), two lenses and two sources (2L2S) or one lens and three sources (1L3S). In the 3L1S and 2L2S interpretations, the host is a brown dwarf and the dominant companion is a Neptune-class planet, with the third body (in the 3L1S case) being a Mars-class object that could have been a moon of the planet. In the 1L3S solution, the light curve anomalies are explained by a tight (five stellar radii) low-luminosity binary source that is offset from the principal source of the event by $\sim 0.17\,\au$. These degeneracies are resolved in favor of the 1L3S solution by color effects derived from comparison to MOA data, which are taken in a slightly different ($R/I$) passband. To enable current and future ($WFIRST$) surveys to routinely characterize exomoons and distinguish among such exotic systems requires an observing strategy that includes both a cadence faster than 9 min$^{-1}$ and observations in a second band on a similar timescale.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Particle-flow reconstruction and global event description with the CMS detector
The CMS apparatus was identified, a few years before the start of the LHC operation at CERN, to feature properties well suited to particle-flow (PF) reconstruction: a highly-segmented tracker, a fine-grained electromagnetic calorimeter, a hermetic hadron calorimeter, a strong magnetic field, and an excellent muon spectrometer. A fully-fledged PF reconstruction algorithm tuned to the CMS detector was therefore developed and has been consistently used in physics analyses for the first time at a hadron collider. For each collision, the comprehensive list of final-state particles identified and reconstructed by the algorithm provides a global event description that leads to unprecedented CMS performance for jet and hadronic tau decay reconstruction, missing transverse momentum determination, and electron and muon identification. This approach also allows particles from pileup interactions to be identified and enables efficient pileup mitigation methods. The data collected by CMS at a centre-of-mass energy of 8 TeV show excellent agreement with the simulation and confirm the superior PF performance at least up to an average of 20 pileup interactions.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Discontinuity-Sensitive Optimal Control Learning by Mixture of Experts
This paper proposes a discontinuity-sensitive approach to learning the solutions of parametric optimal control problems with high accuracy. Many tasks, ranging from model predictive control to reinforcement learning, may be solved by learning optimal solutions as a function of problem parameters. However, nonconvexity, discrete homotopy classes, and control switching cause discontinuity in the parameter-solution mapping, thus making learning difficult for traditional continuous function approximators. A mixture of experts (MoE) model composed of a classifier and several regressors is proposed to address this issue. The optimal trajectories of different parameters are clustered such that in each cluster the trajectories are continuous functions of the problem parameters. Numerical examples on benchmark problems show that training the classifier and regressors individually outperforms joint training of the MoE. With suitably chosen clusters, this approach not only achieves lower prediction error with less training data and fewer model parameters, but also leads to dramatic improvements in the reliability of trajectory tracking compared to traditional universal function approximators (e.g., neural networks).
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Ergodicity analysis and antithetic integral control of a class of stochastic reaction networks with delays
Delays are an important phenomenon arising in a wide variety of real-world systems. They occur in biological models because of diffusion effects or as simplifying modeling elements. We propose here to consider delayed stochastic reaction networks. The difficulty lies in the fact that the state-space of a delayed reaction network is infinite-dimensional, which makes the analysis more involved. We demonstrate that a particular class of stochastic time-varying delays, namely those that follow a phase-type distribution, can be exactly implemented in terms of a chemical reaction network. Hence, any delay-free network can be augmented to incorporate those delays through the addition of delay species and delay reactions. For this class of stochastic delays, which can be used to approximate any delay distribution arbitrarily accurately, the state-space remains finite-dimensional and, therefore, standard tools developed for standard reaction networks still apply. In particular, we demonstrate that, for unimolecular mass-action reaction networks, the delayed stochastic reaction network is ergodic if and only if the non-delayed network is ergodic as well. Bimolecular reactions are more difficult to consider, but an analogous result is also obtained. These results tell us that delays that are phase-type distributed, regardless of their distribution, are not harmful to the ergodicity property of reaction networks. We also prove that the presence of those delays adds convolution terms to the moment equation but does not change the value of the stationary means compared to the delay-free case. Finally, the control of a certain class of delayed stochastic reaction networks using a delayed antithetic integral controller is considered. It is proven that this controller achieves its goal provided that the delay-free network satisfies the conditions of ergodicity and output-controllability.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
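The construction described above is easy to illustrate for the simplest phase-type distribution, the Erlang: a delay becomes a chain of k unimolecular conversion reactions. The sketch below (species and rate values are illustrative, not from the paper) runs a Gillespie simulation of a birth process whose product appears only after traversing the delay chain.

```python
import numpy as np

def gillespie_erlang_delay(k=5, rate=1.0, birth=2.0, decay=0.1, T=100.0, seed=0):
    """Simulate 0 -> D1 -> D2 -> ... -> Dk -> P, P -> 0.
    The k-stage unimolecular chain D1..Dk realizes an Erlang(k, rate)
    delay between the birth event and the appearance of product P."""
    rng = np.random.default_rng(seed)
    d = np.zeros(k, dtype=int)   # occupancy of the delay species
    p, t = 0, 0.0                # product count, time
    while t < T:
        # propensities: birth, each conversion stage, product decay
        props = np.concatenate(([birth], rate * d, [decay * p]))
        total = props.sum()
        t += rng.exponential(1.0 / total)
        r = rng.choice(len(props), p=props / total)
        if r == 0:
            d[0] += 1                       # birth enters the chain
        elif r <= k:
            d[r - 1] -= 1                   # stage r-1 fires
            if r == k:
                p += 1                      # chain exit: delayed product
            else:
                d[r] += 1
        else:
            p -= 1                          # product decay
    return d, p

d, p = gillespie_erlang_delay()
print("delay-chain occupancy:", d, "product count:", p)
```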
The three-dimensional standard solution to the Ricci flow is modeled by the Bryant soliton
It came to my attention after posting this paper that Yu Ding has proved the same result before. I would like to apologize to Yu Ding for the appearance of this paper.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Dynamic classifier chains for multi-label learning
In this paper, we deal with the task of building a dynamic ensemble of chain classifiers for multi-label classification. To do so, we propose two classifier chain algorithms that are able to change the label order of the chain without rebuilding the entire model. Such models allow anticipating the instance-specific chain order without a significant increase in computational burden. The proposed chain models are built using the Naive Bayes classifier and the nearest neighbour approach as base single-label classifiers. To take advantage of the proposed algorithms, we developed a simple heuristic that allows the system to find a relatively good label order. The heuristic sorts labels according to the label-specific classification quality gained during the validation phase, and thereby tries to minimise the phenomenon of error propagation in the chain. The experimental results showed that the proposed model, based on the Naive Bayes classifier and the above-mentioned heuristic, is an efficient tool for building dynamic chain classifiers.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Big Data Regression Using Tree Based Segmentation
Scaling regression to large datasets is a common problem in many application areas. We propose a two step approach to scaling regression to large datasets. Using a regression tree (CART) to segment the large dataset constitutes the first step of this approach. The second step of this approach is to develop a suitable regression model for each segment. Since segment sizes are not very large, we have the ability to apply sophisticated regression techniques if required. A nice feature of this two step approach is that it can yield models that have good explanatory power as well as good predictive performance. Ensemble methods like Gradient Boosted Trees can offer excellent predictive performance but may not provide interpretable models. In the experiments reported in this study, we found that the predictive performance of the proposed approach matched the predictive performance of Gradient Boosted Trees.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Real-time Road Traffic Information Detection Through Social Media
In the current study, a mechanism to extract traffic-related information, such as congestion and incidents, from textual data on the internet is proposed. The current source of data is Twitter. As the data being considered is extremely large in size, automated models are developed to stream, download, and mine the data in real time. Furthermore, if any tweet has traffic-related information, the models should be able to infer and extract this data. Currently, the data is collected only for the United States; a total of 120,000 geo-tagged traffic-related tweets are extracted, while six million geo-tagged non-traffic-related tweets are retrieved, and classification models are trained. Furthermore, this data is used for various kinds of spatial and temporal analysis. A mechanism to calculate the level of traffic congestion, safety, and traffic perception for cities in the U.S. is proposed. Traffic congestion and safety rankings for the various urban areas are obtained and then statistically validated against existing, widely adopted rankings. Traffic perception depicts the attitude and perception of people towards the traffic. It is also seen that traffic-related data, when visualized spatially and temporally, shows the same pattern as the actual traffic flows for various urban areas. When visualized at the city level, it is clearly visible that the flow of tweets is similar to the flow of vehicles and that the traffic-related tweets are representative of traffic within the cities. With all the findings in the current study, it is shown that a significant amount of traffic-related information can be extracted from Twitter and other sources on the internet. Furthermore, Twitter and these data sources are freely available and are not bound by spatial and temporal limitations. That is, wherever there is a user, there is a potential for data.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The stable Picard group of $\mathcal{A}(2)$
Using a form of descent in the stable category of $\mathcal{A}(2)$-modules, we show that there are no exotic elements in the stable Picard group of $\mathcal{A}(2)$, \textit{i.e.} that the stable Picard group of $\mathcal{A}(2)$ is free on $2$ generators.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Understanding kernel size in blind deconvolution
Most blind deconvolution methods pre-define a large kernel size to guarantee the support domain, but blur kernel estimation error is likely to be introduced in proportion to the kernel size. In this paper, we show experimentally and theoretically why noise is introduced in oversized kernels, by demonstrating that sizeable kernels lead to a lower optimization cost. To eliminate this adverse effect, we propose a low-rank-based regularization on the blur kernel by analyzing the structural information in degraded kernels. Compared with sparsity priors, e.g., the $\ell_\alpha$-norm, our regularization term can effectively suppress random noise in oversized kernels. On a benchmark test dataset, the proposed method is compared with several state-of-the-art methods and achieves better quantitative scores; the improvement margin is much more significant for oversized blur kernels. We also validate the proposed method on real-world blurry images.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Sample and Computationally Efficient Learning Algorithms under S-Concave Distributions
We provide new results for noise-tolerant and sample-efficient learning algorithms under $s$-concave distributions. The new class of $s$-concave distributions is a broad and natural generalization of log-concavity, and includes many important additional distributions, e.g., the Pareto distribution and $t$-distribution. This class has been studied in the context of efficient sampling, integration, and optimization, but much remains unknown about the geometry of this class of distributions and their applications in the context of learning. The challenge is that unlike the commonly used distributions in learning (uniform or more generally log-concave distributions), this broader class is not closed under the marginalization operator and many such distributions are fat-tailed. In this work, we introduce new convex geometry tools to study the properties of $s$-concave distributions and use these properties to provide bounds on quantities of interest to learning including the probability of disagreement between two halfspaces, disagreement outside a band, and the disagreement coefficient. We use these results to significantly generalize prior results for margin-based active learning, disagreement-based active learning, and passive learning of intersections of halfspaces. Our analysis of geometric properties of $s$-concave distributions might be of independent interest to optimization more broadly.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Connectivity Properties of Factorization Posets in Generated Groups
We consider three notions of connectivity and their interactions in partially ordered sets coming from reduced factorizations of an element in a generated group. While one form of connectivity essentially reflects the connectivity of the poset diagram, the other two are a bit more involved: Hurwitz-connectivity has its origins in algebraic geometry, and shellability in topology. We propose a framework to study these connectivity properties in a uniform way. Our main tool is a certain total order of the generators that is compatible with the chosen element.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Boundary-sum irreducible finite order corks
We prove for any positive integer $n$ there exist boundary-sum irreducible ${\mathbb Z}_n$-corks with Stein structure. Here `boundary-sum irreducible' means the manifold is indecomposable with respect to boundary-sum. We also verify that some of the finite order corks admit hyperbolic boundary by HIKMOT.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Deep Learning as a Mixed Convex-Combinatorial Optimization Problem
As neural networks grow deeper and wider, learning networks with hard-threshold activations is becoming increasingly important, both for network quantization, which can drastically reduce time and energy requirements, and for creating large integrated systems of deep networks, which may have non-differentiable components and must avoid vanishing and exploding gradients for effective learning. However, since gradient descent is not applicable to hard-threshold functions, it is not clear how to learn networks of them in a principled way. We address this problem by observing that setting targets for hard-threshold hidden units in order to minimize loss is a discrete optimization problem, and can be solved as such. The discrete optimization goal is to find a set of targets such that each unit, including the output, has a linearly separable problem to solve. Given these targets, the network decomposes into individual perceptrons, which can then be learned with standard convex approaches. Based on this, we develop a recursive mini-batch algorithm for learning deep hard-threshold networks that includes the popular but poorly justified straight-through estimator as a special case. Empirically, we show that our algorithm improves classification accuracy in a number of settings, including for AlexNet and ResNet-18 on ImageNet, when compared to the straight-through estimator.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Data-Driven Approach for Predicting Vegetation-Related Outages in Power Distribution Systems
This paper presents a novel data-driven approach for predicting the number of vegetation-related outages that occur in power distribution systems on a monthly basis. In order to develop an approach that is able to successfully fulfill this objective, there are two main challenges that ought to be addressed. The first challenge is to define the extent of the target area. An unsupervised machine learning approach is proposed to overcome this difficulty. The second challenge is to correctly identify the main causes of vegetation-related outages and to thoroughly investigate their nature. In this paper, these outages are categorized into two main groups: growth-related and weather-related outages, and two types of models, namely time series and non-linear machine learning regression models are proposed to conduct the prediction tasks, respectively. Moreover, various features that can explain the variability in vegetation-related outages are engineered and employed. Actual outage data, obtained from a major utility in the U.S., in addition to different types of weather and geographical data are utilized to build the proposed approach. Finally, a comprehensive case study is carried out to demonstrate how the proposed approach can be used to successfully predict the number of vegetation-related outages and to help decision-makers to detect vulnerable zones in their systems.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Structural subnetwork evolution across the life-span: rich-club, feeder, seeder
The impact of developmental and aging processes on brain connectivity and the connectome has been widely studied. Network theoretical measures and certain topological principles are computed from the entire brain, however there is a need to separate and understand the underlying subnetworks which contribute towards these observed holistic connectomic alterations. One organizational principle is the rich-club - a core subnetwork of brain regions that are strongly connected, forming a high-cost, high-capacity backbone that is critical for effective communication in the network. Investigations primarily focus on its alterations with disease and age. Here, we present a systematic analysis of not only the rich-club, but also other subnetworks derived from this backbone - namely feeder and seeder subnetworks. Our analysis is applied to structural connectomes in a normal cohort from a large, publicly available lifespan study. We demonstrate changes in rich-club membership with age alongside a shift in importance from 'peripheral' seeder to feeder subnetworks. Our results show a refinement within the rich-club structure (increase in transitivity and betweenness centrality), as well as increased efficiency in the feeder subnetwork and decreased measures of network integration and segregation in the seeder subnetwork. These results demonstrate the different developmental patterns when analyzing the connectome stratified according to its rich-club and the potential of utilizing this subnetwork analysis to reveal the evolution of brain architectural alterations across the life-span.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
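The subnetwork stratification used above is straightforward to compute once a rich-club membership is fixed. A sketch with networkx, where the degree threshold k is a free parameter (networkx's rich_club_coefficient can help choose it):

```python
import networkx as nx

def classify_edges(G, k):
    """Split edges by rich-club membership of their endpoints (degree > k):
    rich-club (both endpoints in the club), feeder (exactly one), seeder (none)."""
    rich = {n for n, deg in G.degree() if deg > k}
    groups = {"rich-club": [], "feeder": [], "seeder": []}
    for u, v in G.edges():
        inside = (u in rich) + (v in rich)       # 0, 1, or 2 club endpoints
        groups[["seeder", "feeder", "rich-club"][inside]].append((u, v))
    return groups

G = nx.karate_club_graph()                       # placeholder connectome
for name, edges in classify_edges(G, k=10).items():
    print(name, len(edges))
```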
No Silk Road for Online Gamers!: Using Social Network Analysis to Unveil Black Markets in Online Games
Online games involve a very large number of users who are interconnected and interact with each other via the Internet. We studied the characteristics of exchanging virtual goods for real money through processes called "real money trading (RMT)." This exchange might influence online game user behavior and cause damage to the reputation of game companies. We examined in-game transactions to reveal RMT by constructing a social graph of virtual goods exchanges in an online game and identifying network communities of users. We analyzed approximately 6,000,000 transactions in a popular online game and inferred RMT transactions by comparing them with the RMT transactions crawled from an out-game market. Our findings are summarized as follows: (1) the size of the RMT market could be approximately estimated; (2) professional RMT providers typically form a specific network structure (either star-shape or chain) in the trading network, which can be used as a clue for tracing RMT transactions; and (3) the observed RMT market has evolved over time into a monopolized market with a small number of large-sized virtual goods providers.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Parameter-dependent Stochastic Optimal Control in Finite Discrete Time
We prove a general existence result in stochastic optimal control in discrete time where controls take values in conditional metric spaces, and depend on the current state and the information of past decisions through the evolution of a recursively defined forward process. The generality of the problem lies beyond the scope of standard techniques in stochastic control theory such as random sets, normal integrands and measurable selection theory. The main novelty is a formalization in conditional metric space and the use of techniques in conditional analysis. We illustrate the existence result by several examples including wealth-dependent utility maximization under risk constraints with bounded and unbounded wealth-dependent control sets, utility maximization with a measurable dimension, and dynamic risk sharing. Finally, we discuss how conditional analysis relates to random set theory.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator
A monocular visual-inertial system (VINS), consisting of a camera and a low-cost inertial measurement unit (IMU), forms the minimum sensor suite for metric six degrees-of-freedom (DOF) state estimation. However, the lack of direct distance measurement poses significant challenges in terms of IMU processing, estimator initialization, extrinsic calibration, and nonlinear optimization. In this work, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization and failure recovery. A tightly-coupled, nonlinear optimization-based method is used to obtain high accuracy visual-inertial odometry by fusing pre-integrated IMU measurements and feature observations. A loop detection module, in combination with our tightly-coupled formulation, enables relocalization with minimum computation overhead. We additionally perform four degrees-of-freedom pose graph optimization to enforce global consistency. We validate the performance of our system on public datasets and real-world experiments and compare against other state-of-the-art algorithms. We also perform onboard closed-loop autonomous flight on the MAV platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system that is applicable for different applications that require high accuracy localization. We open source our implementations for both PCs and iOS mobile devices.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Equation of State Effects on Gravitational Waves from Rotating Core Collapse
Gravitational waves (GWs) generated by axisymmetric rotating collapse, bounce, and early postbounce phases of a galactic core-collapse supernova will be detectable by current-generation gravitational wave observatories. Since these GWs are emitted from the quadrupole-deformed nuclear-density core, they may encode information on the uncertain nuclear equation of state (EOS). We examine the effects of the nuclear EOS on GWs from rotating core collapse and carry out 1824 axisymmetric general-relativistic hydrodynamic simulations that cover a parameter space of 98 different rotation profiles and 18 different EOS. We show that the bounce GW signal is largely independent of the EOS and sensitive primarily to the ratio of rotational to gravitational energy, and at high rotation rates, to the degree of differential rotation. The GW frequency of postbounce core oscillations shows stronger EOS dependence that can be parameterized by the core's EOS-dependent dynamical frequency $\sqrt{G\bar{\rho}_c}$. We find that the ratio of the peak frequency to the dynamical frequency follows a universal trend that is obeyed by all EOS and rotation profiles and that indicates that the nature of the core oscillations changes when the rotation rate exceeds the dynamical frequency. We find that differences in the treatments of low-density nonuniform nuclear matter, of the transition from nonuniform to uniform nuclear matter, and in the description of nuclear matter up to around twice saturation density can mildly affect the GW signal. We find that approximations and uncertainties in electron capture rates can lead to variations in the GW signal that are of comparable magnitude to those due to different nuclear EOS. This emphasizes the need for reliable nuclear electron capture rates and for self-consistent multi-dimensional neutrino radiation-hydrodynamic simulations of rotating core collapse.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
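As a back-of-the-envelope check of the scale set by the dynamical frequency $\sqrt{G\bar{\rho}_c}$ referenced above: taking a representative central density around nuclear saturation (the value below is our assumption, not quoted from the abstract) puts it in the kHz range probed by ground-based detectors.

```python
import math

G = 6.674e-8        # gravitational constant in cgs units [cm^3 g^-1 s^-2]
rho_c = 3.0e14      # assumed central density [g cm^-3], roughly nuclear saturation
f_dyn = math.sqrt(G * rho_c)
print(f"sqrt(G * rho_c) ~ {f_dyn:.0f} s^-1")  # ~ 4.5e3 s^-1, i.e. kHz scale
```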
Quantum and thermal fluctuations in a Raman spin-orbit coupled Bose gas
We theoretically study a three-dimensional weakly-interacting Bose gas with Raman-induced spin-orbit coupling at finite temperature. By employing a generalized Hartree-Fock-Bogoliubov theory with the Popov approximation, we determine a complete finite-temperature phase diagram of the three exotic condensation phases (i.e., the stripe, plane-wave and zero-momentum phases), against both quantum and thermal fluctuations. We find that the plane-wave phase is significantly broadened by thermal fluctuations. The phonon mode and sound velocity at the transition from the plane-wave phase to the zero-momentum phase are analyzed in detail. At zero temperature, we find that quantum fluctuations open an unexpected gap in the sound velocity at the phase transition, in stark contrast to the previous theoretical prediction of a vanishing sound velocity. At finite temperature, thermal fluctuations continue to significantly enlarge the gap, and simultaneously shift the critical minimum. For a Bose gas of $^{87}$Rb atoms at the typical experimental temperature $T=0.3T_{0}$, where $T_{0}$ is the critical temperature of an ideal Bose gas without spin-orbit coupling, our results for the gap opening and the shift of the critical minimum in the sound velocity are qualitatively consistent with the recent experimental observation [S.-C. Ji \textit{et al.}, Phys. Rev. Lett. \textbf{114}, 105301 (2015)].
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
San Pedro Meeting on Wide Field Variability Surveys: Some Concluding Comments
This is a written version of the closing talk at the 22nd Los Alamos Stellar Pulsation Conference on wide field variability surveys. It comments on some of the issues which arise from the meeting. These include the need for attention to photometric standardization (especially in the infrared) and the somewhat controversial problem of statistical bias in the use of parallaxes (and other methods of distance determination). Some major advances in the use of pulsating variables to study Galactic structure are mentioned. The paper includes a clarification of apparently conflicting results from classical Cepheids and RR Lyrae stars in the inner Galaxy and bulge. The importance of understanding non-periodic phenomena in variable stars, particularly AGB variables and RCB stars, is stressed, especially for its relevance to mass-loss, in which pulsation may only play a minor role.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Weighted Community Detection and Data Clustering Using Message Passing
Grouping objects into clusters based on similarities or weights between them is one of the most important problems in science and engineering. In this work, by extending message passing algorithms and spectral algorithms proposed for the unweighted community detection problem, we develop a non-parametric method based on statistical physics, mapping the problem to the Potts model at the critical temperature of the spin glass transition and applying belief propagation to solve for the marginals of the corresponding Boltzmann distribution. Our algorithm is robust to over-fitting and gives a principled way to determine whether there are significant clusters in the data and how many clusters there are. We apply our method to different clustering tasks and use extensive numerical experiments to illustrate its advantage over existing algorithms. In the community detection problem in weighted and directed networks, we show that our algorithm significantly outperforms existing algorithms. In the clustering problem, when the data was generated by mixture models in the sparse regime, we show that our method works up to the theoretical limit of detectability and gives accuracy very close to that of optimal Bayesian inference. In the semi-supervised clustering problem, our method needs only several labels to work perfectly on classic datasets. Finally, we further develop Thouless-Anderson-Palmer equations, which heavily reduce the computational complexity on dense networks but give almost the same performance as belief propagation.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Edge Control of Graphene Domains Grown on Hexagonal Boron Nitride
The edge structure of graphene has a significant influence on its electronic properties. However, control over the edge structure of graphene domains on insulating substrates is still challenging. Here we demonstrate edge control of graphene domains on hexagonal boron nitride (h-BN) by modifying the ratio of working gases. Edge directions were determined with the help of both moiré patterns and atomic-resolution images obtained via atomic force microscopy. It is believed that the variation in graphene edges is mainly attributed to the different growth rates of armchair and zigzag edges. The work demonstrated here points out a potential approach to fabricating graphene ribbons on h-BN.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
On MASAs in $q$-deformed von Neumann algebras
We study certain $q$-deformed analogues of the maximal abelian subalgebras of the group von Neumann algebras of free groups. The radial subalgebra is defined for Hecke deformed von Neumann algebras of the Coxeter group $(\mathbb{Z}/{2\mathbb{Z}})^{\star k}$ and shown to be a maximal abelian subalgebra which is singular and with Pukánszky invariant $\{\infty\}$. Further all non-equal generator masas in the $q$-deformed Gaussian von Neumann algebras are shown to be mutually non-unitarily conjugate.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A note on computing range space bases of rational matrices
We discuss computational procedures based on descriptor state-space realizations to compute proper range space bases of rational matrices. The main computation is the orthogonal reduction of the system matrix pencil to a special Kronecker-like form, which allows us to extract a full column rank factor whose columns form a proper rational basis of the range space. The computation of several types of bases can be easily accommodated, such as minimum-degree bases, stable inner minimum-degree bases, etc. Several straightforward applications of the range space basis computation are discussed, such as the computation of full rank factorizations, normalized coprime factorizations, pseudo-inverses, and inner-outer factorizations.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Directed-Loop Quantum Monte Carlo Method for Retarded Interactions
The directed-loop quantum Monte Carlo method is generalized to the case of retarded interactions. Using the path integral, fermion-boson or spin-boson models are mapped to actions with retarded interactions by analytically integrating out the bosons. This yields an exact algorithm that combines the highly-efficient loop updates available in the stochastic series expansion representation with the advantages of avoiding a direct sampling of the bosons. The application to electron-phonon models reveals that the method overcomes the previously detrimental issues of long autocorrelation times and exponentially decreasing acceptance rates. For example, the resulting dramatic speedup allows us to investigate the Peierls quantum phase transition on chains of up to $1282$ sites.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Towards CNN map representation and compression for camera relocalisation
This paper presents a study on the use of Convolutional Neural Networks for camera relocalisation and its application to map compression. We follow state of the art visual relocalisation results and evaluate the response to different data inputs. We use a CNN map representation and introduce the notion of map compression under this paradigm by using smaller CNN architectures without sacrificing relocalisation performance. We evaluate this approach in a series of publicly available datasets over a number of CNN architectures with different sizes, both in complexity and number of layers. This formulation allows us to improve relocalisation accuracy by increasing the number of training trajectories while maintaining a constant-size CNN.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Connecting Weighted Automata and Recurrent Neural Networks through Spectral Learning
In this paper, we unravel a fundamental connection between weighted finite automata~(WFAs) and second-order recurrent neural networks~(2-RNNs): in the case of sequences of discrete symbols, WFAs and 2-RNNs with linear activation functions are expressively equivalent. Motivated by this result, we build upon a recent extension of the spectral learning algorithm to vector-valued WFAs and propose the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors. This algorithm relies on estimating low rank sub-blocks of the so-called Hankel tensor, from which the parameters of a linear 2-RNN can be provably recovered. The performances of the proposed method are assessed in a simulation study.
0
0
0
1
0
0
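To make the WFA/2-RNN equivalence above concrete, here is a minimal numpy sketch of a linear 2-RNN; the tensor shapes and parameter names are illustrative assumptions, not the authors' code. When each input is a one-hot symbol indicator, the bilinear update reduces to multiplication by a transition matrix, i.e., to a weighted automaton.

```python
import numpy as np

# Minimal linear 2-RNN (second-order RNN with linear activations).
# The state update is bilinear in (previous state, current input):
#   h_t[j] = sum_{i,s} A[i, s, j] * h_{t-1}[i] * x_t[s]
# and the output is a linear read-out of the final state.
rng = np.random.default_rng(0)
k, d = 4, 3                          # state dim, input dim (illustrative)
A = rng.normal(size=(k, d, k))       # transition tensor
h0 = rng.normal(size=k)              # initial state
omega = rng.normal(size=k)           # termination / output vector

def linear_2rnn(xs):
    h = h0
    for x in xs:
        h = np.einsum('isj,i,s->j', A, h, x)   # bilinear state update
    return h @ omega

# On one-hot inputs this is exactly a WFA whose transition matrix for
# symbol s is the slice A[:, s, :].
seq = [rng.normal(size=d) for _ in range(5)]
print(linear_2rnn(seq))
```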
Evaluating Predictive Models of Student Success: Closing the Methodological Gap
Model evaluation -- the process of making inferences about the performance of predictive models -- is a critical component of predictive modeling research in learning analytics. We survey the state of the practice with respect to model evaluation in learning analytics, which overwhelmingly uses only naive methods for model evaluation or statistical tests which are not appropriate for predictive model evaluation. We conduct a critical comparison of both null hypothesis significance testing (NHST) and a preferred Bayesian method for model evaluation. Finally, we apply three methods -- the naïve average commonly used in learning analytics, NHST, and Bayesian -- to a predictive modeling experiment on a large set of MOOC data. We compare 96 different predictive models, including different feature sets, statistical modeling algorithms, and tuning hyperparameters for each, using this case study to demonstrate the different experimental conclusions these evaluation techniques provide.
0
0
0
1
0
0
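The Bayesian procedure preferred in the paper above is not reproduced here, but the gap between a naïve average and an actual significance test is easy to illustrate. A hedged sketch with synthetic per-fold scores (all numbers hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical AUCs of two predictive models on the same 10 CV folds.
model_a = rng.normal(0.80, 0.02, size=10)
model_b = model_a + rng.normal(0.01, 0.01, size=10)

# Naive average, as commonly used in learning analytics: no uncertainty.
print('means:', model_a.mean(), model_b.mean())

# NHST alternative: a paired t-test across folds. Note that CV folds share
# training data, so fold scores are correlated and even this test can be
# optimistic, one of the methodological gaps the survey points to.
t, p = stats.ttest_rel(model_b, model_a)
print(f't = {t:.2f}, p = {p:.3f}')
```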
A vertex and edge deletion game on graphs
Starting with a graph, two players take turns in either deleting an edge or deleting a vertex and all incident edges. The player removing the last vertex wins. We review the known results for this game and extend the computation of nim-values to new families of graphs. A conjecture of Khandhawit and Ye on the nim-values of graphs with one odd cycle is proved. We also see that, for wheels and their subgraphs, this game exhibits a surprising amount of unexplained regularity.
1
0
0
0
0
0
Determinants of cyclization-decyclization kinetics of short DNA with sticky ends
Cyclization of DNA with sticky ends is commonly used to construct DNA minicircles and to measure DNA bendability. The cyclization probability of short DNA (< 150 bp) has a strong length dependence, but how it depends on the rotational positioning of the sticky ends around the helical axis is less clear. To shed light upon the determinants of the cyclization probability of short DNA, we measured cyclization and decyclization rates of ~100-bp DNA with sticky ends over two helical periods using single-molecule Fluorescence Resonance Energy Transfer (FRET). The cyclization rate increases monotonically with length, indicating no excess twisting, while the decyclization rate oscillates with length, higher at half-integer helical turns and lower at integer helical turns. The oscillation profile is kinetically and thermodynamically consistent with a three-state cyclization model in which sticky-ended short DNA first bends into a torsionally-relaxed teardrop, and subsequently transitions to a more stable loop upon terminal base stacking. We also show that the looping probability density (the J factor) extracted from this study is in good agreement with the worm-like chain model near 100 bp. For shorter DNA, we discuss various experimental factors that prevent an accurate measurement of the J factor.
0
0
0
0
1
0
Remark on a theorem of H. Hauser on textile maps
We give a counterexample to a new theorem that appeared in the survey \cite{H} on Artin approximation. We then provide a corrected statement and a proof of it.
0
0
1
0
0
0
Faster and Simpler Distributed Algorithms for Testing and Correcting Graph Properties in the CONGEST-Model
In this paper we present distributed algorithms for testing graph properties in the CONGEST-model [Censor-Hillel et al. 2016]. We present one-sided error testing algorithms in the general graph model. We first describe a general procedure for converting $\epsilon$-testers that run in $f(D)$ rounds, where $D$ denotes the diameter of the graph, into testers that run in $O((\log n)/\epsilon)+f((\log n)/\epsilon)$ rounds, where $n$ is the number of processors of the network. We then apply this procedure to obtain an optimal tester, in terms of $n$, for testing bipartiteness, whose round complexity is $O(\epsilon^{-1}\log n)$, which improves over the $poly(\epsilon^{-1} \log n)$-round algorithm by Censor-Hillel et al. (DISC 2016). Moreover, for cycle-freeness, we obtain a \emph{corrector} of the graph that locally corrects the graph so that the corrected graph is acyclic. Note that, unlike a tester, a corrector needs to mend the graph in many places in the case that the graph is far from having the property. In the second part of the paper we design algorithms for testing whether the network is $H$-free for any connected $H$ of size up to four with round complexity of $O(\epsilon^{-1})$. This improves over the $O(\epsilon^{-2})$-round algorithms for testing triangle freeness by Censor-Hillel et al. (DISC 2016) and for testing excluded graphs of size $4$ by Fraigniaud et al. (DISC 2016). In the last part we generalize the global tester by Iwama and Yoshida (ITCS 2014) of testing $k$-path freeness to testing the exclusion of any tree of order $k$. We then show how to simulate this algorithm in the CONGEST-model in $O(k^{k^2+1}\cdot\epsilon^{-k})$ rounds.
1
0
0
0
0
0
Reflexive Regular Equivalence for Bipartite Data
Bipartite data is common in data engineering and brings unique challenges, particularly when it comes to clustering tasks that impose strong structural assumptions. This work presents an unsupervised method for assessing similarity in bipartite data. Similar to some co-clustering methods, the method is based on regular equivalence in graphs. The algorithm uses spectral properties of a bipartite adjacency matrix to estimate similarity in both dimensions. The method is reflexive in that similarity in one dimension is used to inform similarity in the other. Reflexive regular equivalence can also use the structure of transitivities -- in a network sense -- the contribution of which is controlled by the algorithm's only free parameter, $\alpha$. The method is completely unsupervised and can be used to validate assumptions of co-similarity, which are required but often untested, in co-clustering analyses. Three variants of the method with different normalizations are tested on synthetic data. The method is found to be robust to noise and well-suited to asymmetric co-similar structure, making it particularly informative for cluster analysis and recommendation in bipartite data of unknown structure. In experiments, the convergence and speed of the algorithm are found to be stable for different levels of noise. Real-world data from a network of malaria genes are analyzed, where the similarity produced by the reflexive method is shown to out-perform other measures' ability to correctly classify genes.
1
0
0
1
0
0
Structural and magnetic properties of core-shell Au/Fe3O4 nanoparticles
We present a systematic study of core-shell Au/Fe$_3$O$_4$ nanoparticles produced by thermal decomposition under mild conditions. The morphology and crystal structure of the nanoparticles revealed the presence of an Au core of $\langle d\rangle = (6.9\pm 1.0)$ nm surrounded by a Fe$_3$O$_4$ shell with a thickness of ~3.5 nm, epitaxially grown onto the Au core surface. The Au/Fe$_3$O$_4$ core-shell structure was demonstrated by high angle annular dark field scanning transmission electron microscopy analysis. The magnetite shell grown on top of the Au nanoparticle displayed a thermal blocking state at temperatures below $T_{\rm B} = 59$ K and a relaxed state well above $T_{\rm B}$. Remarkably, an exchange bias effect was observed when cooling down the samples below room temperature under an external magnetic field. Moreover, the exchange bias field ($H_{\rm EX}$) started to appear at $T \approx 40$ K and its value increased with decreasing temperature. This effect has been assigned to the interaction of spins located in the magnetically disordered regions (in the inner and outer surface of the Fe$_3$O$_4$ shell) and spins located in the ordered region of the Fe$_3$O$_4$ shell.
0
1
0
0
0
0
Boron-doped diamond
Boron-doped diamond undergoes an insulator-metal transition at some critical value (around 2.21 at. %) of the dopant concentration. Here, we report a simple method for the calculation of its bulk modulus, based on the thermodynamic model of Varotsos and Alexopoulos that was originally suggested for the interconnection between defect formation parameters and bulk properties in solids. The results obtained at the doping level of 2.6 at. %, which was later refined to the level of 0.5 at. %, are in agreement with the experimental values.
0
1
0
0
0
0
The kinematics of the white dwarf population from the SDSS DR12
We use the Sloan Digital Sky Survey Data Release 12, which is the largest available white dwarf catalog to date, to study the evolution of the kinematical properties of the population of white dwarfs in the Galactic disc. We derive masses, ages, photometric distances and radial velocities for all white dwarfs with hydrogen-rich atmospheres. For those stars for which proper motions from the USNO-B1 catalog are available the true three-dimensional components of the stellar space velocity are obtained. This subset of the original sample comprises 20,247 objects, making it the largest sample of white dwarfs with measured three-dimensional velocities. Furthermore, the volume probed by our sample is large, allowing us to obtain relevant kinematical information. In particular, our sample extends from a Galactocentric radial distance $R_{\rm G}=7.8$~kpc to 9.3~kpc, and vertical distances from the Galactic plane ranging from $Z=-0.5$~kpc to 0.5~kpc. We examine the mean components of the stellar three-dimensional velocities, as well as their dispersions with respect to the Galactocentric and vertical distances. We confirm the existence of a mean Galactocentric radial velocity gradient, $\partial\langle V_{\rm R}\rangle/\partial R_{\rm G}=-3\pm5$~km~s$^{-1}$~kpc$^{-1}$. We also confirm North-South differences in $\langle V_{\rm z}\rangle$. Specifically, we find that white dwarfs with $Z>0$ (in the North Galactic hemisphere) have $\langle V_{\rm z}\rangle<0$, while the reverse is true for white dwarfs with $Z<0$. The age-velocity dispersion relation derived from the present sample indicates that the Galactic population of white dwarfs may have experienced an additional source of heating, which adds to the secular evolution of the Galactic disc.
0
1
0
0
0
0
Geometric tracking control of thrust vectoring UAVs
In this paper a geometric approach to the trajectory tracking control of Unmanned Aerial Vehicles with thrust vectoring capabilities is proposed. The control design is suitable for aerial systems in which position and orientation tracking tasks can be effectively decoupled. The control problem is developed within the framework of geometric control theory on the group of rigid displacements SE(3), yielding a control law that is independent of any parametrization of the configuration space. The proposed design works seamlessly when the thrust vectoring capability is limited, by prioritizing position over orientation tracking. A characterization of the region of attraction and of the convergence properties is explicitly derived. Finally, a numerical example is presented to test the proposed control law. The generality of the control scheme can be exploited for a broad class of aerial vehicles.
1
0
1
0
0
0
Two scenarios of advective washing-out of localized convective patterns under frozen parametric disorder
The effect of spatial localization of states in distributed parameter systems under frozen parametric disorder is well known as Anderson localization and has been thoroughly studied for the Schrödinger equation and linear dissipation-free wave equations. Similar (or mimicking) phenomena can occur in dissipative systems such as thermal convection systems. Specifically, many of these dissipative systems are governed by a modified Kuramoto-Sivashinsky equation, where frozen spatial disorder of parameters has been reported to lead to excitation of localized patterns. Imposed advection in the modified Kuramoto-Sivashinsky equation can affect the localized patterns in a nontrivial way: it changes the localization properties and suppresses the patterns. The latter effect is considered in this paper by means of both numerical simulation and model reduction, which turns out to be useful for a comprehensive understanding of the bifurcation scenarios in the system. Two possible bifurcation scenarios of advective suppression ("washing-out") of localized patterns are revealed and characterised.
0
1
0
0
0
0
Neural Models for Key Phrase Detection and Question Generation
We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-to-sequence question-generation model with a copy mechanism. Empirically, our key-phrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This two-stage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.
1
0
0
0
0
0
Radio Tomography for Roadside Surveillance
Radio tomographic imaging (RTI) has recently been proposed for tracking object location via radio waves without requiring the objects to transmit or receive radio signals. The position is extracted by inferring which voxels are obstructing a subset of radio links in a dense wireless sensor network. This paper proposes a variety of modeling and algorithmic improvements to RTI for the scenario of roadside surveillance. These include the use of a more physically motivated weight matrix, a method for mitigating negative (aphysical) data due to noisy observations, and a method for combining frames of a moving vehicle into a single image. The proposed approaches are used to show improvement in both imaging (useful for human-in-the-loop target recognition) and automatic target recognition in a measured data set.
1
0
0
0
0
0
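A minimal sketch of the imaging model behind RTI, under assumed geometry: each link's RSS attenuation is modeled as a weighted sum of voxel attenuations, with an ellipse-based weight matrix and ridge-regularized inversion; clipping negative pixels stands in crudely for the paper's negative-data mitigation. All constants and the sensor layout are illustrative.

```python
import numpy as np

# y = W x + noise: link attenuations from voxel attenuations.
n_side, lam = 20, 0.05                 # voxel grid size, ellipse excess width
g = np.linspace(0, 1, n_side)
vox = np.array([(a, b) for a in g for b in g])        # voxel centers

sensors = np.array([[0, .2], [0, .8], [1, .3], [1, .7], [.5, 0], [.5, 1]])
links = [(i, j) for i in range(len(sensors)) for j in range(i + 1, len(sensors))]

W = np.zeros((len(links), len(vox)))
for li, (i, j) in enumerate(links):
    d = np.linalg.norm(sensors[i] - sensors[j])
    # a voxel contributes iff it lies inside the ellipse whose foci are
    # the two link endpoints (sum of distances < link length + lam)
    s = (np.linalg.norm(vox - sensors[i], axis=1)
         + np.linalg.norm(vox - sensors[j], axis=1)) < d + lam
    W[li, s] = 1.0 / np.sqrt(d)        # physically motivated 1/sqrt(d) weight

def reconstruct(y, alpha=0.1):
    """Ridge-regularized least squares; clip aphysical negative voxels."""
    x = np.linalg.solve(W.T @ W + alpha * np.eye(W.shape[1]), W.T @ y)
    return np.clip(x, 0, None).reshape(n_side, n_side)

x_true = np.zeros(len(vox)); x_true[210] = 1.0        # one obstructing voxel
noise = 0.01 * np.random.default_rng(7).normal(size=len(links))
img = reconstruct(W @ x_true + noise)                 # image of the target
```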
But How Does It Work in Theory? Linear SVM with Random Features
We prove that, under low noise assumptions, the support vector machine with $N\ll m$ random features (RFSVM) can achieve a learning rate faster than $O(1/\sqrt{m})$ on a training set with $m$ samples when an optimized feature map is used. Our work extends the previous fast rate analysis of the random features method from the least squares loss to the 0-1 loss. We also show that the reweighted feature selection method, which approximates the optimized feature map, helps improve the performance of RFSVM in experiments on a synthetic data set.
0
0
0
1
0
0
Adaptive Bayesian nonparametric regression using kernel mixture of polynomials with application to partial linear model
We propose a kernel mixture of polynomials prior for Bayesian nonparametric regression. The regression function is modeled by local averages of polynomials with kernel mixture weights. We obtain the minimax-optimal rate of contraction of the full posterior distribution up to a logarithmic factor that adapts to the smoothness level of the true function by estimating metric entropies of certain function classes. We also provide a frequentist sieve maximum likelihood estimator with a near-optimal convergence rate. We further investigate the application of the kernel mixture of polynomials to the partial linear model and obtain both the near-optimal rate of contraction for the nonparametric component and the Bernstein-von Mises limit (i.e., asymptotic normality) of the parametric component. The proposed method is illustrated with numerical examples and shows superior performance in terms of computational efficiency, accuracy, and uncertainty quantification compared to the local polynomial regression, DiceKriging, and the robust Gaussian stochastic process.
0
0
1
1
0
0
Follow Me at the Edge: Mobility-Aware Dynamic Service Placement for Mobile Edge Computing
Mobile edge computing is a new computing paradigm, which pushes cloud computing capabilities away from the centralized cloud to the network edge. However, with the sinking of computing capabilities, a new challenge incurred by user mobility arises: since end-users typically move erratically, the services should be dynamically migrated among multiple edges to maintain the service performance, i.e., user-perceived latency. Tackling this problem is non-trivial since frequent service migration would greatly increase the operational cost. To address this challenge in terms of the performance-cost trade-off, in this paper we study the mobile edge service performance optimization problem under a long-term cost budget constraint. To address user mobility, which is typically unpredictable, we apply Lyapunov optimization to decompose the long-term optimization problem into a series of real-time optimization problems which do not require a priori knowledge such as user mobility. As the decomposed problem is NP-hard, we first design an approximation algorithm based on Markov approximation to seek a near-optimal solution. To make our solution scalable and amenable to future 5G application scenarios with large-scale user devices, we further propose a distributed approximation scheme with greatly reduced time complexity, based on the technique of best response update. Rigorous theoretical analysis and extensive evaluations demonstrate the efficacy of the proposed centralized and distributed schemes.
1
0
0
0
0
0
Assessing student's achievement gap between ethnic groups in Brazil
Achievement gaps refer to the difference in the performance on examinations of students belonging to different social groups. Achievement gaps between ethnic groups have been observed in several countries with heterogeneous populations. In this paper, we analyze achievement gaps between ethnic populations in Brazil by studying the performance of a large cohort of senior high-school students in a standardized national exam. We separate ethnic groups by Brazilian state to remove potential biases associated with infrastructure and financial resources, cultural background, and ethnic clustering. We focus on the disciplines of mathematics and writing, which involve different cognitive functions. We estimate the gaps and their statistical significance through Welch's t-test and study key socio-economic variables that may explain the existence or absence of gaps. We find that gaps between ethnic groups are either statistically insignificant (at the p<.01 level) or small (2%-6%) when significant, for students living in households with low income. Increasing gaps, however, may be observed for higher incomes. On the other hand, while higher parental education is associated with higher performance, it may either increase, decrease or maintain the gaps between White and Black, and between White and Pardo students. Our results support that socio-economic variables have a major impact on students' performance in both mathematics and writing examinations irrespective of ethnic background, giving evidence that genetic factors have little or no effect on ethnic group performance when students are exposed to similar cultural and financial contexts.
0
0
0
1
0
0
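The gap-testing step in the study above reduces to Welch's unequal-variance t-test within each state and income bracket; a minimal sketch with synthetic scores (all numbers hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical math scores for two groups of students in one state and
# one household-income bracket (synthetic data, for illustration only).
group_a = rng.normal(500, 90, size=1200)
group_b = rng.normal(495, 95, size=800)

# Welch's t-test: unlike Student's t, it does not assume equal variances
# or equal group sizes, which suits unbalanced group comparisons.
t, p = stats.ttest_ind(group_a, group_b, equal_var=False)
gap = 100 * (group_a.mean() - group_b.mean()) / group_b.mean()  # gap in %
print(f'gap = {gap:.1f}%, t = {t:.2f}, p = {p:.4f}')
```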
SONS: The JCMT legacy survey of debris discs in the submillimetre
Debris discs are evidence of the ongoing destructive collisions between planetesimals, and their presence around stars also suggests that planets exist in these systems. In this paper, we present submillimetre images of the thermal emission from debris discs that formed the SCUBA-2 Observations of Nearby Stars (SONS) survey, one of seven legacy surveys undertaken on the James Clerk Maxwell Telescope between 2012 and 2015. The overall results of the survey are presented in the form of 850 micron (and, where possible, 450 micron) images and fluxes for the observed fields. Excess thermal emission, over that expected from the stellar photosphere, is detected around 49 stars out of the 100 observed fields. The discs are characterised in terms of their flux density, size (radial distribution of the dust) and derived dust properties from their spectral energy distributions. The results show discs over a range of sizes, typically 1-10 times the diameter of the Edgeworth-Kuiper Belt in our Solar System. The mass of a disc, for particles up to a few millimetres in size, is uniquely obtainable with submillimetre observations and this quantity is presented as a function of the host stars' age, showing a tentative decline in mass with age. Having doubled the number of imaged discs at submillimetre wavelengths from ground-based, single-dish telescope observations, one of the key legacy products of the SONS survey is to provide a comprehensive target list to observe at high angular resolution using submillimetre/millimetre interferometers (e.g., ALMA, SMA).
0
1
0
0
0
0
Visual Search at eBay
In this paper, we propose a novel end-to-end approach for scalable visual search infrastructure. We discuss the challenges we faced with a massive, volatile inventory like eBay's and present our solution to overcome them. We harness the availability of a large image collection of eBay listings and state-of-the-art deep learning techniques to perform visual search at scale. A supervised approach limiting optimized search to top predicted categories, together with compact binary signatures, is key to scaling up without compromising accuracy and precision. Both use a common deep neural network requiring only a single forward inference. The system architecture is presented with in-depth discussions of its basic components and optimizations for a trade-off between search relevance and latency. This solution is currently deployed in a distributed cloud infrastructure and fuels visual search in eBay ShopBot and Close5. We show benchmarks on the ImageNet dataset on which our approach is faster and more accurate than several unsupervised baselines. We share our learnings with the hope that visual search becomes a first-class citizen for all large-scale search engines rather than an afterthought.
1
0
0
0
0
0
Isomorphism and classification for countable structures
We introduce a topology on the space of all isomorphism types represented in a given class of countable models, and use this topology as an aid in classifying the isomorphism types. This mixes ideas from effective descriptive set theory and computable structure theory, extending concepts from the latter beyond computable structures to examine the isomorphism problem on arbitrary countable structures. We give examples using specific classes of fields and of trees, illustrating how the new concepts can yield classifications that reveal differences between seemingly similar classes. Finally, we use a computable homeomorphism to define a measure on the space of isomorphism types of algebraic fields, and examine the prevalence of relative computable categoricity under this measure.
0
0
1
0
0
0
Complex Networks Unveiling Spatial Patterns in Turbulence
Numerical and experimental turbulence simulations nowadays reach the scale of so-called big data, thus requiring refined investigative tools for appropriate statistical analyses and data mining. We present a new approach based on complex network theory, offering a powerful framework to explore complex systems with a huge number of interacting elements. Although interest in complex networks has been increasing in recent years, only a few studies have applied them to turbulence. We propose an investigation starting from a two-point correlation for the kinetic energy of a numerically solved forced isotropic field. Among all the metrics analyzed, the degree centrality is the most significant, suggesting the formation of spatial patterns which coherently move with similar vorticity over the large eddy turnover time scale. Pattern size can be quantified through a newly introduced parameter (i.e., average physical distance) and varies from small to intermediate scales. The network analysis allows a systematic identification of different spatial regions, providing new insights into the spatial characterization of turbulent flows. Based on the present findings, the application to highly inhomogeneous flows seems promising and deserves additional future investigation.
0
1
0
0
0
0
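A hedged sketch of the pipeline in the turbulence abstract above: build a network by thresholding a two-point correlation matrix, then rank nodes by degree centrality. The synthetic field, threshold, and sizes are illustrative assumptions, not the study's data.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
# Stand-in for kinetic-energy time series at n grid points of a turbulence
# simulation (synthetic, for illustration only).
n = 100
field = rng.normal(size=(n, 500))            # n spatial points, 500 snapshots
corr = np.corrcoef(field)                    # two-point correlation matrix

# Link two points when |correlation| exceeds a threshold (no self-loops).
adj = ((np.abs(corr) > 0.2) & ~np.eye(n, dtype=bool)).astype(int)
G = nx.from_numpy_array(adj)

# Degree centrality, the metric the study found most significant, flags
# points that co-vary with many others, i.e. candidate spatial patterns.
dc = nx.degree_centrality(G)
print(sorted(dc, key=dc.get, reverse=True)[:5])   # five most central points
```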
Convexity in scientific collaboration networks
Convexity in a network (graph) has been recently defined as a property of each of its subgraphs to include all shortest paths between the nodes of that subgraph. It can be measured on the scale [0, 1] with 1 being assigned to fully convex networks. The largest convex component of a graph that emerges after the removal of the least number of edges is called a convex skeleton. It is basically a tree of cliques, which has been shown to have many interesting features. In this article the notions of convexity and convex skeletons in the context of scientific collaboration networks are discussed. More specifically, we analyze the co-authorship networks of Slovenian researchers in computer science, physics, sociology, mathematics, and economics and extract convex skeletons from them. We then compare these convex skeletons with the residual graphs (remainders) in terms of collaboration frequency distributions by various parameters such as the publication year and type, co-authors' birth year, status, gender, discipline, etc. We also show the top-ranked scientists by four basic centrality measures as calculated on the original networks and their skeletons and conclude that convex skeletons may help detect influential scholars that are hardly identifiable in the original collaboration network. As their inherent feature, convex skeletons retain the properties of collaboration networks. These include high-level structural properties but also the fact that the same authors are highlighted by centrality measures. Moreover, the most important ties and thus the most important collaborations are retained in the skeletons.
1
0
0
0
0
0
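The definition of convexity used in the abstract above (every shortest path between members of a node subset stays inside the subset) can be checked directly, if expensively, with networkx; a minimal sketch:

```python
import networkx as nx
from itertools import combinations

def is_convex(G, nodes):
    """A node subset is convex iff every shortest path between two of its
    members stays inside the subset."""
    S = set(nodes)
    for u, v in combinations(nodes, 2):
        for path in nx.all_shortest_paths(G, u, v):
            if not S.issuperset(path):
                return False
    return True

# Toy example: in a 4-cycle 0-1-2-3, {0, 1, 2} is not convex because the
# shortest 0-2 path through node 3 escapes the subset.
G = nx.cycle_graph(4)
print(is_convex(G, [0, 1, 2]))   # False: path 0-3-2 leaves the set
```

Extracting a convex skeleton additionally requires choosing which edges to delete so that the largest convex component survives, which this sketch does not attempt.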
Contact Localization through Spatially Overlapping Piezoresistive Signals
Achieving high spatial resolution in contact sensing for robotic manipulation often comes at the price of increased complexity in fabrication and integration. One traditional approach is to fabricate a large number of taxels, each delivering an individual, isolated response to a stimulus. In contrast, we propose a method where the sensor simply consists of a continuous volume of piezoresistive elastomer with a number of electrodes embedded inside. We measure piezoresistive effects between all pairs of electrodes in the set, and rely on this rich signal set to contain the information needed to pinpoint contact location with high accuracy using regression algorithms. In our validation experiments, we demonstrate submillimeter median accuracy in locating contact on a 10 mm by 16 mm sensor using only four electrodes (creating six unique pairs). In addition to extracting more information from fewer wires, this approach lends itself to simple fabrication methods and makes no assumptions about the underlying geometry, simplifying future integration on robot fingers.
1
0
0
0
0
0
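A hedged sketch of the regression step described above: six pairwise electrode signals in, an (x, y) contact location out. The signals here are random placeholders and the regressor choice is an assumption, not the paper's algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
# Hypothetical training data: for each probed contact, the six pairwise
# resistance changes between the four embedded electrodes, plus the true
# (x, y) contact location on the 10 mm x 16 mm pad.
n = 2000
X = rng.normal(size=(n, 6))                       # stand-in for pair signals
y = rng.uniform([0, 0], [10, 16], size=(n, 2))    # contact locations in mm

# Any multi-output regressor can map the overlapping signals to location.
model = RandomForestRegressor(n_estimators=100).fit(X, y)
print(model.predict(X[:1]))                       # predicted (x, y) in mm
```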
On the Parallel Parameterized Complexity of the Graph Isomorphism Problem
In this paper, we study the parallel and the space complexity of the graph isomorphism problem (\GI{}) for several parameterizations. Let $\mathcal{H}=\{H_1,H_2,\cdots,H_l\}$ be a finite set of graphs where $|V(H_i)|\leq d$ for all $i$ and for some constant $d$. Let $\mathcal{G}$ be an $\mathcal{H}$-free graph class i.e., none of the graphs $G\in \mathcal{G}$ contain any $H \in \mathcal{H}$ as an induced subgraph. We show that \GI{} parameterized by vertex deletion distance to $\mathcal{G}$ is in a parameterized version of $\AC^1$, denoted $\PL$-$\AC^1$, provided the colored graph isomorphism problem for graphs in $\mathcal{G}$ is in $\AC^1$. From this, we deduce that \GI{} parameterized by the vertex deletion distance to cographs is in $\PL$-$\AC^1$. The parallel parameterized complexity of \GI{} parameterized by the size of a feedback vertex set remains an open problem. Towards this direction we show that the graph isomorphism problem is in $\PL$-$\TC^0$ when parameterized by vertex cover or by twin-cover. Let $\mathcal{G}'$ be a graph class such that recognizing graphs from $\mathcal{G}'$ and the colored version of \GI{} for $\mathcal{G}'$ is in logspace ($\L$). We show that \GI{} for bounded vertex deletion distance to $\mathcal{G}'$ is in $\L$. From this, we obtain logspace algorithms for \GI{} for graphs with bounded vertex deletion distance to interval graphs and graphs with bounded vertex deletion distance to cographs.
1
0
0
0
0
0
Direct measurement of superdiffusive and subdiffusive energy transport in disordered granular chains
The study of energy transport properties in heterogeneous materials has attracted scientific interest for more than a century, and it continues to offer fundamental and rich questions. One of the unanswered challenges is to extend Anderson theory for uncorrelated and fully disordered lattices in condensed-matter systems to physical settings in which additional effects compete with disorder. Specifically, the effect of strong nonlinearity has been largely unexplored experimentally, partly due to the paucity of testbeds that can combine the effect of disorder and nonlinearity in a controllable manner. Here we present the first systematic experimental study of energy transport and localization properties in simultaneously disordered and nonlinear granular crystals. We demonstrate experimentally that disorder and nonlinearity --- which are known from decades of studies to individually favor energy localization --- can in some sense "cancel each other out", resulting in the destruction of wave localization. We also report that the combined effect of disorder and nonlinearity can enable the manipulation of energy transport speed in granular crystals from subdiffusive to superdiffusive ranges.
0
1
0
0
0
0
Reading the Sky and The Spiral of Teaching and Learning in Astronomy
This theoretical paper introduces a new way to view and characterize the teaching and learning of astronomy. It describes a framework, grounded in empirical data analyzed through standard qualitative research methodology, in which a theoretical model of the competencies vital to learning astronomy is proposed: Reading the Sky. This model takes into account not only disciplinary knowledge but also disciplinary discernment and extrapolating three-dimensionality. Together, these constitute the foundation for the competency referred to as Reading the Sky. In this paper, I describe these concepts and how I see them being connected and intertwined to form a new competency model for learning astronomy, and how this can be used to inform astronomy education so that it better matches the challenges students face when entering the discipline of astronomy: The Spiral of Teaching and Learning. Two examples are presented to highlight how this model can be used in teaching situations.
0
1
0
0
0
0
Emergence of grid-like representations by training recurrent neural networks to perform spatial localization
Decades of research on the neural code underlying spatial navigation have revealed a diverse set of neural response properties. The Entorhinal Cortex (EC) of the mammalian brain contains a rich set of spatial correlates, including grid cells which encode space using tessellating patterns. However, the mechanisms and functional significance of these spatial representations remain largely mysterious. As a new way to understand these neural representations, we trained recurrent neural networks (RNNs) to perform navigation tasks in 2D arenas based on velocity inputs. Surprisingly, we find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells. All these different functional types of neurons have been observed experimentally. The order of the emergence of grid-like and border cells is also consistent with observations from developmental studies. Together, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits.
0
0
0
1
1
0
Random walks on the discrete affine group
We introduce the discrete affine group of a regular tree as a finitely generated subgroup of the affine group. We describe the Poisson boundary of random walks on it as a space of configurations. We compute the isoperimetric profile and the Hilbert compression exponent of the group. We also discuss its metric relationship with some lamplighter groups and lamplighter graphs.
0
0
1
0
0
0
Spectroscopic evidence of odd frequency superconducting order
Spin filter superconducting S/I/N tunnel junctions (NbN/GdN/TiN) show a robust and pronounced zero bias conductance peak at low temperatures, the magnitude of which is several times the normal state conductance of the junction. Such a conductance anomaly is representative of unconventional superconductivity and is interpreted as a direct signature of an odd frequency superconducting order.
0
1
0
0
0
0
Double spend races
We correct the double spend race analysis given in Nakamoto's foundational Bitcoin article and give a closed-form formula for the probability of success of a double spend attack using the Regularized Incomplete Beta Function. We give a proof of the exponential decay in the number of confirmations, often cited in the literature, and find an asymptotic formula. Larger numbers of confirmations are necessary compared to those given by Nakamoto. We also compute the probability conditional on the known validation time of the blocks. This provides a finer risk analysis than the classical one.
1
0
1
0
0
0
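A numerical sketch contrasting Nakamoto's original success-probability formula (standard, from the Bitcoin paper) with a closed form built on the regularized incomplete beta function. The exact closed form P(z) = I_{4pq}(z, 1/2) is quoted from memory and should be treated as an assumption to check against the paper.

```python
from math import exp, factorial
from scipy.special import betainc   # regularized incomplete beta I_x(a, b)

def nakamoto(q, z):
    """Double-spend success probability from Nakamoto's paper
    (the analysis corrected in the article above)."""
    p = 1 - q
    lam = z * q / p
    return 1 - sum(exp(-lam) * lam**k / factorial(k) * (1 - (q / p) ** (z - k))
                   for k in range(z + 1))

def closed_form(q, z):
    """Assumed closed form P(z) = I_{4pq}(z, 1/2); verify against the paper."""
    p = 1 - q
    return betainc(z, 0.5, 4 * p * q)

# q: attacker's share of the hashrate, z: number of confirmations.
for z in (1, 2, 6):
    print(z, nakamoto(0.1, z), closed_form(0.1, z))
```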
An Asymptotic Analysis of Queues with Delayed Information and Time Varying Arrival Rates
Understanding how delayed information impacts queueing systems is an important area of research. However, much of the current literature neglects one important feature of many queueing systems, namely non-stationary arrivals. Non-stationary arrivals model the fact that customers tend to access services during certain times of the day and not at a constant rate. In this paper, we analyze two two-dimensional deterministic fluid models that incorporate customer choice behavior based on delayed queue length information with time varying arrivals. In the first model, customers receive queue length information that is delayed by a constant Delta. In the second model, customers receive information about the queue length through a moving average of the queue length where the moving average window is Delta. We analyze the impact of the time varying arrival rate and show using asymptotic analysis that the time varying arrival rate does not impact the critical delay unless the frequency of the time varying arrival rate is twice that of the critical delay. When the frequency of the arrival rate is twice that of the critical delay, then the stability is enlarged by a wedge that is determined by the model parameters. As a result, this problem allows us to combine the theory of nonlinear dynamics, parametric excitation, delays, and time varying queues together to provide insight on the impact of information in queueing systems.
0
0
1
0
0
0
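A hedged Euler-integration sketch of the constant-delay fluid model described in the abstract above: two queues, arrivals split by a multinomial logit on Delta-delayed queue lengths, and a sinusoidal time-varying arrival rate. The exact equations and all parameter values are assumptions modeled on the abstract, not the paper's code.

```python
import numpy as np

# Assumed dynamics:
#   q_i'(t) = lam(t) * exp(-q_i(t-Delta)) / sum_j exp(-q_j(t-Delta)) - mu*q_i(t)
lam0, beta, gamma = 10.0, 0.3, 0.5     # time-varying arrival rate parameters
mu, Delta, dt, T = 1.0, 2.0, 0.001, 200.0

steps, lag = int(T / dt), int(Delta / dt)
q = np.zeros((steps, 2))
q[:lag + 1] = [5.0, 4.0]               # constant history on [-Delta, 0]

for t in range(lag, steps - 1):
    lam = lam0 * (1 + beta * np.sin(gamma * t * dt))   # time-varying arrivals
    w = np.exp(-q[t - lag])            # choice weights from delayed lengths
    q[t + 1] = q[t] + dt * (lam * w / w.sum() - mu * q[t])

print(q[-5:])
```

Sweeping Delta (and the arrival-rate frequency gamma) around the critical delay is how one would observe numerically the stability change that the asymptotic analysis predicts, including the special case where gamma is twice the critical-delay frequency.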
Effect of mixed pinning landscapes produced by 6 MeV Oxygen irradiation on the resulting critical current densities J$_c$ in 1.3 $μ$m thick GdBa$_2$Cu$_3$O$_{7-d}$ coated conductors grown by co-evaporation
We report the influence of crystalline defects introduced by 6 MeV $^{16}$O$^{3+}$ irradiation on the critical current densities J$_c$ and flux creep rates in 1.3 $\mu$m thick GdBa$_2$Cu$_3$O$_{7-d}$ coated conductors produced by co-evaporation. Pristine films, with pinning produced mainly by random nanoparticles with diameters close to 50 nm, were irradiated with doses between 2x10$^{13}$ cm$^{-2}$ and 4x10$^{14}$ cm$^{-2}$. At temperatures below 40 K, with the magnetic field applied parallel (H//c) and at 45° (H//45°) to the c-axis, the in-field J$_c$ dependences can be significantly improved by irradiation. For doses of 1x10$^{14}$ cm$^{-2}$ the J$_c$ values at $\mu$$_0$H = 5 T are doubled without significantly affecting the J$_c$ at small fields. Analyzing the flux creep rates as a function of temperature in both magnetic field configurations, we observe that the irradiation suppresses the peak associated with double-kink relaxation and increases the flux creep rates at intermediate and high temperatures. Under 0.5 T, the flux relaxation for H//c and H//45° in pristine films presents characteristic glassy exponents $\mu$ = 1.63 and $\mu$ = 1.45, respectively. For samples irradiated with 1x10$^{14}$ cm$^{-2}$, these values drop to $\mu$ = 1.45 and $\mu$ = 1.24, respectively.
0
1
0
0
0
0
Merge or Not? Learning to Group Faces via Imitation Learning
Given a large number of unlabeled face images, face grouping aims at clustering the images into individual identities present in the data. This task remains a challenging problem despite the remarkable capability of deep learning approaches in learning face representation. In particular, grouping results can still be egregious given profile faces and a large number of uninteresting faces and noisy detections. Often, a user needs to correct the erroneous grouping manually. In this study, we formulate a novel face grouping framework that learns clustering strategy from ground-truth simulated behavior. This is achieved through imitation learning (a.k.a. apprenticeship learning or learning by watching) via inverse reinforcement learning (IRL). In contrast to existing clustering approaches that group instances by similarity, our framework makes sequential decisions to dynamically decide when to merge two face instances/groups, driven by short- and long-term rewards. Extensive experiments on three benchmark datasets show that our framework outperforms unsupervised and supervised baselines.
1
0
0
0
0
0
Entropic Causality and Greedy Minimum Entropy Coupling
We study the problem of identifying the causal relationship between two discrete random variables from observational data. We recently proposed a novel framework called entropic causality that works in a very general functional model but makes the assumption that the unobserved exogenous variable has small entropy in the true causal direction. This framework requires the solution of a minimum entropy coupling problem: Given marginal distributions of m discrete random variables, each on n states, find the joint distribution with minimum entropy, that respects the given marginals. This corresponds to minimizing a concave function of n^m variables over a convex polytope defined by nm linear constraints, called a transportation polytope. Unfortunately, it was recently shown that this minimum entropy coupling problem is NP-hard, even for 2 variables with n states. Even representing points (joint distributions) over this space can require exponential complexity (in n, m) if done naively. In our recent work we introduced an efficient greedy algorithm to find an approximate solution for this problem. In this paper we analyze this algorithm and establish two results: that our algorithm always finds a local minimum and also is within an additive approximation error from the unknown global optimum.
1
0
0
1
0
0
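A minimal sketch of the greedy coupling for two marginals, following the algorithm as described in the abstract above (the m-marginal version repeats the same matching step across all residual marginals; details are assumed):

```python
import numpy as np

def greedy_coupling(p, q, tol=1e-12):
    """Greedily couple two marginals: repeatedly match the largest
    remaining masses, fully consuming the smaller one, so at most
    len(p) + len(q) - 1 joint outcomes get positive probability."""
    p, q = p.astype(float).copy(), q.astype(float).copy()
    joint = np.zeros((len(p), len(q)))
    while p.max() > tol and q.max() > tol:
        i, j = p.argmax(), q.argmax()
        m = min(p[i], q[j])
        joint[i, j] += m
        p[i] -= m
        q[j] -= m
    return joint

def entropy(P):
    P = P[P > 0]
    return -(P * np.log2(P)).sum()

J = greedy_coupling(np.array([0.5, 0.3, 0.2]), np.array([0.6, 0.4]))
print(J, entropy(J))   # a local minimum, per the paper's analysis
```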
When few survive to tell the tale: thymus and gonad as auditioning organs: historical overview
Unlike other organs, the thymus and gonads generate non-uniform cell populations, many members of which perish while a few survive. While it is recognized that thymic cells are 'audited' to optimize an organism's immune repertoire, whether gametogenesis could be orchestrated similarly to favour high quality gametes is uncertain. Ideally, such quality would be affirmed at early stages before the commitment of extensive parental resources. A case is here made that, along the lines of a previously proposed lymphocyte quality control mechanism, gamete quality can be registered indirectly through detection of incompatibilities between proteins encoded by the grandparental DNA sequences within the parent from which haploid gametes are meiotically derived. This 'stress test' is achieved in the same way that thymic screening for potential immunological incompatibilities is achieved - by 'promiscuous' expression, under the influence of the AIRE protein, of the products of genes that are not normally specific for that organ. Consistent with this, the Aire gene is expressed in both thymus and gonads, and AIRE deficiency impedes function in both organs. While not excluding the subsequent emergence of hybrid incompatibilities due to the intermixing of genomic sequences from parents (rather than grandparents), many observations, such as the number of proteins that are aberrantly expressed during gametogenesis, can be explained on this basis. Indeed, promiscuous expression could have first evolved in gamete-forming cells where incompatible proteins would be manifest as aberrant protein aggregates that cause apoptosis. This mechanism would later have been co-opted by thymic epithelial cells which display peptides from aggregates to remove potentially autoreactive T cells.
0
0
0
0
1
0
SfMLearner++: Learning Monocular Depth & Ego-Motion using Meaningful Geometric Constraints
Most geometric approaches to monocular Visual Odometry (VO) provide robust pose estimates, but sparse or semi-dense depth estimates. Of late, deep methods have shown good performance in generating dense depths and VO from monocular images by optimizing the photometric consistency between images. Despite being intuitive, a naive photometric loss does not ensure proper pixel correspondences between two views, which is the key factor for accurate depth and relative pose estimation. It is a well-known fact that simply minimizing such an error is prone to failures. We propose a method using epipolar constraints to make the learning more geometrically sound. We use the Essential matrix, obtained using Nister's Five Point Algorithm, to enforce meaningful geometric constraints on the loss, rather than using it as labels for training. Our method, although simple, is more geometrically meaningful and uses fewer parameters, yet gives performance comparable to state-of-the-art methods that use complex losses and large networks, showing the effectiveness of epipolar constraints. Such a geometrically constrained learning method performs successfully even in cases where simply minimizing the photometric error would fail.
1
0
0
0
0
0
Approximate Bayesian inference as a gauge theory
In a published paper [Sengupta, 2016], we have proposed that the brain (and other self-organized biological and artificial systems) can be characterized via the mathematical apparatus of a gauge theory. The picture that emerges from this approach suggests that any biological system (from a neuron to an organism) can be cast as resolving uncertainty about its external milieu, either by changing its internal states or its relationship to the environment. Using formal arguments, we have shown that a gauge theory for neuronal dynamics -- based on approximate Bayesian inference -- has the potential to shed new light on phenomena that have thus far eluded a formal description, such as attention and the link between action and perception. Here, we describe the technical apparatus that enables such variational inference on manifolds. In particular, the novel contribution of this paper is an algorithm that utilizes a Schild's ladder for parallel transport of sufficient statistics (means, covariances, etc.) on a statistical manifold.
1
0
0
0
0
0
DROPWAT: an Invisible Network Flow Watermark for Data Exfiltration Traceback
Watermarking techniques have been proposed during the last 10 years as an approach to trace network flows for intrusion detection purposes. These techniques aim to impress a hidden signature on a traffic flow. A central property of network flow watermarking is invisibility, i.e., the ability to go unidentified by an unauthorized third party. Although widely sought after, the development of an invisible watermark is a challenging task that has not yet been accomplished. In this paper we take a step forward in addressing the invisibility problem with DROPWAT, an active network flow watermarking technique developed for tracing Internet flows directed to the staging server that is the final destination in a data exfiltration attack, even in the presence of several intermediate stepping stones or an anonymous network. DROPWAT is a timing-based technique that indirectly modifies interpacket delays by exploiting network reaction to packet loss. We empirically demonstrate that the watermark embedded by means of DROPWAT is invisible to a third party observing the watermarked traffic. We also validate DROPWAT and analyze its performance in a controlled experimental framework involving the execution of a series of experiments on the Internet, using Web proxy servers as stepping stones executed on several instances in Amazon Web Services, as well as the TOR anonymous network in place of the stepping stones. Our results show that the detection algorithm is able to identify an embedded watermark achieving over 95% accuracy while being invisible.
1
0
0
0
0
0
Outage Analysis of Offloading in Heterogeneous Networks: Composite Fading Channels
Small cell deployment is one of the most significant long-term strategic policies of mobile network operators. In heterogeneous networks (HetNets), small cells serve as offloading spots in the radio access network to offload macro users (MUs) and their associated traffic from congested macrocells. In this paper, we perform an analytical study and investigate how radio propagation effects such as multipath and shadowing, as well as small cell base station density, affect MUs' offloading to the small cell network (SCN). In particular, we exploit composite fading channels in our evaluation when an MU is offloaded to the SCN with varying small and macro cell densities in the stochastic HetNets framework. We derive the expressions for the outage probability (equivalently, success probability) of the MU in the macro network and the SCN for two different cases, viz.: i) Nakagami-lognormal channel fading; ii) time-shared (combined) shadowed/unshadowed channel fading. We propose efficient approximations for the probability density functions of the channel fading (power) for the above-mentioned fading distributions, which do not have closed-form expressions, employing Gauss-Hermite integration and a finite exponential series, respectively. Finally, the outage probability performance of the MU with and without offloading options/services is analyzed for various settings of fading channels.
1
0
0
0
0
0
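Gauss-Hermite integration, the first approximation device named above, is easy to illustrate for Nakagami-m fading under lognormal shadowing: the shadowing average of the conditional outage becomes a weighted sum over Hermite nodes. Parameter values are illustrative and this is not the paper's exact expression.

```python
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma

# Outage over composite Nakagami-lognormal fading, averaged with
# Gauss-Hermite quadrature. All parameter values are illustrative.
m, mu_db, sigma_db, gamma_th = 2.0, 0.0, 6.0, 0.5

def outage_nakagami(gamma_th, omega, m):
    # conditional outage of Nakagami-m fading with mean power omega:
    # the received power is Gamma-distributed, so the CDF is P(m, m*g/omega)
    return gammainc(m, m * gamma_th / omega)

x, w = np.polynomial.hermite.hermgauss(32)       # nodes/weights for e^{-x^2}
omega = 10 ** ((np.sqrt(2) * sigma_db * x + mu_db) / 10)   # lognormal shadowing
p_out = (w * outage_nakagami(gamma_th, omega, m)).sum() / np.sqrt(np.pi)
print(p_out)
```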
StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks
Although Generative Adversarial Networks (GANs) have shown remarkable success in various tasks, they still face challenges in generating high quality images. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) aiming at generating high-resolution photo-realistic images. First, we propose a two-stage generative adversarial network architecture, StackGAN-v1, for text-to-image synthesis. The Stage-I GAN sketches the primitive shape and colors of the object based on given text description, yielding low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. Second, an advanced multi-stage generative adversarial network architecture, StackGAN-v2, is proposed for both conditional and unconditional generative tasks. Our StackGAN-v2 consists of multiple generators and discriminators in a tree-like structure; images at multiple scales corresponding to the same scene are generated from different branches of the tree. StackGAN-v2 shows more stable training behavior than StackGAN-v1 by jointly approximating multiple distributions. Extensive experiments demonstrate that the proposed stacked generative adversarial networks significantly outperform other state-of-the-art methods in generating photo-realistic images.
1
0
0
1
0
0
Towards Secure and Safe Appified Automated Vehicles
The advancement in Autonomous Vehicles (AVs) has created an enormous market for the development of self-driving functionalities, raising the question of how it will transform the traditional vehicle development process. One adventurous proposal is to open the AV platform to third-party developers, so that AV functionalities can be developed in a crowd-sourcing way, which could provide tangible benefits to both automakers and end users. Some pioneering companies in the automotive industry have made the move to open the platform so that developers are allowed to test their code on the road. Such openness, however, brings serious security and safety issues by allowing untrusted code to run on the vehicle. In this paper, we introduce the concept of an Appified AV platform that opens the development framework to third-party developers. To further address the safety challenges, we propose an enhanced appified AV design schema called AVGuard, which focuses primarily on mitigating the threats brought about by untrusted code, leveraging theory from the vehicle evaluation field and program analysis techniques from the cybersecurity area. Our study provides guidelines and suggested practices for the future design of open AV platforms.
1
0
0
0
0
0
Equidimensional adic eigenvarieties for groups with discrete series
We extend Urban's construction of eigenvarieties for reductive groups $G$ such that $G(\mathbb{R})$ has discrete series to include characteristic $p$ points at the boundary of weight space. In order to perform this construction, we define a notion of "locally analytic" functions and distributions on a locally $\mathbb{Q}_p$-analytic manifold taking values in a complete Tate $\mathbb{Z}_p$-algebra in which $p$ is not necessarily invertible. Our definition agrees with the definition of locally analytic distributions on $p$-adic Lie groups given by Johansson and Newton.
0
0
1
0
0
0
Latent Association Mining in Binary Data
We consider the problem of identifying groups of mutually associated variables in moderate or high dimensional data. In many cases, ordinary Pearson correlation provides useful information concerning the linear relationship between variables. However, for binary data, ordinary correlation may lose power and may lack interpretability. In this paper, we develop and investigate a new method called Latent Association Mining in Binary Data (LAMB). The LAMB method is built on the assumption that the binary observations represent a random thresholding of a latent continuous variable that may have a complex correlation structure. We consider a new measure of association, latent correlation, that is designed to assess association in the underlying continuous variable, without bias due to the mediating effects of the thresholding procedure. The full LAMB procedure makes use of iterative hypothesis testing to identify groups of latently correlated variables. LAMB is shown to improve power over existing methods in simulated settings, to be computationally efficient for large datasets, and to uncover new meaningful results from common real data types.
0
0
0
1
0
0
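The latent-correlation idea in the abstract above is close in spirit to the classical tetrachoric correlation: treat each binary variable as a thresholded latent normal and find the latent rho that reproduces the observed joint rate. A hedged sketch (LAMB's actual estimator may differ):

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def latent_correlation(x1, x2):
    """Tetrachoric-style estimate of the correlation between latent
    normals thresholded into the observed binary vectors."""
    t1 = stats.norm.ppf(1 - x1.mean())       # thresholds from marginal rates
    t2 = stats.norm.ppf(1 - x2.mean())
    p11 = np.mean(x1 & x2)                   # observed "both ones" rate

    def gap(rho):
        mvn = stats.multivariate_normal(cov=[[1, rho], [rho, 1]])
        # P(Z1 > t1, Z2 > t2) under correlation rho, minus the target
        return (1 - stats.norm.cdf(t1) - stats.norm.cdf(t2)
                + mvn.cdf([t1, t2]) - p11)

    return brentq(gap, -0.999, 0.999)        # solve for the matching rho

rng = np.random.default_rng(5)
z = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=5000)
x1, x2 = (z[:, 0] > 0.3), (z[:, 1] > -0.2)   # thresholded binary data
print(latent_correlation(x1, x2))             # close to the true 0.6
```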
The Supernova -- Supernova Remnant Connection
Many aspects of the progenitor systems, environments, and explosion dynamics of the various subtypes of supernovae are difficult to investigate at extragalactic distances where they are observed as unresolved sources. Alternatively, young supernova remnants in our own galaxy and in the Large and Small Magellanic Clouds offer opportunities to resolve, measure, and track expanding stellar ejecta in fine detail, but the handful that are known exhibit widely different properties that reflect the diversity of their parent explosions and local circumstellar and interstellar environments. A way of complementing both supernova and supernova remnant research is to establish strong empirical links between the two separate stages of stellar explosions. Here we briefly review recent progress in the development of supernova---supernova remnant connections, paying special attention to connections made through the study of "middle-aged" (10-100 yr) supernovae and young (< 1000 yr) supernova remnants. We highlight how this approach can uniquely inform several key areas of supernova research, including the origins of explosive mixing, high-velocity jets, and the formation of dust in the ejecta.
0
1
0
0
0
0
DSVO: Direct Stereo Visual Odometry
This paper proposes a novel approach to stereo visual odometry without stereo matching. It is particularly robust in scenes of repetitive high-frequency textures. Referred to as DSVO (Direct Stereo Visual Odometry), it operates directly on pixel intensities, without any explicit feature matching, and is thus efficient and more accurate than the state-of-the-art stereo-matching-based methods. It applies semi-direct monocular visual odometry running on one camera of the stereo pair, tracking the camera pose and mapping the environment simultaneously; the other camera is used to optimize the scale of monocular visual odometry. We test DSVO in a number of challenging scenes to evaluate its performance and present comparisons with state-of-the-art stereo visual odometry algorithms.
1
0
0
0
0
0
Urban Analytics: Multiplexed and Dynamic Community Networks
In the past decade, cities have experienced rapid growth, expansion, and changes in their community structure. Many aspects of critical urban infrastructure are closely coupled with the human communities that they serve. Urban communities are composed of a multiplex of overlapping factors which can be separated into cultural, religious, socio-economic, political, and geographical layers. In this paper, we review how increasingly available heterogeneous mobile big data sets can be leveraged to detect the community interaction structure using natural language processing and machine learning techniques. A number of community layer and interaction detection algorithms are then reviewed, with a particular focus on robustness, stability, and causality of evolving communities. A better understanding of the structural dynamics and multiplexed relationships can both inform urban planning policies and shape the design of socially coupled urban infrastructure systems.
1
1
0
0
0
0
A quantum phase transition induced by a microscopic boundary condition
Quantum phase transitions are sudden changes in the ground-state wavefunction of a many-body system that can occur as a control parameter such as a concentration or a field strength is varied. They are driven purely by the competition between quantum fluctuations and mutual interactions among constituents of the system, not by thermal fluctuations; hence they can occur even at zero temperature. Examples of quantum phase transitions in many-body physics may be found in systems ranging from high-temperature superconductors to topological insulators. A quantum phase transition usually can be characterized by nonanalyticity/discontinuity in certain order parameters or divergence of the ground state energy eigenvalue and/or its derivatives with respect to certain physical quantities. Here, in a circular one-dimensional spin model with Heisenberg XY interaction and no magnetic field, we observe critical phenomena for the $n_0=1/N\rightarrow0$ Mott insulator caused by a qualitative change of the boundary condition. We demonstrate in the vicinity of the transition point a sudden change in ground-state properties accompanied by an avoided level-crossing between the ground and the first excited states. Notably, our result links conventional quantum phase transitions to microscopic boundary conditions, with significant implications for quantum information, quantum control, and quantum computing.
0
1
0
0
0
0
Asymptotic Goodness-of-Fit Tests for Point Processes Based on Scaled Empirical K-Functions
We study sequences of scaled edge-corrected empirical (generalized) K-functions (modifying Ripley's K-function), each constructed from a single observation of a $d$-dimensional fourth-order stationary point process in a sampling window $W_n$ which, together with some scaling rate, grows unboundedly as $n \to \infty$. Under some natural assumptions it is shown that the normalized difference between the scaled empirical and scaled theoretical K-function converges weakly to a mean-zero Gaussian process with a simple covariance function. This result suggests discrepancy measures between the empirical and theoretical K-function with known limit distribution, which allow goodness-of-fit tests to be performed for checking a hypothesized point process based only on its intensity and (generalized) K-function. Similar test statistics are derived for testing the hypothesis that two independent point processes in $W_n$ have the same distribution, without explicit knowledge of their intensities and K-functions.
0
0
1
1
0
0
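A minimal empirical K-function with the simple border edge correction, evaluated on simulated complete spatial randomness (CSR) where the theoretical value is pi r^2. The estimator below is a textbook variant, not necessarily the scaled version analyzed in the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def k_border(points, r, side):
    """Ripley's K at radius r in a [0, side]^2 window, with the border
    (minus-sampling) edge correction: only points farther than r from
    the boundary serve as reference points."""
    n = len(points)
    lam = n / side**2                        # estimated intensity
    d = squareform(pdist(points))
    inner = np.all((points > r) & (points < side - r), axis=1)
    counts = ((d <= r) & (d > 0))[inner].sum(axis=1)   # neighbors within r
    return counts.mean() / lam

rng = np.random.default_rng(6)
pts = rng.uniform(0, 1, size=(500, 2))       # a CSR sample
for r in (0.02, 0.05, 0.1):
    # under CSR the theoretical K is pi r^2; the scaled difference
    # K_hat - pi r^2 is the object the Gaussian limit above is built from
    print(r, k_border(pts, r, 1.0), np.pi * r**2)
```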
Effects of Arrival Type and Degree of Saturation on Queue Length Estimation at Signalized Intersections
The purpose of this study is to evaluate the relationship between different arrival types and the degree of saturation (X) and overestimation by the HCM 2010 procedure for estimating the back of queue within a study area. Further analysis is performed to establish the relationship between queue length and delay, and also between each of them individually and X, in cases with overestimation. The analyses are based on the 50th percentile queue lengths for data collected at four signalized intersections along a corridor in 4 time periods (off-peak period and AM, Noon and PM peak periods). Based on the statistical test results, arrival type did not play a role in the overestimations. However, there is a significant relationship between the overestimations on minor and major streets and different ranges of X. On minor streets, about 59% of the overestimations occur at X values less than 0.5, while nearly 23% occur under oversaturated conditions with X values greater than 1. The relationship between the amount of overestimation and the degree of saturation should be established based on the numerical amount of overestimation versus X values rather than the relative amounts, since the statistical comparison between the relative amounts of overestimation and X values gives a misleading picture of real-world conditions. There was a significant correlation between field queue and delay data for the cases with overestimated queue lengths in all cases on major and minor streets. Also, field queue is correlated to X in all cases on minor and major streets.
0
0
0
1
0
0
Sourcerer's Apprentice and the study of code snippet migration
On the worldwide web, not only are webpages connected but source code is too. Software development is becoming more accessible to everyone, yet software licensing remains complicated. We need to know whether software licenses are being maintained properly throughout reuse and evolution. Because projects typically employ software licenses to describe how their code may be used and adapted, and most developers do not have the legal expertise to sort out license conflicts, we developed the Sourcerer's Apprentice, a webservice that helps track clone relicensing. In this paper we put the Apprentice to work on empirical studies that demonstrate there is much sharing between StackOverflow code and Python modules and Python documentation that violates the licensing of the original Python modules and documentation: software snippets shared through StackOverflow are often relicensed improperly to CC-BY-SA 3.0 without maintaining the appropriate attribution. We show that many snippets on StackOverflow are inappropriately relicensed by StackOverflow users, jeopardizing the status of software built by companies and developers who reuse StackOverflow snippets.
1
0
0
0
0
0