The multiparty key exchange introduced by Steiner et al. and presented in more general form by the authors is known to be secure against passive attacks. In this paper, an active attack is presented, assuming malicious control of the communications of the last two users for the duration of only the key exchange.
In the context of population evacuations, some citizen volunteers may be willing and able to assist people in difficulty by supporting emergency/evacuation vehicles with their own vehicles. One way to frame these impulses of solidarity would be to list available citizen volunteers and their vehicles (land, sea, air, etc.) in real time, geolocate them with respect to the risk areas to be evacuated, and add them to the fleet of evacuation/rescue vehicles. Because it is difficult to field an effective real-time operational system in a real crisis situation, in this work we propose to add a module that recommends driver/vehicle pairs (with their specific characteristics) to a crisis-management simulation system. To that end, we model and develop an ontology-supported, constraint-based recommender system for crisis-management simulations.
This study examines the performance of a volatility-based strategy using Chinese equity index ETF options. Initially successful, the strategy's effectiveness waned post-2018. By integrating GARCH models for volatility forecasting, the strategy's positions and exposures are dynamically adjusted. The results indicate that such an approach can enhance returns in volatile markets, suggesting potential for refined trading strategies in China's evolving derivatives landscape. The research underscores the importance of adaptive strategies in capturing market opportunities amidst changing trading dynamics.
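As an illustration of the kind of volatility forecasting described (not the paper's exact specification; the data, GARCH order, and inverse-volatility sizing rule here are assumptions), a minimal GARCH(1,1) forecast with the Python `arch` package might look like:

```python
# Minimal sketch: fit a GARCH(1,1) and derive a volatility-scaled position.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_normal(1000)                # synthetic daily returns, in %

res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
forecast = res.forecast(horizon=1)
next_var = forecast.variance.iloc[-1, 0]           # next-day variance forecast
position_scale = 1.0 / np.sqrt(next_var)           # e.g. inverse-vol sizing
print(next_var, position_scale)
```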
We introduce a new algorithm and software for solving linear equations in symmetric diagonally dominant matrices with non-positive off-diagonal entries (SDDM matrices), including Laplacian matrices. We use preconditioned conjugate gradient (PCG) to solve the system of linear equations. Our preconditioner is a variant of the Approximate Cholesky factorization of Kyng and Sachdeva (FOCS 2016). Our factorization approach is simple: we eliminate matrix rows/columns one at a time and update the remaining matrix using sampling to approximate the outcome of complete Cholesky factorization. Unlike earlier approaches, our sampling always maintains connectivity in the remaining non-zero structure. Our algorithm comes with a tuning parameter that upper bounds the number of samples made per original entry. We implement our algorithm in Julia, providing two versions, AC and AC2, that respectively use 1 and 2 samples per original entry. We compare their single-threaded performance to that of current state-of-the-art solvers: Combinatorial Multigrid (CMG), BoomerAMG-preconditioned Krylov solvers from HyPre and PETSc, Lean Algebraic Multigrid (LAMG), and MATLAB's PCG with incomplete Cholesky factorization (ICC). Our evaluation uses a broad class of problems, including all large SDDM matrices from the SuiteSparse collection and diverse programmatically generated instances. Our experiments suggest that our algorithm attains a level of robustness and reliability not seen before in SDDM solvers, while retaining good performance across all instances. Our code and data are public, and we provide a tutorial on how to replicate our tests. We hope that others will adopt this suite of tests as a benchmark, which we refer to as SDDM2023. Our solver code is available at: https://github.com/danspielman/Laplacians.jl/ Our benchmarking data and tutorial are available at: https://rjkyng.github.io/SDDM2023/
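For orientation, here is a minimal PCG run on a small SDDM system using SciPy, with `spilu` as a stand-in preconditioner (the paper's solver instead builds the sampling-based approximate Cholesky factorization described above):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small SDDM system: the second-difference (path-graph Dirichlet Laplacian)
# matrix is symmetric, diagonally dominant, with non-positive off-diagonals.
n = 200
A = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csc")
b = np.random.default_rng(0).standard_normal(n)

ilu = spla.spilu(A)                           # stand-in for approximate Cholesky
M = spla.LinearOperator(A.shape, ilu.solve)   # preconditioner as linear operator
x, info = spla.cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))        # info == 0 on convergence
```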
We study the kinematics of the recently discovered Corona Australis (CrA) chain of clusters by examining the 3D space motion of its young stars using Gaia DR3 and APOGEE-2 data. While we observe linear expansion between the clusters in the Cartesian XY directions, the expansion along Z exhibits a curved pattern. To our knowledge, this is the first time such a nonlinear velocity-position relation has been observed for stellar clusters. We propose a scenario to explain our findings, in which the observed gradient is caused by stellar feedback accelerating the gas away from the Galactic plane. A traceback analysis confirms that the CrA star formation complex was located near the central clusters of the Scorpius Centaurus (Sco-Cen) OB association 10-15 Myr ago. Sco-Cen contains massive stars and thus offers a natural source of feedback. Based on the velocity of the youngest unbound CrA cluster, we estimate that a median number of about two supernovae would have been sufficient to inject the present-day kinetic energy of the CrA molecular cloud. This number agrees with that of recent studies. The head-tail morphology of the CrA molecular cloud further supports the proposed feedback scenario, in which a feedback force pushed the primordial cloud from the Galactic north, leading to the current separation of 100 pc from the center of Sco-Cen. The formation of spatially and temporally well-defined star formation patterns, such as the CrA chain of clusters, is likely a common process in massive star-forming regions.
Using a uniformization map we determine the holographic entanglement entropy for states of a Warped Conformal Field Theory dual to a generic vacuum metric in AdS$_3$ gravity with Comp\`ere--Song--Strominger boundary conditions. We point out how that expression could lead to inequalities that can be interpreted as quantum energy conditions for Warped Conformal Field Theories.
We study the correspondence assigning the vertices of a certain quotient of the local Bruhat-Tits tree for the general linear group over a global function field to conjugacy classes of maximal orders in some quaternion algebras. The interplay between quotient graphs and orders can be used to study representations of orders when the quotient graphs are known, and conversely. We use this converse direction to find a reciprocity law between quotient graphs at different places that suffices to compute, recursively, all local quotient graphs for a matrix algebra over a rational function field.
Dark matter (comprising a quarter of the Universe) is usually assumed to be due to one and only one weakly interacting particle which is neutral and absolutely stable. We consider the possibility that there are several coexisting dark-matter particles, and explore in some detail the generic case where there are two. We discuss how the second dark-matter particle may relax the severe constraints on the parameter space of the Minimal Supersymmetric Standard Model, as well as other verifiable predictions in both direct and indirect search experiments.
This paper exemplifies the implementation of an efficient Information Retrieval (IR) system that computes the similarity between a dataset and a query using fuzzy logic. The TREC dataset has been used for this purpose. The dataset is parsed to generate a keyword index, which is used for similarity comparison with the user query. Each query is assigned a score value based on its fuzzy similarity with the index keywords, and the relevant documents are retrieved based on that score. The performance and accuracy of the proposed fuzzy similarity model are compared with the cosine similarity model using precision-recall curves. The results show that the fuzzy-similarity-based IR system outperforms the cosine baseline.
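Since the exact membership function is not reproduced here, the following is only a plausible sketch of fuzzy query-document scoring: each query term contributes a graded match in [0, 1] against the document's keywords, combined with a fuzzy OR.

```python
# Hypothetical fuzzy retrieval score (illustrative, not the paper's formula).
import difflib

def term_membership(q_term, doc_terms):
    # graded similarity of a query term to its best-matching keyword
    return max((difflib.SequenceMatcher(None, q_term, t).ratio()
                for t in doc_terms), default=0.0)

def fuzzy_score(query, doc_terms):
    score = 0.0
    for q in query.lower().split():
        mu = term_membership(q, doc_terms)
        score = score + mu - score * mu   # fuzzy OR: a (+) b = a + b - ab
    return score

docs = {"d1": ["fuzzy", "logic", "retrieval"], "d2": ["cosine", "vector", "model"]}
ranked = sorted(docs, key=lambda d: fuzzy_score("fuzzy retrieval", docs[d]), reverse=True)
print(ranked)
```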
Ride-sharing is a modern urban-mobility paradigm with tremendous potential in reducing congestion and pollution. Demand-aware design is a promising avenue for addressing a critical challenge in ride-sharing systems, namely joint optimization of request-vehicle assignment and routing for a fleet of vehicles. In this paper, we develop a probabilistic demand-aware framework to tackle the challenge. We focus on maximizing the expected number of passenger pickups, given the probability distributions of future demands. The key idea of our approach is to assign requests to vehicles in a probabilistic manner. This differentiates our work from existing ones and allows us to explore a richer design space to tackle the request-vehicle assignment puzzle with a performance guarantee while keeping the final solution practically implementable. The optimization problem is non-convex, combinatorial, and NP-hard in nature. As a key contribution, we explore the problem structure and propose an elegant approximation of the objective function to develop a dual-subgradient heuristic. We characterize a condition under which the heuristic generates a $\left(1-1/e\right)$ approximation solution. Our solution is simple and scalable, amenable to practical implementation. Results of numerical experiments based on real-world traces in Manhattan show that, as compared to a conventional demand-oblivious scheme, our demand-aware solution improves passenger pickups by up to 46%. The results also show that joint optimization at the fleet level leads to 19% more pickups than that by separate optimizations at individual vehicles.
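As a toy illustration of the probabilistic-assignment idea (ours, not the paper's algorithm; the independence assumption and dimensions are illustrative), the expected-pickup objective takes a coverage-style form, the kind of objective for which $(1-1/e)$-type guarantees typically arise:

```python
import numpy as np

# x[v, r] = probability that vehicle v plans for request r (4 vehicles, 3 requests).
rng = np.random.default_rng(1)
x = rng.dirichlet(np.ones(3), size=4)   # each vehicle spreads its mass over requests

# Under independent realizations, request r is picked up unless every vehicle
# misses it, so the expected number of pickups is sum_r [1 - prod_v (1 - x[v, r])].
expected_pickups = np.sum(1.0 - np.prod(1.0 - x, axis=0))
print(expected_pickups)
```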
In this paper, we propose a new coding scheme for the general relay channel. This coding scheme is in the form of a block Markov code. The transmitter uses a superposition Markov code. The relay compresses the received signal and maps the compressed version of the received signal into a codeword conditioned on the codeword of the previous block. The receiver performs joint decoding after it has received all of the B blocks. We show that this coding scheme can be viewed as a generalization of the well-known Compress-And-Forward (CAF) scheme proposed by Cover and El Gamal. Our coding scheme provides options for preserving the correlation between the channel inputs of the transmitter and the relay, which is not possible in the CAF scheme. Thus, our proposed scheme may potentially yield a larger achievable rate than the CAF scheme.
We study the classical problem of identifying the structure of $P^2(\mu)$, the closure of analytic polynomials in the Lebesgue space $L^2(\mu)$ of a compactly supported Borel measure $\mu$ living in the complex plane. In his influential work, Thomson showed that the space decomposes into a full $L^2$-space and other pieces which are essentially spaces of analytic functions on domains in the plane. For a family of measures $\mu$ supported on the closed unit disk $\overline{\mathbb{D}}$ which have a part on the open disk $\mathbb{D}$ which is similar to the Lebesgue area measure, and a part on the unit circle $\mathbb{T}$ which is the restriction of the Lebesgue linear measure to a general measurable subset $E$ of $\mathbb{T}$, we extend the ideas of Khrushchev and calculate the exact form of the Thomson decomposition of the space $P^2(\mu)$. It turns out that the space splits according to a certain decomposition of measurable subsets of $\mathbb{T}$ which we introduce. We highlight applications to the theory of the Cauchy integral operator and de Branges-Rovnyak spaces.
In quantum field theory, the decay of an extended metastable state into the real ground state is known as ``false vacuum decay'', and it takes place via the nucleation of spatially localized bubbles. Despite the large theoretical effort to estimate the nucleation rate, experimental observations were still missing. Here, we observe bubble nucleation in isolated and highly controllable superfluid atomic systems, and we find good agreement between our results, numerical simulations and instanton theory, opening the way to the emulation of out-of-equilibrium quantum field phenomena in atomic systems.
We study in more detail the dynamics of chiral primaries of the D1/D5 system. From the CFT given by the $S_{N}$ orbifold, a study of correlators resulted in an interacting (collective) theory of chiral operators. In $AdS_{3}\times S^{3}$ SUGRA we concentrate on general 1/2 BPS configurations described in terms of a fundamental string. We first establish a correspondence with the linearized field fluctuations and then present the nonlinear analysis. We evaluate in detail the symplectic form of the general degrees of freedom in SUGRA and confirm the appearance of chiral bosons. We then discuss the appearance of interactions and the cubic vertex, in correspondence with the $S_{N}$ collective field theory representation.
Case law retrieval is the retrieval of judicial decisions relevant to a legal question. It consumes a significant amount of a lawyer's time and is important for ensuring accurate advice and reducing workload. We survey methods for case law retrieval from the past 20 years and outline the problems and challenges facing the evaluation of case law retrieval systems going forward. Limited published work has focused on improving ranking in ad-hoc case law retrieval, even though there has been significant work in other areas of case law retrieval and in legal information retrieval generally; this scarcity is likely due to legal search providers being unwilling to give up the secrets of their success to competitors. Most evaluations of case law retrieval have been undertaken on small collections and focus on related tasks such as question-answering or recommender systems. Work has not focused on Cranfield-style evaluations, and baselines of methods for case law retrieval on publicly available test collections do not exist. This presents a major challenge going forward, although there are reasons to question the extent of this problem, at least in a commercial setting. Without test collections against which to baseline approaches, it cannot be known whether methods are promising. Work by commercial legal search providers shows the effectiveness of natural language systems as well as query expansion for case law retrieval. Machine learning is being applied to more and more legal search tasks, and this undoubtedly represents the future of case law retrieval.
The digital correlator is a crucial element in a modern radio telescope. In this paper we describe a scalable design of the correlator system for the Tianlai pathfinder array, an experiment dedicated to testing the key technologies for conducting a 21cm intensity mapping survey. The correlator is of the FX design: it first performs the Fast Fourier Transform (FFT), including Polyphase Filter Bank (PFB) computation, using a Collaboration for Astronomy Signal Processing and Electronics Research (CASPER) Reconfigurable Open Architecture Computing Hardware-2 (ROACH2) board, and then computes cross-correlations using Graphical Processing Units (GPUs). The design has been tested both in the laboratory and in actual observations.
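To make the FX flow concrete, here is a toy NumPy version (a plain FFT stands in for the PFB+FFT F stage done on the ROACH2, and the averaged cross-multiply is the X stage performed on GPUs):

```python
# Minimal FX-correlator sketch: channelize two inputs, then cross-correlate
# per frequency channel and average in time.
import numpy as np

def fx_correlate(x, y, nchan=64):
    X = np.fft.rfft(x.reshape(-1, nchan), axis=1)   # F stage: channelize
    Y = np.fft.rfft(y.reshape(-1, nchan), axis=1)
    return (X * np.conj(Y)).mean(axis=0)            # X stage: cross-spectrum

rng = np.random.default_rng(0)
common = rng.standard_normal(64 * 256)              # correlated sky signal
vis = fx_correlate(common + 0.1 * rng.standard_normal(common.size),
                   common + 0.1 * rng.standard_normal(common.size))
print(vis.shape)
```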
We study the feasibility of nonlocally compensating for polarization mode dispersion (PMD) when polarization-entangled photons are distributed in fiber-optic channels. We quantify the effectiveness of nonlocal compensation while taking into account the possibility that entanglement is generated through the use of a pulsed optical pump signal.
We consider acoustic scattering in heterogeneous media with piecewise constant wave number. The discretization is carried out using a Galerkin boundary element method in space and Runge-Kutta convolution quadrature in time. We prove well-posedness of the scheme and provide a priori estimates for the convergence in space and time.
Repetitiveness in projective and injective resolutions and its influence on homological dimensions are studied. Some variations on the theme of repetitiveness are introduced, and it is shown that the corresponding invariants lead to very good -- and quite accessible -- upper bounds on various finitistic dimensions in terms of individual modules. These invariants are the `repetition index' and the `syzygy type' of a module $M$ over an artinian ring $\Lambda$. The repetition index measures the degree of repetitiveness among non-projective direct summands of the syzygies of $M$, while the syzygy type of $M$ measures the number of indecomposable modules among direct summands of the syzygies of $M$. It is proved that if $T$ is a right $\Lambda$-module which contains an isomorphic copy of $\Lambda/J(\Lambda)$, then the left big finitistic dimension of $\Lambda$ is bounded above by the repetition index of $T$, which in turn is bounded above by the syzygy type of $T$. The finite dimensional $K$-algebras $\Lambda = {\cal O}/\pi{\cal O}$, where $\cal O$ is a classical order over a discrete valuation ring $D$ with uniformizing parameter $\pi$ and residue class field $K$, are investigated. It is proved that, if $\text{gl.dim.}\, {\cal O} =d<\infty$, then the global repetition index of $\Lambda$ is $d-1$ and all finitely generated $\Lambda$-modules have finite syzygy type. Various examples illustrating the results are presented.
The charged lepton flavour violating (LFV) radiative decays, $\mu\to e+\gamma$, $\tau\to \mu+\gamma$ and $\tau\to e +\gamma$ are investigated in a class of supersymmetric $A_4$ models with three heavy right-handed (RH) Majorana neutrinos, in which the lepton (neutrino) mixing is predicted to leading order (LO) to be tri-bimaximal. The light neutrino masses are generated via the type I see-saw mechanism. The analysis is done within the framework of the minimal supergravity (mSUGRA) scenario, which provides flavour universal boundary conditions at the scale of grand unification $M_X \approx 2 \times 10^{16}$ GeV. Detailed predictions for the rates of the three LFV decays are obtained in two explicit realisations of the $A_4$ models due to Altarelli and Feruglio and Altarelli and Meloni, respectively.
The experimentally observed spectra of heavy vector meson radial excitations show a dependence on two different energy parameters. One is associated with the quark mass and the other with the binding energy levels of the quark anti-quark pair. The first is present in the large mass of the first state, while the other corresponds to the small mass splittings between radial excitations. In this article we show how to reproduce such a behavior with reasonable precision using a holographic model. In the dual picture, the large energy scale shows up from a bulk mass and the small scale comes from the position in anti-de Sitter (AdS) space where the field correlators are calculated. The model determines the masses of four observed S-wave states of charmonium and six S-wave states of bottomonium with 6.1% rms error. In consistency with the physical picture, the large energy parameter is flavor dependent, while the small parameter, associated with the quark anti-quark interaction, is the same for charmonium and bottomonium states.
We study the effect of ferromagnetic metals (FM) on the circularly polarized modes of an electromagnetic cavity and show that broken time-reversal symmetry leads to a dichroic response of the cavity modes. With one spin-split band, the Zeeman coupling between the FM electrons and cavity modes leads to an anticrossing for mode frequencies comparable to the spin splitting. However, this is only the case for one of the circularly polarized modes, while the other is unaffected by the FM, allowing for the determination of the spin-splitting of the FM using polarization-dependent transmission experiments. Moreover, we show that for two spin-split bands, also the lifetimes of the cavity modes display a polarization-dependent response. The change in photon lifetimes can be understood as a suppression due to level attraction with a continuum of Stoner modes with the same wavevector. The reduced lifetime of modes of only one polarization could potentially be used to engineer and control circularly polarized cavities.
The study of biharmonic submanifolds, initiated by B. Y. Chen and G. Y. Jiang independently, has received great attention in the past 30 years, with much important progress. This note attempts to give a short survey of the study of biharmonic Riemannian submersions, which are a dual concept of biharmonic submanifolds (i.e., biharmonic isometric immersions).
We investigate spatial correlations of strain fluctuations in sheared colloidal glasses and simulations of sheared amorphous solids. The correlations reveal a quadrupolar symmetry reminiscent of the strain field due to an Eshelby inclusion. However, they display an algebraic decay $1/r^{\alpha}$, where the exponent $\alpha$ is close to $1$ in the steady state, unlike the Eshelby field, for which $\alpha=3$. The exponent takes values between $3$ and $1$ in the transient stages of deformation. We explain these observations using a simple model based on interacting Eshelby inclusions. As the system is sheared beyond the linear response into plastic flow, the density correlations of inclusions are enhanced, and this emerges as key to understanding the elastoplastic response of the system to applied shear.
We observe several interesting phenomena in a technicolor-like model of electroweak symmetry breaking based on the D4-D8-D8bar system of Sakai and Sugimoto. The benefit of holographic models based on D-brane configurations is that both sides of the holographic duality are well understood. We find that the lightest technicolor resonances contribute negatively to the Peskin-Takeuchi S-parameter, but heavy resonances do not decouple and lead generically to large, positive values of S, consistent with standard estimates in QCD-like theories. We study how the S parameter and the masses and decay constants of the vector and axial-vector techni-resonances vary over a one-parameter family of D8-brane configurations. We discuss possibilities for the consistent truncation of the theory to the first few resonances and suggest some generic predictions of stringy holographic technicolor models.
We show that for the Lindblad evolution defined using (at most) quadratically growing classical Hamiltonians and (at most) linearly growing classical jump functions (quantized into jump operators assumed to satisfy certain ellipticity conditions and modeling interaction with a larger system), the evolution of a quantum observable remains close to the classical Fokker--Planck evolution in the Hilbert--Schmidt norm for times vastly exceeding the Ehrenfest time (the limit of such agreement with no jump operators). The time scale is the same as in the recent papers by Hern\'andez--Ranard--Riedel but the statement and methods are different. The appendix presents numerical experiments illustrating the classical/quantum correspondence in Lindblad evolution and comparing it to the mathematical results.
Virtual reality (VR) over wireless is expected to be one of the killer applications in next-generation communication networks. Nevertheless, the huge data volume, along with stringent requirements on latency and reliability under limited bandwidth resources, makes untethered wireless VR delivery increasingly challenging. Such bottlenecks motivate this work to seek the potential of semantic communication, a new paradigm that promises to significantly ease the resource pressure, for efficient VR delivery. To this end, we propose a novel framework, namely WIreless SEmantic deliveRy for VR (WiserVR), for delivering consecutive 360{\deg} video frames to VR users. Specifically, multiple deep-learning-based modules are well-devised for the transceiver in WiserVR to realize high-performance feature extraction and semantic recovery. Among them, we dedicatedly develop a concept of semantic location graph and leverage the joint semantic-channel coding method with knowledge sharing, not only to substantially reduce communication latency, but also to guarantee adequate transmission reliability and resilience under various channel states. Moreover, an implementation of WiserVR is presented, followed by initial simulations for performance evaluation against benchmarks. Finally, we discuss several open issues and offer feasible solutions to unlock the full potential of WiserVR.
This paper has been withdrawn by the author
We consider protocols to generate quantum entanglement between two remote qubits, through joint time-continuous detection of their spontaneous emission. We demonstrate that schemes based on homodyne detection, leading to diffusive quantum trajectories, lead to identical average entanglement yield as comparable photodetection strategies; this is despite substantial differences in the two-qubit state dynamics between these schemes, which we explore in detail. The ability to use different measurements to achieve the same ends may be of practical significance; the less-well-known diffusive scheme appears far more feasible on superconducting qubit platforms in the near term.
IceCube has observed 80 astrophysical neutrino candidates in the energy range 0.02 < E_\nu/PeV < 2. Deep inelastic scattering of these neutrinos with nucleons in the Antarctic ice sheet probes center-of-mass energies $\sqrt{s} \sim 1$ TeV. By comparing the rates for two classes of observable events, any departure from the benchmark (perturbative QCD) neutrino-nucleon cross section can be constrained. Using the projected sensitivity of the next-generation South Pole neutrino telescope, we show that this facility will provide a unique probe of strong interaction dynamics. In particular, we demonstrate that the high-energy, high-statistics data sample to be recorded by IceCube-Gen2 in the very near future will deliver a direct measurement of the neutrino-nucleon cross section at $\sqrt{s} \sim 1$ TeV, with a precision comparable to perturbative QCD informed by HERA data. We also use IceCube data to extract the neutrino-nucleon cross section at $\sqrt{s} \sim 1$ TeV through a likelihood analysis, considering (for the first time) both the charged-current and neutral-current contributions as free parameters of the likelihood function.
This text reports in detail how SEAL, a modeling framework for the economy based on individual agents and firms, works. Thus, it aims to be a usage manual for those wishing to use SEAL or SEAL's results. As a reference work, it only cites the underlying theoretical and research studies. SEAL is conceived as a laboratory that enables the simulation of the economy with spatially bounded, microeconomic-based computational agents. Part of the novelty of SEAL comes from the possibility of simulating the economy in space and from the instantiation of different public offices, i.e. government institutions, with embedded markets and actual data. SEAL is designed for public policy analysis, specifically analyses related to public finance, taxes, and real estate.
We consider the problem of estimating the causal effect of a treatment on an outcome in linear structural causal models (SCM) with latent confounders when we have access to a single proxy variable. Several methods (such as the difference-in-differences (DiD) estimator or negative outcome control) have been proposed in this setting in the literature. However, these approaches require either restrictive assumptions on the data-generating model or access to at least two proxy variables. We propose a method to estimate the causal effect using cross moments between the treatment, the outcome, and the proxy variable. In particular, we show that the causal effect can be identified with simple arithmetic operations on the cross moments if the latent confounder in the linear SCM is non-Gaussian. In this setting, the DiD estimator provides an unbiased estimate only in the special case where the latent confounder has exactly the same direct causal effects on the outcomes in the pre-treatment and post-treatment phases. This translates to the common trend assumption in DiD, which we effectively relax. Additionally, we provide an impossibility result showing that the causal effect cannot be identified if the observational distribution over the treatment, the outcome, and the proxy is jointly Gaussian. Our experiments on both synthetic and real-world datasets showcase the effectiveness of the proposed approach in estimating the causal effect.
In order to meet the needs of Fermilab's planned post-collider experimental program, the total proton throughput of the 8 GeV Booster accelerator must be nearly doubled within the next two years. A system of 48 ramped corrector magnets has recently been installed in the Booster to help improve efficiency and allow for higher beam intensity without exceeding safe radiation levels. We present the preliminary results of beta function measurements made using these corrector magnets. Our goal is to use the correctors to reduce irregularities in the beta function, and ultimately to introduce localized beta bumps to reduce beam loss or direct losses towards collimators.
We study contactless target probing based on stimulation by a radio frequency (RF) signal. The transmit signal is dispatched from a transmitter equipped with a two-dimensional antenna array. The signals reflected from the targets are then received at multiple distributed sensors. The observations at the sensors are amplified and forwarded to the fusion center. Afterwards, the fusion center performs space-time post-processing to extract the maximum common information between the received signal and the targets' impulse responses. Optimal power allocation at the transmitter and amplification at the sensors are investigated. The sum-power minimization problem turns out to be non-convex. We propose an efficient algorithm to solve this problem iteratively. With maximum-ratio transmission (MRT) at the transmitter, maximum-ratio combining (MRC) of the space-time received signal vector is the optimal receiver at sufficiently low signal-to-interference-plus-noise ratio (SINR). However, zero-forcing (ZF) at the fusion center outperforms MRC at higher SINR demands.
A standard way to move particles in an SMC sampler is to apply several steps of an MCMC (Markov chain Monte Carlo) kernel. Unfortunately, it is not clear how many steps need to be performed for optimal performance. In addition, the outputs of the intermediate steps are discarded and thus wasted. We propose a new, waste-free SMC algorithm which uses the outputs of all these intermediate MCMC steps as particles. We establish that its output is consistent and asymptotically normal. We use the expression of the asymptotic variance to develop various insights on how to implement the algorithm in practice. In particular, we develop a method to estimate, from a single run of the algorithm, the asymptotic variance of any particle estimate. We show empirically, through a range of numerical examples, that waste-free SMC tends to outperform standard SMC samplers, especially in situations where the mixing of the considered MCMC kernels decreases across iterations (as in tempering or rare-event problems).
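A stripped-down sketch of the waste-free move step (the toy target and parameters are ours, not the authors' implementation): resample M starting points, run each through k MCMC steps, and keep every intermediate state, so N = M(k+1) particles are retained rather than only the final ones.

```python
import numpy as np

def rw_mh_step(x, logpdf, scale, rng):
    # one random-walk Metropolis-Hastings step
    prop = x + scale * rng.standard_normal()
    return prop if np.log(rng.uniform()) < logpdf(prop) - logpdf(x) else x

def waste_free_move(starts, logpdf, k, scale, rng):
    chains = [list(starts)]
    for _ in range(k):
        chains.append([rw_mh_step(x, logpdf, scale, rng) for x in chains[-1]])
    return np.concatenate(chains)   # all intermediate states become particles

rng = np.random.default_rng(0)
logpdf = lambda x: -0.5 * x ** 2                          # toy target: N(0, 1)
starts = rng.choice(rng.standard_normal(1000), size=50)   # M resampled points
particles = waste_free_move(starts, logpdf, k=19, scale=1.0, rng=rng)
print(particles.size)                                     # N = 50 * 20 = 1000
```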
The ability of bumblebee gravity models to explain dark energy, the phenomenon responsible for the universe's observed accelerated expansion, is one of their most significant applications. An effect that causes faster expansion can be linked to how much the Lorentz symmetry of our universe is violated. Moreover, since we do not know what generates dark energy, the bumblebee gravity theory seems highly plausible. By utilizing the physical changes happening around a rotating bumblebee black hole (RBBH), we aim to obtain more specific details about the bumblebee black hole's spacetime and our universe. However, following the literature, we consider the slowly spinning RBBH (SRBBH) spacetime, which is known to higher accuracy, instead of the general RBBH. To this end, we first employ the Rindler--Ishak method (RIM), which enables us to study how light is bent in the vicinity of a gravitational lens. We evaluate the deflection angle of null geodesics in the equatorial plane of the SRBBH spacetime. Then, we use astrophysical data to see the effect of the Lorentz symmetry breaking (LSB) parameter on the bending angle of light for numerous astrophysical stars and black holes. We also derive the analytical greybody factors (GFs) and quasinormal modes (QNMs) of the SRBBH. Finally, we visualize and discuss the results obtained in the conclusion section.
By leveraging the power of Large Language Models (LLMs) and speech foundation models, state-of-the-art speech-text bimodal systems can achieve challenging tasks like spoken translation (ST) and question answering (SQA) altogether with much simpler architectures. In this paper, we utilize the capabilities of the Whisper encoder and the pre-trained Yi-6B. Empirical results reveal that modal alignment can be achieved with a one-layer module and a hundred hours of speech-text multitask corpus. We further swap the Yi-6B with its human-preference-aligned version, Yi-6B-Chat, during inference, and discover that the alignment capability carries over. In addition, the alignment subspace revealed by singular value decomposition (SVD) implies that the linear alignment subspace is sparse, which leaves open the possibility of concatenating other features, such as voice-print or video, to expand the modalities.
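A hedged sketch of what such a one-layer alignment module can look like (dimensions, striding, and names are illustrative assumptions, not the paper's exact configuration): project Whisper encoder states into the LLM embedding space so they can be prepended to text embeddings.

```python
import torch
import torch.nn as nn

class SpeechProjector(nn.Module):
    def __init__(self, d_speech=1280, d_llm=4096, stride=4):
        super().__init__()
        self.stride = stride                          # downsample speech frames
        self.proj = nn.Linear(d_speech * stride, d_llm)

    def forward(self, speech_states):                 # (B, T, d_speech)
        B, T, D = speech_states.shape
        T = (T // self.stride) * self.stride
        x = speech_states[:, :T].reshape(B, T // self.stride, D * self.stride)
        return self.proj(x)                           # (B, T/stride, d_llm)

proj = SpeechProjector()
speech = torch.randn(2, 100, 1280)                    # e.g. Whisper-large states
print(proj(speech).shape)                             # torch.Size([2, 25, 4096])
```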
Chern-Simons theory can be defined on a cell complex, such as a network of bubbles, which is not a (Hausdorff) manifold. Requiring gauge invariance determines the action, including interaction terms at the intersections, and imposes a relation between the coupling constants of the CS terms on adjacent cell walls. We also find simple conservation laws for charges at the intersections.
We have produced a new conformal map of the universe illustrating recent discoveries, ranging from Kuiper belt objects in the Solar system, to the galaxies and quasars from the Sloan Digital Sky Survey. This map projection, based on the logarithm map of the complex plane, preserves shapes locally, and yet is able to display the entire range of astronomical scales from the Earth's neighborhood to the cosmic microwave background. The conformal nature of the projection, preserving shapes locally, may be of particular use for analyzing large scale structure. Prominent in the map is a Sloan Great Wall of galaxies 1.37 billion light years long, 80% longer than the Great Wall discovered by Geller and Huchra and therefore the largest observed structure in the universe.
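The projection idea can be demonstrated in a few lines: the complex logarithm is conformal, so it preserves shapes locally while compressing the enormous range of radial scales onto a linear axis (a toy 2D slice, not the full map pipeline):

```python
import numpy as np

def conformal_log(x, y):
    z = x + 1j * y        # position in a plane through the observer
    w = np.log(z)         # Re(w) = ln(r), Im(w) = angle; angle-preserving map
    return w.real, w.imag

# points at vastly different scales land on one compact, shape-preserving chart
for r in (1.0, 1e3, 1e6, 1e9):
    print(conformal_log(r, r))
```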
Neutron correlation spectroscopy can exceed direct spectroscopy in the incoming beam intensity by up to two orders of magnitude at the same energy resolution. However, the propagation of the counting noise in the correlation algorithm of data reduction is disadvantageous for the lowest intensity parts of the observed spectrum. To mitigate this effect at pulsed neutron sources we propose two dimensional time-of-flight recording of each neutron detection event: with respect to both the neutron source pulses and to the rotation phase of the pseudo-random beam modulation statistical chopper. We have identified a formulation of the data reduction algorithm by matching the data processing time channel width to the inherent time resolution of this chopper, which makes the reconstruction of the direct time-of-flight spectra exact and independent of all other contributions to instrumental resolution. Two ways are proposed for most flexible choice of intensity vs. resolution without changing the statistical chopper or its speed: changing the size of the beam window during the experiment or varying intensity and resolution options in the data reduction algorithm after the experiment. This latter is a unique and very promising new potential offered by the correlation method. Furthermore, it displays reduced sensitivity to external background, also due to the much higher signal intensity. This is particularly advantageous for extending the operational range to higher neutron energies. In hot and thermal neutron vibrational spectroscopy, the statistical chopper approach allows us to achieve very significant gains in spectrometer efficiency compared to using conventional choppers. High intensity for the most intense features in the spectra and the reduced sensitivity to sample independent background make correlation spectroscopy a most powerful choice for studying small samples.
We introduce a new characteristic of jets called mass area. It is defined so as to measure the susceptibility of the jet's mass to contamination from soft background. The mass area is a close relative of the recently introduced catchment area of jets. We define it also in two variants: passive and active. As a preparatory step, we generalise the results for passive and active areas of two-particle jets to the case where the two constituent particles have arbitrary transverse momenta. As a main part of our study, we use the mass area to analyse a range of modern jet algorithms acting on simple one and two-particle systems. We find a whole variety of behaviours of passive and active mass areas depending on the algorithm, relative hardness of particles or their separation. We also study mass areas of jets from Monte Carlo simulations as well as give an example of how the concept of mass area can be used to correct jets for contamination from pileup. Our results show that the information provided by the mass area can be very useful in a range of jet-based analyses.
We present a method to register individual members of a robotic swarm in an augmented reality display while showing relevant information about swarm dynamics to the user that would be otherwise hidden. Individual swarm members and clusters of the same group are identified by their color, and by blinking at a specific time interval that is distinct from the time interval at which their neighbors blink. We show that this problem is an instance of the graph coloring problem, which can be solved in a distributed manner in O(log(n)) time. We demonstrate our approach using a swarm chemistry simulation in which robots simulate individual atoms that form molecules following the rules of chemistry. Augmented reality is then used to display information about the internal state of individual swarm members as well as their topological relationship, corresponding to molecular bonds.
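As a sketch of the distributed-coloring step the registration relies on (a simplified randomized variant for illustration, not necessarily the exact O(log(n)) algorithm analyzed in the paper):

```python
import random

def distributed_coloring(adj, ncolors, rng=random.Random(0)):
    # each uncolored node proposes a random color unused by colored neighbors,
    # and keeps it if no active neighbor proposed the same color this round
    color = {v: None for v in adj}
    while any(c is None for c in color.values()):
        proposals = {}
        for v in adj:
            if color[v] is None:
                taken = {color[u] for u in adj[v] if color[u] is not None}
                proposals[v] = rng.choice([c for c in range(ncolors) if c not in taken])
        for v, c in proposals.items():
            if all(proposals.get(u) != c for u in adj[v]):
                color[v] = c
    return color

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(distributed_coloring(ring, ncolors=3))
```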
In the present work we discuss a higher-order multi-term partial differential equation (PDE) with the Caputo-Fabrizio fractional derivative in time. We investigate a boundary value problem for a fractional heat equation involving higher-order Caputo-Fabrizio derivatives in the time variable. Using the method of separation of variables and integration by parts, we reduce the fractional-order PDE to an integer-order one. We represent the explicit solution of the formulated problem in a particular case by a Fourier series.
Generating realistic tissue images with annotations is a challenging task that is important in many computational histopathology applications. Synthetically generated images and annotations are valuable for training and evaluating algorithms in this domain. To address this, we propose an interactive framework for generating pairs of realistic colorectal cancer histology images and corresponding glandular masks from glandular structure layouts. The framework accurately captures vital features like stroma, goblet cells, and glandular lumen. Users can control gland appearance by adjusting parameters such as the number of glands, their locations, and sizes. The generated images exhibit good Fréchet Inception Distance (FID) scores compared to the state-of-the-art image-to-image translation model. Additionally, we demonstrate the utility of our synthetic annotations for evaluating gland segmentation algorithms. Furthermore, we present a methodology for constructing glandular masks using advanced deep generative models, such as latent diffusion models. These masks enable tissue image generation through a residual encoder-decoder network.
Let H be a differential graded Hopf algebra over a field k. This paper gives an explicit construction of a triple cochain complex that defines the Hochschild-Cartier cohomology of H. A certain truncation of this complex is the appropriate setting for deforming H as an H(q)-structure. The direct limit of all such truncations is the appropriate setting for deforming H as a strongly homotopy associative structure. Sign complications are systematically controlled. The connection between rational perturbation theory and the deformation theory of certain free commutative differential graded algebras is clarified.
We have searched for a T/CP violation signature arising from an electric dipole form factor (d_tau) of the tau lepton in the e+e- -> tau+tau- reaction. Using an optimal observable method for 29.5 fb^{-1} of data collected with the Belle detector at the KEKB collider at sqrt{s}=10.58 GeV, we obtained the preliminary result Re(d_tau) = (1.15 +- 1.70) x 10^{-17} ecm and Im(d_tau) = (-0.83 +- 0.86) x 10^{-17} ecm for the real and imaginary parts of d_tau, respectively, and set the 95% confidence level limits -2.2 < Re(d_tau) < 4.5 (10^{-17} ecm) and -2.5 < Im(d_tau) < 0.8 (10^{-17} ecm).
In this paper we introduce the nullity of signed graphs and give some results on the nullity of signed graphs with pendant trees. We characterize the unicyclic signed graphs of order n with nullity n-2, n-3, n-4, and n-5, respectively.
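For readers new to the invariant: the nullity is the multiplicity of the zero eigenvalue of the signed adjacency matrix, computable as n minus its rank. A small numerical check (our toy example, unrelated to the paper's classification):

```python
import numpy as np

def nullity(A):
    # multiplicity of eigenvalue 0 = n - rank of the signed adjacency matrix
    return A.shape[0] - np.linalg.matrix_rank(A)

# signed 4-cycles: all-positive (balanced) vs one negative edge (unbalanced)
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
C4_minus = C4.copy()
C4_minus[0, 3] = C4_minus[3, 0] = -1
print(nullity(C4), nullity(C4_minus))   # 2 and 0
```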
In this paper, we apply the NetFV and NetVLAD layers to the end-to-end language identification task. The NetFV and NetVLAD layers are differentiable implementations of the standard Fisher Vector and Vector of Locally Aggregated Descriptors (VLAD) methods, respectively. Both of them can encode a sequence of feature vectors into a fixed-dimensional vector, which is very important for processing variable-length utterances. We first present the connections and differences between the classical i-vector and the aforementioned encoding schemes. Then, we construct a flexible end-to-end framework, including a convolutional neural network (CNN) architecture and an encoding layer (NetFV or NetVLAD), for the language identification task. Experimental results on the NIST LRE 2007 closed-set task show that the proposed system achieves significant EER reductions against the conventional i-vector baseline and the CNN temporal average pooling system, respectively.
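A minimal NetVLAD-layer sketch (dimensions and cluster count are illustrative; the real system couples this to a CNN front-end): soft-assign each frame to learned centers, aggregate residuals per cluster, and normalize to obtain a fixed-length utterance vector.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetVLAD(nn.Module):
    def __init__(self, dim=64, K=8):
        super().__init__()
        self.assign = nn.Linear(dim, K)               # soft-assignment logits
        self.centers = nn.Parameter(torch.randn(K, dim))

    def forward(self, x):                             # x: (B, T, dim)
        a = F.softmax(self.assign(x), dim=-1)         # (B, T, K)
        resid = x.unsqueeze(2) - self.centers         # (B, T, K, dim)
        vlad = (a.unsqueeze(-1) * resid).sum(1)       # (B, K, dim)
        vlad = F.normalize(vlad, dim=-1)              # intra-normalization
        return F.normalize(vlad.flatten(1), dim=-1)   # (B, K*dim)

feats = torch.randn(4, 300, 64)                       # variable-length features
print(NetVLAD()(feats).shape)                         # torch.Size([4, 512])
```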
We use a relativistic QCD potential model to compute the strong coupling constant $g$ appearing in the effective Lagrangian which describes the interaction of $0^-$ and $1^-$ $\bar q Q$ states with soft pions in the limit $m_Q \to \infty$. We compare our results with other approaches; in particular, in the non-relativistic limit, we are able to reproduce the constituent quark model result $g=1$, while the inclusion of relativistic effects due to the light quark gives $g={1 \over 3}$, in agreement with QCD sum rules. We also estimate heavy meson radiative decay rates, with results in agreement with available experimental data.
We define multifractional Hermite processes which generalize and extend both multifractional Brownian motion and Hermite processes. It is done by substituting the Hurst parameter in the definition of Hermite processes as a multiple Wiener-It\^o integral by a Hurst function. Then, we study the pointwise regularity of these processes, their local asymptotic self-similarity and some fractal dimensions of their graph. Our results show that the fundamental properties of multifractional Hermite processes are, as desired, governed by the Hurst function. Complements are given in the second order Wiener chaos, using facts from Malliavin calculus.
We explore the influence of non-geodesic pressure forces that are present in an accretion disk on the frequencies of its axisymmetric and non-axisymmetric epicyclic oscillation modes. We discuss the implications for models of high-frequency quasi-periodic oscillations (QPOs) that have been observed in the X-ray flux of accreting black holes (BHs) in the three Galactic microquasars GRS 1915+105, GRO J1655$-$40 and XTE J1550$-$564. We focus on previously considered QPO models that deal with low azimuthal number epicyclic modes, $\lvert m \rvert \leq 2$, and outline the consequences for the estimates of BH spin, $a\in[0,1]$. For four out of six examined models, we find only small, rather insignificant changes compared to the geodesic case. For the other two models, on the other hand, there is a fair increase of the estimated upper limit on the spin. Regarding the QPO models' falsifiability, we find that one particular model from the examined set is incompatible with the data. If the microquasars' spectral spin estimates that point to $a>0.65$ were fully confirmed, two more QPO models would be ruled out. Moreover, if two very different values of the spin, such as $a\approx 0.65$ in GRO J1655$-$40 vs. $a\approx 1$ in GRS 1915+105, were confirmed, all the models except one would remain unsupported by our results. Finally, we discuss the implications for a model recently proposed in the context of neutron star (NS) QPOs as a disk-oscillation-based modification of the relativistic precession model. This model provides overall better fits of the NS data and predicts more realistic values of the NS mass compared to the relativistic precession model. We conclude that it also implies a significantly higher upper limit on the microquasar's BH spin ($a\sim 0.75$ vs. $a\sim 0.55$).
While computer modeling and simulation are crucial for understanding scientometrics, their practical use in the literature remains somewhat limited. In this study, we establish a joint coauthorship and citation network using preferential attachment. As papers get published, we update the coauthorship network based on each paper's author list, representing the collaborative team behind it. This team is formed considering the number of collaborations each author has, and we introduce new authors at a fixed probability, expanding the coauthorship network. Simultaneously, as each paper cites a specific number of references, we add an equivalent number of citations to the citation network upon publication. The likelihood of a paper being cited depends on its existing citations, fitness value, and age. We then calculate the journal impact factor and h-index, using them as examples of scientific impact indicators. After thorough validation, we conduct case studies to analyze the impact of different parameters on the journal impact factor and h-index. The findings reveal that increasing the reference number $N$ or reducing the paper's lifetime $\theta$ significantly boosts the journal impact factor and average h-index. On the other hand, enlarging the team size $m$ without introducing new authors, or decreasing the probability of newcomers $p$, notably increases the average h-index. In conclusion, it is evident that various parameters influence scientific impact indicators, and their interpretation can be manipulated by authors. Thus, exploring the impact of these parameters and continually refining scientific impact indicators are essential. The modeling and simulation method serves as a powerful tool in this ongoing process, and the model can be easily extended to include other scientific impact indicators and scenarios.
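A toy version of the citation side of such a model (functional forms and parameter values are illustrative assumptions, not the paper's calibrated model): each new paper cites $N$ references chosen with probability proportional to (citations + 1) x fitness x an aging factor exp(-age/$\theta$).

```python
import numpy as np

def grow(n_papers=2000, N=10, theta=200.0, rng=np.random.default_rng(0)):
    cites = np.zeros(n_papers)
    fitness = rng.lognormal(size=n_papers)
    for t in range(1, n_papers):
        age = t - np.arange(t)
        # preferential attachment damped by an exponential aging term
        w = (cites[:t] + 1) * fitness[:t] * np.exp(-age / theta)
        refs = rng.choice(t, size=min(N, t), replace=False, p=w / w.sum())
        cites[refs] += 1
    return cites

c = grow()
print(c.max(), c.mean())
```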
The discriminants of certain polynomials related to Chebyshev polynomials factor into the product of two polynomials, one of which has coefficients that are much larger than the other's. Remarkably, these polynomials of dissimilar size have "almost" the same roots, and their discriminants involve exactly the same prime factors.
This paper presents a deep learning approach for image retrieval and pattern spotting in digital collections of historical documents. First, a region proposal algorithm detects object candidates in the document page images. Next, deep learning models are used for feature extraction, considering two distinct variants, which provide either real-valued or binary code representations. Finally, candidate images are ranked by computing the feature similarity with a given input query. A robust experimental protocol evaluates the proposed approach considering each representation scheme (real-valued and binary code) on the DocExplore image database. The experimental results show that the proposed deep models compare favorably to the state-of-the-art image retrieval approaches for images of historical documents, outperforming other deep models by 2.56 percentage points using the same techniques for pattern spotting. In addition, the proposed approach reduces the search time by up to 200x and the storage cost by up to 6,000x when compared to related works based on real-valued representations.
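The real-valued vs. binary-code trade-off can be illustrated as follows (synthetic features, not the paper's learned descriptors): cosine ranking on float vectors vs. Hamming ranking on packed binary codes, the latter being what drives the storage and search-time savings.

```python
import numpy as np

rng = np.random.default_rng(0)
db_real = rng.standard_normal((10000, 128)).astype(np.float32)
q_real = rng.standard_normal(128).astype(np.float32)

# real-valued ranking: cosine similarity
cos = db_real @ q_real / (np.linalg.norm(db_real, axis=1) * np.linalg.norm(q_real))
top_real = np.argsort(-cos)[:5]

# binary ranking: 128 bits -> 16 bytes per image, compared via XOR + popcount
db_bin = np.packbits(db_real > 0, axis=1)
q_bin = np.packbits(q_real > 0)
ham = np.unpackbits(db_bin ^ q_bin, axis=1).sum(axis=1)
top_bin = np.argsort(ham)[:5]
print(top_real, top_bin)
```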
While there has been growing interest in noncommutative spaces in recent times, most examples have been based on the simplest noncommutative algebra: [x_i,x_j]=i theta_{ij}. Here we present new classes of (non-formal) deformed products associated to linear Lie algebras of the kind [x_i,x_j]=ic_{ij}^k x_k. For all possible three-dimensional cases, we define a new star product and discuss its properties. To complete the analysis of these novel noncommutative spaces, we introduce noncompact spectral triples and the concept of a star triple, a specialization of the spectral triple to deformations of the algebra of functions on a noncompact manifold. We examine the generalization to the noncompact case of Connes' conditions for noncommutative spin geometries, and, in the framework of the new star products, we exhibit some candidates for a Dirac operator. On the technical level, properties of the Moyal multiplier algebra M(R_\theta^{2n}) are elucidated.
We show that the problem of constructing tree-structured descriptions of data layouts that are optimal with respect to space or other criteria, from given sequences of displacements, can be solved in polynomial time. The problem is relevant for efficient compiler and library support for communication of noncontiguous data, where tree-structured descriptions with low-degree nodes and small index arrays are beneficial for the communication software and hardware. An important example is the Message-Passing Interface (MPI), which has a mechanism for describing arbitrary data layouts as trees using a set of increasingly general constructors. Our algorithm shows that the so-called MPI datatype reconstruction problem by trees with the full set of MPI constructors can be solved optimally in polynomial time, refuting previous conjectures that the problem is NP-hard. Our algorithm can handle further, natural constructors, currently not found in MPI. Our algorithm is based on dynamic programming, and requires the solution of a series of shortest path problems on an incrementally built, directed, acyclic graph. The algorithm runs in $\mathcal O(n^4)$ time steps and requires $\mathcal O(n^2)$ space for input displacement sequences of length $n$.
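For context, an MPI constructor tree for a simple noncontiguous layout looks like this in mpi4py (an illustrative layout of our choosing, not the reconstruction algorithm itself); such nested constructors are exactly the tree-structured descriptions the algorithm optimizes over:

```python
from mpi4py import MPI

# Leaf: a block of 2 contiguous doubles; inner node: 4 such blocks, each
# starting 2 elements (of the leaf type) apart -- a small constructor tree.
pair = MPI.DOUBLE.Create_contiguous(2)
layout = pair.Create_vector(4, 1, 2)
layout.Commit()
print(layout.Get_size())   # 4 blocks * 2 doubles * 8 bytes = 64
layout.Free()
pair.Free()
```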
We observed broken-symmetry quantum Hall effects and level crossings between spin- and valley-resolved Landau levels (LLs) in Bernal-stacked trilayer graphene. When the magnetic field was tilted with respect to the sample normal from $0^{\circ}$ to $66^\circ$, the LL crossings formed at intersections of the zeroth and second LLs from the monolayer-graphene-like and bilayer-graphene-like subbands, respectively, exhibited a sequence of transitions. The results indicate that the LLs from different subbands are coupled by in-plane magnetic fields ($B_{\parallel}$), which we explained by developing the tight-binding model Hamiltonian of trilayer graphene under $B_{\parallel}$.
We have studied electron-induced ion-pair dissociation dynamics of CO using the state-of-the-art velocity map imaging technique in combination with a time-of-flight-based two-field mass spectrometer. Extracting the characteristics of the nascent O$^-$/CO atomic anionic fragments from low-energy (25 - 45 eV) electron-molecule scattering, we have, for the first time, directly detected the existence of an S-wave-resonance-mediated trilobite-like state, a novel molecular binding mechanism predicted by Greene \textit{et al.} \cite{greene2000creation}. The energy balance demands that the ion-pair dissociation (IPD) proceed via a long-range (<1000 Bohr radii) heavy Rydberg system. A modified Van Brunt expression capturing deviations from the dipole-Born approximation is used to model the angular distributions (AD) of the anionic atomic fragments. The AD fits reveal that the final states are dominantly associated with $\Sigma$ symmetries, with a minor contribution from $\Pi$ symmetric states that maps the three-dimensional oscillation of the unusual Born-Oppenheimer potential.
In this paper, we study the dynamics of the Chern-Simons Inflation Model proposed by Alexander, Marciano and Spergel. According to this model, inflation begins when a fermion current interacts with a turbulent gauge field in a space larger than some critical size. This mechanism appears to work by driving energy from the initial random spectrum into a narrow band of frequencies, similar to the inverse energy cascade seen in MHD turbulence. In this work we focus on the dynamics of the interaction using phase diagrams and a thorough analysis of the evolution equations. We show that in this model inflation is caused by an over-damped harmonic oscillator driving waves in the gauge field at their resonance frequency.
We present VLA 3.5 cm continuum observations of the Serpens cloud core, in which 22 radio continuum sources are detected. 16 of the 22 cm sources are suggested to be associated with young stellar objects (Class 0, Class I, flat-spectrum, and Class II) of the young Serpens cluster; the rest of the VLA sources are plausibly background objects. Most of the Serpens cm sources likely represent thermal radio jets; on the other hand, the radio continuum emission of some sources could be due to a gyrosynchrotron mechanism arising from coronally active young stars. The Serpens VLA sources are spatially distributed into two groups: one is located towards the NW clump of the Serpens core, where only Class 0 and Class I protostars are found to show cm emission, and a second group is located towards the SE clump, where radio continuum sources are associated with objects in evolutionary classes from Class 0 to Class II. This subgrouping is similar to that found in the near-IR, mid-IR and mm wavelength regimes.
Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence. Previous studies with traditional models have shown that syntactic information can make remarkable contributions to SRL performance; however, the necessity of syntactic information was challenged by a few recent neural SRL studies that demonstrate impressive performance without syntactic backbones, suggesting that syntax becomes much less important for neural semantic role labeling, especially when paired with deep neural networks and large-scale pre-trained language models. Despite this notion, the neural SRL field still lacks a systematic and full investigation of the relevance of syntactic information to SRL, in both dependency-based and span-based SRL and in both monolingual and multilingual settings. This paper intends to quantify the importance of syntactic information for neural SRL in the deep learning framework. We introduce three typical SRL frameworks (baselines), sequence-based, tree-based, and graph-based, which are accompanied by two categories of exploiting syntactic information: syntax pruning-based and syntax feature-based. Experiments are conducted on the CoNLL-2005, 2009, and 2012 benchmarks for all languages available, and the results show that neural SRL models can still benefit from syntactic information under certain conditions. Furthermore, we show the quantitative significance of syntax to neural SRL models together with a thorough empirical survey of existing models.
Strong nonlinearity at the single photon level represents a crucial enabling tool for optical quantum technologies. Here we report on experimental implementation of a strong Kerr nonlinearity by measurement-induced quantum operations on weak quantum states of light. Our scheme coherently combines two sequences of single photon addition and subtraction to induce a nonlinear phase shift at the single photon level. We probe the induced nonlinearity with weak coherent states and characterize the output non-Gaussian states with quantum state tomography. The strong nonlinearity is clearly witnessed as a change of sign of specific off-diagonal density matrix elements in Fock basis.
Recent work on evaluating grammatical knowledge in pretrained sentence encoders gives a fine-grained view of a small number of phenomena. We introduce a new analysis dataset that also has broad coverage of linguistic phenomena. We annotate the development set of the Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2018) for the presence of 13 classes of syntactic phenomena including various forms of argument alternations, movement, and modification. We use this analysis set to investigate the grammatical knowledge of three pretrained encoders: BERT (Devlin et al., 2018), GPT (Radford et al., 2018), and the BiLSTM baseline from Warstadt et al. We find that these models have a strong command of complex or non-canonical argument structures like ditransitives (Sue gave Dan a book) and passives (The book was read). Sentences with long distance dependencies like questions (What do you think I ate?) challenge all models, but for these, BERT and GPT have a distinct advantage over the baseline. We conclude that recent sentence encoders, despite showing near-human performance on acceptability classification overall, still fail to make fine-grained grammaticality distinctions for many complex syntactic structures.
Over the years, the directional distribution functions of wind-generated wave fields have been assumed to be unimodal. While details of the various functional forms differ, these directional models suggest that waves of all spectral components propagate primarily in the wind direction. The beamwidth of the directional distribution is narrowest near the spectral peak frequency and increases toward both higher and lower frequencies. Recent advances in global positioning, laser ranging, and computer technologies have made it possible to acquire high-resolution 3D topography of ocean surface waves. Directional spectral analysis of the ocean surface topography clearly shows that in a young wave field, two dominant wave systems travel at oblique angles to the wind and the ocean surface displays a crosshatched pattern. One possible mechanism generating this bimodal directional wave field is resonant propagation, as suggested by Phillips' resonance theory of wind-wave generation. For a more mature wave field, wave components shorter than the peak wavelength also show bimodal directional distributions symmetric about the dominant wave direction. The latter bimodal directionality is produced by the Hasselmann nonlinear wave-wave interaction mechanism. The implications of these directional observations for remote sensing (directional characteristics of ocean surface roughness) and air-sea interaction studies (directional properties of mass, momentum, and energy transfers) are significant.
We consider (local) parametrizations of Teichmuller space $T_{g,n}$ (of genus $g$ hyperbolic surfaces with $n$ boundary components) by lengths of $6g-6+3n$ geodesics. We find a large family of suitable sets of $6g-6+3n$ geodesics, each set forming a special structure called an "admissible double pants decomposition". For admissible double pants decompositions containing no double curves, we show that the lengths of curves contained in the decomposition determine the point of $T_{g,n}$ up to finitely many choices. Moreover, these lengths provide a local coordinate in a neighborhood of all points of $T_{g,n}\setminus X$, where $X$ is a union of $3g-3+n$ hypersurfaces. Furthermore, there exists a groupoid acting transitively on admissible double pants decompositions and generated by transformations exchanging only one curve of the decomposition. The local charts arising from different double pants decompositions form an atlas covering the Teichmuller space. The gluings of adjacent charts come from the elementary transformations of the decompositions, and the gluing functions are algebraic. The same charts provide an atlas for a large part of the boundary strata in the Deligne-Mumford compactification of the moduli space.
We consider the fundamental problem of constructing fast and small circuits for binary addition. We propose a new algorithm with running time $\mathcal O(n \log_2 n)$ for constructing linear-size $n$-bit adder circuits with a significantly better depth guarantee compared to previous approaches: Our circuits have a depth of at most $\log_2 n + \log_2 \log_2 n + \log_2 \log_2 \log_2 n + \text{const}$, improving upon the previously best circuits by [12] with a depth of at most $\log_2 n + 8 \sqrt{\log_2 n} + 6 \log_2 \log_2 n + \text{const}$. Hence, we decrease the gap to the lower bound of $\log_2 n + \log_2 \log_2 n + \text{const}$ by [5] significantly from $\mathcal O (\sqrt{\log_2 n})$ to $\mathcal O(\log_2 \log_2 \log_2 n)$. Our core routine is a new algorithm for the construction of a circuit for a single carry bit, or, more generally, for an And-Or path, i.e., a Boolean function of type $t_0 \lor ( t_1 \land (t_2 \lor ( \dots t_{m-1}) \dots ))$. We compute linear-size And-Or path circuits with a depth of at most $\log_2 m + \log_2 \log_2 m + 0.65$ in time $\mathcal O(m \log_2 m)$. These are the first And-Or path circuits known that, up to an additive constant, match the lower bound by [5] and at the same time have a linear size. The previously fastest And-Or path circuits are only by an additive constant worse in depth, but have a much higher size in the order of $\mathcal O (m \log_2 m)$.
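To make the objects concrete: the abstract's And-Or path is the alternating Boolean function below, and the standard ripple-carry identity c_{i+1} = g_i OR (p_i AND c_i) shows why every carry bit of an adder is exactly such a path. A minimal illustrative sketch (ours; the paper's contribution is the low-depth circuit construction, which is not reproduced here):

```python
# Illustrative only: evaluates the And-Or path function
# f(t) = t0 OR (t1 AND (t2 OR (t3 AND ...)))
# and shows the carry-bit connection for binary addition.

def and_or_path(t):
    """Evaluate t0 | (t1 & (t2 | (t3 & ...))) right to left."""
    val = t[-1]
    for i in range(len(t) - 2, -1, -1):
        val = (t[i] | val) if i % 2 == 0 else (t[i] & val)
    return val

def carry_bits(x, y, nbits):
    """Carries of x + y via generate/propagate: c_{i+1} = g_i | (p_i & c_i)."""
    c, carries = 0, []
    for i in range(nbits):
        xi, yi = (x >> i) & 1, (y >> i) & 1
        g, p = xi & yi, xi ^ yi          # generate / propagate signals
        c = g | (p & c)
        carries.append(c)
    return carries

print(and_or_path([0, 1, 0, 1, 1]))      # 0 | (1 & (0 | (1 & 1))) = 1
print(carry_bits(0b1011, 0b0110, 4))     # [0, 1, 1, 1]
```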
The vacuum expectation value of the fermionic current is evaluated for a massive spinor field in spacetimes with an arbitrary number of toroidally compactified spatial dimensions in the presence of a constant gauge field. By using the Abel-Plana-type summation formula and the zeta function technique, we present the fermionic current in two different forms. The non-trivial topology of the background spacetime leads to an Aharonov-Bohm effect on the fermionic current induced by the gauge field. The current is a periodic function of the magnetic flux with period equal to the flux quantum. In the absence of the gauge field, it vanishes for the special cases of untwisted and twisted fields. Applications of the general formulae to Kaluza-Klein-type models and to cylindrical and toroidal carbon nanotubes are given. In the absence of magnetic flux, the total fermionic current in carbon nanotubes vanishes due to the cancellation of contributions from the two different sublattices of the graphene hexagonal lattice.
Semiconductor dopant profiling using secondary electron imaging in a scanning electron microscope (SEM) has been developed in recent years. In this paper, we show that the mechanism behind it also allows mapping of the electric potential of undoped regions. Using an unbiased GaAs/AlGaAs heterostructure, we demonstrate the direct observation of the electrostatic potential variation inside a 90 nm wide undoped GaAs channel surrounded by ionized dopants. The secondary electron emission intensities are compared with two-dimensional numerical solutions of the electric potential.
The complete QCD evolution equation for factorial moments in quark and gluon jets is numerically solved with initial conditions at threshold by fully taking into account the energy-momentum conservation law. Within the picture of Local Parton Hadron Duality, the perturbative QCD predictions can successfully describe the available experimental data.
In this letter we present results from intra-night monitoring in three colors (BRI) of the blazar S4 0954+65 during its recent (2015 February) unprecedented high state. We find violent variations on very short time scales, reaching rates of change of 0.1-0.2 mag/h. On some occasions, changes of ~0.1 mag are observed even within ~10 min. During the night of 2015 February 14, an exponential drop of ~0.7 mag lasting about 5 hours was detected. Cross-correlation between the light curves does not reveal any detectable wavelength-dependent time delays larger than ~5 min. "Bluer-when-brighter" color changes are observed on longer time scales. Possible variability mechanisms to explain the observations are discussed, and preference is given to the geometric one.
The covariant structure of the self-force on a particle in a general curved background has been made clear for scalar [Quinn], electromagnetic [DeWittBrehme], and gravitational charges [QuinnWald]. Namely, what we need is the part of the self-field that is non-vanishing off and within the past light-cone of the particle's location, the so-called tail. The radiation reaction force in the absence of external fields is entirely contained in the tail. In this paper, we develop mathematical tools for the regularization and propose a practical method to calculate the self-force on a particle orbiting a Schwarzschild black hole.
Green and Wald have presented a mathematically rigorous framework to study, within general relativity, the effect of small scale inhomogeneities on the global structure of space-time. The framework relies on the existence of a one-parameter family of metrics that approaches the effective background metric in a certain way. Although it is not necessary to know this family in an exact form to predict properties of the backreaction effect, it would be instructive to find explicit examples. In this paper, we provide the first example of such a family of exact non-vacuum solutions to Einstein's equations. It belongs to the Wainwright-Marshman class and satisfies all of the assumptions of the Green-Wald framework.
We extend molecular bootstrap embedding to make it appropriate for implementation on a quantum computer. This enables solution of the electronic structure problem of a large molecule as an optimization problem for a composite Lagrangian governing fragments of the total system, in such a way that fragment solutions can harness the capabilities of quantum computers. By employing state-of-the-art quantum subroutines, including the quantum SWAP test and quantum amplitude amplification, we show how a quadratic speedup can, in principle, be obtained over the classical algorithm. Utilization of quantum computation also allows the algorithm to match -- at little additional computational cost -- full density matrices at fragment boundaries, instead of being limited to 1-RDMs. Current quantum computers are small, but quantum bootstrap embedding provides a potentially generalizable strategy for harnessing such small machines through quantum fragment matching.
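The SWAP test mentioned above is a standard subroutine whose behavior is easy to verify classically: P(ancilla = 0) = (1 + |<psi|phi>|^2)/2, so the overlap can be read off from ancilla statistics. A self-contained numpy sketch on two single-qubit states (illustrative; the paper applies it to fragment states):

```python
import numpy as np

def swap_test_p0(psi, phi):
    """Return P(ancilla=0) for the SWAP test on states psi, phi (dim 2)."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I4 = np.eye(4)
    # Controlled-SWAP on (ancilla, q1, q2); the ancilla is the leftmost qubit.
    SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
    CSWAP = np.block([[I4, np.zeros((4, 4))], [np.zeros((4, 4)), SWAP]])
    Hfull = np.kron(H, I4)

    state = np.kron(np.array([1, 0]), np.kron(psi, phi)).astype(complex)
    state = Hfull @ state          # Hadamard on ancilla
    state = CSWAP @ state          # swap targets conditioned on ancilla
    state = Hfull @ state          # Hadamard on ancilla again
    return float(np.sum(np.abs(state[:4])**2))   # ancilla = |0> block

psi = np.array([1, 0])                       # |0>
phi = np.array([1, 1]) / np.sqrt(2)          # |+>
p0 = swap_test_p0(psi, phi)
print(p0, 2 * p0 - 1)   # overlap |<psi|phi>|^2 = 0.5
```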
In this paper, we study the statefinder hierarchy for the Barrow holographic dark energy (BHDE) model in an FLRW Universe. We apply two dark energy diagnostic tools to the BHDE model for various values of the Barrow exponent $\triangle$. The first is the statefinder hierarchy, in which we study $S^{(1)}_{3}$, $S^{(1)}_{4}$, $S^{(2)}_{3}$, and $S^{(2)}_{4}$; the second is the composite null diagnostic (CND), in which the trajectories ($S^{(1)}_{3}-\epsilon$), ($S^{(1)}_{4}-\epsilon$), ($S^{(2)}_{3}-\epsilon$), and ($S^{(2)}_{4}-\epsilon$) are discussed, where $\epsilon$ is the fractional growth parameter. The infrared cut-off is taken to be the Hubble horizon. Finally, together with the growth rate of matter perturbations, we plot the statefinder hierarchy, which, combined into a composite null diagnostic, can differentiate this emerging dark energy model from $\Lambda$CDM. In addition, the combination of the statefinder hierarchy and the fractional growth parameter is shown to be a useful tool for diagnosing BHDE, particularly for breaking the degeneracy among different parameter values of the model.
Most approaches to Open-Domain Question Answering consist of a light-weight retriever that selects a set of candidate passages, and a computationally expensive reader that examines the passages to identify the correct answer. Previous works have shown that as the number of retrieved passages increases, so does the performance of the reader. However, they assume all retrieved passages are of equal importance and allocate the same amount of computation to them, leading to a substantial increase in computational cost. To reduce this cost, we propose the use of adaptive computation to control the computational budget allocated to the passages to be read. We first introduce a technique operating on individual passages in isolation which relies on anytime prediction and a per-layer estimation of an early exit probability. We then introduce SkylineBuilder, an approach for dynamically deciding which passage to allocate computation to at each step, based on a resource allocation policy trained via reinforcement learning. Our results on SQuAD-Open show that adaptive computation with global prioritisation improves over several strong static and adaptive methods, leading to a 4.3x reduction in computation while retaining 95% of the performance of the full model.
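A minimal PyTorch sketch of the per-passage early-exit idea, under our own simplifying assumptions: each layer gets a lightweight classifier head (anytime prediction), and inference stops once the exit confidence crosses a threshold. The layer sizes and threshold are illustrative, and the RL-trained global policy (SkylineBuilder) is not modeled here:

```python
import torch
import torch.nn as nn

class EarlyExitEncoder(nn.Module):
    def __init__(self, dim=64, n_layers=4, n_classes=2):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
             for _ in range(n_layers)])
        # One lightweight exit head per layer (anytime prediction).
        self.exits = nn.ModuleList(
            [nn.Linear(dim, n_classes) for _ in range(n_layers)])

    def forward(self, x, threshold=0.9):
        for depth, (layer, exit_head) in enumerate(zip(self.layers, self.exits)):
            x = layer(x)
            probs = exit_head(x).softmax(dim=-1)
            conf, pred = probs.max(dim=-1)
            if conf.item() >= threshold:       # exit early, saving layers
                return pred, depth + 1
        return pred, len(self.layers)           # fell through: full depth

model = EarlyExitEncoder()
pred, layers_used = model(torch.randn(1, 64))
print(pred.item(), "layers used:", layers_used)
```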
In this paper we introduce a notion of the Gromov-Hausdorff distance with boundary, denoted by $d_{GHB}$, to construct a framework of convergence of noncomplete metric spaces. We show that a class of bounded $A$-uniform spaces with diameter bounded from below is a complete metric space with respect to $d_{GHB}$. As an application we show the stability of Gromov hyperbolicity, roughly starlike property, uniformization, quasihyperbolization, and boundary of Gromov hyperbolic spaces under appropriate notions of convergence and assumptions.
We have studied the photometric variability of very young brown dwarfs and very low-mass stars (masses well below 0.2 M_sun) in the ChaI star-forming region. We have determined photometric periods in the Gunn i and R bands for the three M6.5-M7 type brown dwarf candidates ChaHa2, ChaHa3, and ChaHa6 of 2.2 to 3.4 days. These are the longest photometric periods found for any brown dwarf so far. If interpreted as rotationally induced, they correspond to moderately fast rotational velocities, fully consistent with the objects' v sini values and their relatively large radii. We have also determined periods for the two M5-M5.5 type very low-mass stars B34 and CHXR78C. In addition to the Gunn i and R band data, we have analysed JHK_s monitoring data of the targets, which were taken a few weeks earlier and confirm the periods found in the optical data. Upper limits for the errors in the period determination are between 2 and 9 hours. The observed periodic variations of the brown dwarf candidates, as well as of the T Tauri stars, are interpreted as modulation of the flux at the rotation period by magnetically driven surface features, on the basis of consistency with v sini values as well as (R-i) color variations typical of spots. Furthermore, because the objects are very young, the temperatures even of the brown dwarfs in the sample are relatively high (>2800 K). On the one hand, the atmospheric gas should therefore be sufficiently ionized for the formation of spots; on the other hand, the temperatures are too high for significant dust condensation, and hence for variability due to clouds.
In many real-world applications, data is not collected as one batch, but sequentially over time, and often it is not possible or desirable to wait until the data is completely gathered before analyzing it. Thus, we propose a framework to sequentially update a maximum margin classifier by taking advantage of the Maximum Entropy Discrimination principle. Our maximum margin classifier allows for a kernel representation to represent large numbers of features and can also be regularized with respect to a smooth sub-manifold, allowing it to incorporate unlabeled observations. We compare the performance of our classifier to its non-sequential equivalents in both simulated and real datasets.
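As a hedged stand-in for the sequential-updating workflow (this is generic hinge-loss SGD via scikit-learn, explicitly not the paper's Maximum Entropy Discrimination update, kernel representation, or manifold regularization), a max-margin classifier can be updated batch-by-batch as data arrives:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
clf = SGDClassifier(loss="hinge", alpha=1e-3, random_state=0)  # max-margin loss

batch_size, classes = 100, np.unique(y)
for start in range(0, len(X), batch_size):
    Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
    clf.partial_fit(Xb, yb, classes=classes)       # sequential update
    seen = start + len(Xb)
    print(f"seen {seen:4d} samples, accuracy so far: "
          f"{clf.score(X[:seen], y[:seen]):.3f}")
```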
Understanding the encoded representation of Deep Neural Networks (DNNs) has been a fundamental yet challenging objective. In this work, we focus on two possible directions for analyzing representations of DNNs by studying simple image classification tasks. Specifically, we consider \textit{On-Off pattern} and \textit{PathCount} for investigating how information is stored in deep representations. On-off pattern of a neuron is decided as `on' or `off' depending on whether the neuron's activation after ReLU is non-zero or zero. PathCount is the number of paths that transmit non-zero energy from the input to a neuron. We investigate how neurons in the network encodes information by replacing each layer's activation with On-Off pattern or PathCount and evaluating its effect on classification performance. We also examine correlation between representation and PathCount. Finally, we show a possible way to improve an existing DNN interpretation method, Class Activation Map (CAM), by directly utilizing On-Off or PathCount.
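Extracting the On-Off pattern is straightforward with forward hooks; a small PyTorch sketch (the toy network and input are ours; PathCount, which additionally propagates counts of non-zero paths layer by layer, is not shown):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 4),
)

on_off = {}
def make_hook(name):
    def hook(module, inputs, output):
        on_off[name] = (output > 0).int()   # 1 = 'on', 0 = 'off'
    return hook

# Register a hook on every ReLU to record its binary activation pattern.
for name, module in model.named_modules():
    if isinstance(module, nn.ReLU):
        module.register_forward_hook(make_hook(name))

x = torch.randn(1, 8)
_ = model(x)
for name, pattern in on_off.items():
    print(name, pattern.tolist())
```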
The UKIDSS Galactic Plane Survey (GPS) began in 2005 as a 7-year effort to survey ~1800 square degrees of the northern Galactic plane in the J, H, and K passbands. The survey included a second epoch of K band data, with a baseline of 2 to 8 years, for the purpose of investigating variability and measuring proper motions. We have calculated proper motions for 167 million sources in a 900 square degree area located at l > 60 degrees in order to search for new high proper motion objects. Visual inspection has verified 617 high proper motion sources (> 200 mas/yr) down to K=17, of which 153 are new discoveries. Among these we have a new spectroscopically confirmed T5 dwarf, an additional T dwarf with estimated type T6, 13 new L dwarf candidates, and two new common proper motion systems containing ultracool dwarf candidates. We provide improved proper motions for an additional 12 high proper motion stars that were independently discovered in the WISE dataset during the course of this investigation.
Recent advances in cell biology and experimental techniques using reconstituted cell extracts have generated significant interest in understanding how geometry and topology influence active fluid dynamics. In this work, we present a comprehensive continuous theory and computational method to explore the dynamics of active nematic fluids on arbitrary surfaces without topological constraints. The fluid velocity and nematic order parameter are represented as the sections of the complex line bundle of a 2-manifold. We introduce the Levi-Civita connection and surface curvature form within the framework of complex line bundles. By adopting this geometric approach, we introduce a gauge-invariant discretization method that preserves the continuous local-to-global theorems in differential geometry. We establish a nematic Laplacian on complex functions that can accommodate fractional topological charges through the covariant derivative on the complex nematic representation. We formulate advection of the nematic field based on a unifying definition of the Lie derivative, resulting in a stable geometric semi-Lagrangian discretization scheme for transport by the flow. In general, the proposed surface-based method offers an efficient and stable means to investigate the influence of local curvature and global topology on the 2D hydrodynamics of active nematic systems. Moreover, the complex line representation of the nematic field and the unifying Lie advection present a systematic approach for generalizing our method to active $k$-atic systems.
The notion of experiment precision quantifies the variance of user ratings in a subjective experiment. Although there exist measures that assess subjective experiment precision, there are no systematic analyses of these measures available in the literature. To the best of our knowledge, there is also no systematic framework in the Multimedia Quality Assessment (MQA) field for comparing subjective experiments in terms of their precision. Therefore, the main idea of this paper is to propose a framework for comparing subjective experiments in the MQA field based on appropriate experiment precision measures. We present three experiment precision measures and three related experiment precision comparison methods. We systematically analyse the performance of the proposed measures and methods, both through a simulation study (varying user rating variance and bias) and by using data from four real-world Quality of Experience (QoE) subjective experiments. In the simulation study we focus on crowdsourcing QoE experiments, since they are known to generate ratings with higher variance and bias compared to traditional subjective experiment methodologies. We conclude that our proposed measures and related comparison methods properly capture experiment precision (both when tested on simulated and real-world data). One of the measures also proves capable of dealing with even significantly biased responses. We believe our experiment precision assessment framework will help compare different subjective experiment methodologies. For example, it may help decide which methodology results in more precise user ratings. This may potentially inform future standardisation activities.
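A toy simulation in the spirit of the study design described above (the precision proxy and all parameters here are our own illustrative choices, not the paper's measures): two hypothetical experiments rate the same stimuli, one with higher rater variance and bias, and a simple per-stimulus rating spread is compared:

```python
import numpy as np

rng = np.random.default_rng(0)
true_quality = rng.uniform(1, 5, size=40)     # 40 stimuli on a 5-point scale

def simulate(true_q, n_users, sigma, bias_sigma):
    """Ratings = true quality + per-user bias + per-rating noise, clipped."""
    bias = rng.normal(0, bias_sigma, size=(n_users, 1))
    noise = rng.normal(0, sigma, size=(n_users, len(true_q)))
    return np.clip(true_q + bias + noise, 1, 5)

lab = simulate(true_quality, n_users=24, sigma=0.5, bias_sigma=0.2)
crowd = simulate(true_quality, n_users=24, sigma=1.0, bias_sigma=0.6)

# Simplistic precision proxy: average per-stimulus standard deviation.
for name, ratings in [("lab", lab), ("crowd", crowd)]:
    print(name, "mean per-stimulus SD:", ratings.std(axis=0).mean().round(3))
```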
In this paper we demonstrate a computational method to solve the inverse scattering problem for a star-shaped, smooth, penetrable obstacle in 2D. Our method is based on classical ideas from computational geometry. First, we approximate the support of a scatterer by a point cloud. Secondly, we use the Bayesian paradigm to model the joint conditional probability distribution of the non-convex hull of the point cloud and the constant refractive index of the scatterer given near-field data. Of note, we use the non-convex hull of the point cloud as spline control points to evaluate, on a finer mesh, the volume potential arising in the integral equation formulation of the direct problem. Finally, in order to sample the arising posterior distribution, we propose a probability transition kernel that commutes with affine transformations of space. Our findings indicate that our method reliably retrieves the support and constant refractive index of the scatterer simultaneously. Indeed, our sampling method robustly estimates quantities of interest such as the area of the scatterer. We conclude by pointing out a series of generalizations of our method.
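For readers unfamiliar with affine-invariant transition kernels, an off-the-shelf example of the same design principle (an analogy only, not the paper's kernel) is the affine-invariant ensemble sampler in the emcee package, whose behavior is unchanged under linear reparametrisations of the posterior:

```python
import numpy as np
import emcee   # assumes the emcee package is installed

def log_prob(theta):
    # Toy anisotropic Gaussian posterior; affine invariance means the
    # sampler performs identically after any linear change of variables.
    return -0.5 * (theta[0]**2 / 100.0 + theta[1]**2)

ndim, nwalkers = 2, 16
p0 = np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)
print(samples.std(axis=0))   # roughly [10, 1]
```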
The Quantum Approximate Optimization Algorithm (QAOA) is an extensively studied variational quantum algorithm used for solving optimization problems on near-term quantum devices. A significant focus is placed on determining the effectiveness of training the $n$-qubit QAOA circuit, i.e., whether the optimization error can converge to a constant level as the number of optimization iterations scales polynomially with the number of qubits. In realistic scenarios, the landscape of the corresponding QAOA objective function is generally non-convex and contains numerous local optima. In this work, motivated by the favorable performance of Bayesian optimization in handling non-convex functions, we theoretically investigate the trainability of the QAOA circuit through the lens of the Bayesian approach, which treats the corresponding QAOA objective function as a sample drawn from a specific Gaussian process. Specifically, we focus on two scenarios: the noiseless QAOA circuit and the noisy QAOA circuit subjected to local Pauli channels. Our first result demonstrates that the noiseless QAOA circuit with a depth of $\tilde{\mathcal{O}}\left(\sqrt{\log n}\right)$ can be trained efficiently, based on the widely accepted assumption that either the left or right slice of each block in the circuit forms a local 1-design. Furthermore, we show that if each quantum gate is affected by a $q$-strength local Pauli channel with noise strength in the range $1/{\rm poly}(n)$ to 0.1, the noisy QAOA circuit with a depth of $\mathcal{O}\left(\log n/\log(1/q)\right)$ can also be trained efficiently. Our results offer valuable insights into the theoretical performance of quantum optimization algorithms in the noisy intermediate-scale quantum era.
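To make the kind of objective being analysed concrete, here is a self-contained numpy toy of the p=1 QAOA objective for MaxCut on a single edge (2 qubits), whose (gamma, beta) landscape is already non-convex. This illustrates only the objective itself, not the paper's Gaussian-process analysis:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

C = 0.5 * (np.eye(4) - np.kron(Z, Z))          # MaxCut cost on one edge
B = np.kron(X, I) + np.kron(I, X)              # transverse-field mixer
plus = np.ones(4) / 2.0                        # |+>|+> initial state

def expm_h(H, t):
    """exp(-i t H) for a small Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def qaoa_energy(gamma, beta):
    psi = expm_h(B, beta) @ (expm_h(C, gamma) @ plus)
    return np.real(psi.conj() @ C @ psi)

# Coarse grid scan of the (gamma, beta) landscape.
grid = np.linspace(0, np.pi, 60)
E = np.array([[qaoa_energy(g, b) for b in grid] for g in grid])
gi, bi = np.unravel_index(E.argmax(), E.shape)
print("best <C> on grid:", E.max().round(4),
      "at gamma, beta =", grid[gi].round(2), grid[bi].round(2))
```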
We address the problem of automatic clinical caption generation with a model that combines the analysis of frontal chest X-ray scans with structured patient information from the radiology records. We combine two language models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records. The proposed combination of these models generates a textual summary containing the essential information about the pathologies found, their location, and 2D heatmaps localizing each pathology on the original X-ray scans. The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and on the general-purpose MS-COCO. The results, measured with natural language assessment metrics, demonstrate its applicability to chest X-ray image captioning.
We present the first cosmology results from large-scale structure in the Dark Energy Survey (DES) spanning 5000 deg$^2$. We perform an analysis combining three two-point correlation functions (3$\times$2pt): (i) cosmic shear using 100 million source galaxies, (ii) galaxy clustering, and (iii) the cross-correlation of source galaxy shear with lens galaxy positions. The analysis was designed to mitigate confirmation or observer bias; we describe specific changes made to the lens galaxy sample following unblinding of the results. We model the data within the flat $\Lambda$CDM and $w$CDM cosmological models. We find consistent cosmological results between the three two-point correlation functions; their combination yields clustering amplitude $S_8=0.776^{+0.017}_{-0.017}$ and matter density $\Omega_{\mathrm{m}} = 0.339^{+0.032}_{-0.031}$ in $\Lambda$CDM, mean with 68% confidence limits; $S_8=0.775^{+0.026}_{-0.024}$, $\Omega_{\mathrm{m}} = 0.352^{+0.035}_{-0.041}$, and dark energy equation-of-state parameter $w=-0.98^{+0.32}_{-0.20}$ in $w$CDM. This combination of DES data is consistent with the prediction of the model favored by the Planck 2018 cosmic microwave background (CMB) primary anisotropy data, which is quantified with a probability-to-exceed $p=0.13$ to $0.48$. When combining DES 3$\times$2pt data with available baryon acoustic oscillation, redshift-space distortion, and type Ia supernovae data, we find $p=0.34$. Combining all of these data sets with Planck CMB lensing yields joint parameter constraints of $S_8 = 0.812^{+0.008}_{-0.008}$, $\Omega_{\mathrm{m}} = 0.306^{+0.004}_{-0.005}$, $h=0.680^{+0.004}_{-0.003}$, and $\sum m_{\nu}<0.13 \;\mathrm{eV\; (95\% \;CL)}$ in $\Lambda$CDM; $S_8 = 0.812^{+0.008}_{-0.008}$, $\Omega_{\mathrm{m}} = 0.302^{+0.006}_{-0.006}$, $h=0.687^{+0.006}_{-0.007}$, and $w=-1.031^{+0.030}_{-0.027}$ in $w$CDM. (abridged)
We present a compact physics-based model of the current-voltage characteristics of graphene field-effect transistors, of special interest for analog and radio-frequency applications where bandgap engineering of graphene may not be needed. The physical framework is a field-effect model with drift-diffusion carrier transport. Explicit closed-form expressions have been derived for the drain current, covering all operation regions continuously. The model has been benchmarked against measured prototype devices, demonstrating accuracy and predictive behavior. Finally, we show an example of projection of the intrinsic gain, a figure of merit commonly used in RF/analog applications.
To a compact Lie group $G$ one can associate a space $E(2,G)$ akin to the poset of cosets of abelian subgroups of a discrete group. The space $E(2,G)$ was introduced by Adem, F. Cohen and Torres-Giese, and subsequently studied by Adem and G\'omez, and other authors. In this short note, we prove that $G$ is abelian if and only if $\pi_i(E(2,G))=0$ for $i=1,2,4$. This is a Lie group analogue of the fact that the poset of cosets of abelian subgroups of a discrete group is simply connected if and only if the group is abelian.
We investigate analytically and numerically Bloch waves for a Bose--Einstein condensate in a sinusoidal external potential. At low densities the dependence of the energy on the quasimomentum is similar to that for a single particle, but at densities greater than a critical one the lowest band becomes triple-valued near the boundary of the first Brillouin zone and develops the structure characteristic of the swallow-tail catastrophe. We comment on the experimental consequences of this behavior.
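For concreteness, the standard mean-field setup behind this kind of band-structure calculation (our notation; conventions and the precise critical density may differ from the paper's) is the Gross-Pitaevskii equation with a sinusoidal lattice and a Bloch-wave ansatz:

```latex
% Condensate wave function in a sinusoidal lattice (mean-field):
\[
  i\hbar\,\partial_t \psi
  = \left[-\frac{\hbar^2}{2m}\,\partial_x^2 + V_0 \sin^2(kx)
    + g\,|\psi|^2\right]\psi .
\]
% Bloch-wave stationary states carry quasimomentum q:
\[
  \psi(x,t) = e^{-i\mu t/\hbar}\, e^{iqx}\, u_q(x),
  \qquad u_q(x + \pi/k) = u_q(x).
\]
% At mean density n, the interaction energy scale g n competes with the
% lattice strength V_0; above a critical density the lowest band E(q)
% becomes multivalued (swallow-tail) near the zone boundary q = k.
```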
The performance of holographic multiple-input multiple-output (MIMO) communications, employing two-dimensional (2-D) planar antenna arrays, is typically compromised by finite degrees-of-freedom (DOF) stemming from limited array size. The DOF constraint becomes significant when the element spacing approaches approximately half a wavelength, thereby restricting the overall performance of MIMO systems. To break this inherent limitation, we propose a novel three-dimensional (3-D) antenna array that strategically explores the untapped vertical dimension. We investigate the performance of MIMO systems utilizing 3-D arrays across different multi-path scenarios, encompassing Rayleigh channels with varying angular spreads and the 3rd generation partnership project (3GPP) channels. We subsequently showcase the advantages of these 3-D arrays over their 2-D counterparts with the same aperture sizes. As a proof of concept, a practical dipole-based 3-D array, facilitated by an electromagnetic band-gap (EBG) reflecting surface, is conceived, constructed, and evaluated. The experimental results align closely with full-wave simulations, and channel simulations substantiate that the DOF and capacity constraints of traditional holographic MIMO systems can be surpassed by adopting such a 3-D array configuration.
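A generic sketch of how such DOF/capacity comparisons are typically carried out (everything here is an assumption for illustration: a Clarke/Jakes-type spatial correlation J0(2*pi*d/lambda) built from element positions and an ergodic capacity average, not the paper's channel models or measured prototypes):

```python
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(0)
lam = 1.0   # wavelength (normalised)

def corr_matrix(pos):
    """Spatial correlation from pairwise element distances (assumed model)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return j0(2 * np.pi * d / lam)

def capacity(pos_tx, pos_rx, snr=10.0, trials=200):
    Rt, Rr = corr_matrix(pos_tx), corr_matrix(pos_rx)
    def sqrtm(R):   # matrix square root via eigendecomposition
        w, V = np.linalg.eigh(R)
        return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T
    Rt2, Rr2 = sqrtm(Rt), sqrtm(Rr)
    nt, nr, cap = len(pos_tx), len(pos_rx), 0.0
    for _ in range(trials):   # Kronecker-correlated Rayleigh channel draws
        G = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
        H = Rr2 @ G @ Rt2
        cap += np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T)).real
    return cap / trials

# 16 elements: a 4x4 planar grid at lambda/4 spacing vs. the same elements
# rearranged into two stacked 4x2 layers using the vertical dimension.
grid2d = np.array([[i, j, 0] for i in range(4) for j in range(4)]) * lam / 4
grid3d = np.array([[i, j, k] for i in range(4) for j in range(2)
                   for k in range(2)]) * lam / 4
rx = np.array([[i * lam / 2, 0, 0] for i in range(8)])   # well-spaced receiver
print("2-D tx array:", capacity(grid2d, rx).round(2), "bit/s/Hz")
print("3-D tx array:", capacity(grid3d, rx).round(2), "bit/s/Hz")
```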
Many researchers have used machine learning models to control artificial hands, walking aids, assistance suits, etc., using the biological signal of electromyography (EMG). The use of such devices requires high classification accuracy from the machine learning models. One method for improving classification performance is normalization, such as the z-score. However, normalization is not used in most EMG-based motion prediction studies because of the need for calibration and the fluctuation of the reference values used for calibration (which cannot be reused). Therefore, in this study, we propose a normalization method that combines sliding-window analysis and z-score normalization and that can be implemented in real-time processing without the need for calibration. The effectiveness of this normalization method was confirmed by conducting a single-joint movement experiment on the elbow and predicting its rest, flexion, and extension movements from the EMG signal. The proposed normalization method achieved a mean accuracy of 64.6%, an improvement of 15.0% compared to the non-normalized case (mean of 49.8%). Furthermore, to improve practical applicability, recent research has focused on reducing the user data required for model training and improving classification performance for models trained on other people's data. Therefore, we also investigated the classification performance of a model trained on other people's data. The results showed a mean accuracy of 56.5% when the proposed method was applied, an improvement of 11.1% compared to the non-normalized case (mean of 44.1%). These two results demonstrate the effectiveness of this simple and easy-to-implement method and show that the classification performance of machine learning models can be improved.
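A minimal sketch of the style of normalization proposed, under our own assumptions (window length, toy signal; any rectification or envelope extraction the pipeline might use is omitted): z-score each incoming sample against a trailing window, so no prior calibration session is needed and the processing stays causal:

```python
import numpy as np

def sliding_zscore(x, window=200, eps=1e-8):
    """Causal sliding-window z-score of a 1-D signal."""
    out = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        lo = max(0, i - window + 1)
        seg = x[lo:i + 1]                      # trailing window only
        out[i] = (x[i] - seg.mean()) / (seg.std() + eps)
    return out

# Toy EMG-like signal: baseline noise with a burst of muscle activity.
rng = np.random.default_rng(0)
emg = rng.normal(0, 0.05, 1000)
emg[400:600] += rng.normal(0, 0.5, 200)        # activation burst
z = sliding_zscore(emg, window=200)
print(z[:5].round(2), z[450:455].round(2))
```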
A $c$-crossing-critical graph is one that has crossing number at least $c$ but each of its proper subgraphs has crossing number less than $c$. Recently, a set of explicit construction rules was identified by Bokal, Oporowski, Richter, and Salazar to generate all large $2$-crossing-critical graphs (i.e., all apart from a finite set of small sporadic graphs). They share the property of containing a generalized Wagner graph $V_{10}$ as a subdivision. In this paper, we study these graphs and establish their order, simple crossing number, edge cover number, clique number, maximum degree, chromatic number, chromatic index, and treewidth. We also show that the graphs are linear-time recognizable and that all our proofs lead to efficient algorithms for the above measures.
The goal of this note is to give a systematic method of constructing zero-free regions for the permanent in the sense of A. Barvinok, i.e. regions in the complex plane such that the permanent of a square matrix of any size with entries from this region is nonzero. We do so by refining the approach of Barvinok, which is based on his clever observation that a certain restriction on a set S involving angles implies zero-freeness; we call sets satisfying this requirement angle-restricted. This allows us to reduce the question to a low-dimensional geometry problem (notably, independent of the size of the matrix!), which can then be solved more or less explicitly. We give a number of examples, improving some results of Barvinok.
We consider a network composed of two interfering point-to-point links where the two transmitters can exploit one common relay node to improve their individual transmission rate. Communications are assumed to be multi-band, and transmitters are assumed to selfishly allocate their resources to optimize their individual transmission rate. The main objective of this paper is to show that this conflicting situation (modeled by a non-cooperative game) has some stable outcomes, namely Nash equilibria. This result is proved for three different types of relaying protocols: decode-and-forward, estimate-and-forward, and amplify-and-forward. We provide additional results on the problems of uniqueness, efficiency of the equilibrium, and convergence of a best-response-based dynamics to the equilibrium. These issues are analyzed in a special case of the amplify-and-forward protocol and illustrated by simulations in the general case.
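A toy illustration of best-response dynamics converging to a fixed point (the relay protocols are abstracted away; the two-band gains, the water-filling best response, and all parameters are our own illustrative choices):

```python
import numpy as np

g = np.array([[1.0, 0.6], [0.5, 1.2]])       # direct channel gains per band
h = np.array([[0.3, 0.4], [0.5, 0.2]])       # cross-interference gains
noise, P = 0.1, 1.0                          # noise power, power budget

def water_fill(gains, interf, P):
    """Single-user water-filling over 2 bands (water level by line search)."""
    inv = (noise + interf) / gains           # effective inverse channel
    for mu in np.linspace(inv.min(), inv.max() + P, 10000):
        p = np.maximum(mu - inv, 0.0)
        if p.sum() >= P:
            return p * (P / p.sum())         # normalise slight overshoot
    return np.full(2, P / 2)

p = [np.full(2, P / 2), np.full(2, P / 2)]   # start from uniform allocation
for it in range(20):
    for i in range(2):                       # each player best-responds in turn
        j = 1 - i
        p[i] = water_fill(g[i], h[j] * p[j], P)
print("equilibrium allocations:", np.round(p[0], 3), np.round(p[1], 3))
```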
We discuss the classical integrable structure of two-dimensional sigma models which have three-dimensional Schrodinger spacetimes as target spaces. The Schrodinger spacetimes are regarded as null-like deformations of AdS_3. The original AdS_3 isometry SL(2,R)_L x SL(2,R)_R is broken to SL(2,R)_L x U(1)_R by the deformation. According to this symmetry, there are two descriptions of the classical dynamics of the system: 1) the SL(2,R)_L description and 2) the enhanced U(1)_R description. In the former 1), we show that the Yangian symmetry is realized by improving the SL(2,R)_L Noether current. Then a Lax pair is constructed with the improved current and the classical integrability is shown by deriving the r/s-matrix algebra. In the latter 2), we find a non-local current by using a scaling limit of warped AdS_3 and show that it enhances U(1)_R to a q-deformed Poincare algebra. Then another Lax pair is presented and the corresponding r/s-matrices are also computed. The two descriptions are equivalent via a non-local map.
We give an arithmetic criterion which is sufficient to imply the discreteness of various two-generator subgroups of $PSL(2,{\bold C})$. We then examine certain two-generator groups which arise as extremals in various geometric problems in the theory of Kleinian groups, in particular those encountered in efforts to determine the smallest co-volume, the Margulis constant and the minimal distance between elliptic axes. We establish the discreteness and arithmeticity of a number of these extremal groups, identify the associated minimal-volume arithmetic group in the commensurability class, and study whether or not the axis of a generator is simple.
The random antiferromagnetic spin-1/2 XX and XXZ chain is studied numerically for varying disorder strength, using exact diagonalization and stochastic series expansion methods. The spin-spin correlation function as well as the stiffness display a clear crossover from the pure behavior (no disorder) to the infinite-randomness fixed point or random-singlet behavior predicted by the real-space renormalization group. The crossover length scale is shown to diverge as $\xi\sim{\mathcal D}^{-\gamma}$, where ${\mathcal D}$ is the variance of the random bonds. Our estimates for the exponent $\gamma$ agree well, within the error bars, with the localization length exponent emerging from an analytical bosonization calculation. Exact diagonalization and stochastic series expansion results for the string correlation function are also presented.
Experimental studies indicate that optical cavities can affect chemical reactions, through either vibrational or electronic strong coupling and the quantized cavity modes. However, the current understanding of the interplay between molecules and confined light modes is incomplete. Accurate theoretical models, that take into account inter-molecular interactions to describe ensembles, are therefore essential to understand the mechanisms governing polaritonic chemistry. We present an ab-initio Hartree-Fock ansatz in the framework of the cavity Born-Oppenheimer approximation and study molecules strongly interacting with an optical cavity. This ansatz provides a non-perturbative, self-consistent description of strongly coupled molecular ensembles taking into account the cavity-mediated dipole self-energy contributions. To demonstrate the capability of the cavity Born-Oppenheimer Hartree-Fock ansatz, we study the collective effects in ensembles of strongly coupled diatomic hydrogen fluoride molecules. Our results highlight the importance of the cavity-mediated inter-molecular dipole-dipole interactions, which lead to energetic changes of individual molecules in the coupled ensemble.
This paper investigates the experimental performance of a discrete portfolio optimization problem relevant to the financial services industry on the gate-model of quantum computing. We implement and evaluate a portfolio rebalancing use case on an idealized simulator of a gate-model quantum computer. The characteristics of this exemplar application include trading in discrete lots, non-linear trading costs, and the investment constraint. We design a novel problem encoding and hard constraint mixers for the Quantum Alternating Operator Ansatz, and compare to its predecessor the Quantum Approximate Optimization Algorithm. Experimental analysis demonstrates the potential tractability of this application on Noisy Intermediate-Scale Quantum (NISQ) hardware, identifying portfolios within 5% of the optimal adjusted returns and with the optimal risk for a small eight-stock portfolio.
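A hedged sketch of the kind of classical encoding involved: a tiny discrete portfolio problem written as a QUBO-style cost (mean-variance objective plus an investment-constraint penalty) and solved by brute force as a reference. The binary-per-asset encoding, penalty weight, and data are illustrative; notably, the paper designs a tailored encoding and hard-constraint mixers for the Quantum Alternating Operator Ansatz rather than a penalty term:

```python
import itertools
import numpy as np

mu = np.array([0.08, 0.12, 0.10, 0.07])          # expected returns (toy)
cov = np.diag([0.02, 0.05, 0.03, 0.02])          # toy covariance matrix
budget, risk_aversion, penalty = 2, 2.0, 10.0    # hold exactly 2 assets

def cost(z):
    z = np.asarray(z)
    ret = mu @ z                                  # portfolio return
    risk = z @ cov @ z                            # portfolio variance
    constraint = (z.sum() - budget) ** 2          # investment constraint
    return -ret + risk_aversion * risk + penalty * constraint

best = min(itertools.product([0, 1], repeat=4), key=cost)
print("best holdings:", best, "cost:", round(cost(best), 4))
```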
The study of many-body physics has provided a scientific playground of surprise and continuing revolution over the past half century. The serendipitous discovery of new states and properties of matter, and of phenomena such as superfluidity, the Meissner effect, the Kondo effect, and the fractional quantum Hall effect, has driven the development of new conceptual frameworks for our understanding of collective behavior, the ramifications of which have spread far beyond the confines of terrestrial condensed matter physics, to cosmology, nuclear and particle physics. Here I selectively review some of the developments in this field, from the Cold War period until the present day. I describe how, with the discovery of new classes of collective order and the unfolding puzzles of high-temperature superconductivity and quantum criticality, the prospects for major conceptual discoveries remain as bright today as they were more than half a century ago.