The Voice of Young Science (VOYS) is a unique and dynamic network of early career researchers committed to playing an active role in public discussions about science. By responding to public misconceptions about science and evidence and engaging with the media, this active community of 2,000+ researchers is changing the way the public and the media view science and scientists.
VOYS organise workshops to encourage early career researchers to make their voices heard in public debates about science. During these full-day events, participants meet scientists who have engaged with the media and learn from respected science journalists about how the media works, how to respond and comment, and what journalists want and expect from scientists.
As part of their aim to cultivate a network across the island of Ireland, VOYS are holding two Standing up for Science workshops in February 2018:
Dublin City University, Dublin, Ireland - Thursday 8 February, 9am-4pm
Black Box, Belfast, Northern Ireland - Friday 23 February, 9am-4pm (part of the NI Science Festival)
Both full-day events are FREE for early career researchers and scientists in all sciences, engineering, medicine and social sciences (PhD students, post-docs or equivalent in first job). | https://www.britishcouncil.ie/events/standing-science-workshops-dublin-belfast |
A friend of mine recently asked me to read Carolyn Baker’s article When facing reality is not ‘negative thinking’. This article has finally helped me nail down some thoughts I’ve been having about the way I’ve been often asked to look at the “collapse” of civilization and the idea that we need to “face reality.”
To begin with, I'd like to affirm my agreement with Dr. Baker and others that the ways our world is currently structured, from how we use people and energy, to how we feed ourselves, to our political and financial forms, are completely unsustainable and are destined to be radically changed. That much is certain to me.
But I’d like to suggest that the very act of framing this process of change in terms of “collapse” and “facing reality” is in itself part of remaining within those same sets of unsustainable structures.
Some people have described the current era as on par with what happens in a caterpillar before it turns into a butterfly. This process is not a simple transformation in which the caterpillar's body shrinks and then sprouts wings and legs. Instead the body of the caterpillar completely "collapses" into a blob of ooze, and then re-grows itself into its new form starting from what are called imaginal cells, a kind of butterfly stem cell; before the caterpillar's body has completely dissolved it even attacks these cells, because at first they are not recognized as "self." This process has been written about elsewhere, so I won't belabor it, but if you haven't read about it before, it's worth Googling.
However, I actually don’t think that this caterpillar-butterfly process really is on par with what’s happening now, for one main reason: the outcome of the metamorphosis of the caterpillar is pretty darn certain, but the outcome of our future is not at all certain. But this makes it more clear to me that to describe the change we are facing now as “collapse” is even more of a mistake than it is for me to use the word “collapse” in describing what happens to the body of the caterpillar.
The word collapse comes from the roots “fall” and “together” and evokes the idea of the falling down of a building, and its breaking apart, shattering to pieces. The reason why that’s not an appropriate description in the case of the caterpillar is that its body’s dissolution is part of an active living process that’s going somewhere, that though it is a destruction or death of sorts, is fully energized, with nodes of self-organization that are part of the living process that will take it to the next step. Collapse is a word for the falling apart of a mechanical system, not for the transformation of a living one. And there’s the rub: at the heart of our world’s current structures (the very ones that led to our current forms of government, finance, and politics that are unsustainable) is the underlying assumption that the universe is a mechanical system that we are separate from rather than a living system of which we are intimately a part. So to call what’s happening now a “collapse” I think keeps us from stepping into the very “myriad opportunities it presents,” because it keeps us thinking in the old way. If instead we conceive ourselves as part of a living system, then how we conceive of what’s happening right now might be vastly different.
Here is what I see: for the last 5,000 years (since the advent of the three big inventions: agriculture, writing, and money) we have been on a massive journey of increasing consciousness and liberating potential that those three inventions are the foundation of. I'm making no claim as to the universality, value, goodness, or evil of this journey–I just know that it has happened. At the end of this 5,000-year journey our consciousness of how the natural world (including ourselves) works, and the pure liberation of potential (social, physical, and technological) is simply awesome. But we are at a nexus. On one hand there are millions if not billions of fully empowered humans on the planet; there is a vast quantity of energy that is available to be put to use; there is an even greater quantity of information and knowledge to organize that use; and there is an astounding set of information-processing tools to coordinate that use. On the other, there are millions, if not billions of disempowered and enslaved humans on the planet; there is vast energy need as well as waste; and there are great disinformation and lies being spread, and all kinds of machinations in place to prevent the free spread and coordination of information. I see these two "hands" (the one hand and the other hand) as fully living and dynamic tensions in a vast living earth of which humanity, with its budding consciousness, is now a significant part.
This gets us to the next phrase: "facing reality." Both of these "hands" are real. The power and awesomeness of where we are now, the pure raw potential, is massive and unprecedented, and is proven by the enormous amount of waste, of all kinds, that we are generating. This power and potential is real. The limits of peak oil, and the limited capacity of our biosphere to accept fossil carbon dioxide without massive climate change, are real. Which hand should I face? And I'm sure there are also a third and a fourth hand that are just as real. The problem with this phrasing of "facing reality" is that it is also built on the same dualistic, mechanical world-view that underlies the phrasing of collapse. The idea of a single reality out there, that I as an individual have to "face," works because of how separated we have allowed ourselves to become from that very reality that we are supposed to face. It works because we have swallowed the idea of ourselves as isolated point subjects in some way outside of, and viewing, a mechanically objective and "real" world.
But we are not passive observers of a single reality. The very place we have gotten to is because of a particular constructed view of what reality is. If we had constructed a different understanding of reality, one based not on the idea that we are separate from nature and have dominion over it, then we would not be in the same place we are now. We have built the reality of our political and social structures for ourselves. For sure we are embedded in a deeper reality, one not of our own construction, but the lion's share of the reality we experience is the one we are willing to experience, the one we have constructed for ourselves.
So, I refuse to "face" the "reality" of "collapse." Instead I promise to explore the reality I have built for myself so far, and try to see how it is inaccurate and does not match my actual experience, so that I can change it. I can build new realities that are more accurate to my experience, in which my fellow journeyers and I are more empowered and more alive, and which create greater possibility. For you see, we usually think about language as a tool that we use to describe reality, but language also creates the reality we experience. What makes our situation different from caterpillar–>butterfly is that for us a butterfly is not a certain outcome. Our outcome is completely unknown. To me this means that our deepest responsibility is to envision a reality we want and then do our best to build it. We can't do this based on projections of how the current order is destined to fall apart. Instead, I want to take what life has miraculously made available to us now (our caterpillar body's goo, if you will) and figure out what even more miraculous and precious reality we can build out of it. | https://eric.harris-braun.com/blog/2009/05/11/id-74 |
Aria Consulting Engineers' philosophy is to obtain a deep understanding of our clients' objectives while identifying any potential development challenges and constraints early in the project. We present options and solutions that enable seamless development and construction while saving time and money.
Our Focus
Aria's goal is to provide our clients with high-quality customer service and value-driven architectural and engineering services. We strive to provide a level of personal attention and service that exceeds that offered by larger firms, at consistently lower pricing. Our engineering and consulting services are delivered on time and within budget, the key to the success of any project.
Our Team
We have assembled a team of highly qualified professionals who fully espouse this culture and actively promote the advancement of innovative designs. Aria promotes continuing education programs for the professional staff, disseminating information to peers and clients, and attending meetings and symposia, so that our resources, designs, and philosophy remain on the cutting edge of technology and provide a foundation for sustainable growth for the firm. | https://www.ariaconsultingeng.com/aria-profile |
SPECT brain imaging of dopamine transporter with 99mTc TRODAT-1 is useful for diagnosis and evaluation of Parkinson's disease, and improvements to its quantification are expected to increase its diagnostic power. In previous studies, we have employed fan-beam collimation to achieve enhanced spatial resolution and counting statistics. Further improvements can be made by use of reconstruction methods that can accurately account for the physics of SPECT imaging. Many reconstruction algorithms are available for this purpose. These algorithms have their respective strengths and weaknesses, and it remains unclear which method is more suitable for clinical use. To a large extent, this situation is due to the unavailability of gold standards when working with real data, thereby making it extremely difficult to obtain an objective performance comparison of reconstruction algorithms. Recently, Hoppin et al. have proposed a promising technique that may mitigate this difficulty. In this work, we examine the impact of various reconstruction algorithms on quantification of SPECT 99mTc TRODAT-1 brain imaging, and investigate whether Hoppin's method can achieve an objective performance comparison for these algorithms. Results obtained in our studies are quite encouraging. | https://mayoclinic.pure.elsevier.com/en/publications/an-evaluation-of-spect-imaging-for-quantitative-assessment-of-par |
Adsorption and Black Cotton Soil
Water molecules consist of two hydrogen atoms sharing electrons with a single oxygen atom. The water molecule is electrically balanced but within the molecule, the offsetting charges are not evenly distributed. The two positively charged hydrogen atoms are grouped together on one side of the larger oxygen atom. The result is that the water molecule itself is an electrical “dipole”, having a positive charge where the two hydrogen atoms are situated and a negative charge on the opposite or bare oxygen side of the molecule.
The electrical structure of water molecules enables them to interact with other charged particles. The mechanism by which water molecules become attached to the microscopic clay crystals of black cotton soil is called “adsorption”. Because of their shape, composition and resulting electrical charge, the thin clay crystals or “sheets” have an electro-chemical attraction for the water dipoles. The clay mineral “montmorillonite”, which is the most notorious and abundant component of black cotton soil, can adsorb very large amounts of water molecules between its crystalline sheets and therefore has a large shrink-swell potential.
Dipolarity of Water Bonds
When potentially expansive soil becomes saturated, more and more water dipoles are gathered between the crystalline clay sheets, causing the bulk volume of the soil to increase or swell. The incorporation of the water into the chemical structure of the clay will also cause a reduction in the capacity or strength of the soil.
Black Cotton Soil in Shrinkage
During periods when the moisture in the expansive soil is being removed, either by gravitational forces or by evaporation, the water between the clay sheets is released, causing the overall volume of the soil to decrease or shrink. As the moisture is removed from the soil, the shrinking soil can develop gross features such as voids or desiccation cracks. These shrinkage cracks can be readily observed on the surface of bare soils and provide an important indication of expansive black cotton soil activity at the property.
The notion of retreating to become well isn't only a Native American idea. Prior to pharmaceutical therapies, bed rest was one of the most commonly prescribed treatments by both conventional and traditional practitioners. In Japan, “quiet” therapies, involving isolating patients completely for days or weeks, are a regular practice. Patients are left alone in a room for seven days without television, radio, or social interaction. They are permitted only basic necessities like food and bathroom privileges. After one week they are gradually introduced back into society by engaging in menial tasks. The third and fourth weeks are times of intensive spiritual and emotional therapy. This isolation allows patients to have not only time for serious rest, but also the opportunity for serious introspection and life review. These practices have their roots in Shinto philosophy and were developed by the Japanese physician Morita. These practices teach patients that their emotions don't have to rule their lives or seriously affect their health since they are fleeting experiences of the mind.
True traditional medicine takes into account the body, mind and spirit. Native American Healing techniques and Japanese “quiet” therapies offer strong paradigms for treatment of the mind and spirit. Functional Medicine, a contemporary integrative approach founded by Jeffrey Bland, is a science-based health care system that assesses and treats the underlying causes of illness through individually-tailored therapies. Three basic tenets are involved in Functional Medicine: 1) Biochemical Individuality, 2) Health as Positive Vitality, 3) Function as Homeodynamics.
Biochemical Individuality is the concept that each individual has a unique set of characteristics. Unlike conventional medicine, Functional Medicine contends that individuals respond differently to environmental toxins, medications and foods. Each person has his/her own unique biochemical patterns including how information is processed between cells and body systems, and metabolism of nutrients.
Health as Positive Vitality offers an innovative approach for practitioners to interact with patients. Instead of focusing on the illness, practitioners are encouraged to take “wellness” histories to discover what patients were doing when they were healthy and what they have done in the past that has made them feel their “best.”
Function as Homeodynamics examines how homeostasis works in the body. Conventional belief contends that homeostasis is a system of interconnected components which function to keep physical and chemical parameters, like temperature or blood sugar, relatively constant. The homeodynamics theory of Functional Medicine holds that a similar system exists which functions to maintain not physio-chemical constancy but biochemical individuality. A Functional Medicine assessment would include laboratory studies that examine immune function, metabolism and the level of environmental toxins in the body.
While these approaches in no way replace conventional therapies, they add new dimensions to therapies that afford patients the opportunity to explore their wellness. Most importantly, they provide the time and individualized attention that is sorely lacking in our current system. | https://www.catanyc.com/functional-healing/ |
Subterranean Termites:
Swarmers, Soldiers & Workers
Description:
Eastern subterranean termites are found from Ontario southward and from the eastern United States seaboard as far west as Mexico, Arizona and Utah.
Termites are social insects, which live in large colonies. There are three castes: swarmers or reproducers, workers, and soldiers. Termite antennae have bead-like segments. The winged swarmers have two pairs of equally sized long wings that are attached to the last two thoracic segments. The wings break off after swarming. The abdomen is broadly joined at the thorax unlike the narrow abdominal attachment found on ants.
The winged swarmers are dark brown to almost black and about 3/8-inch long. The wings are brownish gray with a few hairs and two dark veins on the leading edge. They have a very small pore on their heads. The soldiers are wingless with white bodies, rectangular yellow-brown heads that are twice as long as they are wide, and large mandibles, which lack teeth.
Biology:
Subterranean termite colonies usually are located in the soil from which the workers build mud tubes to structural wood where they then feed. Subterranean termite colonies are always connected to the soil and/or close to a moisture source.
Termites digest cellulose in wood with the aid of special organisms within their digestive system. The workers prefer to feed on fungus-infected wood but readily feed on undamaged wood as well. The foraging workers feed immature workers, swarmers, and soldiers with food materials from their mouths and anuses.
A mature queen produces 5,000 to 10,000 eggs per year. An average colony consists of 60,000 to 250,000 individuals, but colonies numbering in the millions are possible. A queen might live for up to 30 years and workers as long as five years.
Habits:
Subterranean termite colonies are established by winged swarmers/reproducers, which usually appear in the spring. Swarms usually occur in the morning after a warm rain. A male and female that have swarmed from an established colony lose their wings and seek a dark cavity inside which they mate and raise the first group of workers. Both of these swarmers/reproducers feed on wood, tend to the eggs, and build the initial nest.
After the workers mature, they take over expanding the colony and feeding the swarmers. As the colony becomes larger, light colored supplementary reproducers are produced to lay eggs, which then become workers. The soldiers, which are also produced as the colony increases in size, are responsible for repelling invading ants and other predators. | https://www.magicexterminating.com/pests/Termites-90.asp |
The Association of American Physicians and Surgeons (AAPS), a non-profit organization founded in 1943, is dedicated to fostering private medicine, ethical medicine, and the patient-physician relationship and protecting them from third-party encroachment. Through thousands of member physicians and surgeons, AAPS represents virtually all medical specialties nationwide, primarily in small and solo practices. AAPS is funded almost entirely by physicians, reflecting its representation of its members and their patients, in contrast with many other medical organizations that rely on funding from outside sources. Justices of the United States Supreme Court have cited legal submissions by AAPS in multiple cases, most recently in 2008.
AAPS objects to the overly rigid IDSA Lyme Guidelines ("Guidelines") that were published in 2006. For example, on page 1090, the Guidelines mandate laboratory confirmation of an observed condition (extracutaneous Lyme disease) in order to diagnose and treat it. On pages 1089-90, the Guidelines prohibit clinical diagnosis and treatment of particular conditions associated with Lyme Disease if based on "clinical findings alone."
These Guidelines should be revised to recognize that the physician must retain full flexibility in the diagnosis and treatment of Lyme disease. Medical societies do not practice medicine; physicians do. The mandate for specific laboratory confirmation is particularly objectionable, as testing for Lyme disease is notoriously insensitive and unreliable. Patients who do not meet this criterion would often be denied treatment that could mitigate severe chronic disability. In some cases, long-term treatment is required. Physicians must be able to exercise their professional judgment concerning the best treatment for each individual patient, without restraint by one-size-fits-all Guidelines, which amount to mandates and prohibitions.
The sine qua non of good medical practice is individualized care for individual patients. Guidelines should not usurp this in any way. It is each physician, and often only the physician, who knows the patient's history, course of illness, severity of presentation, and responsiveness to treatment. AAPS objects to any curtailment of individualized treatment of patients by competent physicians, and no Guidelines should be adopted that infringe on such treatment.
Challenge to Lab Diagnostic Test Requirement--Page 1090: "Diagnostic testing performed in laboratories with excellent quality-control procedures is required for confirmation of extracutaneous Lyme disease..." (emphasis added).
Challenge to Restrictions on the Use of Clinical Judgment—Pages 1089-90: "Clinical findings are sufficient for the diagnosis of erythema migrans, but clinical findings alone are not sufficient for diagnosis of extracutaneous manifestations of Lyme disease or for diagnosis of HGA or babesiosis. Diagnostic testing performed in laboratories with excellent quality-control procedures is required for confirmation of extracutaneous Lyme disease, HGA, and babesiosis." (emphasis added). | https://www.personalconsult.com/articles/aapsandidsa.html |
[Polymorphism of nucleotide sequences of human genomic DNA linked to a mucoviscidosis locus].
Restriction fragment length polymorphism of human DNA sequences linked to the mucoviscidosis locus was studied in a healthy control group and in families affected by mucoviscidosis. The plasmid clones metH, pJ3.11, XV-2c and pKM.19 were used as hybridization probes. The allelic frequencies of the polymorphic loci were determined for the total population and for the affected families. Linkage disequilibrium between the disease locus and linked polymorphic loci detectable with XV-2c (TaqI endonuclease) and pKM.19 (PstI endonuclease) was demonstrated. The high informational value of DNA diagnosis of mucoviscidosis in family studies using a combination of four DNA probes was demonstrated.
Abstract:
Zero-knowledge proofs are a core building block for a broad range of cryptographic protocols. This paper introduces a generic zero-knowledge proof system capable of proving the correct computation of any circuit. Our protocol draws on recent advancements in multiparty computation and its security relies only on the underlying commitment scheme. Furthermore, we optimize this protocol for use with multivariate quadratic systems of polynomials, leading to provably secure signatures from multivariate quadratic systems, with keys that scale linearly and signatures that scale quadratically with the security parameter.
Category / Keywords:
public-key cryptography / zero-knowledge proof, post-quantum, signature, multivariate quadratic, provable security, multi-party computation
Date:
received 20 Dec 2016, withdrawn 17 Jan 2017
Contact author:
alan szepieniec at esat kuleuven be
Available format(s):
(-- withdrawn --)
Version:
20170117:111609
| https://eprint.iacr.org/2016/1168 |
On December 2, 2016, six music fans who use wheelchairs filed a class action complaint against the City and County of Denver claiming disability discrimination for failure to make reasonable accommodations to allow people who use wheelchairs to access and enjoy Red Rocks Amphitheatre. CREEC and co-counsel Colorado Cross-Disability Coalition (CCDC) and Disability Law Colorado (DLC) represent the plaintiffs.
The only accessible seats at Red Rocks are in the front row or at the very top and back of the theater (Row 70). Despite the limited number of accessible seats, Red Rocks and its contractors routinely engaged in practices that further decreased the number of seats available for wheelchair users. For example, Red Rocks did little to ensure that tickets for accessible seats were sold or given to people who actually needed accessible seating.
As a result, tickets to accessible seating were regularly unavailable within minutes of going on sale and then were only available on the secondary market. Because many of the accessible seats at Red Rocks are in the front row, they are highly sought-after and were typically resold at a significantly increased cost – up to four or five times the face value. In comparison, those who were not competing for the coveted 78 accessible seats could much more easily buy a ticket to one of the remaining 9,447 seats that are not accessible to wheelchair users. Thus, according to the suit, those who use wheelchairs and wish to attend a concert at Red Rocks were routinely forced to pay a much higher price than other concert-goers who do not use wheelchairs. | https://creeclaw.org/red-rocks/ |
**Evolution of the gluon density in $x$ with a running coupling constant**

Mikhail Braun[^1], Gian Paolo Vacca

Department of Physics, University of Bologna
Istituto Nazionale di Fisica Nucleare - Sezione di Bologna
**Abstract.**
This is a draft describing the calculation of the evolution of the gluon density in $x$ from an initial value $x=x_{0}=0.01$ to smaller values, up to $x=10^{-8}$ in the hard pomeron formalism with a running coupling introduced on the basis of the bootstrap equation. The obtained gluon density is used to calculate the singlet part of the proton structure function. Comparison with experiment and the results following from the fixed coupling evolution is made.
Introduction.
==============
Recent results obtained at HERA [@HERA; @ZEUS2] may be interpreted as a manifestation of the hard pomeron, which naturally explains a sharp rise of $F_{2}(x,Q^{2})$ at low $x$. The original BFKL hard pomeron, however, has a drawback of treating the coupling constant as fixed, since it sums only powers of $\log 1/x$ and not those of $\log Q^2$. A rigorous way to introduce a running coupling into it still remains beyond the possibilities of the theory, since it inevitably involves a problem of low $Q^2$ behaviour and thus of confinement. In a series of papers [@braun1; @braun2; @bvv; @bv2] we have adopted a more intuitive way to attack this problem, based on the so-called bootstrap relation [@lip1; @bart1], which is, in fact, the unitarity condition for the $t$-channel with a colour quantum number of a gluon. It ensures that the one-reggeized-gluon exchange supposed to give a dominant contribution in this channel is unitary by itself, which is a necessary requirement to use it as an input for the construction of the BFKL pomeron.
Assuming that this fundamental requirement should be preserved in the theory with a running coupling, we proposed a minimal modification of the BFKL pomeron equation which, on the one hand, satisfies the bootstrap condition and, on the other hand, leads to the standard results with the running coupling in the double log (DL) limit, i.e. when leading terms in the product $\log 1/x\log Q^2$ are summed. This modification reduces to the substitution of every momentum squared $k^2$ in the pomeron equation by a function $\eta(k)$ which at large $k$ behaves as $k^2/2\alpha_{s}(k^{2})$. The behaviour of $\eta(k)$ at low $k$ remains beyond any theoretical control. We parametrize it as
$$\eta(k)=\frac{b_{0}}{8\pi}\,\big(k^{2}+m^{2}\big)\,\ln\frac{k^{2}+m^{2}}{\Lambda^{2}}$$
where $\Lambda$ is the standard QCD parameter, $b_{0}=11-(2/3)N_{f}$, and the effective gluon mass $m$ simulates both the confinement effects and the freezing of the coupling.
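For orientation, the effective coupling implied by this parametrization, $\alpha_{\rm eff}(k^{2})=k^{2}/2\eta(k)$, can be tabulated directly. The following Python fragment is only an illustrative sketch: it assumes the $b_{0}/8\pi$ normalization written above, $N_{f}=4$, and placeholder values for $\Lambda$ and the effective gluon mass $m$, which are not the fitted values of this work.

```python
import numpy as np

b0 = 11 - (2 / 3) * 4        # b0 = 11 - (2/3) N_f, here N_f = 4 (assumed)
Lambda2 = 0.2 ** 2           # Lambda^2 in GeV^2 (placeholder value)
m2 = 0.5 ** 2                # effective gluon mass squared in GeV^2 (placeholder)

def eta(k2):
    """Running-coupling substitute for k^2 entering the pomeron equation."""
    return b0 / (8 * np.pi) * (k2 + m2) * np.log((k2 + m2) / Lambda2)

def alpha_eff(k2):
    """Effective coupling k^2 / (2 eta(k)); freezes as k^2 -> 0."""
    return k2 / (2 * eta(k2))

for k2 in [0.1, 1.0, 10.0, 100.0]:
    print(f"k^2 = {k2:6.1f} GeV^2  ->  alpha_eff = {alpha_eff(k2):.3f}")
```

At large $k^{2}$ the printed values approach the one-loop $\alpha_{s}(k^{2})$, while at small $k^{2}$ they saturate, which is the intended freezing behaviour.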
Solving numerically the pomeron equation in this approach we found two supercritical pomerons [@bvv]. Adjusting the mass $m$ to fit the experimental slope of the leading pomeron of 0.25 $(GeV/c)^{-2}$ we obtained for their intercepts $$\Delta_{0}=0.384,\ \ \ \Delta_{1}=0.191$$ and the slope of the subdominant pomeron comes out as $\alpha'_{1}=0.124\ (GeV/c)^{-2}$. Calculating observable quantities with only these two asymptotic states taken into account we found that the picture which emerges, in all probability, corresponds to energies much higher than the present ones [@bvv]. In particular the average $\langle k_{T}\rangle$ was found to be very large ($\sim 10\ GeV/c$) and independent of energy, which may indicate a saturation of its growth observed at present energies [@bv2].
To describe the present experimental data it is then necessary to take into account all the states from the spectrum of the pomeron equation. This can be achieved by converting the pomeron equation into an evolution equation in $1/x$ and solving it with an initial condition at some (presumably small) value $x=x_{0}$. In such an approach, taking a nonperturbative input at $x=x_{0}$ adjusted to the experimental data also solves, in an effective way, the problem of coupling the pomeron to the hadronic target.
This note is devoted to realizing such a program. In Sec. 2 we state our basic equations. The most difficult part of the program is to pass from the gluon density to the observable structure function. It is discussed in Sec. 3. Sec. 4 is devoted to fixing the initial gluon distribution for the future evolution. In Sec. 5 we present our numerical results. Sec. 6 contains a discussion and some conclusions.
Basic equations
===============
For the forward scattering amplitude the pomeron equation reads
$$(H-E)\,\psi=\psi_{0}$$
Here $\psi$ is a semi-amputated (one leg amputated only) pomeron wave function; $E=1-j$ is the pomeron “energy”, related to its complex angular momentum $j$; $H=T+V$ is the “Hamiltonian” consisting of the kinetic energy given by the sum of the two gluon Regge trajectories, $T=-2\omega$, and of the potential energy $V$. With a running coupling introduced according to [@braun1; @braun2] both are expressed via the mentioned function $\eta$ (Eq. (1)):
$$T(k)=-2\omega(k)=\frac{N_{c}}{4\pi^{2}}\,\eta(k)\int\frac{d^{2}k'}{\eta(k')\,\eta(k-k')}$$
and
$$V\psi(k)=-\frac{T_{1}T_{2}}{2\pi^{2}}\int d^{2}k'\,\psi(k')\left(\frac{\eta(k)}{\eta(k')\,\eta(k-k')}-\frac{1}{\eta(k-k')}\right)$$
where $N_c$ is the number of colours and $T_{1(2)}$ are the colour operators for the two interacting gluons; in the vacuum channel we have $T_1T_2=-N_c$. Finally, the inhomogeneous term $\psi_{0}$ represents the interaction vertex between the pomeron and the hadronic target.
Taking the Mellin transformation of (2) one converts it into an evolution equation in $1/x$:
$$\frac{\partial\psi(x,k)}{\partial\ln(1/x)}=-H\,\psi(x,k)$$
which should be supplemented with an initial condition at some $x=x_{0}$:
$$\psi(x_{0},k)=\psi_{0}(k)$$
containing the nonperturbative input about the coupling to the hadronic target.
The physical interpretation of the pomeron wave function is provided by the fact that in the DL approximation Eq. (5) reduces to an equation for the fully amputated function $\phi(x,k)=\eta(k)\psi(x,k)$:
$$\frac{\partial^{2}\phi(x,k)}{\partial\ln(1/x)\,\partial\ln k^{2}}=\frac{N_{c}\,\alpha_{s}(k^{2})}{\pi}\,\phi(x,k)$$
which coincides with the standard equation for the unintegrated gluon density $xg(x,k^{2})$ in the DL limit. In fact, this circumstance lies at the root of our method to introduce a running coupling into the scheme. Thus we may identify
$$\phi(x,k)=c\,xg(x,k^{2})$$
The normalizing factor $c$ cannot be determined from the asymptotic equation (7). We shall be able to fix it by studying the coupling of the pomeron to the incoming virtual photon in the next section.
Coupling to the virtual photon
==============================
Once the function $\phi$ proportional to the gluon density is determined, one has to couple it to the projectile particle to calculate observable quantities. In particular, to find the structure function of the target one has to couple the gluons to the incoming virtual photon, that is, to find the colour density $\rho(q,k)$ which connects the photon of momentum $q$ to the gluon of momentum $k$. This problem is trivial within the BFKL approach with a fixed small coupling. Then it is sufficient to take the colour density in the lowest order, which corresponds to taking for it the contribution of a pure quark loop into which the incoming photon goes $\rho_{0}(q,k)$.
The problem complicates enormously when one tries to introduce a running coupling into $\rho$. Then one has to take into account all additional gluon and $q\bar q$ pair emissions which supply powers of the logarithms of transverse momenta. Apart from making the coupling run, they will evidently change the form of $\rho(q,k)$. Unfortunately the bootstrap relation can tell us nothing about the ultimate form of the colour density with a running coupling, which essentially belongs to the $t$-channel with a vacuum colour quantum number. So we have to find a different way to introduce a running coupling into $\rho$.
A possible systematic way to do this consists in applying to the photon-gluon coupling the DGLAP evolution equation. One may separate the colour density from the rest of the amplitude by restricting its rapidity range to some maximal rapidity $y_{0}\sim\log Q^2$ (which, of course should be much smaller than the overall rapidity $Y\sim\log Q^{2}/x$). Then the kinematical region of $\rho(q,k)$ will admit the standard DGLAP evolution in $Q^2$. Solving this equation one will find the quark density at scale $Q^2$ of the gluon with momentum $k$ (i.e. essentially the structure function of the gluon with the virtuality $k^2$). This is exactly the quantity needed to transform the calculated gluon density created by the target into the observable structure function of the target. As a starting point for the evolution one may take the perturbative colour density $\rho_{0}$ at some low $Q^2$ when the logs of the transverse momenta might be thought to be unimportant.
This ambitious program, combining evolution in both $1/x$ and $Q^2$, does not, however, look very simple to realize. As a first step, to clearly see the effects of the introduction of a running coupling according to [3,4], we adopt a more phenomenological approach here, trying to guess a possible correct form for $\rho(q,k)$ on the basis of simple physical reasoning and also using the DL approximation to fix its final form.
With a pure perturbative photon colour density one would obtain for the $\gamma^{*}p$ cross-section
$$\sigma(x,Q^{2})=\int d^{2}k\,\rho_{0}(q,k)\,\frac{\phi(x,k)}{\eta^{2}(k)}$$
In fact, the projectile particle should be coupled to the full pomeronic wave function $\phi/\eta^2$. From the physical point of view this expression is fully satisfactory for physical particles. However it is not for a highly virtual projectile.
To see this, we first note that for the forward amplitude our method of introducing a running coupling reduces to a very simple rule: the scale at which the coupling should be taken is given by the momentum of the emitted real gluon ($(k-k')^2$ in the upper rung in Fig. 1). Now take $Q^{2}$ very large and apply the DL approximation. Then the momenta in the ladder become ordered from top to bottom $$Q^{2}\gg k^{2}\gg k'^{2}\gg\dots$$ In this configuration, as can be traced from (2) and (9), all $\alpha_{s}$’s acquire the right scale (i.e. corresponding to the DGLAP equation) except for the upper rung: $\alpha_{s}(k^{2})$ appears twice. This defect can be understood if one notices that the upper gluon is, in fact, coupled to a virtual particle. If this particle were a gluon, then the interaction (4) would cancel one of the two $\alpha(k^{2})$’s and substitute it by an $\alpha$ taken at the scale corresponding to its own virtuality. We assume that something similar should take place also for virtual quarks to which the gluon chain may couple. The scale of the particle momenta squared which enter the upper blob in Fig. 1 should have the order $Q^{2}$ (this is the only scale that remains after these momenta are integrated out). As a result the lowest order density should be rescaled according to
$$\rho_{0}(q,k)\ \longrightarrow\ \frac{\alpha_{s}(Q_{1}^{2})}{\alpha_{s}(k^{2})}\,\rho_{0}(q,k)$$
where $Q_{1}^{2}$ has the same order as $Q^2$.
The approximation we assume in this paper is that the substitution (10) is sufficient to correctly represent the photon colour density with a running coupling. We shall check its validity by studying the quark density which results from (10) in the DL approximation and comparing it with the known result based on the DGLAP equation.
Explicitly the zeroth order density $\rho_{0}$ has the following forms for the transverse (T) and longitudinal (L) photons (see e.g. [@nz1] and do the integration in the quark loop momenta):
$$\rho^{(T)}_{0}(q,k)=\sum_{f}Z_{f}^{2}\int_{0}^{1}d\zeta\,\Big[\big(\zeta^{2}+(1-\zeta)^{2}\big)\big((1+2z^{2})\,g(z)-1\big)+\big(1-g(z)\big)\Big]$$
$$\rho^{(L)}_{0}(q,k)=\sum_{f}Z_{f}^{2}\int_{0}^{1}d\zeta\,\big(1-g(z)\big)$$
Here the summation goes over the quark flavours; $\zeta$ is the quark longitudinal momentum fraction in the loop and $z$ is the corresponding dimensionless combination of $k^{2}$, $Q^{2}$, $\zeta$ and the quark mass, while $m_{f}$ and $Z_{f}$ are the mass and charge of the quark of flavour $f$. The function $g(z)$ results from the integration over the quark loop momenta. The structure function is obtained from the cross-section by the standard relation
$$F_{2}(x,Q^{2})=\frac{Q^{2}}{4\pi^{2}\alpha_{em}}\,\big(\sigma^{(T)}+\sigma^{(L)}\big)$$
In the DL limit only the transverse cross-section contributes. We can also neglect the quark masses in this approximation. Then, with the substitution (10), from (9), (11) and (15) we obtain an expression for the quark (sea) density of the target
$$xq(x,Q^{2})\ \propto\ \int^{Q^{2}}\frac{dk^{2}}{k^{2}}\,\frac{\phi(x,k^{2})}{\eta(k)}\,\int_{0}^{1}d\zeta\,\big(\zeta^{2}+(1-\zeta)^{2}\big)\big((1+2z^{2})\,g(z)-1\big)$$
where $g(z)$ is the quark loop function introduced in Sec. 3 and we assumed that large values of $k^{2}<Q^2$ contribute in accordance with the DL approximation. In this approximation the asymptotics of the gluon density $xg(x,k^{2})$, and consequently of $\phi(x,k^{2})$, is known:
$$\phi(x,k^{2})=c\,xg(x,k^{2})\simeq c\,\exp\sqrt{a\,\ln(1/x)\,\ln\ln(k^{2}/\Lambda^{2})}$$
where $a=48/b_{0}$. Putting (17) into (16), after simple calculations we find the asymptotical expression (18) for the quark density. On the other hand, from the DGLAP equation we find, with the same normalization, an expression (19) of exactly the same form. As we observe, the approximation (10) for the colour density of the photon projectile leads to the correct relation between the quark and gluon densities in the DL limit. This justifies the use of (10), at least for high $1/x$ and $Q^{2}$. Comparing (18) and (19) we also obtain the normalization factor $c$ which relates the pomeron wave function to the gluon density:
$$c=\pi^{2}b_{0}/3$$
The initial distribution
========================
To start the evolution in $1/x$ we have to fix the initial gluon density at some small value $x=x_{0}$. Evidently, the smaller $x_{0}$ is, the smaller the region where we can compare our predictions with the experimental data. On the other hand, if $x_{0}$ is not small enough, application of the asymptotic hard pomeron theory becomes questionable. Guided by these considerations we choose $x_{0}=0.01$ as our basic initial $x$, although we also tried $x=0.001$ to see the influence of possible subasymptotic effects.
The initial wave function $\phi(x_{0},k^{2})$ has to be chosen in accordance with the existing data at $x=x_{0}$ and all $k^2$ available. The experimental $F_{2}$ is a sum of the singlet and nonsinglet parts, the latter giving a relatively small contribution at $x=0.01$. Our theory can give predictions only for the singlet part (and one of the criteria for its applicability is precisely the relative smallness of the nonsinglet contribution). The existing experimental data at $x=0.01$ give values for $F_{2}$ averaged over rather large intervals of $x$ and $Q^2$. For all these reasons, rather than to try to adjust our initial $\phi(x_{0},k^{2})$ to the pure experimental data, we have preferred to match it with the theoretical predictions for the gluon density and the singlet part of $F_{2}$ given by some standard parametrization fitted to the observed $F_{2}$ in a wide interval of $Q^2$ and small $x$. As such we have taken the GRV LO parametrization [@grv]. The choice of LO has been dictated by its comparative simplicity and the fact that at $x=0.01$ the difference between LO and NLO is insignificant.
Thus, for the initial distribution we have taken the GRV LO gluon density with an appropriate scaling factor. Putting this density into Eqs. (8), (9) and (15) one should be able to reproduce the sea quark density and thus the singlet part of the structure function. In the GRV scheme the relation between the gluon density and the quark density is much more complicated and realized through the DGLAP evolution. Since the DGLAP evolution and the pomeron theory are not identical, one should not expect that our initial gluon density should exactly coincide with the GRV one to give the same singlet structure function. One has also to have in mind the approximate character of our colour density $\rho$ at small $Q^2$. In fact, with the initial $\phi$ given by (8) and the gluon density exactly taken from the GRV parametrization at $x=0.01$ we obtain values for the singlet part of the structure function 30% smaller than those given by the same GRV parametrization, the difference growing at low $Q^2$. To make the description better we used a certain arbitrariness in the scale $Q_{1}^{2}$ which enters (10) and also the scale at which the coupling freezes in the density $\rho$. The optimal choice to fit the low $Q^2$ data is to take the coupling $\alpha(Q_{1}^{2})$ at a scale of the order of $Q^{2}$, frozen at low virtualities. With this $\alpha(Q_{1}^{2})$ the obtained singlet structure function at $x=0.01$ has practically the same $Q^{2}$ dependence as the GRV one, although it comes out 30% smaller in magnitude. This mismatch can be interpreted in two different ways. Either we may believe that the gluon density given by the GRV is the correct one and the deficiency in the singlet part of the structure function is caused by our approximate form of the colour density $\rho$ (which is most probable). Or we may think that the colour density to be used in the DGLAP should coincide with ours only for large enough $Q^2$ and $1/x$ and at finite values they may somewhat differ (our relation (8) was established strictly speaking only in the DL limit). Correspondingly we may either take the relation (8) as it stands and use the GRV LO gluon density at $x=0.01$ in it, or introduce a correcting scaling factor 1.3 which brings the structure function calculated with the help of (9)-(15) into agreement with the GRV predictions. In the following we adopt the second alternative, that is we assume that our initial gluon distribution at $x=0.01$ is 30% higher than the one given by the GRV parametrization. The singlet part of the structure function at $x=0.01$ calculated from (9)-(15) with this choice is shown in Fig. 2 together with the GRV predictions. However one can easily pass to the first alternative by simply reducing our results by a factor of 1.3.
Evolution: numerical results
============================
With the initial wave function $\phi(x=0.01,k^{2})$ chosen as indicated in the preceding section we solved the evolution equation for $10^{-8}<x<10^{-2}$. The adopted calculational scheme was to diagonalize the Hamiltonian in (2), reduced to one dimension in the transverse momentum space after angular averaging, and represent the initial wave function as a superposition of its eigenvectors. To discretize $k^2$ a grid was introduced, after which the problem is reduced to a standard matrix one. To check the validity of the obtained results we have also repeated the evolution using a Runge-Kutta method, resulting in a very good agreement. The final results obtained for the gluon distribution $xg(x,Q^{2})$ as a function of $Q^2$ for various $x$ are shown in Figs. 3 and 4 and as a function of $x$ for various $Q^2$ in Figs. 5 and 6. Figs. 3 and 5 correspond to $x$ and $Q^2$ presently available, whereas Figs. 4 and 6 show the behaviour of the calculated gluon density in the region up to very small $x$ and very high $Q^2$, well beyond the present possibilities. For comparison we have also shown the gluon densities for the GRV LO parametrization [@grv], for the MRS parametrization [@mrs] and also for the pure BFKL evolution as calculated in [@kms].
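The calculational scheme just described can be illustrated compactly. The Python sketch below is schematic only: the grid, the placeholder tridiagonal matrix standing in for $H$, and the initial profile are assumptions for illustration (the true Hamiltonian of Eq. (2) couples all grid points through the $\eta$-dependent kernel). It shows the two independent methods used here, expansion over eigenvectors and Runge-Kutta integration of $\partial\psi/\partial y=-H\psi$ with $y=\ln(1/x)$.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import eig

# Toy version of the evolution d(psi)/dy = -H psi, y = ln(1/x).
# H here is a placeholder tridiagonal operator on a log-k^2 grid;
# the actual Hamiltonian couples all grid points through eta.

n = 200
k2 = np.logspace(-1, 6, n)                  # GeV^2 grid (illustrative range)
H = (np.diag(np.full(n, 0.5))               # placeholder diagonal
     - np.diag(np.full(n - 1, 0.25), 1)
     - np.diag(np.full(n - 1, 0.25), -1))

psi0 = np.exp(-np.log(k2 / 10.0) ** 2)      # placeholder initial profile

# Method 1: expand psi0 over eigenvectors of H and evolve each mode.
w, v = eig(H)
coeff = np.linalg.solve(v, psi0)
def psi_eig(y):
    return (v @ (coeff * np.exp(-w * y))).real

# Method 2: direct Runge-Kutta integration as an independent cross-check.
y_max = np.log(1e6)                         # from x0 = 0.01 down to x = 1e-8
sol = solve_ivp(lambda y, p: -H @ p, (0.0, y_max), psi0,
                t_eval=[y_max], method="RK45", rtol=1e-8)

print(np.max(np.abs(psi_eig(y_max) - sol.y[:, 0])))  # should be tiny
```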
Putting the found gluon densities into Eqs. (9)-(15) we obtain the (singlet part of) proton structure function $F_{2}(x,Q^{2})$. The results are illustrated in Figs. 7 and 8 for the $Q^2$-dependence and Figs. 9 and 10 for the $x$ dependence. As for the gluon densities, the experimentally investigated region is shown separately in Figs. 7 and 9, in which the existing experimental data from [@ZEUS2] are also presented.
Finally, to see a possible influence of subasymptotic effects, we have repeated the procedure taking as a starting point for the evolution a lower value $x=0.001$. The resulting gluon distributions and structure functions are also presented in the above figures.
Discussion and conclusions
==========================
To discuss the obtained results we have to remember that they involve two quantities of a different theoretical status. One is the pomeron wave function $\phi$ which can be identified with the gluon distribution (up to a factor) on a rather solid theoretical basis. The other is the quark density (which is equivalent to the structure function), for which we actually have no rule for the introduction of a running coupling and which in the present calculation involves a semi-phenomenological ansatz (10). Evidently the results for the latter are much less informative as to the effect of the running coupling introduced in our way. Therefore we have to separately discuss our prediction for the gluon distribution, on the one hand, and for the structure function, on the other.
Let us begin with the gluon distribution. Comparing our results with those of GRV, which correspond to the standard DGLAP evolution, we observe that at high enough $Q^2$ and low enough $x$ our distributions rise with $Q$ and $1/x$ faster than those of GRV. This difference is, of course, to be expected. The hard pomeron theory in any version predicts a power rise of the distribution with $1/x$ to be compared with (19) for the DGLAP evolution. As to the $Q$-dependence, the fixed coupling (BFKL) hard pomeron model predicts a linear rise, again much stronger than (19). Our running coupling model supposedly leads to a somewhat weaker rise. From our results it follows that it is still much stronger than for the DGLAP evolution. However one can observe that these features of our evolution become clearly visible only at quite high $Q$ and $1/x$. For moderate $Q<10\, GeV/c$ and/or $x>10^{-4}$ the difference between our distributions and those of GRV is insignificant. As to the DGLAP evolved MRS parametrization, it gives the gluon distribution which lies systematically below the GRV one and, correspondingly, below our values, the difference growing with $Q$ and $1/x$.
We can also compare our gluon distributions with the pure BFKL evolution (fixed coupling) results, as presented in [12]. One should note that the initial values for the evolution chosen in [12] are rather different from ours (borrowed from GRV). The initial gluon distribution in [12] is smaller than ours by a $Q$-dependent factor, equal to $\sim 2.5$ at $Q=2\ GeV/c$ and $\sim 1.4$ at $Q=30\ GeV/c$. If one roughly takes that into account then from Fig. 3 one concludes that at moderate $1/x$ our evolution and the pure BFKL one lead to quite similar results. However at smaller $x$ (Fig. 4) one observes that our running coupling evolution predicts a weaker rise with $Q$, as expected.
Passing to the structure functions we observe in Fig. 7 and 9 that our results give a somewhat too rapid growth with $1/x$ in the region $10^{-3}<x<10^{-2}$ as compared to the experimental data (and also to the parametrizations GRV fitted to these data). With the scaling factor 1.3 introduced to fit the data at $x=0.01$ we overshoot the data at $x<10^{-3}$ by $\sim$25%. Without this factor we get a very good agreement for $x<10^{-3}$ but are below experiment at $x=0.01$ by the same order. This discrepancy may be attributed either to subasymptotic effects or to a poor quality of our ansatz (10). Comparison with the result obtained with a lower starting point for the evolution $x=0.001$ shows that subasymptotic effects together with a correct form of coupling to quarks may be the final answer.
Acknowledgments.
================
The authors express their deep gratitude to Prof. G. Venturi for his constant interest in this work and helpful discussions. M.A.B. thanks the INFN for its hospitality and financial help during his stay at Bologna University.
Figure Captions
===============
Fig. 1 The forward amplitude for a pomeron coupled to a virtual photon.
Fig. 2 The singlet part of the structure function of the proton at $x=0.01$. The continuous line is the result of our calculation while the dashed line correspond to the GRV prediction.
Fig. 3 The gluon distributions as a function of $Q^2$ evolved from $x=0.01$ and $x=0.001$ for the experimentally accessible kinematical region. Standard DGLAP evolved parametrizations (GRV-LO and MRS) and the BFKL evolved distributions from [12] (we report only a few points connected by lines) are shown for comparison.
Fig. 4 Same as Fig. 3 for asymptotically high values of $Q^2$ and $1/x$.
Fig. 5 The gluon distributions as a function of $x$ evolved from $x=0.01$ and $x=0.001$ for the experimentally accessible kinematical region. Standard DGLAP evolved parametrizations (GRV-LO and MRS) and the BFKL evolved distributions from [12] (we report only a few points connected by lines) are shown for comparison.
Fig. 6 Same as Fig. 5 for asymptotically high values of $Q^2$ and $1/x$.
Fig. 7 $Q^2$ dependence of the singlet part of the proton structure function obtained by evolution from $x=0.01$ and $x=0.001$, compared to the GRV prediction and the ZEUS 94 data.
Fig. 8 Same as Fig. 7 for asymptotically high values of $Q^2$ and $1/x$.
Fig. 9 $x$ dependence of the singlet part of the proton structure function obtained by evolution from $x=0.01$ and $x=0.001$, compared to the GRV prediction and the ZEUS 94 data.
Fig. 10 Same as Fig. 9 for asymptotically high values of $Q^2$ and $1/x$.
[^1]: Permanent address: Dep. High-Energy Physics, University of St. Petersburg, 198904 St.Petersburg, Russia
Jeff Bingaman, Christopher Dodd, Edward Kennedy, and John Kerry.
This is the first new federal environmental education grant-making
program authorized in 18 years.
Endorsed by over 240 colleges and universities, higher education
associations, NGOs and corporations, this grant program will
provide the catalyst for colleges and universities to develop and
implement more programs and practices around the principles of sustainability.
The bill also directs the Department of Education to convene a national
summit of higher education sustainability experts, federal agency
staff, and business leaders to identify best practices and opportunities
for collaboration in sustainability.
At the original intended authorization level of $50 million, USP
will annually support between 25 and 200 sustainability projects
at individual higher education institutions and higher education
consortia/associations. Individual institutions are eligible for
funding to:
a) develop and implement administrative and operations practices
that test,
model, and analyze principles of sustainability;
b) establish multidisciplinary education, research, and outreach
programs that
address the environmental, social, and economic dimensions of sustainability;
c) support research and teaching initiatives that are
multidisciplinary and integrate environmental, economic, and social
elements;
d) establish initiatives in the areas of energy management,
green building, waste management, purchasing, toxics, transportation,
and other aspects of sustainability;
e) support student, faculty, and staff work at institutions
of higher education to implement, research, and evaluate sustainable
practices;
f) establish sustainability literacy as a requirement for
undergraduate and graduate degree programs; and
g) integrate sustainability curriculum in all programs of
instruction, particularly in business, architecture, technology,
manufacturing, engineering, and science programs.
Associations and consortia are eligible for funding to:
a) conduct faculty, staff and/or administrator trainings;
b) compile, evaluate and disseminate best practices, case studies,
and standards;
c) engage external stakeholders such as business, alumni,
and accrediting agencies;
d) create analytical tools to assess and measure institutional
progress; and
e) develop educational benchmarks.
The immediate purposes of USP are to:
(1) support faculty, staff, and students in their
efforts to establish administrative and academic sustainability
programs on campus;
(2) promote and enhance research by faculty and students
in sustainability practices and innovations; and
(3) support colleges and universities in their work with
community partners from the business, government, and nonprofit
sectors to design and implement sustainability programs for application
in the community and workplace.
Next Steps:
1) We need to call upon Congress to appropriate funds
for this new program at a level as close to the originally envisioned
$50 million as possible. While funding is unlikely for the FY09
budget since this process is now well underway and Congress is also
likely to level fund almost all government programs,
we need to start now to build momentum for the FY10 budget.
2) If/when Congress appropriates funds, the Department of Education
will then set up the grantmaking process and issue an RFP.
The very earliest this might happen is late spring 2009, but it is far
more likely to be late spring 2010.
For further information, contact:
James L. Elder, Director
Campaign for Environmental Literacy
978-526-7768
Representative Earl Blumenauer (OR-3) introduced HESA (HR 3637) with:
Vernon J. Ehlers (MI-03)
Rick Boucher (VA-9)
David Wu (OR-1)
21 additional Members signed on as co-sponsors:
George Butterfield [NC]
Yvette Clarke [NY]
Susan Davis [CA]
Bob Filner [CA]
Barton Gordon [TN]
Al Green [TX]
Raul Grijalva [AZ]
Phil Hare [IL]
Paul Hodes [NH]
Rush Holt [NJ]
Michael Honda [CA]
Darlene Hooley [OR]
Zoe Lofgren [CA]
Carolyn Maloney [NY]
Ileana Ros-Lehtinen [FL]
John Sarbanes [MD]
Carol Shea-Porter [NH]
Michael Thompson [CA]
Lynn Woolsey [CA]
Albert Wynn [MD]
John Yarmuth [KY]
Senator Patty Murray (WA) introduced HESA in the Senate (S 2444) with:
Jeff Bingaman (NM)
Christopher Dodd (CT)
Edward Kennedy (MA)
John Kerry (MA)
3 additional Senators signed on as co-sponsors: | http://fundee.org/campaigns/usp/ |
Providing a film or a program with captions (verbatim or in edited form)
Offering content advice by providing subject-related information and conducting research. Content is any material, written or oral, intended for publication or broadcasting. As to evaluation, the content is examined for relevance of information to the subject in question, the consistency of ideas, logic, and overall structure.
Creating brand names and/or writing promotional and advertising copy for ATL and BTL marketing, including but not limited to: slogans, taglines, letters, corporate profiles, and brochures.
Revising written content to correct any spelling, grammar and/or terminology mistakes. The overall style, syntax and language register are improved or rewritten wherever necessary. Editing is of two types:
- Basic Proofreading: Detecting and correcting spelling, grammatical mistakes and punctuation marks.
- Substantive Editing: Revising the content beyond a basic proofreading. This type of edit involves rewriting and/or making language improvements to ensure that the content has a coherent structure, a sound sequence of ideas, and correct references to illustrations, graphs and captions (if any).
Verifying the accuracy of factual information in any content. It covers: historical entries, events, names, places, dates; technical, cultural and scientific data.
Linguistic quality assurance of video and e-games.
Planning and managing projects within our areas of expertise.
Providing a film or a program with subtitles (original and translated)
Paraphrasing, translating or editing written content to make it fit for the target audience of a certain country or location. Language localization considers cultural differences and integrates the content or product into the conventions of the intended environment.
Conveying the meaning of a written source text into a target one. It is a comprehensive process following a set of standard steps:
1) examination of the subject in question;
2) producing an accurate and faithful translation; and
3) rendering a final text that reads as well as the original.
Double-checking a translation to ensure that it is correct, and free from translation and linguistic errors. This edit is of two types:
- Comparative: Checking the accuracy of a translation by comparing the target and source texts.
- Monolingual: Checking a translation without referring back to the source text.
Transcribing recorded material that has no script. | https://tarjamat.net/services-en/ |
In the case of Ebola, x_t represents the total number of people infected, x_0 represents the number of index cases at the starting point, r represents the rate of disease transmission (believed to be about 2 for Ebola, i.e., each Ebola victim transmits the disease on average to 2 other people), and t (as an exponent) represents the time interval used for measurement (months). The formula reduces to x_t = 3^t for a transmission rate of two with a single index case. Since 70% of Ebola patients die, and since the survivors no longer transmit, the formula for active Ebola cases further reduces to x_t = 2^t.
With a transmission rate of 2 and a one month transmission time, there would be 2 active Ebola cases at the end of the first month, assuming the index case either died or survived with immunity. In two months there would be 4 cases, in three months 8 cases, in four months 16 cases, in six months 64 cases, in nine months 512 cases, in one year 4,096 cases, and in two years 16,777,216 cases. If cases emerge after three weeks instead of four weeks, the numbers are much worse. If healthcare workers die off early in the Ebola epidemic, as one would expect with no vaccination and inadequate protection, then the transmission rate would increase from 2 to who knows how high, also leading to much higher case numbers.
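The arithmetic above is easy to reproduce. The sketch below is illustrative only and assumes the simplified model described here: a fixed one-month serial interval, with each active case infecting r = 2 new people and then leaving the active pool through death or immune recovery. The function name and parameters are mine, not the author's.

```python
# Illustrative sketch (assumed model, not from the source): each active case
# infects r new people per month, then dies or recovers with immunity, so
# active cases follow x_t = x_0 * r^t.
def active_cases(months, r=2, index_cases=1):
    return index_cases * r ** months

for t in (1, 2, 3, 4, 6, 9, 12, 24):
    print(f"month {t:>2}: {active_cases(t):>10,} active cases")
```

Running this reproduces the figures quoted above, from 2 active cases at the end of the first month to 16,777,216 at two years.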
As we speak the Ebola epidemic in West Africa is expanding exponentially, doubling each month, apparently with most cases unreported. What if our healthcare system, as good as it is, was unable to alter a transmission rate (r) of 2 for Ebola? All that would be necessary for this to happen is for each Ebola victim in turn to infect one family member, and one friend, stranger or healthcare worker. CDC recommended precautions for your local hospital are currently inadequate to prevent nosocomial (hospital) Ebola transmission, and the CDC's failure to quarantine all known Ebola contacts in the community (the proper way to monitor them) – and their inherent inability to quarantine unknown Ebola contacts – could easily lead to transmission of the dreaded disease in our neighborhoods. If the CDC can't properly monitor Ebola transmission among our doctors and nurses at Texas Health Presbyterian Hospital, then why would we expect them to properly monitor Ebola transmission in the community?
"The pool of people being monitored for potential exposure to the disease appeared to more than double, from 48 to perhaps more than 100, none of whom had reported any symptoms of Ebola. All of those now being evaluated for the first time were workers at Presbyterian who cared for Mr. Duncan after he was admitted. Though the precise number of workers remains unknown, questions were also being raised about why they had not been monitored previously."
The CDC will inevitably be unaware of new index Ebola cases from endemic areas, asymptomatic individuals escaping detection from thermometers at the airport, and likewise unaware of some ensuing secondary contacts in the community, thus each new Ebola index case may transmit the disease to a family member, neighbor or stranger, and the CDC, along with the rest of us, would be in the dark. We would be unaware until index cases present themselves to a clinic or hospital having first transmitted Ebola to secondary contacts prior to arrival at the clinic or hospital – transmission sometimes occurring before symptoms were admitted to or were clinically evident in the index cases – likely near the end of the incubation period or just after. Some of the secondary contacts would be strangers to the index case – shaking hands with salespersons at a counter, a waitress, vomiting or having diarrhea in a public restroom, coughing or sneezing near someone in public transportation, etc., secondary contacts unknown to the index case and thus untraceable by the CDC. The ensuing secondary cases repeat the same pattern, transmitting Ebola to known and anonymous tertiary individuals, both initially unaware of the terrible truth, and so on.... Remember, in order to achieve exponential growth, each new case only has to transmit Ebola to two others, and this would mostly occur outside of the hospital, i.e.: even a great healthcare system would be unable to stop pre-hospital community transmission in an un-vaccinated population. If one or two (or more) of the secondary cases occur anonymously, then the CDC would be unable to track them until they became sick – too late – because by then they would have in turn transmitted Ebola to one or two (or more) tertiary individuals – some anonymously. And so on it goes. The fly in the CDC ointment is anonymous pre-hospital transmission of Ebola – which will defeat case tracking.
"Moreover, said some public health specialists, there is no proof that a person infected – but who lacks symptoms – could not spread the virus to others. 'It's really unclear,' said Michael Osterholm, a public health scientist at the University of Minnesota who recently served on the U.S. government's National Science Advisory Board for Biosecurity. 'None of us know.'"
Under current CDC "leadership" exponential expansion of Ebola in the United States could occur. Unless things change drastically and quickly, and with thanks to our President, FDA, and CDC Director, we could have thousands dead within one year of an index case (x0), and millions dead in two years, and even worse if we end up with multiple index cases via unblocked inbound air travel (and sea travel) from West Africa – or from future geographic areas of epidemic – like South or Central America and Mexico. Where is the ZMapp and other effective therapy which would not only cure cases but also reduce the nosocomial transmission rate? Where is the vaccine? Without widely available effective therapy and widespread Ebola vaccination we the American people are reduced to sitting ducks. What has happened to the all-American virtues of intelligence and common sense – have we become that blind to a clear and present danger? For now our best hope lies in plasma transfusions from survivors, but will there be enough? Will we be able to transfuse secondary blood into tertiary cases before they in turn transmit Ebola in the community? A vaccinated population would not have to face these questions.
We now have enough information that should lead us to distrust our government regarding the Ebola epidemic – we should question their dogma against airborne transmission of Ebola and asymptomatic Ebola transmission – they are not an infallible priesthood. Having failed in their primary duty to protect the American people, we find ourselves in harm's way.
Many people enjoy watching and playing sports. Exercise is important for everyone's health, teamwork carries over into cooperative work and relationships, and the discipline and perseverance necessary to athletic success are character traits that can promote success in other areas of life. One other thing about sports that interests me is the influence famous athletes can have on the culture.
This week I am giving a lot of attention to a picture book focused on a very inspiring person: Emmanuel Ofosu Yeboah, a disabled cyclist who rode across Ghana to challenge people's stereotypes and assumptions about disability. He was and is an inspiration to countless individuals; further, he has successfully advocated for legal protection for the disabled against discrimination in his home country.
In a couple of weeks, on June 1, we are having assemblies with the children's librarians from the Manhattan Beach public library. The topic is the public library's summer reading programs, and grades TK, K, 1, 2, and 3 will be going to the assemblies. Then, after school, the librarians will be available in the Pennekamp library if you or your students would like to come by and meet them, ask any questions, get book recommendations, learn about e-books at the public library--anything you'd like to know! | http://www.pennekamplibrary.com/currentnews/may-16-2016 |
Position Overview: Do you have a passion for writing and storytelling? Do you believe every person has a valuable perspective and experience to share? Do you want to use your creativity to improve South King County while improving your professional experience? Then our KYFS GiveBIG Social Media Volunteer is the position for you.
Kent Youth and Family Services is seeking a GiveBIG Social Media Volunteer to help build our presence leading up to the Seattle Foundation’s GiveBIG fundraiser on May 10. You’ll have a chance to pitch ideas, report and write original stories, network with local experts and guest bloggers, and engage our social media audience in discussions related to GiveBIG. You’ll gain hands-on experience in writing, editing, content creation, blog management, social media, project management, and nonprofit communications.
Desired Talents: You’ve got a great ability to write an update, tweet, or post that attracts attention. Those around you see you as a master of words. You possess a curiosity that can’t be quenched. You ask really great questions. You are in love with your community and eager to give back.
Key Responsibilities:
Under the supervision of the Director of Development and Community Relations, the volunteer will:
- Help create content for the blog
- Contribute ideas to the newsletter
- Create visual assets to use during the GiveBIG campaign
- Create social media content
- Monitor blog and social media performance and suggest improvements to posts’ format and content
- Identify and build relationships with key community figures
- Moderate blog comments and inquiries
- Attend events, conduct interviews, and collaborate with KYFS clients, volunteers, donors, and staff to further the agency’s image
Need to Haves:
- Relevant work or volunteer experience
- Strong project coordination skills, including ability to prioritize, work proactively, and build positive relationships with coworkers and collaborators
- Ability to articulate your ideas and write clear, error-free prose
- Track record of keeping commitments, meeting deadlines, and communicating openly to keep teams and projects moving forward
- Outstanding organizational skills and attention to detail
- Curiosity, imagination, sense of humor, and drive to learn new things every day
Odds and Ends
- Must be available for a minimum of 5 hours per week
- Start day is flexible
- End date is May 11
How to Apply
Please email a cover letter, resume, and writing sample to Nathan Box at [email protected] with subject line SOCIAL MEDIA VOLUNTEER. | https://kyfs.org/givebig-social-media-volunteer-needed/ |
Abstract
Protein homeostasis (proteostasis) refers to the ability of cells to preserve the correct balance between protein synthesis, folding and degradation. Proteostasis is essential for optimal cell growth and survival under stressful conditions. Various extracellular and intracellular stresses including heat shock, oxidative stress, proteasome malfunction, mutations and aging-related modifications can result in disturbed proteostasis manifested by enhanced misfolding and aggregation of proteins. To limit protein misfolding and aggregation cells have evolved various strategies including molecular chaperones, the proteasome system and autophagy. Molecular chaperones assist folding of proteins, protect them from denaturation and facilitate renaturation of the misfolded polypeptides, whereas proteasomes and autophagosomes remove the irreversibly damaged proteins. The impairment of proteostasis results in protein aggregation that is a major pathological hallmark of numerous age-related disorders, such as cataract, Alzheimer’s, Parkinson’s, Huntington’s, and prion diseases. To discover protein markers and speed up diagnosis of neurodegenerative diseases accompanied by protein aggregation, proteomic tools have increasingly been used in recent years. Systematic and exhaustive analysis of the changes that occur in the proteomes of affected tissues and biofluids in humans or in model organisms is one of the most promising approaches to reveal mechanisms underlying protein aggregation diseases, improve their diagnosis and develop therapeutic strategies.
Significance: In this review we outline the elements responsible for maintaining cellular proteostasis and present an overview of proteomic studies focused on protein-aggregation diseases. These studies provide insights into the mechanisms responsible for age-related disorders and reveal new potential biomarkers for Alzheimer’s, Parkinson’s, Huntington’s and prion diseases.
Journal: Journal of Proteomics, ISSN 1874-3919
Issue year: 2019; Vol. 198, pp. 98-112
Keywords: proteostasis, heat shock proteins, protein aggregation diseases, biomarkers
DOI: 10.1016/j.jprot.2018.12.003 (https://doi.org/10.1016/j.jprot.2018.12.003)
Language: English
The Turlock Unified School District has made a plan to better promote its Career Technical Education courses to all English learners as part of the district’s response to a report released by the Stanislaus County Civil Grand Jury in June.
By law, the district had 90 days to officially respond to the jury’s findings and recommendations regarding Career Technical Education. The response must include whether the district agrees or disagrees with the recommendations with accompanying explanations.
According to Assistant Superintendent of Educational Services Heidi Lawler, former Director of CTE and Program Equity Tami Truax and current director John Acha worked alongside school administrators and data systems staff to gather the data requested immediately after the jury’s report was released. Acha presented the findings and the proposed response to the Board prior to their unanimous vote.
“We're excited to share with you this report, and I say excited because this is what I enjoy most about my job: Identifying the great things that TUSD is doing already, as well as instilling change… We always need to evolve and we always have ways to improve and I appreciate the Civil Grand Jury,” Acha said.
Acha began by sharing that the district currently offers over 40 CTE courses with 31 teachers teaching them. After taking a closer look at the current state of the programs, Acha and the research team came to the conclusion that TUSD should agree with 8 of the 14 findings and recommendations.
One of the first topics listed by the SCCGJ that Acha suggested the district agree with was the finding that all English Language Learners in the district are able to enroll in CTE courses. While that is the case, Acha and the team also agreed with the recommendation that the district could do a better job of promoting the courses to all students, including English learners. This recommendation also correlates with the finding regarding participation, as Acha agreed with the jury that enrollment tends to vary by school and district in Stanislaus County.
“[It’s] something we want to look into,” Acha told the Board. “Why is that? What is it that one site may be doing better versus another, or what could have caused that? It’s good information to focus on.”
Acha and the team also acknowledged the jury’s finding that the CTE completion rate amongst English learners is fairly low. According to data presented to the Board, the CTE course completion rate amongst English learners at Turlock High was only 10% in 2018 and 2019. At Pitman High, there was a 5% completion rate.
“My goal as the Director of CTE and Program Equity is to increase our pathway completion rates, which are called New Career Readiness rates,” Acha said.
Acha shared similar sentiments to the jury’s finding regarding graduation rates at continuation schools in Stanislaus County. Acha explained that while graduation rates varied dramatically by campus, Turlock’s Roselawn High had one of the higher rates in the county. Nevertheless, Acha believes there is always room for improvement.
Amid the long list of agreements, Acha did have some disagreements regarding the jury’s findings and recommendations.
Staying on the topic of continuation schools, Acha and his team partially disagreed with the finding that TUSD offers limited CTE programs to continuation students.
“We disagree partially here as we already as a district are incorporating some of the recommendations,” Acha said. “We already have multiple CTE courses available. At the continuation high schools, it's difficult sometimes with smaller staff just like it would be for a very smaller school, and that is a challenge, but it’s nothing you can't overcome or continue to work on.”
Another disagreement that came about was the jury’s finding that schedule conflicts limit English learners’ participation in CTE programs.
“Coming from my previous job building the master schedule at Pitman High School, I know the work that I did to reduce and limit the amount of complex issues for all students,” Acha said. “Absolutely I want every student to take every class they possibly can, but it's not possible. And it's not limited to English learners. At some point, there may just not be a way to make it work.”
Acha added that the Aeries software can usually assist in resolving schedule conflicts and that there is already an increased effort to decrease conflicts for English learners.
The SCCGJ also listed in their report that there could be eased financial burdens on students interested and participating in CTE courses, a finding that Acha and the team partially disagreed with as current policies and practices are being implemented to remove this barrier for TUSD students. There was a similar partial disagreement as it related to necessary technology. Acha explained that the devices all students are provided with can be compatible with over 130 languages, but acknowledged that steps to access those tools could be better promoted and taught.
All other findings and recommendations released by the SCCGJ were neither agreed nor disagreed with, as TUSD has already implemented the changes or has already moved to make the recommended changes before the end of the current calendar year.
Now that the Board has approved the response, a CTE Task Force will be put together for this school year. The Task Force will conduct a comprehensive review of the SCCGJ report with counseling and admin teams at Turlock, Pitman and Roselawn High and ultimately develop action plans to address the jury’s findings and implement its recommendations. | https://www.turlockjournal.com/news/education/turlock-school-board-responds-grand-jury-regarding-career-tech-initiatives/ |
Celta Vigo have scored an average of 1.41 goals per game since the beginning of the season. The team's average number of goals scored per game in the last 8 matches is 1.13, which is 19.9% lower than their current season's average.
Of the above 6 remaining matches in La Liga, Celta Vigo will be playing as many matches at home and away (3 in each case).
Celta Vigo's opponents to be played in home games currently have a combined average of 1.43 Points Per Game in their away matches played so far this season in the league.
As for the hosts that Celta Vigo will be visiting, so far this season they have a combined average of 1.79 Points Per Game at home in the league.
With 32 points so far this season, Celta Vigo have picked up 11 points less than they did last season after 32 matches, and their total in the current season is also 12 points lower than it was two seasons ago at the same stage.
Celta Vigo have obtained as many 1-goal margin wins as wins with a margin of 2 goals or more (4 wins for each).
Celta Vigo have been in the lead for an average duration of 20.8 minutes per match (based on a 90-minute match duration), which is lower than the average duration of 26.0 minutes per match during which their opponents were in the lead in those matches.
Celta Vigo have taken the lead 16 times and have conceded an equalizer on 8 occasions, corresponding to a lead-defending rate of 50% (for an opponents' equalizing rate of 50%).
Of the 25 times that Celta Vigo's opponents have taken the lead, Celta Vigo have managed to score an equalizer on 9 occasions, corresponding to an equalizing rate of 36%.
At home, Celta Vigo have been in the lead for an average duration of 23.9 minutes per match (based on a 90-minute match duration), which is longer than the average duration of 20.8 minutes per match during which their opponents were in the lead in those matches.
In home matches, Celta Vigo have taken the lead 9 times and have conceded an equalizer on 3 occasions, corresponding to a home lead-defending rate of 67% (for an opponents' equalizing rate of 33%).
Away from home, Celta Vigo have been in the lead for an average duration of 17.6 minutes per match (based on a 90-minute match duration), which is lower than the average duration of 31.3 minutes per match during which their opponents were in the lead in those matches.
Of the 15 times that Celta Vigo's opponents have taken the lead when Celta Vigo were playing away, Celta Vigo have managed to score an equalizer on 5 occasions, corresponding to an away equalizing rate of 33%.
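The percentages quoted in this section follow directly from the stated counts; here is a short illustrative sketch recomputing them (the helper function is assumed, not from the source):

```python
# Recomputing the quoted rates from the stated counts (sketch only).
def pct(part, whole):
    return round(100 * part / whole)

print(pct(16 - 8, 16))  # overall lead-defending rate: 50%
print(pct(9, 25))       # overall equalizing rate: 36%
print(pct(9 - 3, 9))    # home lead-defending rate: 67%
print(pct(5, 15))       # away equalizing rate: 33%
```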
Celta Vigo conceded at least 1 goal in each of their last 7 matches. Celta Vigo conceded at least 1 goal in 94% of their away matches. 69% of Celta Vigo's points have been earned at home. Celta Vigo have failed to win in their last 9 away matches.
After 320 matches played in La Liga, a total of 830 goals have been scored (2.59 goals per match on average). The menus above provide access to league-level statistics and results analysis in the Spanish league, including La Liga results and scoring stats such as clean sheets, average goals scored and goals conceded. | https://www.soccerstats.com/h2h.asp?league=spain&t1id=18&t2id=5&ly=2019 |
Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality.
Financial conflicts of interest in systematic reviews (e.g. funding by drug or device companies or authors' collaboration with such companies) may impact on how the reviews are conducted and reported.

To investigate the degree to which financial conflicts of interest related to drug and device companies are associated with results, conclusions, and methodological quality of systematic reviews, we searched PubMed, Embase, and the Cochrane Methodology Register for studies published up to November 2016. We also read reference lists of included studies, searched grey literature sources, and Web of Science for studies citing the included studies.

Eligible studies were studies that compared systematic reviews with and without financial conflicts of interest in order to investigate differences in results (estimated treatment effect and frequency of statistically favourable results), frequency of favourable conclusions, or measures of methodological quality of the review (e.g. as evaluated on the Oxman and Guyatt index).

Two review authors independently determined the eligibility of studies, extracted data, and assessed risk of bias. We synthesised the results of each study relevant to each of our outcomes. For meta-analyses, we used Mantel-Haenszel random-effects models to estimate risk ratios (RR) with 95% confidence intervals (CIs), with RR > 1 indicating that systematic reviews with financial conflicts of interest more frequently had statistically favourable results or favourable conclusions, and had lower methodological quality. When a quantitative synthesis was considered not meaningful, results from individual studies were summarised qualitatively.

Ten studies with a total of 995 systematic reviews of drug studies and 15 systematic reviews of device studies were included. We assessed two studies as low risk of bias and eight as high risk, primarily because of risk of confounding. The estimated treatment effect was not statistically significantly different for systematic reviews with and without financial conflicts of interest (Z-score: 0.46, P value: 0.64; based on one study of 14 systematic reviews which had a matched design, comparing otherwise similar systematic reviews). We found no statistically significant difference in frequency of statistically favourable results for systematic reviews with and without financial conflicts of interest (RR: 0.84, 95% CI: 0.62 to 1.14; based on one study of 124 systematic reviews). An analysis adjusting for confounding due to methodological quality (i.e. score on the Oxman and Guyatt index) provided a similar result. Systematic reviews with financial conflicts of interest more often had favourable conclusions compared with systematic reviews without (RR: 1.98, 95% CI: 1.26 to 3.11; based on seven studies of 411 systematic reviews). Similar results were found in two studies with a matched design, which therefore had a reduced risk of confounding. Systematic reviews with financial conflicts of interest tended to have lower methodological quality compared with systematic reviews without financial conflicts of interest (RR for 11 dimensions of methodological quality spanned from 1.00 to 1.83). Similar results were found in analyses based on two studies with matched designs.

Systematic reviews with financial conflicts of interest more often have favourable conclusions and tend to have lower methodological quality than systematic reviews without financial conflicts of interest.
However, it is uncertain whether financial conflicts of interest are associated with the results of systematic reviews. We suggest that patients, clinicians, developers of clinical guidelines, and planners of further research could primarily use systematic reviews without financial conflicts of interest. If only systematic reviews with financial conflicts of interest are available, we suggest that users read the review conclusions with skepticism, critically appraise the methods applied, and interpret the review results with caution.
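For readers unfamiliar with the statistics, the sketch below shows how a risk ratio and its 95% confidence interval of the kind quoted above are typically computed for a single 2x2 table, using the standard log-risk-ratio method. The counts are hypothetical and are not the review's data; the review itself pooled studies with Mantel-Haenszel random-effects models, which involves additional weighting.

```python
import math

# Risk ratio and 95% CI for one 2x2 table (hypothetical counts, for
# illustration only; not the review's data).
def risk_ratio(events_exposed, n_exposed, events_control, n_control):
    rr = (events_exposed / n_exposed) / (events_control / n_control)
    se_log_rr = math.sqrt(1 / events_exposed - 1 / n_exposed
                          + 1 / events_control - 1 / n_control)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

# e.g. favourable conclusions in 60/100 reviews with COI vs 95/311 without:
rr, lo, hi = risk_ratio(60, 100, 95, 311)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```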
The details of the latest provincial draw by Alberta have been released.
On February 16, 2021, a total of 159 Express Entry candidates – eligible for the Alberta Express Entry Stream – were invited by the Alberta Immigrant Nominee Program [AINP] to apply for a nomination through the province.
This is the fourth AINP draw to be held in 2021. With this, a total of 509 invitations have been issued by the AINP so far this year.
In order to receive an invitation by the AINP under the Alberta Express Entry Stream, the candidate must have an active Express Entry profile with Immigration, Refugees and Citizenship Canada [IRCC].
In the latest AINP draw, Express Entry candidates that received an invitation were required to have a ranking score of at least 352 under the Comprehensive Ranking System [CRS], that is, a minimum CRS of 352.
They must also have expressed their interest – through an Expression of Interest [EOI] profile – in settling within Alberta when granted their Canadian permanent residence.
Alberta is one of the 9 provinces and 2 territories that are a part of the Provincial Nominee Program [PNP] of Canada.
Express Entry candidates that are successful in securing a nomination through the PNP are guaranteed an invitation from IRCC in the next federal Express Entry draw to be held.
It is the highest-ranked candidates that receive invitations to apply in the federal draws that are held by the Canadian government.
A PNP nomination in itself fetches 600 ranking points – as per the Comprehensive Ranking System [CRS] – for an Express Entry candidate.
Example: An Express Entry candidate with a low CRS of 300 secures a PNP nomination. Now, with the ‘additional’ points added, their score becomes CRS 900 [300 for human capital factors + 600 for PNP nomination].
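A toy sketch making the example's arithmetic explicit (the function and constant names are illustrative, not an official IRCC tool):

```python
# Toy illustration of the example above: a provincial nomination adds 600
# points to a candidate's base CRS score.
PNP_BONUS = 600

def effective_crs(base_score, has_nomination=False):
    return base_score + (PNP_BONUS if has_nomination else 0)

print(effective_crs(300, has_nomination=True))  # -> 900, as in the example
```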
For 2020, the AINP had an allocation of 6,250. However, Alberta reduced their allocation to 4,000 due to the pandemic situation.
As the target of 4,000 nominations was met by June 2020, no AINP draws were held in the second half of the year.
The provincial nomination allocation for Alberta for 2021 has not been disclosed as yet.
AINP draws in 2021
Total draws held: 4
Total invitations issued: 509
|Invitation Dates||Number of Invitations issued||CRS score of lowest ranked candidate|
|January 8, 2021||50||CRS 406|
|January 28, 2021||100||CRS 360|
|February 10, 2021||200||CRS 301|
|February 16, 2021||159||CRS 352|
If you are looking to Migrate, Study, Invest, Visit, or Work Overseas, talk to Y-Axis, the World’s No. 1 Immigration & Visa Company. | https://www.y-axis.com/news/alberta-invites-159-ee-candidates-latest-ainp-draw/ |
A literature review may be presented as a paper on its own, or it can be contained as an integral part of an article, research proposal, report or dissertation. It may constitute an essential chapter of a thesis or dissertation, or it may be a self-contained review of writings on a subject. In either case, its purpose is to place each work in the context of its contribution to the understanding of the subject under review. A literature review surveys books, scholarly articles, and any other sources relevant to a particular issue, area of research, or theory (e.g., dissertations and conference proceedings) and, by so doing, provides a description, summary, and critical evaluation of these works in relation to the research problem being investigated. (A related resource for engineering students: "Project Proposal and Literature Review for Engineering Students", presented by Dr. Michael Ayomoh of the Department of Industrial and Systems Engineering, University of Pretoria, and aimed at master's students who are in the process of writing a proposal.)
How to write a literature review for a research project: you just got an interesting project topic and you need to begin writing on it. You are probably already aware that you should begin your writing with a literature review. The introduction to the literature review/proposal orients the reader to the problem under study and has three parts. First, you need to provide a statement of the problem; this statement sets out the general reasons that the research area is important.
A research proposal is a document written with the goal of presenting and justifying your interest in, and need for, conducting research on a particular topic. It must highlight the benefits and outcomes of the proposed study, supported by persuasive evidence. From the research proposal outline you can then construct the literature review.
The literature review is an important part of the research proposal. It contains a critical review of the information you have used and will be using for the work, and its purpose is to let the readers know that you have sufficient knowledge of the topic. By way of definition, a literature review is an objective, critical summary of published research literature relevant to a topic under consideration for research. Its purpose is to create familiarity with current thinking and research on a particular topic, and it may justify future research into a previously overlooked area. In the literature review section of a research proposal, you can take a similar approach as you would in a dissertation literature review: focus on connecting the literature to your research questions, and show how your proposed project will contribute to knowledge in your field.
Before you start reading, ask what exactly the objectives are and what the researcher needs to find out. In the literature review, is the researcher looking at issues of theory, methodology, policy, or quantitative research? It may be useful to compile a list of the main areas and questions involved, and then read with those questions in mind.
Remember that a research proposal is not an essay. Before writing your conclusion, proofread and ensure that you have followed the suitable format; an outline is only supposed to act as a framework/guide for writing about any of your topics. If you are having difficulties in writing a research proposal, you can download some online samples for guidance.
Many researchers struggle when it comes to writing the literature review for their research paper. A literature review is a comprehensive overview of all the knowledge available on a specific topic to date. Its purpose is to highlight a void in the research that your study will fill; in other words, the literature review answers why you should conduct your research. Below is a snapshot of a research proposal literature review assignment that I helped a student solve; if you have a similar assignment, this post can act as a guide to writing your answer.
One video, "How to Write a Literature Review in 30 Minutes or Less", breaks down this academic assignment into five easy steps (a text version of the video is also available). Start by gathering all your papers and articles.
The contents of the literature review in a research proposal provide a solid outline of a given research study that you have carried out, as well as of the research paper that you have written. The first phase of the literature review should be brief but give the reader enough information to understand the context of the proposed research. It may include references to previous findings and to specific studies similar to the current study, as well as to relevant methodology.
Examples of a good literature review include: a student literature review in psychology with a lecturer's comments; a literature review in a proposal to investigate how indigenous peoples choose plant medicines; and a literature review on language and gender with annotated comments. Below is an example of a literature review from the social sciences.
Nine steps to writing a literature review:
1. Find a working topic.
2. Review the literature.
3. Focus your topic narrowly and select papers accordingly.
4. Read the selected articles thoroughly and evaluate them.
5. Organize the selected papers by looking for patterns and by developing subtopics.
6. Develop a working thesis. This is where you sell your research proposal to the reader: you need to explain, clearly and simply, how your research will complement the field you have just described in your literature review, what you will add, how it fills an existing gap, and why the academic world would benefit from your research.
(Note: the following is copyrighted material. It consists of an excerpt from an article in progress. An APA research paper model: Thomas Delancy and Adam Solberg wrote the following research paper for a psychology class; as you review their paper, read the side notes. See also "How to Write a Literature Review" by Will G. Hopkins, PhD.)
The research question is one of the most important parts of your research project, thesis or dissertation. It is important to spend some time assessing and refining your question before you get started. The exact form of your question will depend on the length of your project, the type of research, the topic, and the research problem.
One sample excerpt, "The Leadership of Jesus: A Literature Review and Research Proposal", focuses on leader behaviors: according to Yukl, the behavior approach to the study of leadership examines the typical patterns of leader activities, functions, and responsibilities, and how leaders effectively spend their time.
As you may already know, writing a literature review is a skill which few have mastered, but our experts know what it takes to make any summary a winner. They know the winning formula of going over a piece of text and systematically analyzing it at every level, and thanks to them you don't have to get stressed out.
Another sample: "This is a literature review I wrote for Psychology 109 / Research Methods I. It received an A. The assignment was to read a variety of assigned articles related to the topic of food and mood, as well as several articles on the topic that we found on our own."
A thorough review of the relevant literature is an important part of your preparation for writing a proposal, and it plays several critical roles. Don't reinvent the wheel: you won't get credit for proposing work that has already been reported. Develop the context of your research: show how your work grows from, or relates to, work that has already been done. (One sample paper was adapted by a university writing center from Key, K., & DeCristofaro, C., "Use of propofol and emergence agitation in children: a literature review", AANA Journal, 78(6).)
Finally, a research proposal must be focused and must not be "all over the map" or diverge into unrelated tangents without a clear sense of purpose. A common weakness is failure to cite landmark works in your literature review: proposals should be grounded in foundational research that lays a foundation for understanding the development and scope of the issue. The literature review can be organized by categories or in the order of your research questions/hypotheses, and while you may be used to including literature reviews in your research papers and collecting citations for your dissertation, the literature review for a grant proposal is shorter and includes only those studies that are essential. A research proposal is the plan for a piece of empirical research that you propose to carry out; it may form an assignment in its own right, which gives the supervisor a chance to check a student's research knowledge.
A sample research paper topic: anorexia nervosa. Anorexia nervosa is a very serious disease that can be extremely harmful and sometimes fatal. "A typical anorexia patient would be a thirteen to fifteen year old white, upper-middle class female" (Gross, 1936), but anorexia can also occur in teenage boys and in adult men and women. Anorexia is a common, destructive eating disorder that individuals can develop over time by giving in to their deranged thoughts and perceptions. A research paper on the topic might describe in detail the actions and behaviors that someone suffering from anorexia nervosa demonstrates, starting from the question: what is anorexia nervosa? "Anorexia nervosa is an eating disorder that is characterized by the refusal to sustain a healthy weight" (Kumar, Tung, & Iqbai). Many believe that anorexia is more common amongst Caucasian women, but anorexia occurs throughout all cultures and races. Writing a research paper is different from writing an essay on anorexia nervosa, as a research paper is more detailed and you must show your prowess in carrying out research; this is why you may need help from a professional expert to write a grade-winning research paper on an anorexia nervosa topic.
| https://writemycustompaper.com/?literature.review.on.research.proposal.html |
The environmental and ocean sciences major, offered by the Department of Environmental and Ocean Sciences, is intended for students interested in the natural world, with three distinct pathways that focus on marine ecology, environmental sciences or environmental studies. All pathways are designed with an interdisciplinary approach, either within the natural sciences (marine ecology and environmental science pathways) or across the natural sciences, social sciences and humanities (environmental studies pathway). The curriculum trains students to apply the scientific method to study critical environmental issues while promoting ethical judgment and behavior as it relates to the scientific process, environmental awareness, and the role humans play within the dynamic earth system. The environmental and ocean sciences major offers students intellectually challenging conceptual training coupled with practical hands-on experience in the field and lab to prepare them for graduate school and diverse environmental career opportunities.
Structure
The environmental and ocean sciences major offers a common preparatory curriculum for all three pathways, designed to prepare students for both the core upper division environmental science classes and the suite of electives they will take as part of the major. Several of the courses in the preparation for the major satisfy core curriculum requirements. Following the common preparatory courses, all students take two gateway courses into the major: a) an in-depth analysis of contemporary environmental issues, and b) an introduction to field and research applications, in which students conduct interdisciplinary marine research in local ecosystems. During the junior and senior years, students take courses in one of three pathways and complete a capstone experience involving undergraduate research with faculty or experiential internships that culminate in a presentation of their findings. Faculty-student research collaborations may involve summer research programs, local or international field work, and the opportunity to participate in professional conferences or publications. In addition to research with faculty, certain courses offered through study abroad programs (such as the School for Field Studies or the Sea Education Association) may satisfy some requirements of the major, including the experiential portion of the capstone.
Pathways
Depending on their interests and goals, environmental and ocean sciences majors choose one of three pathways: marine ecology (which includes a biology minor), environmental science, or environmental studies. Students are encouraged to select an advisor as soon as possible. A list of advisors is available from the chair of the Department of Environmental and Ocean Sciences. | http://www.wappages.info/EnvironmentalIssues/san-diego-environmental-issues |
BEFORE YOU START, HOWEVER...
CONSIDER THIS EXAMPLE:
QU. 1 Title: "A young hero sets out to punish his wicked uncle."
This has enough detail maybe to ring a bell in your mind, but at this stage you shouldn't jump to any rash conclusions...
QU. 2 Title: "On the road to Iolcus, the hero wins the favour of the goddess Juno."
Now you have some extra clues that may help you to decide which story it is, and have also been given the names of a place and a person.
ALWAYS WRITE NAMES USING THE SPELLING AS YOU SEE IT IN THE TITLES.
...O.K. - it's probably all right to change a Latin spelling to one which has a common English equivalent (no-one will mind, e.g., if you change 'Creta' to 'Crete'), but only do this if you are sure you have the correct name with the correct alternative spelling.
Incidentally, this particular title should prevent you from confusing the name of the goddess with some unknown verb in the 1st pers. singular... I've known that happen!
QU. 3 Title: "Caught off his guard, Jason is tricked by Pelias into taking on a dangerous quest."
If you hadn't already guessed, you now know for sure that you are dealing with an early part of the story of Jason & the Golden Fleece. You can approach the answers to Qu. 1 (and of course Qu. 2) with this knowledge in mind: for example, if you know the story, it would be a bad idea to answer one of the parts of Qu. 1 by guessing that "punish" in the first title means "kill" straight away. There are many adventures in the story to go before Jason gets round to that!
You have also now been given the name of Jason's uncle; this could also save some time battling with the details of Qu. 1.
ONE POINT TO BEAR IN MIND: do not 'take for granted' that the exact version of the story that you know is the one they will use. There are many variations to Mythology tales, in particular; never assume you know what is going to happen, without properly checking the actual Latin first as well.
NOW YOU ARE READY TO START ON QU. 1 itself.
FIRSTLY - do not be fooled by the 'comprehension' label! This question is as much a test of your translating ability as is Question 2.
In fact, I would advise you to look upon Question 1 as largely a VOCAB TEST. If you can identify and translate the key words that the individual questions are directing you to look for, you will score very highly on this part of the paper.
Make sure then that you know as many as possible of the vocab words for your Level - now may be the time to go back to the Vocab pages on this site and test yourself!
To demonstrate the use of these tips in more detail, here is an example passage followed by some questions:
(N.B. This passage is mostly pitched at Level 1, with a couple of fairly easy level 2 words thrown in. The style of the questions is common to all 3 levels. I have used the old format for the questions below, rather than re-writing each relevant sentence along with each question.)
Now try to answer these questions yourself. When you have decided what you want to say, highlight with your cursor the lines of the relevant letters below to reveal whether you are right! You can then read about how making use of the TOP TIPS would have led you to these answers.
Did you score 15 out of 15?
1. READ THE QUESTIONS FIRST:
Few people would disagree with this! It will probably take you no more than a minute, and can once again give you extra clues about the general outline of the story and the characters' names (spelling!).
2. ANSWER WITH CLOSE REFERENCE TO THE WORDS THEY PRINT FOR EACH QUESTION:
Now that you have to write your answers onto the actual question paper (new from 2013),they always print out for you the relevant part of the passage where you can find the answer. Your answer must refer as closely as possible to these words: the more of them that you can translate accurately, the more likely you are to score the marks on offer (see next tip!)
It will not usually be necessary to look elsewhere in the passage - or to add extra information of your own - all the information will be in the words they print for you.
Watch out for one pitfall, however: since the new format of the paper was introduced in 2013, there were words underlined and given to you in the main complete passage which they did not underline again in the individual questions. Always check back to the full passage to see if there is any extra vocab help.
In the 2014 paper this rather annoying inconsistency had not been addressed, so the above advice is likely to continue to be useful in years to come.
* UPDATE 2017*
Finally, the exam board has started to underline the supplied vocab from the complete passage in the individual questions as well...!
VDB of course claims no credit for this whatsoever....but, you never know...
3. LOOK AT THE MARKS AVAILABLE:
Look at some of the questions above to see how this works.
1 mark: e.g. qu. b) - the one mark is obviously for translating the one word 'malus'. Compare qu's a) "mortuus" and h) "videre". But beware of questions like d) - don't forget to write the Latin word first as well: there may not be a separate mark for it, but if you leave it out it will cost you the one mark that is on offer!
2 marks: this generally means you must translate two vocab words, e.g. qu. g) "diu exspectabat". Sometimes they guide you more clearly, as in qu. e): "2+2 marks" separates the two actions "Aesonem cepit" and "filias necavit".
3 marks: question f) is the hardest one here, because they are asking you to show grammar knowledge as well as translating the vocab. Tough markers might only give you 1 mark for saying "A slave helped him"; 2 marks for "A GOOD slave helped him" - even though you have now given the sense of 3 words! To be sure of scoring all 3 marks, you need to show you understand the meaning of the case endings:
"auxilio.." (abl. case) - WITH the help (auxilium is a noun); "..servi boni" (gen. case) - OF a good slave.
SOME FINAL 'TRICKS OF THE TRADE'...!
The actual wording of the questions can sometimes supply a clue as to what you need to look for as the answer.
"How is (he/she/it) described...?" - they are most probably asking you to find an adjective.
"Why did (so-and-so do something)...? - look for a sentence or clause that begins with the giveaway word "quod" - "because..."
"What did (so-and-so) do...?" - your answer must be based around translating a verb. | https://virdrinksbeer.com/pages/latin-ce-qu1 |
Program Coordinator, Stewardship
Vanderbilt University
Details
Posted: April 4, 2021
Location: Nashville, TN, United States
Salary: Open
Type: Full-time
Under the supervision of the Senior Director of Stewardship, contribute to the activities of the Development and Alumni Relations (DAR) Stewardship team to support the objectives of the Office of Stewardship, DAR and Vanderbilt University. The Program Coordinator will provide support to all functions of the team, with a particular focus on supporting the stewardship of endowed and non-endowed funds, scholarships, chairs and miscellaneous funds.
The Department of Development and Alumni Relations (DAR) assists Vanderbilt University in securing the resources, both human and financial, that are required to achieve its mission and goals. The Department is responsible for the identification, cultivation, solicitation and stewardship of individuals and organizations whose charitable objectives are consistent with those of Vanderbilt University's teaching and research programs.
Duties and Responsibilities
At Vanderbilt University, we are intentional about and assume accountability for fostering advancement and respect for equity, diversity, and inclusion for all students, faculty, and staff. Our commitment to diversity makes us who we are. We have created a community that celebrates differences and lets individuality thrive. As part of this commitment, we actively value diversity in our workplace and learning environments as we seek to take advantage of the rich backgrounds and abilities of everyone. The diverse voices of Vanderbilt represent an invaluable resource for the University in its efforts to fulfill its mission and strive to be an example of excellence in higher education.
Vanderbilt University is an equal opportunity, affirmative action employer. Women, minorities, people with disabilities and protected veterans are encouraged to apply.
Please note, all candidates selected for an offer of employment are subject to pre-employment background checks, which may include but are not limited to, based on the role for which they have been selected: criminal history, education verification, social media review, motor vehicle records, credit history, and professional license verification.
Internal Number: 10000401
About Vanderbilt University
Vanderbilt University is a center for scholarly research, informed and creative teaching, and service to the community and society at large. Vanderbilt will uphold the highest standards and be a leader in the quest for new knowledge through scholarship, the dissemination of knowledge through teaching and outreach, and the creative experimentation of ideas and concepts. In pursuit of these goals, Vanderbilt values most highly intellectual freedom that supports open inquiry, equality, compassion, and excellence in all endeavors. | https://jobs.socialstudies.org/jobs/14640346/program-coordinator-stewardship |
Celebration of Life
This tapestry is to commemorate Brain Injury Awareness Month that occurred this past June. It was initiated at the Celebration of Life event attended by March of Dimes Canada (MODC), Brain Injury Association of York Region (BIAYR) and Community Head Injury Resource Services (CHIRS).
In light of the current, sobering events at this time in our history, the spirit of the event was one of solidarity with the vulnerable and the invisible, whose voices matter. The tapestry represents the voices of the Brain Injury Community. Their voices are represented by words that inspire them or give them joy, each in their favourite colour.
Ciara O’Sullivan created the tapestry and Mary DiLallo wrote the poem that was inspired by the piece. | https://www.biayr.org/brain-injury-awareness-month-4/ |
By Melanie Petrucci, Senior Community Reporter
Northborough – Approximately 100 U.S. history students at Algonquin Regional High School (ARHS) in Northborough had the opportunity to interact with 18 visiting European journalists Oct. 26.
The purpose of their visit was to learn, from the students' perspective, how the rights and responsibilities of a free press in a democracy foster transparency and government accountability; how to verify reliable sources of information and counter misinformation; the role of social media in the amplification of messages and dissemination of information; and ways to foster a discerning audience of media consumers.
The group was invited to the United States under the auspices of the U.S. Department of State's International Visitor Leadership Program. They began their trip in Washington, D.C., where they met with the News Literacy Project, a nonprofit educational organization. Then they split into two groups – one traveled to St. Louis, Mo., and the other to Iowa City, Iowa. They met back up in Boston before visiting ARHS and returned home to their respective countries Oct. 27.
Catherine Griffin, a business teacher at ARHS, said that a few years ago the school added a digital literacy component to their curriculum. A part of that component is “Checkology,” a fact-checking online simulation program, which she uses in her Computer Essentials class.
Because Griffin's class is one of the first to use this product, the News Literacy Project, which created Checkology, arranged for the journalists to visit ARHS specifically.
“It’s a demonstration, but also to see civics and public education in action and their focus is discerning news information,” Griffin said.
“It’s a unique opportunity for us to come and see how things are done here and meet with policy makers here, meet with practitioners of the press and journalists so we are doing theory and we are doing practice. It’s a good sharing opportunity and exchange platform; we learn from you and you learn from us,” said Stephanie Comey, senior manager, Broadcasting Authority of Ireland. “I’m delighted I was part of it.”
After introductory remarks by Griffin, the students and journalists divided into two groups. The first group viewed how Checkology works and the second group split into four smaller groups where freedoms of speech and the press were discussed. | https://www.communityadvocate.com/2018/11/07/international-journalists-visit-algonquin-regional-high-school/ |
EV sales triple in scramble to meet EU CO2 limits
Electric vehicles (EVs) are set to treble their market share in Europe this year as the result of new car CO2 targets, according to NGO Transport & Environment (T&E). Despite COVID-19, sales of electric and hybrid vehicles have grown since January – and are expected to reach 10% of all EU vehicle sales this year and 15% in 2021.
T&E analysts examined sales data from the first half of this year, in addition to carmakers’ compliance strategies, and said that both represented ambitious regulatory work. But they also warned that there is a chance momentum in the EV market will slow due to ‘lax’ targets from 2025 and 2030.
From levels of over 122 g/km in 2019, new car CO2 emissions dropped to 111 g/km in the first half of this year, the largest drop since the standards came into effect in 2008. Auto manufacturers including Volvo, BMW Group and the FCA-Tesla pool are already complying with the EU's target for average emissions of new cars.
Meanwhile, Renault, Nissan, the Toyota-Mazda pool and Ford each have a small gap to close of 2 grams of CO2 per km. Volkswagen Group still has a 5g gap, while Daimler and Jaguar-Land Rover have more work to do, with 9g and 13g shortfalls, respectively. Regardless, they'll cross the line by either selling more plug-in vehicles and/or pooling emissions with other firms.
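To make the compliance arithmetic concrete, here is a rough Python sketch of the gap calculation implied by those figures. The flat 95 g/km target and the fleet-average inputs are illustrative assumptions (back-solved from the gaps quoted above); in reality each carmaker's target is adjusted for vehicle mass and eco-innovation credits.

```python
# Rough sketch of the CO2 compliance gap, under a flat 95 g/km assumption.
TARGET = 95.0  # g CO2/km (assumed baseline, not the manufacturer-specific target)

fleet_averages = {            # hypothetical first-half fleet averages, g/km
    "Renault-Nissan": 97.0,
    "Volkswagen Group": 100.0,
    "Daimler": 104.0,
    "Jaguar-Land Rover": 108.0,
}

for maker, avg in fleet_averages.items():
    gap = max(0.0, avg - TARGET)  # how far above target the fleet still sits
    print(f"{maker}: {gap:.0f} g/km still to close")
```

Run as written, this reproduces the 2g, 5g, 9g and 13g shortfalls quoted in the article.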
However, the picture for the year’s European vehicle emissions is not wholly positive. Sales of SUVs crept up to almost 40% of the EU car market last year – on the back of a loophole that permits more lax CO2 targets for carmakers selling heavy vehicles. T&E also claims that half of all the electric cars sold today are ‘fake electric’ plug-in hybrids that are infrequently charged and emit 2–4 times more CO2 in the real world than in the laboratory environment. | https://knowledge.energyinst.org/search/record?id=114655 |
Who are North Korea’s scientists and engineers? What are their backgrounds? Where did they study abroad? Who are their foreign colleagues? Where are they working now? What do they know? What did they post? What lab equipment do they have? What are their main areas of research and development?
These questions are important for analysts, journalists and others trying to piece together such puzzles as the technological level of various industrial sectors of the civilian economy or the state of research and development of weapons of mass destruction (WMD) in the Democratic People's Republic of Korea (DPRK or North Korea).
North Korea releases relatively little information about its overall economy, specific industries, or such sensitive military topics as WMD programs, but analysts and others can use the country’s open sources to gain valuable information on the science and technology (S&T) of the DPRK. The most sensitive information — that concerning WMD and other areas of military research — is not found between the covers of books and periodicals published by North Korea. Pyongyang does not make any military scientific journals or purely military research articles available abroad. Yet there is plenty of published information available that provides many pieces to one puzzle or another, whether purely civilian, dual-use, or military in nature.
This article is the first of two on the use of DPRK science and technology literature and other media as the basis for establishing relatively detailed profiles of North Korean scientists, engineers, and their affiliated institutions. It begins with a list of available S&T periodicals and ends with a section on their shortcomings compared to similar literature published in other countries.
Periodical list and details
Korea Publications Export & Import Corporation (KPEIC), the DPRK's exclusive overseas sales agency for North Korean books and periodicals, publishes a catalog of publications that includes Rodong Sinmun, the daily of the ruling Workers' Party of Korea; the illustrated monthly Democratic People's Republic of Korea and other propaganda publications produced for a foreign audience; and natural and social science journals.
For this search, 40 DPRK S&T journals were reviewed (Appendix I includes a full list of titles with bibliographic information).
Many Titles Available, Many Others Faded Away
The above 40 periodicals cover a wide variety of fields, as their titles indicate, including scientific fields such as biology, chemistry, and physics, as well as fields of industrial technology, including agriculture, forestry and technical innovation in general. Included are several scientific journals from Kim Il Sung University (KISU), the DPRK's flagship university. There is also a periodical of international scientific news, World of Science, for the general reader.
While 40 titles may seem like relative wealth for a country deemed secretive, they are just the tip of the iceberg. For example, Kim Chaek University of Technology (KCUT), another of the major S&T universities in Pyongyang, has its own publishing house. It produces at least one S&T periodical, which is not among the 40 listed in the appendix. One has to wonder how many other universities in the DPRK, not to mention branches and institutes of the State Academy of Sciences or other work units in North Korea, publish S&T literature.
Lacking information
Compared to comparable periodicals published in other countries, the DPRK publications listed above lack detail. Some of the available journals lack English article abstracts and/or a table of contents, a standard feature of journals in China and other countries whose S&T literature is published in languages other than English.
There is little information on the authors of articles – only a few of the 40 journals offer author information. Entries in the DPRK Invention Journal include the standard field code (72), which is the name of the inventors, and (77), which appears to identify their affiliated organization. In the journals Agricultural Irrigation and World of Science, some writers are identified as "the reporter for this company" or by title and/or affiliation. It may be reasonable to assume that authors appearing in KISU journals are affiliated with Kim Il Sung University, but their affiliation is not stated. None of the 40 journals used for this research include standard background information such as an author's education or research interests.
Articles in the reviewed journals also tend to be much shorter than what is published in scientific journals elsewhere in the world. A six-page article is relatively long in journals, where many articles are only three to four pages long. The endnotes, too, are few. It is possible to come across an article with one or even two dozen citations, but most have less than half a dozen references. In comparison, it is common to have several dozen endnotes in a single article in Western S&T literature.
The accompanying illustrations also appear to be fewer than would be featured in articles published elsewhere in the world. The Pyongyang bimonthly World of Science, for example, has only a few black and white illustrations per issue, whereas Discover and American Scientist, to name but two American periodicals, are full of color photographs and illustrations. North Korean articles also lack conflict of interest statements, common in articles published elsewhere, and an acknowledgment section.
One thing that North Korean S&T journals include that is not found in Western journals is political rationale. Each article begins with a quote from a leader. Older articles cite the first leader, Kim Il Sung; more recent ones tend to cite something from the collected works of Kim Jong Il or from the current leader, Kim Jong Un. For example, in an article published last year on calculating the efficiency of equipment based on the Internet of Things (IoT), the authors began their article with a quote from the current leader on vigorously achieving breakthroughs in advanced technologies to build the country's knowledge economy.
Apart from the above differences, researchers in the DPRK essentially follow the same format as their counterparts in other countries. They state a problem, cite previous academic literature, present their materials and methods, offer a conclusion, and end with references.
What's next
After listing the available titles and the shortcomings of these DPRK S&T periodicals, in the next article I will suggest ways to use these journals to learn more about Pyongyang's scientists and engineers, from their backgrounds to their international connections, their knowledge and the tools at their disposal. | https://headsets911.com/north-korean-science-and-technology-journals-getting-to-know-the-researchers-part-1/ |
To support peace, we must learn the ground truth about conflict
While I fought to make the world a safer, more peaceful place, I also learned that the ultimate goal must be to prevent and reduce conflict. We must aim to avert crises before they reach a point that requires military intervention.
As a former soldier and police officer, I have seen for myself the ways that conflict and strife can tear people down, and experienced first-hand how people focused on a common good can work together to build healthy societies back up.
As Canada’s Minister of National Defence, I had the privilege of spending a week visiting Ethiopia, Kenya, Uganda, Tanzania and the Democratic Republic of Congo to learn more about how Canada can collaborate with these nations and contribute to conflict prevention and peace support operations.
After our new government was elected last year, Prime Minister Justin Trudeau said Canada will be a responsible partner with the world. Mr Trudeau made it a priority to renew Canada’s commitment to United Nations peace operations.
Canada is a diverse country with a rich history of peacekeeping, which has taught us that we must understand what is happening on the ground in order to contribute to efforts that will result in positive outcomes. Today, the nature of conflict has changed, and so must the ways in which we conduct peace operations.
To gain this better understanding, I was accompanied on my trip by Roméo Dallaire, whom many know as a former three-star general who was in charge of the UN peacekeeping forces in Rwanda during the 1994 genocide, and who is now a retired Canadian senator working to prevent the use of child soldiers through the Dallaire Initiative; Justice Louise Arbour, a former Justice of the Supreme Court of Canada, former chief prosecutor at the International Criminal Tribunal for Rwanda, and former president and CEO of the International Crisis Group; and Marc-André Blanchard, Canada's permanent representative to the United Nations.
Around the world, the nature of conflict is changing, so what we do to prevent, mitigate, and resolve conflict must change as well. Wars used to be between states, now they are often internal.
How do you keep the peace when there is no peace to keep? Nations face threats from violent extremists, threats that require a comprehensive response that encompasses military, political, humanitarian, and development efforts. As they seek education and jobs, young people are facing challenges that our generation could not imagine.
The old approach and solutions won’t work anymore. We need to think innovatively about how we move forward. To do this, we need to see the situation for ourselves. We need to speak directly to those who know best. We need to respect the knowledge and experience that they have, and learn from it.
We need to understand the root causes that cause conflict. In other words, we need the ground truth. This tour helped us ascertain that.
General Dallaire, Justice Arbour, Mr Blanchard and I learned a great deal this week. We had fruitful, informative discussions with our government counterparts. We listened to and asked questions of our colleagues at the African Union and United Nations.
We had an opportunity to learn from and thank individuals who are working to build up civil society, police officers who protect women and children from abuse, teachers and volunteers who educate the young, and doctors and nurses who heal the sick.
I was honoured that Roméo Dallaire accompanied me to Africa. He has done tireless work on child soldiers through the Dallaire Initiative. By addressing the war crime of recruitment and use of child soldiers, we can prevent conflict while protecting children.
Indeed, General Dallaire’s work is just one example of Canada’s long history of support to East Africa’s security and development. In education, health, agriculture and support to women and girls, the Canadian government and a large number of Canadian institutions have been working for decades with East African partners to improve the quality of life for the people of the region.
I left convinced that we must strengthen and expand that tradition of partnership. What I heard from organisations that have a long presence and commitment here, such as the Aga Khan Development Network, ICRC, and Unicef, is that we must partner with both government and civil society to create the conditions for East Africa’s long term peace, prosperity, and pluralism. And we heard that Canada has an important role to play.
Furthermore, we must avoid working in silos if we are to affect real change on the ground. We must address the root causes of conflict in order to find long-term solutions that sustain peace. These conversations were invaluable. What we saw, the discussions we had, and what we learned will help inform how the government of Canada can best contribute to future peace support operations. | |
Traditionally a carafe is a 'vessel' that holds liquid, typically water, wine, fruit juice or alcoholic beverages. Today, carafes are more likely to be used for serving water and juices. The shape of the container doesn't affect its characteristics or the taste of the liquid it's holding.Oct 8, 2021
Nespresso uses only processes of decaffeination with natural ingredients: water or carbon dioxide, a natural constituent of air. These processes respect the environment and the coffee bean's true nature, allowing us to maintain the strength, variety and richness of its aromas for our consumers.Jan 12, 2015
In contrast to this, Nespresso uses a 100% natural process of decaffeination. Nespresso coffees are processed with just water. Water comes into contact with the unroasted coffee beans and then the caffeine is allowed to dissolve gradually and organically.
Disappointing that Nespresso have removed a decaf option and not replaced it with something else... why? As we were increasing the size of the Lungo family, we needed room to expand, so we made the choice to discontinue Vivalto Lungo Decaffeinato based on its level of popularity and consumer feedback.
The CO2 process is a natural, safe process that preserves the beans' innate chemical makeup. The result is wonderfully, rich, flavorful coffee. We at QueenBean Coffee Company ran a few taste tests with our regular French Roast and French Roast decaf (CO2 processed) coffees.Aug 5, 2017
The decaf Komodo Dragon Blend and the VIA Instant Decaf Italian Roast are the only two made with a non-toxic Swiss water process.Sep 30, 2013
Most versions of decaf coffee selections at Starbucks are made through a process that uses a solvent. It's called methylene chloride.Feb 2, 2022
| https://coffeebreak.life/how-many-calories-are-in-a-8-oz-cup-of-coffee |
Finally, we constantly monitor our clients' portfolios and periodically rebalance each back to its target mix in an effort to optimize returns for their intended level of risk.
After taking tax implications and trading costs into consideration, we rebalance when dividends from ETFs accrue, a deposit or a withdrawal has been made, or if movements in the portfolio's allocations justify a change.
Asset allocation
The first step in our methodology is to identify a broad set of diversified asset classes to serve as the building blocks for our portfolios.
We determine the optimal mix of our chosen asset classes by solving the "Efficient Frontier" using Mean-Variance Optimization (MVO), the foundation of Modern Portfolio Theory. The Efficient Frontier represents the portfolios that generate the maximum return for every level of risk. | https://ben-givon.com/how-it-works/ |
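As a rough illustration of the mean-variance step described above – not the firm's actual model or inputs – the following Python sketch solves for minimum-variance portfolios at several target returns, tracing a simple long-only efficient frontier. The expected returns and covariance matrix are invented for the example.

```python
# Minimal mean-variance optimization (Markowitz) sketch with assumed inputs.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.05, 0.07, 0.10])           # expected annual returns (assumed)
cov = np.array([[0.020, 0.002, 0.004],
                [0.002, 0.030, 0.006],
                [0.004, 0.006, 0.080]])     # return covariance matrix (assumed)

def portfolio_variance(w):
    return w @ cov @ w

def efficient_weights(target_return):
    """Minimum-variance long-only weights that achieve a target return."""
    n = len(mu)
    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},           # fully invested
        {"type": "eq", "fun": lambda w: w @ mu - target_return},  # hit the target
    ]
    bounds = [(0.0, 1.0)] * n                                     # no shorting
    result = minimize(portfolio_variance, x0=np.full(n, 1.0 / n),
                      bounds=bounds, constraints=constraints)
    return result.x

# Trace the frontier between the lowest- and highest-returning asset.
for target in np.linspace(mu.min(), mu.max(), 5):
    w = efficient_weights(target)
    print(f"target {target:.3f} -> weights {np.round(w, 3)}, "
          f"vol {np.sqrt(portfolio_variance(w)):.3f}")
```

Each row of output is one point on the efficient frontier: the least-risky mix that still delivers the stated return.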
Our non-commission-based advisors provide objective guidance on selecting investments in stocks, bonds, ETFs and broader market indexes on a Fee-Only Basis. Investments are continually monitored, and assets are rebalanced intraday, weekly, and quarterly based on predetermined criteria set forth in the clients' Investment Policy Statement and Investment Advisory Contract.
Asset Allocation Matters
Asset allocation and proper diversification should be an ongoing process, yet rebalancing is the step long-term investors most often overlook, leading to erratic ups and downs in their portfolios: investors stay in rising sectors too long, and then often sell at market bottoms. We monitor and rebalance between sectors and various indexes based on a defined diversification strategy, as sketched below.
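The following Python sketch shows one common form of such a rule – a threshold (drift-band) rebalance. The target mix and the 5% band are hypothetical; the page does not disclose the firm's actual criteria.

```python
# Minimal threshold-based rebalancing sketch with a hypothetical policy mix.
TARGETS = {"large_cap": 0.40, "international": 0.25,
           "small_cap": 0.15, "fixed_income": 0.20}   # assumed target weights
DRIFT_BAND = 0.05                                      # assumed 5% absolute band

def rebalance_orders(holdings):
    """Return the dollar trades needed if any sleeve drifts past the band."""
    total = sum(holdings.values())
    weights = {k: v / total for k, v in holdings.items()}
    drifted = any(abs(weights[k] - TARGETS[k]) > DRIFT_BAND for k in TARGETS)
    if not drifted:
        return {}                         # still inside the band: do nothing
    return {k: TARGETS[k] * total - holdings[k] for k in TARGETS}

# Example: large caps have run up to 52% of the account, so trades are issued.
print(rebalance_orders({"large_cap": 52_000, "international": 20_000,
                        "small_cap": 13_000, "fixed_income": 15_000}))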
Asset Allocation among classes and sectors with listed Equities |Indexes | ETFs | Funds
Mega & Large Caps
Mid-Caps
Small-Caps
International Companies
Alternative Investments
Fixed Income & Inflation Hedge Investments
ATIA creates investment plans using Modern Portfolio Theory, Strategic and Tactical Asset Allocation, and supervised portfolio management.
Our clients interests include: | https://www.advancedtrader.com/individuals-institutions/individuals/investments/ |
The Suffolk Travel Guide highlights local attractions and places of interest for visitors and contains tourist information for travellers. The area guide features travel information on local transport and travel, facts & figures, entertainment, events, maps and accommodation.
The county of Suffolk is located in the East of England, in East Anglia. The region is sometimes missed by tourists visiting the UK, and this presents an opportunity for those who do visit to enjoy a place devoid of the large crowds and overly commercial tourism on display in other areas.
The county's landscape is essentially flat, with rolling meadows and valleys. The region is also a favourite among boating enthusiasts.
Suffolk has a number of parish churches dating from the late Middle Ages, when the economy boomed due to the success of sheep raising.
Lavenham attracts tourists, who come to see the preserved architecture. The town boasts over 300 listed buildings, with the majority being authentic medieval houses showing a wide range of styles.
The Suffolk Coast is a mix of better known resorts such as Lowestoft, and many miles of peaceful wave-washed shore.
The Suffolk Heritage Coast runs from near Lowestoft to Felixstowe following the coast for the majority of the way.
In the county of Suffolk, the remains of the abbey can be seen at the historic and attractive market town of Bury St. Edmunds. Those who enjoy racing are well catered for at the National Horse Racing Museum in Newmarket, seen as the birthplace of racing.
Thetford Forest has good walking routes, cycling and other activities. The coast is a delight, and the best exploring is to be found on the Suffolk Coast Walk. Aldeburgh is a quiet and scenic seaside town that makes for a relaxing, enjoyable visit.
Bury St. Edmunds is located just under 30 miles from Ipswich in Suffolk, in the heart of the East Anglia region. It is a picturesque town that has been around since medieval times.
Historic parts of the original Bury St. Edmunds Abbey remain; these include St. James' Tower, which dates back to the 12th century, and the gatehouse, which dates back to the 14th century and links the Abbey Gardens to Angel Hill.
The Abbey Gardens, which surround the ruins, are now very much a beauty spot with wonderful floral displays, and they are popular with visitors.
The Art Gallery gives visitors the opportunity to see new artists with new work and new ideas. There are works from both home-grown UK and overseas artists and designers.
There are regular street markets held on Wednesdays and Saturdays, the town has good facilities and amenities including a choice of shopping, dining and entertainment, making it a popular place for visitors to base themselves when touring Suffolk and the East Anglia region.
Lowestoft is famous as Britain's most easterly town; the seaside town is also the southern gateway to the Norfolk Broads. It is located 45 miles north east of Ipswich on the Suffolk coast. The town's location meant it was damaged in World War II, but there are still some parts of the old town intact.
The town boasts plenty of cobbled lanes, which are popular with visitors as they wander around the town. Lowestoft currently has some of the finest beaches in the UK, including beaches with Blue Flag awards.
The seaside resort has plenty to offer visitors from its sandy beaches, Victorian seaside gardens, a range of family attractions and entertainment including traditional British seaside fun and amusement.
Local attractions include the Pleasurewood Hills Family Theme Park and Africa Alive!, family entertainment at the Marina Theatre, and Somerleyton Hall and Gardens, located a few miles away.
Ipswich is the county town of Suffolk located next to the River Orwell, it is England's oldest continuously settled Anglo-Saxon town and one of the oldest towns in England. The town includes a number of historic buildings and includes a number of medieval churches and buildings of architectural importance.
The waterfront is a popular place to locals and visitors to gather and includes a number of bars, cafes, restaurants and entertainment making the waterfront a lively area to explore.
Shopping facilities can be found at the Buttermarket Centre and Tower Ramparts shopping centres, Ipswich includes a number of independent shops in addition to well known high street brands.
Local attractions include Christchurch Mansion, home to one of the finest collections of Constable and Gainsborough works outside of London; the Ipswich Transport Museum; Ipswich Museum; and, for entertainment, the New Wolsey Theatre.
Newmarket is a market town located 40 miles from Ipswich, known internationally as the home of horse racing. The town has been synonymous with horse racing since the 17th century and is home to thousands of thoroughbred horses; there are two racecourses in the town.
The National Horseracing Museum, centrally located in the town, showcases the history of horseracing with exhibits, memorabilia and information on jockeys, horses, trainers and more.
The National Stud is a thoroughbred horse farm set in over 500 acres on the outskirts of Newmarket; visitors can take guided tours and learn more about the workings of the farm.
Lavenham is located 20 miles west of Ipswich, the village has been described as one of the finest mediaeval villages in England.
Located in the scenic Suffolk countryside, the charming village includes a number of independent shops including bakers, grocers, clothing, gifts and collectables, and there are also art galleries for art aficionados to explore.
The village offers visitors an enjoyable day out in the countryside with a number of sites of historic interest including the Guildhall of Corpus Christi, the Little Hall and the 16th century Church of St Peter and St Paul, there are a number of guided tours and audio tours available for visitors.
The village includes a number of places to enjoy a bite to eat and drink, with a choice of tea rooms, cafes, village pubs and restaurants. Lavenham has a number of countryside footpaths and circular walks located close by, offering a great way to enjoy the scenic beauty of the area.
Suffolk offers tourists something a bit different: the county remains largely rural, with no major cities, unlike much of the UK.
It presents a great chance to enjoy the peace and quiet, for boating aficionados to indulge in their passion, and for visitors to benefit from getting away from the large crowds and congestion associated with the larger cities and well-known tourist regions.
If you like to relax and tour at your own pace away from the crowds, then Suffolk is likely to be a place to interest you.
Disclaimer: The information given on this website is given in good faith and to the best of our knowledge. If there are any discrepancies, in no way do we intend to mislead. Important travel details and arrangements should be confirmed and verified with the relevant authorities. | http://www.essentialtravelguide.com/regional-guides/east-england/suffolk-travel-guide/ |
In the middle of a pandemic that has led to a time of unprecedented global disunity, alienation and social instability, artists have emerged with a type of experiential work that provides methods of healing and care. These practices have restorative and relaxing capacities meant to maintain mental health, to overcome increased anxiety and to simply calm down. They include manuals on how to reconnect with nature and oneself by addressing ancestral knowledge, embracing ancient rituals and by attending online meditation courses – all from the comfort of sitting in front of a laptop.
There has, for example, been a proliferation of online yoga – be it yoga in the form of a collective Deep Listening session on Zoom to interpret Pauline Oliveros’s aural techniques or weekly digital kundalini yoga classes designed by French Guyana–based artist Tabita Rezaire to help strengthen immune systems in times of turbulence and anxiety. In her visual practice, Rezaire explores contemporary implications of post-cyberfeminism, techno-shamanism and the actualisation of ancient sacred healing practices, tools which she uses to cope with the harmful influence of technology and what many call “progress”. These are coupled with AMAKABA, a healing centre where Rezaire’s vision of a camp constructed deep in the Amazonian forest has come to life, providing a space away from information overload as well as a space for collectivity and therapeutic practices working at the intersection of science, art and the spiritual rituals of Indigenous peoples of Latin America. Another certified yoga instructor, Russian artist Sofya Skidan, deconstructs the asanas of hatha and ashtanga vinyasa yoga in her installation and performance works. She melds Eastern spiritual practices with post-humanist theory, reflecting on complex issues like the climate crisis and the ecological instability of the Anthropocene, and raises questions about new perceptions of identity in the context of technogenic culture.
Other artists, meanwhile, are critiquing capitalism’s co-opting of the wellness industry while simultaneously acknowledging and promoting the importance of self-care. In a project commissioned for the online edition of the 13th Gwangju Biennial, Polish artist Ana Prvački responded to global transformations sparked by the pandemic by proposing artworks that function like a facelift: In a series of three videos, she markets her imagined solutions for pandemic-related anxiety, including the CGI Multimask, which can beautify, protect and cover up a nervous breakdown. Prvački approaches the wellness, healing and beauty industries with irony and language borrowed from advertising and start-ups, yet she also believes that care is an essential part of life. Her interest in wellness comes from a place of sincerity: She regularly doles out skincare and recipe recommendations in her personal life and fills her house with a jungle of plants.
Physical and mental health, however, are unfortunately not often seen as a priority in the artworld. Taking cues from artists working with healing practices and methods of care, the artworld should use the time of the pandemic as an opportunity to rethink its values and priorities by reorienting itself towards a more sustainable and restorative model.
Alexander Burenkov is an independent curator, cultural producer, art critic and writer whose work navigates contemporary visual culture and socio-technical systems including ecology and the web. He teaches curatorial research and contemporary art curation at Sreda Obuchenia School, Moscow School of Contemporary Art and RMA Business School. In June 2021, he launched the app Yūgen, which was supported by the Porto Design Biennial and features an ever-growing collection of artist-led exercises to help one reconnect with nature and oneself. | https://expedition.liste.ch/2021/discourse/alexander-burenkov/ |
Secondary hypertension has been associated with hypothyroidism. Elevated blood pressure values and a higher prevalence of hypertension have been demonstrated in hypothyroid patients. Low cardiac output, increased peripheral vascular resistance, arterial stiffness and high serum norepinephrine levels have been considered the mechanisms involved in the development of diastolic hypertension in the hypothyroid state. To evaluate the association between hypertension and hypothyroidism, thyroid hormone levels and blood pressure values were determined in 450 female patients with chronic thyroiditis in Hyderabad, Sindh, Pakistan. On the basis of thyroid function tests – serum thyroxine (T4) and thyroid stimulating hormone (TSH) – 298 females were categorized as euthyroid, while 152 were hypothyroid (T4 = 3.01 ± 0.2 µg/dl and TSH = 108.5 ± 5.4 µU/ml). In hypothyroid females older than 50 years, diastolic (but not systolic) blood pressure was higher than in euthyroid females of the same age. In hypothyroid females, a higher prevalence of hypertension was observed, with systolic/diastolic BP above the reference value of 160/95 mm Hg (15.4% vs. 6.3%, p < 0.01). When the data of both the hypothyroid and euthyroid groups were analyzed together, significant correlations were found between thyroid hormone levels and blood pressure (T3: r = -0.159, p < 0.01; T4: r = -0.198, p < 0.01). Thyroid hormone replacement therapy can be used to normalize thyroid function. It can be concluded that there is a strong association between hypothyroidism and hypertension. | https://thepab.org/index.php/journal/article/view/1834 |
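For readers unfamiliar with the statistics reported here, the following Python sketch shows how a Pearson correlation of this kind is computed. The TSH and diastolic blood pressure values are synthetic stand-ins; the study's raw data are not published in the abstract.

```python
# Illustrative Pearson correlation on made-up TSH and diastolic BP values.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
tsh = rng.normal(50, 30, size=152).clip(5, 150)    # hypothetical TSH, µU/ml
dbp = 85 + 0.1 * tsh + rng.normal(0, 6, size=152)  # hypothetical diastolic BP, mm Hg

r, p = pearsonr(tsh, dbp)                          # correlation and its p-value
print(f"r = {r:.3f}, p = {p:.4f}")                 # analogous to the reported r/p pairs
```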
Fatal violence erupted in Britain again this weekend. Another sign that the war on terrorism continues unabated sparking once again calls for stronger retaliation.
The principal problem is that too many of the terrorists become radicalized because western nations including the United States fail to make these residents part of our national culture.
When you grow up feeling marginalized, alienated, and rejected by the majority culture, it's tempting to accept a surrogate family and embrace its ideals. This is why urban gangs proliferate in poor black and Latino U.S. neighborhoods, and why some young Muslims turn to a violent form of Islam.
It’s easy to argue for stronger anti-terrorism laws and harsher law enforcement response. But until our respective nations, cities and neighborhoods reach out to minority families and embrace them as one of us, this sense of isolation will fester until it explodes as it has in several nations.
The truth is that too many of us, in our cultural cocoons, ignore others and really don't want to know them. As long as we keep at arm's length those who are different from us, they will choose a different path, often with violent consequences for all of us. | https://stephencoon.org/2017/06/04/to-stop-terrorism-we-have-to-look-at-ourselves/ |
Sony Xperia Z4 Tablet’s display offers high maximum brightness, but we’ve detected screen flickering
Just a little tease before the actual review comes out – the initial tests of the Xperia Z4 Tablet's screen are in, and we have mixed feelings about them. While the maximum brightness is more than 500 cd/m2, assuring comfortable usage even under direct sunlight – a job well done considering the pixel-packed panel (2560x1600) – we've also recorded PWM below 67% screen brightness.
Quite surprising, to be honest: most smartphones and tablets with LCD panels don't have aggressive light pulsation, but this one exhibits it most of the time. Screen flickering occurs only at 67% brightness and below, but even then the frequency of the emitted light is 9.5 kHz. This is quite high and is considered less harmful, so only users with really sensitive eyes will notice it.
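For the curious, flicker frequencies like the 9.5 kHz figure quoted above are typically estimated from a photodiode trace of the backlight. The Python sketch below runs that estimation on a synthetic PWM signal; the 1 MHz sampling rate and waveform are assumptions, not the reviewers' actual measurement rig.

```python
# Estimate a backlight's dominant flicker frequency from a (synthetic) trace.
import numpy as np

fs = 1_000_000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)                   # 10 ms capture window
pwm = (np.sin(2 * np.pi * 9_500 * t) > -0.3).astype(float)  # ~9.5 kHz PWM wave

spectrum = np.abs(np.fft.rfft(pwm - pwm.mean())) # remove DC, take magnitude FFT
freqs = np.fft.rfftfreq(len(pwm), 1 / fs)
print(f"dominant flicker: {freqs[spectrum.argmax()]:.0f} Hz")  # prints ~9500 Hz
```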
Stay tuned for our full review later today. | https://www.laptopmedia.com/news/sony-xperia-z4-tablets-display-offers-high-maximum-brightness-but-weve-detected-screen-flickering/ |
2022 Interim Results Highlights:

| For the six months ended 30th June (RMB million) | 2022 | 2021 | Change |
| --- | --- | --- | --- |
| Revenue | 605 | 579 | +4% |
| Gross profit | 252 | 286 | -12% |
| Profit attributable to owners of the parent | 134 | 151 | -11% |
| Contracted projects: GFA (million sq.m) | 48.6 | 41.9 | +16% |
| Contracted projects: number of projects | 264 | 234 | +30 |
| Projects under management: GFA (million sq.m) | 24.0 | 18.6 | +29% |
| Projects under management: number of projects | 156 | 125 | +31 |
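As a quick sanity check, the period-on-period percentage changes in the table can be reproduced with a few lines of Python (figures taken from the table above, rounded to whole percentages as in the highlights):

```python
# Verify the year-on-year changes reported in the results table.
def yoy_change(current, prior):
    """Percentage change, rounded to the nearest whole percent."""
    return round((current / prior - 1) * 100)

for label, cur, pri in [("Revenue", 605, 579),
                        ("Gross profit", 252, 286),
                        ("Profit attributable", 134, 151),
                        ("Contracted GFA", 48.6, 41.9),
                        ("GFA under management", 24.0, 18.6)]:
    print(f"{label}: {yoy_change(cur, pri):+d}%")   # +4%, -12%, -11%, +16%, +29%
```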
HONG KONG SAR – Media OutReach – 31 August 2022 –
SCE Intelligent Commercial Management Holdings Limited (“SCE CM” or the “Company”, together with its subsidiaries, the “Group”, HKEX Stock Code: 606), a comprehensive property management service provider in China, is pleased to announce its unaudited interim results for the six months ended 30th June 2022 (the “Period”).
Benefiting from the increase in GFA under management, the Group recorded a 4% year-on-year (YoY) increase in revenue to approximately RMB605 million during the Period. Gross profit was RMB252 million, and the overall gross profit margin reached approximately 41.6%. During the Period, the Group recorded a profit of RMB137 million. Profit attributable to owners of the parent was RMB134 million, and basic earnings per share were RMB6.47 cents.
As of 30th June 2022, the Group’s total number of contracted commercial property projects and commercial property projects under management were 43 and 16, respectively. The corresponding contracted GFA increased by 24% YoY to approximately 6 million sq.m, while GFA under management saw a substantial YoY increase of 51% to approximately 1.6 million sq.m. The commercial property management and operational services segment contributed approximately RMB203 million to the revenue.
For the residential property management services segment, the Group recorded a revenue of approximately RMB402 million, increased by 35% YoY. Thanks to economies of scales achieved from business expansion, the gross profit margin rose from 35.7% to 39.8% as compared to the same period last year. As of 30 June 2022, the Group had a total of 221 contracted residential projects, total contracted GFA increased by 15% YoY to approximately 42.5 million sq.m. The number of residential projects under management increased by 27 to a total of 140, the corresponding GFA also recorded an increase of 27% YoY to approximately 22.4 million sq.m.
Despite the persistent impacts the COVID-19 outbreaks had on SCE CM and the industry as a whole in the first half of the year, the Group strived to promote the enhancement of experience-based commercial space and actively expanded its diversified business by focusing on multiple aspects such as joint sales and online planning, successfully offsetting the impacts of the pandemic and achieving excellent results.
Mr. Wong Lun, Chairman of SCE CM, said, “Looking ahead, SCE CM will continue to promote digital application to lower the demand for human resources to further enhance the Group’s profitability. The Group will also continue to drive the expansion of third parties’ projects under management by increasing the number of contracted projects and projects under management, so as to further enhance the operational capabilities of the Group and its influence in the market.”
Hashtag: #SCEIntelligentCommercialManagement
The issuer is solely responsible for the content of this announcement. | https://newspatrolling.com/sce-cm-announces-2022-interim-results/ |
The American Law Reports (ALR) are a set of books used by lawyers, paralegals, and other law professionals to conduct legal research on cases from all of the state and federal courts in the United States. In addition to including case law from all United States jurisdictions, the American Law Reports also contain articles, called annotations, which address specific points of law, legal rules, doctrines, and principles. The annotations are authored by lawyers, and they include cases dating back as far as 1919.
A typical American Law Reports annotation contains a complete illustrative case, along with a thorough summary of the case at the beginning of the annotation. Additionally, it includes a summary of any cases that are pertinent to the particular legal issue at hand. The annotation generally notes any differences among the outcomes of the cited cases, and it usually also directs the researcher to any similar topics. Ordinarily, an annotation includes references to any statutes, rules, and regulations which relate to the subject matter being reviewed.
An American Law Reports annotation usually includes practical information for lawyers, called practice pointers. In addition, it normally includes a reference that allows a lawyer to determine whether a particular case is still valid or whether it has been overturned by another ruling. Most annotations also contain a summary of other references that a lawyer can look at in researching the subject matter under consideration. For instance, an article may include a listing of footnotes, law review articles, digests, and treatises on the topic at issue. In addition, a reference to the briefs, pleadings, and motions that support the cases discussed in a particular annotation may be included.
If a lawyer is looking for a case in a particular jurisdiction, he or she can refer to the American Law Reports table of jurisdictions. This table contains a complete listing of court decisions from specific states. If the researcher determines that the annotation he or she is reviewing is not on point, the researcher can check for corollary references in the annotation. These references point to cases, statutes, and secondary research sources that address similar or related topics.
The American Law Reports are updated on a regular basis. As a result, legal researchers must ensure the annotations they are reviewing are current. This can be done by examining supplements in the back of each volume or by reviewing a history table contained in the index section of the book.
| http://www.wisegeek.net/what-are-the-american-law-reports.htm |
The impact of COVID-19 has been acutely felt in cities around the world, transforming all aspects of city life, including how goods and people travel. Initial lockdowns caused a dramatic decrease in passenger travel in cities, which has slowly rebounded, but continues to be below pre-pandemic levels, as people continue to work from home (where possible) or have become unemployed due to the impacts of the pandemic and no longer travel to work. There has also been a significant uptick in urban deliveries in some cities, as residents have replaced in-store shopping with online orders.
Travel preferences have also been changing, as health concerns have overtaken cost, convenience and time considerations. (McKinsey) For example, as fears over contagion of COVID-19 on shared forms of mobility mounted, public transport and rail ridership fell substantially across the world's regions in the first months of the crisis. (Washington Post) Many urban dwellers who were able switched to individual forms of transport, including biking, walking, and private vehicle use for their mobility needs. (Euractiv) In China, for example, car use increased substantially from pre-pandemic levels (ITDP), creating new challenges for cities.
In response to these significant shifts in travel behavior, cities have shown their agility, making dynamic policy decisions including a number of immediate, often temporary measures to ensure that people and goods are able to safely travel within city boundaries (e.g. through farm-to-market rolling stores in the Philippines).
These measures include a reallocation of public space, in many cases restricting car use on streets in order to make room for an increase in biking and walking (e.g. through temporary pedestrian zones in Kenya). This is happening in more than 100 cities across the world. (COVID Mobility Works)
For additional information on how cities have been at the forefront of responding to the impacts of COVID-19 on urban mobility, please see public transport, cycling, walking, informal transport, and freight transport, as covered elsewhere in the gTKP.
Citizens have also seen the benefits of reduced traffic and improved air quality, especially apparent when lockdowns eased and high levels of air pollution returned, as has been the case in Delhi and a number of other cities around the world. This has raised awareness among policymakers of the need to rethink transport policies that exacerbate pollution, such as favouring car use over other, more sustainable and accessible forms of transport.
There have also been more far-sighted responses to the pandemic, including new land use policies prioritising accessibility. In the popular “15-minute city” concept, job opportunities, services, and recreational spaces are clustered in multiple points within a city, enabling residents to meet their needs within 15 minutes of their homes on foot, by bike, or on public transport. Similar concepts have been highlighted in EcoMobility Festivals in Korea, South Africa and Taiwan (ICLEI), which transform a neighborhood or business district into a car-free and ecomobile area for a month, demonstrating the possibilities of an innovative and forward-thinking urban mobility culture. These demonstrations take on new relevance in the face of COVID-19 related urban mobility challenges, and have the potential to be replicated in cities across the Global South to increase post-pandemic resilience.
This section was developed by SLOCAT Partnership on Sustainable, Low Carbon Transport with contributions from the World Resources Institute (WRI) and the Institute for Transportation & Development Policy (ITDP). | https://www.gtkp.com/themepage.php&themepgid=472 |
With a name and a voice that has become synonymous with wildlife, nature, and travel, the great Sir David Attenborough is as much a global expert as he is a global icon. In his 60+ years as a broadcast journalist, the UK native has notched up more stamps on his passport and film rolls in his camera than the rest of us combined can merely dream of.
If there's any living man on Earth who knows the north pole from the south, and the barren Sahara from the flourishing Amazon Rainforest, it is without a doubt Sir David. So, for someone who's been there and done that more times than our planet has revolved around the sun, which destinations stand above the rest?
10 Far North Queensland, Australia
It’s no surprise that Far North Queensland is Sir David Attenborough’s favorite spot, because anyone who’s had the privilege to check it out is likely to say the same. Following a visit, the great man himself labeled Australia’s tropical Far North as “the most extraordinary place on earth” (flightcentre.co.uk), and it’s not hard to see why - it’s the only place on Earth that features two side-by-side World Heritage sites, the sprawling Daintree Rainforest and the renowned, biologically diverse Great Barrier Reef.
Just a few hours’ drive further south sits some of the world’s most picturesque beaches as well - the Whitsundays - so why not knock it all out in one trip?
9 London, England
As Sir David Attenborough's home city, it'd be impossible to leave the British capital off of this list. Not only is this his familiar locale (in case you haven't been able to decipher that infamous accent), but according to the man himself, it has plenty to offer in a tourist sense as well: "London has fine museums, the British Library is one of the greatest library institutions in the world... It's got everything you want, really." (telegraph.co.uk)
It’s not like you needed David to tell you to visit London though, did you? There’s a reason why it sits atop so many bucket lists.
8 Wales, UK
Sharing the western side of the main UK landmass with the border of England, Wales is a place of truly underrated beauty. It’s always been one of David Attenborough’s vacation destinations, partly because of the convenience but also due to its unique organisms such as trilobites and graptolites.
As the television legend states himself, “My favourite place to go on holiday has always been the west of Wales. That stems back to the time when I was in the Royal Navy in the 1940s” (via Travel Weekly)
7 The Amazon Rainforest
Alongside the Great Barrier Reef in Far North Queensland, the monstrous South American Amazon rainforest is, according to Sir David Attenborough, one of the “two incredible places I never want to contemplate not visiting again.” (Travel Weekly)
The rainforest, which is by far and away the world’s largest, spans over two million square miles across Brazil, Peru, Colombia, Bolivia, Venezuela, Ecuador, Guyana, Suriname, and French Guiana. It plays host to a plethora of rare wildlife, unique ecosystems, and incredible biodiversity. For any nature lover, the Amazon should sit at the top of the bucket list.
6 Cley Marshes, Norfolk, England
Over on the eastern side of England, Mr. Wildlife himself David Attenborough considers Norfolk’s Cley Marshes to be “one of the great places in Britain to see wildlife” (Travel Weekly). Only a few hours drive from London, it can provide a much-needed breath of fresh air from the hustle and bustle of the big city.
The great man has spoken about how this area, although protected, is severely at risk of change due to its proximity to the coast and effects of climate change. Attenborough has also stated the ironic pleasantry that the Norfolk Wildlife Trust was founded the same year he was born - 1926.
5 The Galapagos Islands
Belonging to the South American nation of Ecuador, the Galapagos Islands are a world-renown volcanic archipelago in the Pacific Ocean which offer marine biodiversity unlike any other location on the planet. Its list of native animals is as long as a child’s letter to Santa, with species such as the flightless cormorant, Galapagos flamingo, Galapagos penguin, and Darwin finch just a few of the stunning creatures.
When asked by Travel Weekly which place Sir David would recommend above all others for nature enthusiasts, his response was simple: “I would recommend the Galapagos Islands to anybody if they have the means to go there. Flying over the volcanoes in a helicopter, though logistically difficult, was such a thrilling thing to be able to do.”
4 Leicestershire, England
The landlocked county nestled in the English midlands has been a favorite destination for Sir David Attenborough for decades; he collected his first fossils from the area when he was just a young boy of six or seven, many of which are on display these days at Leicester University (Travel Weekly). As told by the great man, the area was abundant with Jurassic rocks, which he loved.
While it’s certainly not a revered holiday destination like your Londons and New Yorks, Attenborough's love for Leicestershire comes from the personal significance it holds.
3 Skeleton Coast, Namibia
Sir David Attenborough does admit that, unfortunately, travelers are widely put off from visiting certain parts of Africa thanks to its representation in the media. He counters that in Travel Africa Mag, however, saying that despite there being places of which our judgment is clouded, like the Atlas Mountains, “there’s still so much to explore. Me, I want to explore the mythical tundra of the Skeleton Coast in Namibia. There is so much to see and do.”
The Skeleton Coast is as barren as it is majestic, with striking red hues and a vast, dazzling contrast between sand and sea.
2 Madagascar
You’d be hard-pressed to find any fan of David Attenborough who hasn’t binge-watched Planet Earth I and Planet Earth II time and time again. In the 'Jungles' episode of Planet Earth II, Attenborough ventures into the tropical rainforests of the small, yet incredibly biodiverse nation of Madagascar.
It's no secret that the great man has an undying affinity for all of Earth's creatures, big and small – and, incredibly, the vast majority of the species that call the small African island home are found nowhere else on the planet. It's a wildlife haven unlike any other, so it makes perfect sense why it sits high up on Sir David's list.
1 Antarctica
For the final stop on our Attenborough-inspired journey around the world, let’s hope that you’ve packed your mittens and coat, cause it’s gonna be a cold one. When filming Planet Earth and Blue Planet, Attenborough and his team made the long journey down to the south pole territory of Antarctica.
Considering that it’s such a vast, mostly desolate, isolated, and largely untouched landscape, the wildlife that struts around in the world’s coldest regions are a little camera shy. For anyone who does dare make the trip, however, Emperor penguins, killer whales, snow petrels, and many more stunning species await. | https://www.thetravel.com/david-attenborough-favorite-travel-spots/ |
Personal Information: When and how we collect it and how we use it
Neither KCRMD nor KCRMA collects personal information from visitors to the website who are simply looking at the site. As a visitor to our site, you may choose whether or not to provide personal information to KCRMD and/or KCRMA online. “Personal information” is information about a natural person that is readily identifiable to that specific individual. Personal information includes such things as an individual’s name, address, and phone number. A domain name or Internet Protocol address is not considered personal information (please see “Log Files”). We collect no personal information about you unless you voluntarily provide it to us by sending us e-mail, participating in a survey, signing up for an e-subscribe service, completing an on-line form, or engaging in an “online” transaction. You may choose not to contact us by e-mail, participate in a survey, provide personal information, sign up for an e-subscribe service, provide personal information using an on-line form or engage in an on-line transaction. If you do provide us with personal information, we treat the information we collect online in the same manner we treat information collected by other means. Any personal information collected by KCRMD and KCRMA is strictly for the use of KCRMD and KCRMA and not for sale or disclosure to any other organization or advertising agency. The goal of collecting information is to provide you with more personalized and effective services. KCRMD and KCRMA will not share personal information or e-mail addresses with anyone outside KCRMD and KCRMA or any third party unless under a court order. Our goal is to protect online customer information in the same manner that we protect information provided by any other means.
Security
This site has measures in place to protect against the loss, misuse and alteration of the information under our control. This information should not be construed in any way as giving business, legal, or other advice, or as warranting as fail-proof the security of information provided via KCRMD's and KCRMA's website.
Credit Card Information
This site encrypts your credit card number prior to transmission over the internet using secure socket layer (SSL) encryption technology to safeguard your personal information.
Cookies
Log Files
Every computer connected to the internet is given a domain name and a set of numbers. This domain name and set of numbers serves as the computers’ “Internet Protocol” or “IP” address. When a visitor requests a page from any website within the ken-carylranch.org domain, our web servers automatically recognize the visitor’s domain name and IP address. This does NOT include personal information such as names or e-mail addresses of visitors. The domain name and IP address reveal nothing personal about the visitor, other than the IP address from which the visitor has accessed the site. These domain names and IP address are recorded in our servers’ log files. Log files can be used for the following purposes: to monitor which pages are visited the most and least often, for purposes of improving the site; to detect server errors for purposes of troubleshooting; to generate aggregate reports of website traffic; and to monitor files for site security. Log files are not kept on file permanently and will eventually be deleted. KCRMD and KCRMA will not release any information regarding the collection of IP addresses to any third party except under court order.
Direct Mail
KCRMD and KCRMA may use your personal data to send you information regarding our services and activities. We will not sell this information to a third party, and it will not be disclosed unless under a court order.
Site Comments
We appreciate questions, comments and any other feedback that will help us to improve our service to you. If there is something not on this website that you would like to see, if you have noticed something incorrect on this website, or if you have any comments on the technical or creative aspects of this website, please contact us at [email protected].
Copyright
The pictures, text, design and layout of these Internet web sites are the sole and exclusive property of the Ken-Caryl Ranch Metropolitan District and the Ken-Caryl Ranch Master Association. While this information is available for the personal and private use of the authorized users of this website, no commercial use is to be made of this information without the express, written consent of the KCRMD and the KCRMA. This means that you may not: distribute the text or graphics to others without the express permission of KCRMD and KCRMA, “mirror” KCRMD and KCRMA’s information on your server without permission, or modify or reuse the text or graphics on our system. You may print copies of this information for your own personal use and reference this server from your own documents. Commercial use of the materials is prohibited without the written consent of KCRMD and KCRMA. In all copies of this information, you must retain this notice and any other copyright notices originally included with such information.
The Ken-Caryl Ranch Metropolitan District (KCRMD) and the Ken-Caryl Ranch Master Association (KCRMA) attempt to keep the information on this internet website current, accurate and complete. However, reliance upon any materials on this site shall be at your sole risk. All such information of importance to the user should be verified independently before being used or placing reliance upon it. The KCRMD and KCRMA assume no responsibility for any damages arising out of the use of this site.
The materials provided on this website are provided “as is”. Neither KCRMD nor KCRMA makes any representations or warranties concerning the accuracy, likely results, or reliability of the use of the materials on this website, or otherwise relating to these materials or on any sites linked to this site. Links or references to other websites or organizations do not imply endorsement by the KCRMD or the KCRMA. | https://ken-carylranch.org/privacy-policy/ |
British Christianity Death Watch
A landmark in national life has just been passed. For the first time in recorded history, those declaring themselves to have no religion have exceeded the number of Christians in Britain. Some 44 per cent of us regard ourselves as Christian, 8 per cent follow another religion and 48 per cent follow none. The decline of Christianity is perhaps the biggest single change in Britain over the past century. For some time, it has been a stretch to describe Britain as a Christian country. We can more accurately be described now as a secular nation with fading Christian institutions.
There is nothing new in the decline of the church, but until recently it had been a slow decline. For many decades it was possible to argue that while Christians were eschewing organised religion, they at least still regarded themselves as having some sort of spiritual life which related to the teachings of Jesus. Children were asked for their Christian name; conversations ended with ‘God bless’. Such phrases are now slipping out of our vocabulary — to wear a cross as jewellery is seen as making a semi-political statement. Christians are finding out what it’s like to live as a minority.
Just 15 years ago, almost three quarters of Britons still regarded themselves as Christians. If this silent majority of private, non-churchgoing believers really did exist, it has undergone a precipitous decline. Five years ago, the number of people professing no religion was only 25 per cent.
In March, the American Journal of Sociology published research and analysis by David Voas and Mark Chaves, showing that the United States, long considered an exception to the secularization thesis because of its high levels of religiosity compared to other Western nations, can no longer be regarded as such. From the study:
We have established three central empirical claims. First, religiosity has been declining in the United States for decades, albeit slowly and from high levels. Second, religious commitment is weakening from one generation to the next in the countries with which the United States has most in common, and generational differences are the main driver of the aggregate decline. Third, the same pattern of cohort replacement is behind American religious decline. This decline seems to have begun with cohorts born early in the 20th century. At least since then, strong religious affiliation, church attendance, and firm belief in God have all fallen from one birth cohort to the next. None of these declines is happening fast, and levels of religious involvement in the United States remain high by world standards. But the signs of both aggregate decline and generational differences are now unmistakable.
In other words, Britain is way ahead of us, but we are on the same downward course.
As you may know, I’ve been at a conference this weekend in which the Benedict Option was the theme. I learned a lot, and got some good, constructive criticism from some of the panelists. Some others, though, seemed to me to be determined to reject the thesis without ever really grappling with it or (more to the point) without recognizing the problems it tries, however badly, to address. Stuff along the lines of:
Me: “I’m not saying that we have to all head for the hills. Some might feel called to do that, and God bless them, but I think that is neither feasible nor desirable for all of us. To repeat: I’m not saying that we all have to head for the hills.”
Critic: “You’re saying we have to head for the hills, and that’s just crazy.”
Leaving aside the legitimate criticism of the Benedict Option concept, made in good faith — and there is plenty of it, and I’m grateful for it because it helps me learn and refine the model — my guess is that a lot of people who fiercely, even angrily, reject the very idea of the Ben Op find it unthinkable that things in America are not always going to be more or less okay for us Christians. And/or, they cannot accept the possibility that whatever goes wrong cannot be fixed within the system we have now. If my analysis is correct, then a lot of things that they believe are true about the way we Americans live no longer are true, and the response required is a radical one along the lines of what I propose in the Benedict Option. Because that is emotionally and conceptually repulsive to them, the Benedict Option must be nonsense. That damn fool building the ark over there ought to wise up and realize the rain is bound to stop, and besides, it has never flooded in these parts.
Well. First, even if religious liberty jurisprudence were to freeze in place today, and we orthodox Christians were able to hold on to the liberty that we have, we would still need the Benedict Option, because the government is far from our biggest problem. We live in a culture that has shattered, and is shattering to religious truth. In other words, we live today in what Zygmunt Bauman has described as “liquid modernity”:
Liquid Modernity is sociologist Zygmunt Bauman’s term for the present condition of the world as contrasted with the “solid” modernity that preceded it. According to Bauman, the passage from “solid” to “liquid” modernity created a new and unprecedented setting for individual life pursuits, confronting individuals with a series of challenges never before encountered. Social forms and institutions no longer have enough time to solidify and cannot serve as frames of reference for human actions and long-term life plans, so individuals have to find other ways to organize their lives.
Bauman’s vision of the current world is one in which individuals must splice together an unending series of short-term projects and episodes that don’t add up to the kind of sequence to which concepts like “career” and “progress” could be meaningfully applied. These fragmented lives require individuals to be flexible and adaptable — to be constantly ready and willing to change tactics at short notice, to abandon commitments and loyalties without regret and to pursue opportunities according to their current availability. Liquid times are defined by uncertainty. In liquid modernity the individual must act, plan actions and calculate the likely gains and losses of acting (or failing to act) under conditions of endemic uncertainty. The time it takes to fully consider options and make fully formed decisions has fragmented.
This is a very different Dark Age than the one that followed the fall of the Western empire, but a Dark Age it is, insofar as you can describe a Dark Age as an Age of Chaos and Mass Forgetting. And it will require a new, and quite different, St. Benedict for Christians to resist it, and ride out the flood.
The story above about faith in Britain, taken from the Spectator, reminds me of an English friend here in America who, with her husband, left her native land to settle here, in part because she wanted to give her children a better shot at remaining Christian than they would have if they stayed in Britain. She chose to go into exile from the country of her birth, the country she loves, because she recognized that some things are more important. From the long view, she has bought her family line some time — a generation, probably two — before the same flood that has drowned Christianity in Britain reaches catastrophic levels here. We had all better make good use of the time we have been given to prepare. Voas and Chaves, who are among the world’s leading scholars on this kind of thing, say that once this process, which is carried along by very deep cultural currents, starts, it is very difficult to reverse.
I would add that yes, the United States has gone through periods in its past (e.g., the Colonial period) in which religious observance was not particularly robust, and those periods were reversed through revival (Great Awakenings). Britain has had the same experience. But the condition of liquid modernity, I argue, makes it highly, highly unlikely that we are going to see a repeat. God may send us this great grace, but we must prepare for much worse. As the poet Terence addresses his jolly critic in the well-known A.E. Housman poem (cited by Hope College’s Jeff Polet in his remarks at this weekend’s conference):
Therefore, since the world has still
Much good, but much less good than ill,
And while the sun and moon endure
Luck’s a chance, but trouble’s sure,
I’d face it as a wise man would,
And train for ill and not for good.
I am eager to hear in this thread from UK readers of this blog who are observant Christians. How do you regard the present and the future, in terms of your faith? How are you preparing your children for it? | https://www.theamericanconservative.com/dreher/british-christianity-death-watch/ |
PETALING JAYA: Bank Negara Malaysia (BNM) is expected to hike its overnight policy rate (OPR) by 50 basis points (bps) to 2.25% in 2022 to rebuild policy buffers and maintain ringgit stability.
According to Fitch Solutions Country Risk and Industry Research, the central bank seeks to ensure the stability of the ringgit and maintain its real interest rate differential over the US, which is now likely to begin hiking in 2022 rather than in 2023.
“The inflationary pressures are building. While the BNM does not have a numeric inflation target, it is still mandated to maintain price stability. We continue to see some upside risks if major central banks around the world were to tighten monetary policy even quicker than we currently expect due to growing underlying inflationary pressures,” it noted in a statement.
BNM decided at its monetary policy committee meeting to hold its benchmark OPR at 1.75% on Jan 20.
“In the statement accompanying the decision, BNM highlighted uncertainties surrounding the global economy noting that while the ‘global economy continues to recover’, risks such as ‘the emergence of new variants of concern, risks of prolonged global supply disruptions, and risks of heightened financial market volatility amid adjustments in monetary policy in major economies’, remained.
“The government expects growth to range between 5.5% and 6.5% in 2022, against our forecast of 5.5% growth, and the stronger growth picture will provide the central bank space to hike to head off inflationary pressures and rebuild policy space.
“We have revised our average inflation forecast for 2022 upward to 3.3% compared to our 2.5% estimate for 2021. The main driver of inflation in 2022 will likely be higher average oil prices and higher input costs as economic activity normalises. Our oil & gas team now expects Brent crude oil prices to average slightly higher at US$72 (RM301.68) per barrel (bbl) in 2022, compared to US$70.95/bbl in 2021,” it noted.
Meanwhile, the producer price index also surged by 12.6% year-on-year (y-o-y) in November 2021, marking the eighth consecutive month of double-digit increases, and this will likely feed through into higher consumer prices in the coming months.
“Risks to our interest rate forecast are slightly tilted to the upside. A surge in inflation could prompt the central bank to accelerate the rate hiking cycle in 2022 to ensure positive real interest rates and rebuild policy buffers in anticipation of global monetary policy tightening.
“This contingency becomes more likely if major central banks accelerate their rate hiking cycles, perhaps in response to a swifter increase in inflation than they currently anticipate.
“Although there remains the possibility of another negative shock to the economy from the emergence of another variant of concern that causes greater strain on the healthcare sector, the high vaccination rates should prevent another large-scale lockdown similar to the full movement control order imposed from June to October 2021,” it noted. | https://www.thesundaily.my/business/bnm-still-on-track-to-hike-rate-in-2022-BI8794003
Evolutionists argue that organisms can evolve by loss of function. Sounds like Behe’s hypothesis. But you’ll never get wings and eyes that way.
Vindicated But Not Cited: Paper in Nature Heredity Supports Michael Behe’s Devolution Hypothesis
The literature is looking at the same data that intelligent design proponents are looking at, making similar observations, and asking similar questions.
Harvard Molecular Geneticist Vindicates Michael Behe’s Main Argument in Darwin Devolves
Mainstream evolutionary biologists are independently arriving at very similar conclusions to Behe’s central thesis.
A Response to My Lehigh Colleagues, Part 1
Their review pretty much completely misses the mark. Nonetheless, it is a good illustration of how sincere-yet-perplexed professional evolutionary biologists view the data.
Bacteriophages, Budding Yeast, and Behe’s Vindication
It’s been known for some time that bacteria evade antibiotics by mutating the target of the antibiotic, often at a cost to themselves. | https://evolutionnews.org/tag/loss-of-function-mutations/ |
Salgomed - Search Algorithms for Medicine
Salgomed provides pharmaceutical companies and physicians with actionable information about the clinical efficacy of drugs and drug combinations. Salgomed has developed a novel platform for precision medicine that can be used to identify novel cancer drug targets and to provide rapid clinical validation of candidate drugs.
Salgomed’s precision medicine platform is based on several novel technologies: (i) Machine learning to identify specific drug targets that are associated with a therapeutic response. (ii) Disease-specific molecular networks, reconstructed with a novel method emphasizing data reproducibility using independent clinical datasets. (iii) High-throughput screening on patient’s cells in clinically predictive culture conditions, using search algorithms to identify effective drug combinations.
The integration of these computational and experimental technologies represents a novel approach to artificial intelligence in healthcare.
Salgomed has alumni affiliation with JLABS San Diego
Contact us at : [email protected]
Copyright 2019 Salgomed Inc. All rights reserved. | http://salgomed.com/ |
How Many Charging Stations Needed for e-Cars in Oslo by 2020?
Background
Norway is recognized as the country with the most electric cars per capita in the world. In 2015, electric vehicles had a 22% market share in Norway (elbil.no).
At the end of 2015, there was a heated debate in Norway about traffic in the Oslo city center. It is expected that by 2019 private cars will be banned from the Oslo center. Under this policy, all private cars, including electric vehicles, have to stop around Ring 1 in Oslo. We define the term “Charging Zone” to refer to the area where all electric vehicles stop and charge their batteries.
One significant consequence is that many electric vehicles will be charged at the same time in the early morning in the charging zone. This will place very high electricity demand on the power grid, and serious blackouts may happen. The power demand may also be highly dynamic across times and locations. Consequently, to successfully implement the “bilfritt” (car-free) policy by 2020, we need a very careful design for electric vehicles, charging stations and the smart grid. In particular, we should decide how many charging stations are needed and where to locate them.
Goal:
Plan the charging stations for electric vehicles in Oslo by 2020
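As a rough illustration of the kind of sizing estimate this project involves, the sketch below computes how many chargers a single charging zone would need and the worst-case grid load. All figures (EV count, charging window, session length, charger rating) are invented placeholders for illustration, not official data; only the shape of the calculation matters.

    # Back-of-envelope sizing for one charging zone.
    # All numbers below are illustrative assumptions, not official figures.

    EVS_ENTERING_ZONE = 20_000   # assumed EVs stopping at Ring 1 each morning
    MORNING_WINDOW_H = 3.0       # assumed arrival window, e.g. 06:00-09:00
    CHARGE_TIME_H = 1.5          # assumed time a car occupies a charger
    CHARGER_POWER_KW = 22.0      # assumed charger power rating

    # Each charger can serve this many cars within the morning window.
    sessions_per_charger = MORNING_WINDOW_H / CHARGE_TIME_H

    # Chargers needed so every arriving EV gets a slot within the window.
    chargers_needed = EVS_ENTERING_ZONE / sessions_per_charger

    # Worst-case grid load if all chargers draw full power at once.
    peak_load_mw = chargers_needed * CHARGER_POWER_KW / 1000.0

    print(f"Chargers needed: {chargers_needed:,.0f}")
    print(f"Worst-case peak load: {peak_load_mw:,.1f} MW")

With these placeholder inputs the sketch gives 10,000 chargers and a 220 MW worst-case peak, which shows why the location and scheduling of stations, not just their count, must be part of the design.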
What will I do? | https://www.mn.uio.no/ifi/studier/masteroppgaver/nd/how-many-charging-stations-needed-for-e-cars-in-os.html |
---
abstract: 'We provide a construction of associating a de Rham subbundle to a Higgs subbundle in characteristic $p$ in the geometric case. As applications, we obtain a Higgs semistability result and a $W_2$-unliftable result.'
address:
- 'School of Mathematical Sciences, University of Science and Technology of China, Hefei, 230026, China'
- 'Institut für Mathematik, Universität Mainz, Mainz, 55099, Germany'
author:
- Mao Sheng
- He Xin
- Kang Zuo
title: A note on the characteristic $p$ nonabelian Hodge theory in the geometric case
---
[^1]
Introduction {#introduction .unnumbered}
============
This small note grows out of our efforts to understand the spectacular work of Ogus-Vologodsky [@OV] (see [@Sch] for the logarithmic analogue) on the nonabelian Hodge theory in characteristic $p$. Let $k$ be an algebraically closed field of positive characteristic $p$, and $X$ a smooth variety over $k$ which admits a $W_2(k)$-lifting. The authors loc. cit. establish a correspondence between a category of vector bundles with integrable connections and a category of Higgs bundles over $X$, the objects of which are subject to certain nilpotent conditions (see Theorem 2.8 loc. cit.). The whole theory is analogous to the one over complex numbers (see [@Si1]). Their construction relies either on the theory of Azumaya algebras or on a certain universal algebra ${{\mathcal A}}$ associated to a $W_2$-lifting of $X$ on which both an integrable connection and a Higgs field act (see §2 loc. cit.). The correspondence is generally complicated. However, there are two cases where the correspondence is known to be classical: the zero Higgs field and the geometric case. In the former case this correspondence is reduced to a classical result of Cartier (see Remark 2.2 loc. cit. and Theorem 5.1 [@Ka]), while in the latter case the Higgs bundle corresponding to the Gauß-Manin system of a geometric family is obtained by taking gradings of the de Rham bundle with respect to either the Hodge filtration or the conjugate filtration by Katz's $p$-curvature formula (see Remark 3.19 loc. cit. and Theorem 3.2 [@Ka1]). Unlike the zero Higgs field case, an explicit construction of the converse direction in the geometric case is still unknown. In the complex case this amounts to solving the Hermitian-Yang-Mills equation (see [@Si] for the compact case), which is transcendental in nature.
The main finding of this note is that one can profit from using the relative Frobenius of a geometric family, which behaves like the Hodge metric of the associated variation of Hodge structures over ${{\mathbb C}}$ in a certain sense. Indeed, we show that one can use it to construct a de Rham subbundle from a Higgs subbundle in the geometric case. Throughout this note $p$ is an odd prime and $n$ is an integer which is less than or equal to $p-2$. Our main results read as follows:
\[theorem 1\] Let $X$ be a smooth scheme over $W=W(k)$ and $f: Y\to X$ a proper smooth morphism as given in Example \[geometric situation\]. Let $(H,\nabla)$ (resp. $(E,\theta)$) be the associated de Rham bundle (resp. Higgs bundle) of degree $n$ to $f$. Then to any Higgs subbundle $(G,\theta)\subset (E,\theta)_0$ in characteristic $p$ one can associate naturally a de Rham subbundle $(H_{(G,\theta)},\nabla)$ of $(H,\nabla)_0$. For a subsystem of Hodge bundles in $(E,\theta)_0$ the Cartier-Katz descent of the associated de Rham subbundle $(H_{(G,\theta)},\nabla)$ to $(G,\theta)$ is $(G,\theta)$ itself.
Our construction is independent of that of Ogus-Vologodsky. It is interesting to compare it with the inverse Cartier transform of Ogus-Vologodsky loc. cit. in the situation of the above theorem. The construction works as well when the base scheme is equipped with a certain logarithmic structure or defined over $W_{n+1}=W_{n+1}(k)$. For a Higgs subsheaf of $(E,\theta)_0$, by which we mean a $\theta$-stable coherent subsheaf in $E_0$, the above construction yields a de Rham subsheaf of $(H,\nabla)_0$. In the most general form, the construction works for a Higgs subsheaf of a Higgs bundle coming from the modulo $p$ reduction of $Gr_{Fil}(H,\nabla)$, where $(H,Fil^{\cdot},\nabla,\Phi)$ is an object of the Faltings category $\mathcal{MF}^{\nabla}_{[0,n]}(X)$ (see §1). We obtain two applications of the construction.
\[theorem 2\] Assume that $X$ is projective over $W_{n+1}$.
- For a subsystem of Hodge bundles or a Higgs subbundle with zero Higgs field $(G,\theta)\subset (E,\theta)_0$, one has the slope inequality $\mu(G)\leq 0$, where $\mu(G)$ is the $\mu$-slope of $G$ with respect to the restriction of an ample divisor of $X$ to $X_0$.
- The following statements are true:
- Let $g_0: C_0\to X_0$ be a morphism of a smooth projective curve $C_0$ to $X_0$ over $k$ which is liftable to a morphism $g: C\to X$ over $W_{n+1}$. Then for any Higgs subbundle $(G,\theta)\subset (E,\theta)_0$, one has $\deg(g_0^*G)\leq 0$.
- Let $C$ be a smooth projective curve in $X$ over $W_{n+1}$. Then the Higgs bundle $(E,\theta)_0$ is Higgs semistable with respect to the $\mu_{C_0}$-slope.
Let $F$ be a real quadratic field such that $p$ is inert in $F$. Let $m\geq 3$ be an integer coprime to $p$. Let $M$ be the smooth scheme over $W$ which represents the fine moduli functor associating to a $W$-algebra $R$ a principally polarized abelian surface over $R$ with a real multiplication ${{\mathcal O}}_F$ and a full symplectic level $m$-structure. Let $S_h\subset M_{0}$ be the Hasse locus, which is known to be a ${{\mathbb P}}^1$-configuration in characteristic $p$ (see Theorem 5.1 [@BG]).
\[normal crossing case\] Let $D$ be an irreducible component in $S_h$. Let $C=\sum C_i$ (may be empty) be a simple normal crossing divisor in $M_{0}$ such that $D+C$ is again of simple normal crossing. If the intersection number $D\cdot C$ is less than or equal to $p-1$, then the curve $D+C$ is $W_2$-unliftable inside the ambient space $M_{1}$.
As a convention, we denote the reduction modulo $p^{i+1}, i\geq 0$ of an object by attaching the subscript $i$. However for a connection or a Higgs field this rule will not be strictly followed for simplicity of notation.
The category $\mathcal{MF}_{[0,n]}^{\nabla}(X)$ {#section on faltings category}
===============================================
In his study of $p$-adic comparison over a geometric base, Faltings has introduced the category $\mathcal{MF}_{[0,n]}^{\nabla}(X)$ in various settings. Its objects are the strongly divisible filtered Frobenius crystals over $X$, which could be considered as the $p$-adic analogue of a variation of ${{\mathbb Z}}$-Hodge structures over a complex algebraic manifold. One should also be aware of the fact that Ogus has developed systematically the category of $F$-$T$-crystals in the book [@O], which is closely related to the category $\mathcal{MF}_{[0,n]}^{\nabla}(X)$ (see particularly §5.3 loc. cit.).
Smooth case
-----------
Let $X$ be a smooth $W$-scheme. A small affine subset $U$ of $X$ is an open affine subscheme $U\subset X$ over $W$ which is étale over ${{\mathbb A}}_W^d$ [^2]. As $X$ is smooth over $W$ there exists an open covering consisting of small affine subsets of $X$. Let $U\subset X$ be a small affine subset. For it one could choose a Frobenius lifting $F_{\hat U}$ on $\hat U$, the $p$-adic completion of $U$. An object in $\mathcal{MF}_{[0,n]}^{\nabla}(\hat U)$ (see Ch. II [@Fa], §3 [@Fa1]) is a quadruple $(H, Fil^{\cdot}, \nabla, \Phi_{F_{\hat U}})$, where
- $(H,Fil^{\cdot})$ is a filtered free ${{\mathcal O}}_{\hat U}$-module with a basis $e_i$ of $Fil^i$, $0\leq i\leq n$.
- $\nabla$ is an integrable connection on $H$ satisfying the Griffiths transversality: $$\nabla(Fil^i)\subset Fil^{i-1} \otimes \Omega^1_{\hat U}.$$
- The relative Frobenius is an ${{\mathcal O}}_{\hat U}$-linear morphism $\Phi_{F_{\hat U}}: F_{\hat U}^*H\to H$ with the strong $p$-divisible property: $\Phi_{F_{\hat U}}(F_{\hat U}^*Fil^i)\subset p^iH$ and $$\sum_{i=0}^{n}\frac{\Phi_{F_{\hat U}}(F_{\hat U}^*Fil^i)}{p^i}=H$$ (a matrix reformulation is given right after this list).
- The relative Frobenius $\Phi_{F_{\hat U}}$ is horizontal with respect to the connection $F_{\hat U}^*\nabla$ on $F_{\hat U}^*H$ and $\nabla$ on $H$.
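Concretely, the strong $p$-divisibility in iii) admits the following matrix reformulation (added here for the reader's convenience; it is not part of the original definition). Using the splitting of the filtration from i) (explained below), choose basis vectors $e^{(i)}_1,\cdots,e^{(i)}_{r_i}$ spanning the $i$-th graded piece, so that $Fil^i$ is spanned by the $e^{(i')}_j$ with $i'\geq i$. Then iii) says that one may write $$\Phi_{F_{\hat U}}(F_{\hat U}^*e^{(i)}_j)=p^i\,v^{(i)}_j\quad\text{with}\quad v^{(i)}_j\in H,$$ and that the collection $\{v^{(i)}_j\}_{i,j}$ is again a basis of $H$; equivalently, the matrix expressing the $v^{(i)}_j$ in terms of the $e^{(i)}_j$ lies in $GL({{\mathcal O}}_{\hat U})$.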
The filtered-freeness in i) means that the filtration $Fil^{\cdot}$ on $H$ has a splitting such that each $Fil^i$ is a direct sum of several copies of ${{\mathcal O}}_{\hat U}$. The pull-back connection $F_{\hat
U}^*\nabla$ on $F_{\hat U}^*H$ is the composite $$\begin{aligned}
F_{\hat U}^*H=F_{\hat U}^{-1}H\otimes_{F_{\hat U}^{-1}{{\mathcal O}}_{\hat U}}{{\mathcal O}}_{\hat
U} &\stackrel{F_{\hat U}^{-1}\nabla\otimes id}{\longrightarrow}&
(F_{\hat U}^{-1}H\otimes F_{\hat U}^{-1}\Omega^1_{\hat
U})\otimes_{F_{\hat U}^{-1}{{\mathcal O}}_{\hat U}}{{\mathcal O}}_{\hat U} \\
&=& F_{\hat U}^*H\otimes F_{\hat U}^*\Omega^1_{\hat U} \stackrel{id\otimes
dF_{\hat U}}{\longrightarrow}F_{\hat U}^*H\otimes \Omega^1_{\hat U}.\end{aligned}$$ The horizontal condition iv) is expressed by the commutativity of the diagram $$\xymatrix{
F_{\hat U}^*H \ar[d]_{F_{\hat U}^*\nabla} \ar[r]^{\Phi_{F_{\hat U}}} & H \ar[d]^{\nabla} \\
F_{\hat U}^*H\otimes
\Omega^1_{\hat U}\ar[r]^{\Phi_{F_{\hat U}}\otimes id} & H\otimes
\Omega^1_{\hat U}. }$$ As there is no canonical Frobenius lifting on $\hat U$, one must know how the relative Frobenius changes under another Frobenius lifting. This is expressed by a Taylor formula. Let $\hat U={\mathrm{Spf}}R$ and $F: R\to R$ a Frobenius lifting. Choose a system of étale local coordinates $\{t_1,\cdots,t_d\}$ of $U$ (namely fix an étale map $U\to {\mathrm{Spec}}(W[t_1,\cdots,t_d])$). Let $R'$ be any $p$-adically complete, $p$-torsion free $W$-algebra, equipped with a Frobenius lifting $F': R'\to R'$ and a morphism of $W$-algebras $\iota: R\to R'$. Then the relative Frobenius $\Phi_{F'}:
F'^*(\iota^*H)\to \iota^*H$ is the composite $$F'^*\iota^*H\stackrel{\alpha}{\cong}
\iota^*F^*H\stackrel{\iota^*\Phi_{F}}{\longrightarrow} \iota^*H,$$ where the isomorphism $\alpha$ is given by the formula: $$\alpha(e\otimes 1)=\sum_{\underline i}\nabla_{\partial}^{\underline
i}(e)\otimes \frac{z^{\underline i}}{\underline{i}!}.$$ Here $\underline{i}=(i_1,\cdots,i_d)$ is a multi-index, and $z^{\underline i}=z_1^{i_1}\cdots z_d^{i_d}$ with $z_i=F'\circ
\iota(t_i)-\iota\circ F(t_i), 1\leq i\leq d$, and $\nabla_{\partial}^{\underline i}=\nabla_{\partial_{t_1}}^{i_1}\cdots\nabla_{\partial_{t_d}}^{i_d}$.
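For instance (a minimal illustration of the formula, not an additional statement), in the case $d=1$ with coordinate $t$ and $z=F'\circ\iota(t)-\iota\circ F(t)$, the formula reads $$\alpha(e\otimes 1)=\sum_{i\geq 0}\nabla_{\partial_t}^{i}(e)\otimes\frac{z^{i}}{i!},$$ which makes sense $p$-adically: any two Frobenius liftings agree modulo $p$, so $z$ is divisible by $p$ and ${\mathrm{ord}}_p(z^i/i!)\geq i-{\mathrm{ord}}_p(i!)\to\infty$.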
One defines then the category $\mathcal{MF}_{[0,n]}^{\nabla}(X)$ (see Theorem 2.3 [@Fa]). Its object will be denoted again by a quadruple $(H,Fil^{\cdot},\nabla,\Phi)$. Here $(H,Fil^\cdot,\nabla)$ is a locally filtered free ${{\mathcal O}}_X$-module with an integrable connection satisfying the Griffiths transversality. For each small affine $U\subset X$ and each choice $F_{\hat U}$ of Frobenius liftings on $\hat U$, $\Phi$ defines $\Phi_{F_{\hat U}}: F_{\hat
U}^*H|_{\hat U}\to H|_{\hat U}$ such that it, together with the restriction of $(H,Fil^\cdot,\nabla)$ to $\hat U$, defines an object in $\mathcal{MF}_{[0,n]}^{\nabla}(\hat U)$.
\[geometric situation\] Let $f: Y\to X$ be a proper smooth morphism of relative dimension $n\leq p-2$ between smooth $W$-schemes. Assume that the relative Hodge cohomologies $R^if_*\Omega^j_Y$, $i+j=n$, have no torsion. By Theorem 6.2 [@Fa] [^3], the crystalline direct image $R^nf_{*}({{\mathcal O}}_{Y},d)$ is an object in $\mathcal{MF}^{\nabla}_{[0,n]}(X)$.
Logarithmic case
----------------
The category $\mathcal{MF}_{[0,n]}^{\nabla}(X)$ has a logarithmic variant (see IV c) [@Fa], §4 (f) [@Fa90], and §3 [@Fa1]). A generalization of the Faltings category to a syntomic fine logarithmic scheme over $W$ can be found in §2 [@T]. We shall focus only on two special cases: the case of ’having a divisor at infinity’ and the semistable case. In the first case, $X$ is assumed to be a smooth scheme over $W$, and $D\subset X$ a divisor with simple normal crossings relative to $W$, i.e. $D=\cup_{i}D_i$ is the union of smooth $W$-schemes $D_i$ meeting transversally. In the second case, $X$ is assumed to be a regular scheme over $W$ such that *Zariski* locally there is an étale morphism to the affine space ${{\mathbb A}}_W^d$ or ${\mathrm{Spec}}(W[t_1,\cdots,t_{d+1}]/(t_1\cdots
t_{d+1}-p))$ over $W$. We call an open affine subset $U\subset X$ small if there is an étale morphism $U\to {{\mathbb A}}^d_W$ mapping $U\cap
D$ to a union of coordinate hyperplanes (may be empty) in the first case, and if $U$ satisfies one of the two conditions in the second case. In each case one associates a natural fine logarithmic structure to the scheme $X$ such that the structural morphism $X\to
{\mathrm{Spec}}W$ is log smooth (in the former case one equips ${\mathrm{Spec}}W$ with the trivial log structure and in the latter case with the log structure determined by the closed point of ${\mathrm{Spec}}W$). See (1.5) (1) and Examples (3.7) (2) [@K]. Note also that the assumptions made above use the Zariski topology on $X$ instead of the étale topology as in [@K]. The logarithmic crystalline site is then defined for $X\to {\mathrm{Spec}}W$ with the above logarithmic structure (see §5 loc. cit.).
Let $X\to {\mathrm{Spec}}W$ be as above. Compared with the definition of $\mathcal{MF}_{[0,n]}^{\nabla}(X)$ for the smooth case, we shall take the following modifications for the logarithmic analogue. In the first case, for a small affine open subset $U\subset X$ a Frobenius lifting on $\hat U$ shall respect the divisor $\widehat{U\cap D}\subset \hat U$ (called a logarithmic Frobenius lifting by Faltings), and $\nabla$ is a logarithmic integrable connection $$\nabla(Fil^i)\subset Fil^{i-1}\otimes \Omega^1_{\hat U}(\log
\widehat{U\cap D}).$$ In the second case, for an affine open subset $U\subset X$ which meets the singularities of $X_0$ it is *necessary* to consider a closed $W$-embedding $i: U\hookrightarrow Z$ in the category of logarithmic schemes together with a logarithmic Frobenius lifting on $Z$, by which we mean a Frobenius lifting respecting the logarithmic structure. In the current special case $Z$ can be chosen to be smooth over $W$. Write $J$ for the PD-ideal of $i$ and $D^{\log}_U(Z)$ the logarithmic PD-envelope of $U$ in $Z$ (see Proposition 5.3 [@K]). Denote by $\widehat{D^{\log}_U(Z)}$ the $p$-adic completion of $D^{\log}_U(Z)$. Then $H$ is a free ${{\mathcal O}}_{\widehat{D^{\log}_U(Z)}}$-module and the decreasing filtration $Fil^{\cdot}$ on $H$ is compatible with the PD-filtration $J^{[\cdot]}$ on ${{\mathcal O}}_{\widehat{D^{\log}_U(Z)}}$ and is filtered free (see Page 119 [@Fa1]). For the formal logarithmic scheme $\widehat{D^{\log}_U(Z)}$ let $\Omega^1_{\widehat{D^{\log}_U(Z)}}$ be the sheaf of the formal relative logarithmic differentials on $\widehat{D^{\log}_U(Z)}$ (see (1.7) [@K]). For a choice of a logarithmic Frobenius lifting $F_Z$ on $Z$ let $F_{\widehat{D^{\log}_U(Z)}}$ be the induced morphism on $\widehat{D^{\log}_U(Z)}$. Then by replacing $\hat U$ in the definition of $\mathcal{MF}^{\nabla}_{[0,n]}(\hat U)$ in §1 with $\widehat{D^{\log}_U(Z)}$ we get the description of the local category $\mathcal{MF}^{\nabla}_{[0,n]}(\widehat{D^{\log}_U(Z)})$. Taking a small affine covering ${{\mathcal U}}=\{U\}$ of $X$ and a family of closed embeddings $i: U\to Z$ in the second case, one defines the global category $\mathcal{MF}_{[0,n]}^{\nabla}(X)$ (see §4 (f) [@Fa90], §2 [@T]).
One basic example of objects in the category $\mathcal{MF}_{[0,n]}^{\nabla}(X)$ is provided by the result of Faltings (Theorem 6.2 [@Fa], Remark §3, page 124 [@Fa1]): For a $W$-morphism $f: Y\to X$ which is proper, log-smooth and generically smooth at infinity, if the relative Hodge cohomology $R^if_{*}\Omega^j_{Y,\log},i+j=n$ has no torsion, then the direct image $R^nf_*({{\mathcal O}}_{Y},d)$ of the constant filtered Frobenius logarithmic crystal of $Y$ is an object in the category $\mathcal{MF}_{[0,n]}^{\nabla}(X)$.
One needs also a logarithmic version of the Taylor formula for the same purpose as in the smooth case. For that we refer the reader to the formula (6.7.1) in [@K]. In the semistable case we make it more explicit as follows. For $U={\mathrm{Spec}}R$ étale over ${\mathrm{Spec}}W[t_1,\cdots,t_{d+1}]/(\prod_{1\leq i\leq d+1}t_i-p)$, choose a surjection $R'\twoheadrightarrow R$ of $W$-algebras with $R'$ log smooth over $W$ and a logarithmic Frobenius lifting $F'$ on the $p$-adic completion $\hat {R'}$. Assume $\{d\log
x_{r}\}$ forms a basis for $\Omega^{1}_{\log}(R')$. For another choice $R''\twoheadrightarrow R$ with the following commutative diagram $$\xymatrix{
R' \ar[d]_{\iota} \ar[r]^{} & R \\
R'' \ar[ur]_{} },$$ and $F'': \hat{R''}\to \hat {R''}$ a logarithmic Frobenius lifting, we let $$u_i=F''\circ \iota(x_i)/\iota\circ F'(x_i), 1\leq i\leq r$$ and $\nabla^{\log}_{\partial_{ x_i}}$ be the differential operator defined in Theorem 6.2 (iii) [@K]. Then $\alpha:
F''^*(\iota^*H)\to \iota^*F'^*H$, given by the Taylor formula $$\alpha(e\otimes 1)=\sum_{\underline{i}=(i_1,\cdots,i_{r})\in {{\mathbb N}}^{r}}\Big(\prod_{1\leq l\leq r,\ 0\leq j\leq i_{l}-1}(\nabla^{\log}_{\partial_{x_l}}-j)(e)\Big)\otimes \Big(\prod_{1\leq l\leq r}\frac{(u_l-1)^{i_l}}{i_l!}\Big),$$ is an isomorphism.
The analogue of the category $\mathcal{MF}^{\nabla}_{[0,n]}(X)$ exists when $X$ is smooth over a truncated Witt ring. In this case $H$ is of $p$-torsion. So the formulation of strong divisibility as stated in iii) has to be modified. See §2 c)-d) [@Fa]. In the logarithmic case one finds in §2.3 [@T] the corresponding modification. Other conditions of the category can be obtained by taking reduction directly. In the following we shall abuse notation and write $\mathcal{MF}^{\nabla}_{[0,n]}(X)$, as defined for (log) smooth $X$ over $W$, also for the corresponding category when $X$ is (log) smooth over a truncated Witt ring.
The construction {#section on construction}
================
Let $X$ be a smooth scheme over $W$ and $(H,Fil^{\cdot},\nabla,\Phi)$ an object in $\mathcal{MF}^{\nabla}_{[0,n]}(X)$. Let $(E,\theta)=Gr_{Fil}(H,\nabla)$ be the associated Higgs bundle and $(G,\theta)\subset (E,\theta)_0$ a Higgs subbundle. Fix a small affine open covering ${{\mathcal U}}=\{U\}$ of $X$ as in §1. We begin by describing the construction of the de Rham subbundle $(H_{(G,\theta)},\nabla)\subset (H,\nabla)_0$. We first notice that there is a natural isomorphism of ${{\mathcal O}}_{X_0}$-modules: $$\frac{1}{[p^i]}: p^iH/p^{i+1}H\to H_0.$$ This follows from the snake lemma applied to the commutative diagram of ${{\mathcal O}}_{X_i}$-modules: $$\xymatrix{ 0\ar[r]&pH/p^{i+1}H\ar@{=}[d]\ar[r]&H/p^{i+1}H\ar@{=}[d]\ar[r]^{p^i}&p^iH/p^{i+1}H\ar[r]&0 &\\
0\ar[r]&pH/p^{i+1}H\ar[r]&H/p^{i+1}H\ar[r]&H/pH\ar[u]_{p^i}\ar[r]&0&
}$$
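Explicitly, $\frac{1}{[p^i]}$ sends the class of $p^ix$ modulo $p^{i+1}H$ to the class of $x$ modulo $pH$: $$\frac{1}{[p^i]}\big(p^ix \bmod p^{i+1}H\big)=x \bmod pH,$$ which is well defined precisely because $H$ has no $p$-torsion: $p^i(x-x')\in p^{i+1}H$ forces $x-x'\in pH$.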
Local construction
------------------
Take $U\in {{\mathcal U}}$ and a Frobenius lifting $F_{\hat U}: \hat U\to \hat
U$. So we get an object $(H_U,Fil^{\cdot}_U,\nabla_U,\Phi_{F_{\hat
U}})\in \mathcal{MF}^{\nabla}_{[0,n]}(\hat U)$ by the restriction. For simplicity of notation, we omit the appearance of $U$ in this paragraph. Consider the composite $$\frac{\Phi_{F_{\hat U}}}{[p^i]}: F_{\hat U}^*Fil^iH
\stackrel{\Phi_{F_{\hat U}}}{\longrightarrow}p^iH\twoheadrightarrow
p^iH/p^{i+1}H\stackrel{\frac{1}{[p^i]}}{\longrightarrow}H_0.$$ By the property that $\Phi_{F_{\hat U}}(F_{\hat
U}^*Fil^{i+1})\subset p^{i+1}H$ the above map factors through the quotient $$F_{\hat U}^*Fil^iH\twoheadrightarrow F_{\hat U}^*Fil^iH/F_{\hat
U}^*(Fil^{i+1}H+pFil^iH).$$ By the filtered-freeness in i) one has $Fil^{i+1}H\cap
pFil^iH=pFil^{i+1}H$. So one obtains an isomorphism $$Fil^iH/(Fil^{i+1}H+pFil^iH)\cong E^{i,n-i}/pE^{i,n-i},$$ hence an ${{\mathcal O}}_{U_0}$-morphism $$\frac{\Phi_{F_{\hat U}}}{[p^i]}: F_{U_0}^*(E^{i,n-i})_0\to H_0.$$ It follows from the strong $p$-divisibility (see §1 iii)) that the map $$\tilde{\Phi}_{F_{\hat
U}}}{[p^i]}: F_{U_0}^*E_0\to H_0$$ is an isomorphism. For another choice of Frobenius lifting $F'_{\hat
U}$ over $\hat U$, write $z_i:=F_{\hat U}(t_i)-F'_{\hat U}(t_i)$. We have the following
\[taylor formula using higgs field over the same open set\] For a multi-index $\underline{j}=(j_1,\cdots,j_d)$, write $|\underline j|=\sum_{l=1}^dj_l$ and $\theta_{\partial}^{\underline
j}=\theta_{\partial_{
t_1}}^{j_1}\cdots\theta_{\partial_{t_d}}^{j_d}$. Then for a local section $e\in (E^{i,n-i})_0(U_0)$, one has the formula $$\frac{\Phi_{F_{\hat U}}}{[p^i]}(e\otimes 1)-\frac{\Phi_{F'_{\hat
U}}}{[p^i]}(e\otimes 1)=\sum_{|\underline
j|=1}^{i}\frac{\Phi_{F'_{\hat U}}}{[p^{i-|\underline
j|}]}(\theta_{\partial}^{\underline j}(e)\otimes
1)\otimes\frac{z^{\underline j}}{p^{|\underline j|}\underline{j}!}$$
First of all, as each $z_j, 1\leq j\leq d$ is divisible by $p$ and $i\leq n\leq p-2$, $\frac{z^{\underline j}}{p^{|\underline
j|}\underline{j}!}$ in the above formula is a well-defined element in ${{\mathcal O}}_{U_0}$. Let $\tilde e\in Fil^iH$ be a lifting of $e$. Applying the Taylor formula over ${{\mathcal O}}_{\hat U}$ in the situation that $R'=R$ and $\iota=id$, we get $$\Phi_{F_{\hat U}}(\tilde e\otimes1)=\sum_{|\underline
j|=0}^{\infty}\Phi_{F_{\hat
U}^{\prime}}(\nabla_{\partial}^{\underline j}(\tilde e)\otimes
1)\otimes\frac{z^{\underline j}}{\underline{j}!}.$$ We observe ${\mathrm{ord}}_p(\frac{p^{|\underline j|}}{\underline{j}!})\geq p-1$ for $|\underline j|\geq p$ and ${\mathrm{ord}}_p(\frac{p^{|\underline j|}}{\underline{j}!})=|\underline j|$ for $|\underline j|\leq p-1$; both estimates follow from Legendre's formula ${\mathrm{ord}}_p(m!)=\frac{m-s_p(m)}{p-1}$, with $s_p(m)$ the sum of the $p$-adic digits of $m$. Then the above formula can be written as $$\Phi_{F_{\hat U}}(\tilde e\otimes1)-\Phi_{F'_{\hat U}}(\tilde
e\otimes1)=\sum_{|\underline j|=1}^{i}\Phi_{F_{\hat
U}^{\prime}}(\nabla_{\partial}^{\underline j}(\tilde e)\otimes
1)\otimes\frac{z^{\underline j}}{\underline{j}!}+\sum_{|\underline
j|\geq i+1}\Phi_{F_{\hat U}^{\prime}}(\nabla_{\partial}^{\underline
j}(\tilde e)\otimes 1)\otimes\frac{z^{\underline
j}}{\underline{j}!}.$$ As $i+1\leq p-1$, the above estimate on the $p$-adic valuation implies that the second term on the right side is an element of $p^{i+1}H$. By the Griffiths transversality, $\nabla_{\partial}^{\underline j}(\tilde e)\in Fil^{i-|\underline j|}H$. Write $\tilde e_0=\tilde e\mod p$. Thus we have the following formula, which takes values in $H_0(U_0)$: $$\frac{\Phi_{F_{\hat U}}}{[p^i]}( e\otimes1)-\frac{\Phi_{F'_{\hat
U}}}{[p^i]}( e\otimes1)=\sum_{|\underline
j|=1}^{i}\frac{\Phi_{F_{\hat U}^{\prime}}}{[p^{i-|\underline
j|}]}(\nabla_{\partial}^{\underline j}(\tilde e_0)\otimes
1)\otimes\frac{z^{\underline j}}{p^{|\underline j|}\underline{j}!}.$$ Regarding $\frac{\Phi_{F_{\hat U}^{\prime}}}{[p^{i-|\underline
j|}]}$ as a morphism between sheaves of abelian groups $$F_{U_0}^{-1}(Fil^{i-|\underline j|}H)_0\to H_0,$$ one has the following commutative diagram: $$\xymatrix{
F_{U_0}^{-1}(Fil^{i}H)_0\ar[rr]^{F_{U_0}^{-1}\nabla_{\partial}^{\underline
j}}\ar[d]_{pr}& &F_{U_0}^{-1}(Fil^{i-|\underline
j|}H)_0\ar[d]^{pr}\ar[rr]^{\frac{\Phi_{F_{\hat U}^{\prime}}}{[p^{i-|\underline j|}]}} && H_0 \\
F_{U_0}^{-1}(E^{i,n-i})_0\ar[rr]_{F_{U_0}^{-1}\theta_{\partial}^{\underline j}} && F_{U_0}^{-1}(E^{i-|\underline j|,n-i+|\underline j|})_0\ar[urr]_{\frac{\Phi_{F_{\hat U}^{\prime}}}{[p^{i-|\underline j|}]}} &&}$$ It implies that in the previous formula the connection can be replaced by the Higgs field. Hence the lemma follows.
\[taylor formula for tilde phi\] Notation as above. For a local section $e$ of $E_0(U_0)$, one has the following formula: $$\tilde{\Phi}_{F_{\hat U}}(e\otimes 1)-\tilde{\Phi}_{F'_{\hat
U}}(e\otimes 1)=\sum_{|\underline j|=1}^{n}\tilde{\Phi}_{F'_{\hat
U}}(\theta_{\partial}^{\underline j}(e)\otimes 1)\otimes
\frac{z^{\underline j}}{p^{|\underline j|}\underline{j}!}.$$
Write $e=\sum_{i=0}^{n}e_i$ with $e_i\in (E^{i,n-i})_0$. Lemma \[taylor formula using higgs field over the same open set\] implies $$\tilde{\Phi}_{F_{\hat U}}(e\otimes 1)-\tilde{\Phi}_{F'_{\hat
U}}(e\otimes 1)=\sum_{i=0}^{n}\sum_{|\underline
j|=1}^{i}\frac{\Phi_{F'_{\hat U}}}{[p^{i-|\underline
j|}]}(\theta_{\partial}^{\underline j}(e_i)\otimes
1)\otimes\frac{z^{\underline j}}{p^{|\underline j|}\underline{j}!}.$$ As $\theta_{\partial}^{\underline
j}(e)=\sum_{i=0}^{n}\theta_{\partial}^{\underline j}(e_i)$ and $\theta_{\partial}^{\underline j}(e_i)=0$ for $|\underline j|\geq
i+1$, the above summation is equal to $$\sum_{|\underline j|=1}^{n}[\sum_{i=|\underline
j|}^{n}\frac{\Phi_{F'_{\hat U}}}{[p^{i-|\underline
j|}]}(\theta_{\partial}^{\underline j}(e_i)\otimes
1)]\otimes\frac{z^{\underline j}}{p^{|\underline j|}\underline{j}!}
=\sum_{|\underline j|=1}^{n}\tilde{\Phi}_{F'_{\hat
U}}(\theta_{\partial}^{\underline j}(e)\otimes 1)\otimes
\frac{z^{\underline j}}{p^{|\underline j|}\underline{j}!}.$$
The above proposition justifies the following
\[local associated bundle\] For the Higgs subbundle $(G,\theta)\subset (E,\theta)_0$, the locally associated subbundle $S_{U_0}(G)\subset H_0$ over $U_0\subset X_0$ is defined to be $\tilde{\Phi}_{F_{\hat U}}(F_{U_0}^*G|_{U_0})$, where $U$ is a small affine subset of $X$ with closed fiber $U_0$ and $F_{\hat U}$ is a Frobenius lifting over $\hat U$.
Gluing
------
Take $U,V\in {{\mathcal U}}$, and Frobenius liftings $F_{\hat U}, F_{\hat V}, F_{\widehat{U\cap V}}$ on $\hat U,\hat V,\widehat{U\cap V}$ respectively. We are going to show the following equality of subbundles in $H_0|_{U_0\cap V_0}$: $$S_{U_0}(G)|_{U_0\cap
V_0}(G)=S_{V_0}(G)|_{U_0\cap V_0}.$$ The following lemma is a variant of Lemma \[taylor formula using higgs field over the same open set\]:
\[taylor formula using higgs field over the inclusion of open sets\] Write $z_i=F_{\hat U}\circ \iota(t_i)-\iota\circ
F_{\widehat{U\cap V}}(t_i)$, where $\iota:\widehat{ U\cap
V}\hookrightarrow \hat U$ is the natural inclusion. Then for a local section $e\in (E^{i,n-i})_0(U_0)$, one has the formula $$\iota_0^*[\frac{\Phi_{F_{\hat U}}}{[p^i]}(e\otimes
1)]-\frac{\Phi_{F_{\widehat{U\cap V}}}}{[p^i]}[\iota_0^*(e)\otimes
1]=\sum_{|\underline j|=1}^{i}\frac{\Phi_{F_{\widehat{U\cap
V}}}}{[p^{i-|\underline
j|}]}(\iota_0^*[\theta_{\partial}^{\underline j}(e)]\otimes
1)\otimes\frac{z^{\underline j}}{p^{|\underline j|}\underline{j}!}$$
The proof is the same as in Lemma \[taylor formula using higgs field over the same open set\] except that we shall apply the Taylor formula in the situation that $R'$ is the one with ${\mathrm{Spf}}(R')=\widehat {U\cap V}$, $F'=F_{\widehat{U\cap V}}$ and $\iota: R\to R'$ is the one induced by the natural inclusion.
A formula similar to that of Proposition \[taylor formula for tilde phi\] shows that $S_{U_0}(G)|_{U_0\cap V_0}=S_{U_0\cap
V_0}(G)$. By symmetry we also have the second equality. The open covering ${{\mathcal U}}$ of $X$ gives rise to an open covering ${{\mathcal U}}_0$ of $X_0$ by reduction modulo $p$. Thus we glue the locally associated bundles $\{S_{U_0}(G)\}_{U_0\in {{\mathcal U}}_0}$ into a subbundle $H_{(G,\theta)}\subset H_0$, which we call the *associated subbundle to* $(G,\theta)$. We remark that the construction is independent of the choice of a small affine open covering ${{\mathcal U}}$ of $X$ as we can always refine such a covering and Lemma \[taylor formula using higgs field over the inclusion of open sets\] shows the invariance of the construction under a refinement.
Horizontal property
-------------------
We ought to show that the associated subbundle $H_{(G,\theta)}\subset H_0$ is actually $\nabla$-invariant. Let $F_{\hat U}: \hat U\to \hat U$ be a Frobenius lifting over $\hat U$. Then one can write $\frac{\partial F_{\hat U}}{\partial t_j}=pf_j$ for $f_j\in
{{\mathcal O}}_{\hat U}$. Here is a lemma
\[commutation formula using higgs field\] For a local section $e\in (E^{i,n-i})_0(U_0)$, one has the formula $$\nabla_{\partial_{t_j}}[\frac{\Phi_{F_{\hat
U}}}{[p^{i}]}(e\otimes1)]=\frac{\Phi_{F_{\hat
U}}}{[p^{i-1}]}[\theta_{\partial_{t_j}}(e)\otimes f_{j,0}].$$
Let $\tilde e\in Fil^iH_U$ be a lifting of $e$. The horizontal property iv) yields the following commutation formula $$\nabla_{\partial_{t_j}}[\Phi_{F_{\hat U}}(\tilde e\otimes
1)]=\Phi_{F_{\hat U}}[\nabla_{\partial_{t_j}}(\tilde e)\otimes
1]\otimes \frac{\partial F_{\hat U}}{\partial t_j}.$$ Thus we have a formula in characteristic $p$: $$\nabla_{\partial_{t_j}}[\frac{\Phi_{F_{\hat
U}}}{[p^{i}]}(e\otimes1)]=\frac{\Phi_{F_{\hat
U}}}{[p^{i-1}]}[\nabla_{\partial_{t_j}}(e)\otimes 1]\otimes f_{j,0}.$$ Finally by the same reason as given in the proof of Lemma \[taylor formula using higgs field over the same open set\] we could replace the connection in the right side by the Higgs field, and hence obtain the lemma.
The associated subbundle $H_{(G,\theta)}$ to the Higgs subbundle $(G,\theta)\subset (E,\theta)_0$ is a de Rham subbundle of $(H,\nabla)_0$.
As the question is local, it suffices to show the invariance property of the locally associated subbundle $S_{U_0}(G)\subset
H_0|_{U_0}$. Lemma \[commutation formula using higgs field\] implies that for a local section $e\in G(U_0)$, $$\nabla_{\partial_{t_j}}[\tilde{\Phi}_{F_{\hat
U}}(e\otimes1)]=\tilde{\Phi}_{F_{\hat
U}}[\theta_{\partial_{t_j}}(e)\otimes f_{j,0}],$$ which is again an element of $S_{U_0}(G)$ by the $\theta$-invariance of $G$. As $\{\partial_{t_j}\}_{1\leq j\leq d}$ spans $\textrm{Der}_{k}({{\mathcal O}}_{U_0},{{\mathcal O}}_{U_0})$, we have shown that $\nabla(S_{U_0}(G))\subset S_{U_0}(G)\otimes \Omega^1_{U_0}$ as claimed.
Variants
--------
By examining the above construction, one finds immediately that it works as well for a smooth scheme $X$ over $W_{n+1}$. Also it is immediate to see that the same construction applies for a coherent subobject in $(E,\theta)_0$. Namely, for a Higgs subsheaf of $(E,\theta)_0$, we obtain a de Rham subsheaf of $(H,\nabla)_0$ from the construction. A similar construction also works in the logarithmic case. In the case of having a divisor at infinity, one simply replaces the Frobenius liftings and the integrable connection in the above construction with the logarithmic Frobenius liftings and the logarithmic integrable connection. In the semistable case, for a closed embedding $U\hookrightarrow Z$ as in §1.2, we replace the local operators $\frac{\Phi_{F_{\hat U}}}{[p^i]}$ in the smooth case with the reduction of the operator $ \displaystyle{
\frac{\Phi_{F_{\widehat{D^{\log}_U(Z)}}}}{[p^i]}}$ modulo the PD-ideal $J$, and use the Taylor formula of §1.2 in the proofs. The resulting construction yields a logarithmic de Rham subsheaf of $(H,\nabla)_0$ for any logarithmic Higgs subsheaf of $(E,\theta)_0$.
Basic properties
----------------
Let $X$ be a smooth (resp. log smooth) scheme over $W$ (resp. $W_{n+1}$) as above, and $(H,Fil^{\cdot},\nabla,\Phi)\in
\mathcal{MF}^{\nabla}_{[0,n]}(X)$. Let $(E,\theta)=Gr_{Fil}(H,\nabla)$ be the associated Higgs bundle, and for a Higgs subbundle $(G,\theta)\subset (E,\theta)_0$, $(H_{(G,\theta)},\nabla)\subset (H,\nabla)_0$ the associated de Rham subbundle by the previous construction. It is not difficult to check the following properties:
\[basic properties\] The following statements hold:
- The construction is compatible with pull-backs. Namely, for $f$ a morphism between smooth (resp. log smooth) schemes over $W$ (resp. $W_{n+1}$), one has $$(H_{f^*(G,\theta)},\nabla)=f^*(H_{(G,\theta)},\nabla).$$
- The construction is compatible with direct sum and tensor product.
One can make our construction into a functor. First of all, one makes a category as follows: An object in this category is a Higgs subbundle of an $(E,\theta)_0$, which is the modulo $p$ reduction of $Gr_{Fil}(H,\nabla)$ where $(H,Fil^{\cdot},\nabla)$ comes from an object in $\mathcal{MF}^{\nabla}_{[0,n]}(X)$ for an $n\leq p-2$. The morphisms are required to be inclusions of Higgs subbundles in the same Higgs bundle $(E,\theta)_0$. One defines the parallel category on the de Rham side. These categories have direct sums and tensor products. Proposition \[basic properties\] ii) says that the functor respects direct sum and tensor product. Summarizing the above discussions, we have the following
Let $X$ be as above and $(H,Fil^{\cdot},\nabla,\Phi)$ an object in $\mathcal{MF}^{\nabla}_{[0,n]}(X)$. Let $(E,\theta)=Gr_{Fil}(H,\nabla)$ be the associated Higgs bundle. Then one associates *naturally* to a Higgs subbundle of $(E,\theta)_0$ a de Rham subbundle of $(H,\nabla)_0$.
In the following let $X$ be a smooth scheme over $W$ or $W_{n+1}$. The next result relates our construction in the zero Higgs field case with the Cartier descent (see Theorem 5.1 [@Ka]).
\[corollary on the canonical connection\] If $(G,0)\subset (E,\theta)_0$ is a Higgs subbundle with zero Higgs field, then one has an isomorphism of vector bundles with integrable connection $$\tilde \Phi:
(F_{X_0}^*G,\nabla_{can})\stackrel{\cong}{\longrightarrow}
(H_{(G,0)},\nabla|_{H_{(G,0)}}),$$ where $\nabla_{can}$ is the canonical connection associated to a Frobenius pull-back vector bundle.
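Recall (a standard fact, recorded here for the reader's convenience) that the canonical connection on $F_{X_0}^*G={{\mathcal O}}_{X_0}\otimes_{F_{X_0}^{-1}{{\mathcal O}}_{X_0}}F_{X_0}^{-1}G$ is determined by $$\nabla_{can}(f\otimes F_{X_0}^{-1}e)=df\otimes F_{X_0}^{-1}e,$$ so that the sections $1\otimes F_{X_0}^{-1}e$ are horizontal; Cartier descent (Theorem 5.1 [@Ka]) recovers $G$ from $(F_{X_0}^*G,\nabla_{can})$ via the sheaf of horizontal sections.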
This is a direct consequence of the construction of the subbundle $H_{(G,0)}$ and the formula in Proposition \[taylor formula for tilde phi\] in the case of $\theta=0$. Note also that $\{\tilde{\Phi}(e_i\otimes 1)\}$, where $\{e_i\}$ runs through a local basis of $G$, forms a horizontal basis of $S_{U_0}(G)$, which follows directly from the formula in Lemma \[commutation formula using higgs field\] in the case of $\theta=0$.
Cartier-Katz descent
--------------------
Let $(H,Fil^{\cdot},\nabla,\Phi)$ be a geometric one, namely it comes from Example \[geometric situation\]. Then $H_0$ is equipped with the conjugate filtration $0=F^{n+1}_{con}\subset
F^{n}_{con}\subset\cdots\subset F^0_{con}=H_0$, which is horizontal with respect to the Gauß-Manin connection (see §3 in [@Ka]). For a subbundle $W\subset H_0$ we put $Gr_{F_{con}}(W)=\bigoplus_{q=0}^{n} \frac{W\cap F_{con}^{q}}{W\cap
F_{con}^{q+1}}$. The $p$-curvature $\psi_{\nabla}$ of $\nabla$ defines the $F$-Higgs bundle $$\psi_{\nabla}:Gr_{F_{con}}(H_0)\to Gr_{F_{con}}(H_0)\otimes
F_{X_0}^*\Omega_{X_0}.$$ As a reminder to the reader, we recall the definition of *$F$-Higgs bundle*: an $F$-Higgs bundle over a base $C$, which is defined over $k$, is a pair $(E',\theta')$ where $E'$ is a vector bundle over $C$, and $\theta'$ is a bundle morphism $E'\to E'\otimes
F_C^*\Omega_C$ with the integrability property $\theta'\wedge\theta'=0$. The following lemma is a simple consequence of Katz's $p$-curvature formula (see Theorem 3.2 [@Ka1]).
\[grading of conjugate filtration\] Let $(W,\nabla)$ be a de Rham subbundle of $(H,\nabla)_0$. Then the $F$-Higgs subbundle $(Gr_{F_{con}}(W),\psi_{\nabla}|_{Gr_{F_{con}}(W)})$ defines a Higgs subbundle of $(E,\theta)_0$ by the Cartier descent.
We call the above Higgs subbundle the *Cartier-Katz descent* of $(W,\nabla)$. We return to the discussion of the associated de Rham subbundle $(H_{(G,\theta)},\nabla)\subset (H,\nabla)_0$ with $(G,\theta)\subset (E,\theta)_0$. Following a terminology of Simpson (see [@Si]) we shall call a Higgs subbundle $(G,\theta)$ with the property $G=\oplus_{i=0}^{n}(G\cap (E^{i,n-i})_0)$ a *subsystem of Hodge bundles*. We have the following
\[cartier-katz descent special case\] Let $(G,\theta)\subset (E,\theta)_0$ be a subsystem of Hodge bundles. Then the Cartier-Katz descent of $(H_{(G,\theta)},\nabla)$ is equal to $(G,\theta)$.
First we recall that the relative Cartier isomorphism defines an isomorphism $${{\mathcal C}}: Gr_{F_{con}}H_0\stackrel{\cong}{\longrightarrow} F_{X_0}^*E_0.$$ We need to show that it induces an isomorphism $$Gr_{F_{con}}(H_{(G,\theta)})\cong F_{X_0}^*G.$$ Write $G^{i,n-i}=G\cap (E^{i,n-i})_0$. Then $G=\oplus_{i=0}^{n}G^{i,n-i}$. Since over $U_0$ the composite $$F^*_{U_0}E^{i,n-i}_0|_{U_0}\stackrel{\frac{\Phi_{F_{\hat U}}}{[p^i]}}{\longrightarrow} F_{con}^{n-i}H_0|_{U_0}\twoheadrightarrow Gr_{F_{con}}^{n-i}H_0|_{U_0}$$ is the inverse relative Cartier isomorphism ${{\mathcal C}}^{-1}|_{U_0}$ over $U_0$, it follows from the local construction of $H_{(G,\theta)}$ that $${{\mathcal C}}^{-1}|_{U_0}(F^*_{U_0}G^{i,n-i}|_{U_0})\stackrel{\cong}{\longrightarrow} Gr_{F_{con}}^{n-i}H_{(G,\theta)}|_{U_0}.$$ This implies the result.
The above proof implies also the equalities $$H_{(G^{\leq
i},\theta)}=H_{(G,\theta)}\cap F_{con}^{n-i}, 0\leq i\leq n,$$ where $G^{\leq i}$ is the Higgs subbundle $\oplus_{q\leq i}G^{q,n-q}$ of $G$.
\[grading of hodge filtration\] The grading of $(H_{(G,\theta)},\nabla)$ with respect to the Hodge filtration defines a Higgs subbundle of $(E,\theta)_0$ which is in general not $(G,\theta)$. In the case that they are equal and $X$ is proper over $W$, $(G,\theta)$ defines a $p$-torsion subrepresentation of $\pi^{arith}_1(X^0)$, the étale fundamental group of the generic fiber $X^0$ of $X$, implied by a result of Faltings (see Theorem 2.6\* [@Fa]). A similar remark has appeared in §4.6 [@OV].
Applications
============
Higgs semistability
-------------------
In this paragraph $X$ is assumed to be smooth and projective over $W_{n+1}$ with connected closed fiber $X_0$ over $k$. Fix an ample divisor $D$ on $X$. Recall that the $\mu$-slope of a torsion free coherent sheaf $Z$ on $X_0$ is defined to be $$\mu(Z)=\frac{c_1(Z)\cdot D_0^{d-1}}{{{\rm rank}}Z},$$ where $d=\dim X_0$.
\[nonpositivity of subsystems of Hodge bundles\] Let $(E,\theta)$ be the associated Higgs bundle in the geometric case, i.e. Example \[geometric situation\]. Then the following statements hold:
- For any subsystem of Hodge bundles $(G,\theta)\subset (E,\theta)_0$, one has $\mu(G)\leq 0$.
- For any Higgs subbundle $G\subset
E_0$ with zero Higgs field, it holds that $\mu(G)\leq 0$.
Assume that there exists a subsystem of Hodge bundles $(G,\theta)$ with positive $\mu$-slope in $(E,\theta)_0$. Take one such with the largest slope. By the proof of Proposition \[cartier-katz descent special case\], one has an isomorphism $Gr_{F_{con}}H_{(G,\theta)}\cong F_{X_0}^*G$, and consequently the equalities $\mu(H_{(G,\theta)})=\mu(F_{X_0}^*G)=p\mu(G)$. Then the observation in Remark \[grading of hodge filtration\] says that $Gr_{Fil}(H_{(G,\theta)},\nabla)$ gives a subsystem of Hodge bundles of $(E,\theta)_0$ of slope $p\mu(G)>\mu(G)$, a contradiction. Hence i) follows. Now assume the existence of a Higgs subbundle $(G,0)$ with positive $\mu$-slope. By Corollary \[corollary on the canonical connection\], the associated de Rham subbundle $H_{(G,0)}\subset H_0$ is isomorphic to $F_{X_0}^*G$, whose $\mu$-slope is equal to $p\mu(G)>0$. Then $Gr_{Fil}(H_{(G,0)},\nabla)$ gives rise to a subsystem of Hodge bundles with positive $\mu$-slope, which contradicts i).
Let $C\subset X$ be a smooth projective curve over $W_{n+1}$. For a coherent sheaf $Z$ over $X_0$, the $\mu_{C_0}$-slope of $Z$ is defined to be $\frac{\deg(Z|_{C_0})}{{{\rm rank}}Z}$. Recall that a Higgs bundle $(E,\theta)$ over $X_0$ is said to be Higgs semistable with respect to the $\mu_{C_0}$-slope if for any Higgs subbundle $(F,\theta)\subset(E,\theta)$ the inequality $\mu_{C_0}(F)\leq
\mu_{C_0}(E)$ holds.
\[degree formula\] For any Higgs subbundle $(G,\theta)\subset (E,\theta)_0$, it holds that $$\det C_0^{-1}(G,\theta)\cong(F_{X}^*\det G,\nabla_{can}).$$
The functor $C_0^{-1}$ has been further studied in the paper [@LSZ]. It follows from Proposition 5 loc. cit. that $C_0^{-1}(G,\theta)$ is isomorphic to the exponential twisting of $F_X^{*}G$. Precisely, it is obtained by gluing the local flat bundles $(F_{U}^*G|_{U},\nabla_{can}+\frac{dF_{\hat
U}}{p}F_U^*\theta|_U)$ via the gluing functions $\exp[h_{UV}(F_{U\cap V}^*\theta)]F_{U\cap V}^*M_{U\cap V}$. Here $U,V$ are two open subsets of $X$, $M_{U\cap V}$ is the transition function of two local bases of $G$ over $U\cap V$, $h_{UV}$ is the derivation measuring the difference of two Frobenius liftings. Due to the fact that $\theta$ is nilpotent, the determinant of the exponential twisting is simply the identity. Therefore $\det
C_0^{-1}(G,\theta)$ is isomorphic to $(F_{X}^*\det G,\nabla_{can})$.
Let $(E,\theta)$ be the Higgs bundle associated to an object in $\mathcal{MF}_{[0,n]}^{\nabla}(X)$. Then the following statements hold:
- Let $g_0: C_0\to X_0$ be a morphism from a smooth projective curve $C_0$ to $X_0$ over $k$ which is liftable to a morphism $g: C\to X$ over $W_{n+1}$. Then for any Higgs subbundle $(G,\theta)\subset (E,\theta)_0$, one has $\deg(g_0^*G)\leq 0$.
- Let $C$ be a smooth projective curve in $X$ over $W_{n+1}$. The Higgs bundle $(E,\theta)_0$ is Higgs semistable with respect to the $\mu_{C_0}$-slope.
Let $(E,\theta)|_{C}$ be the Higgs bundle pulled back via $g$. We use similar notation for the pull-backs via $g_0$. The pull-back of $H$ via $g$ gives an object in $\mathcal{MF}^{\nabla}_{[0,n]}(C)$. Assume the existence of a Higgs subbundle $(G,\theta)\subset (E,\theta)_0$ satisfying $\deg(G|_{C_0})>0$. By Lemma \[degree formula\], it holds that $$\deg Gr_{Fil}C_0^{-1}(G,\theta)|_{C_0}= \deg C_0^{-1}(G,\theta)|_{C_0}= \deg F_{C_0}^{*}G|_{C_0} = p\deg G|_{C_0}.$$ Iterating this process, one obtains Higgs subbundles in $E_0|_{C_0}$ of arbitrarily large degree, which is impossible. Hence result i) follows. The proof of ii) is similar, replacing the degree in the previous argument with the $\mu_{C_0}$-slope.
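To spell out the iteration in i): each round replaces $(G,\theta)$ by $Gr_{Fil}C_0^{-1}(G,\theta)$ and multiplies the degree along $C_0$ by $p$, so after $k$ rounds one obtains a Higgs subbundle of $E_0|_{C_0}$ of degree $$p^k\deg G|_{C_0}\longrightarrow\infty\quad (k\rightarrow\infty),$$ whereas the degrees of subsheaves of the fixed bundle $E_0|_{C_0}$ are bounded above.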
In the above result i), it is natural to make the liftability assumption on $C_0$. The example of Moret-Bailly (see [@Mo]) shows that over a $W_2$-unliftable curve in the moduli space of principally polarized abelian surfaces in characteristic $p$, the Higgs bundle of the restricted universal family contains a Higgs line bundle of positive degree. The special case of i) with zero Higgs field can be considered a characteristic $p$ analogue of the negativity result for kernels of the Kodaira-Spencer map over ${{\mathbb C}}$ established in Theorem 1.2 [@Z]. The result here is however weaker: compared with the one loc. cit., it gives no further information in the case that the above inequality is actually an equality. This difference deserves further understanding. In the case where $X$ is a curve, the assumption on $p$ made in Proposition 4.19 [@OV] reads $n({{\rm rank}}E-1)\max\{2g-2,1\}\leq p-2$, where $g$ is the genus of $X_0$. The above result ii) removes the dependence of $p$ on the genus as well as on the rank. It may also be worth noting that Higgs semistability is a weaker property than Higgs polystability, a significant property satisfied in the complex setting (see [@Si1]). For instance, Higgs polystability is an important ingredient in the proofs of various generalizations of the fact that there exists no nonisotrivial family of abelian surfaces over ${{\mathbb P}}^1$ in characteristic zero (see for example [@Z],[@VZ]). It fails, however, in characteristic $p$ because of the example of Moret-Bailly.
$W_2$-unliftability
-------------------
Let $F$ be a real quadratic field with the ring of integers ${{\mathcal O}}_F$. Assume $p$ is inert in $F$. Fix an integer $m\geq 3$, coprime to $p$. Let $M$ be the moduli scheme over $W$, and $S_h$ the Hasse locus of $M_{0}$ as described in the introduction. Let $Z_0\subset S_h$ be a curve with simple normal crossings. It is said to be $W_2$-liftable inside $M_{1}$ if there exists a semistable curve $Z_1\subset M_{1}$ over $W_2$ whose closed fiber is $Z_0$. In the following, $W_2$-liftability always means $W_2$-liftability inside $M_{1}$. It is interesting to know whether the components of $S_h$ are liftable to $W_2$. The $W_2$-liftability of the whole or part of the ${{\mathbb P}}^1$-configuration $S_h$ is in fact a subtle problem. On the one hand, it should be more or less well known that each component of $S_h$ is $W_2$-unliftable. On the other hand, a result of Goren (see Theorem 2.1 [@G]) implies that the whole configuration is $W_2$-liftable if the zeta value $\zeta_{F}(2-p)$ is not a $p$-adic integer. Our partial result on this question is the following
\[thm 3\] Let $D$ be a component in $S_h$. Let $C=\sum C_i$ (possibly empty) be a simple normal crossing divisor in $M_{0}$ such that $D+C$ is again a simple normal crossing divisor. If $D\cdot C\leq p-1$, then the curve $D+C$ is $W_2$-unliftable.
Before giving the proof, we introduce some notation. Let $f:
X\to M$ be the universal abelian scheme. Let $(E=E^{1,0}\oplus
E^{0,1},\theta)$ be the Higgs bundle of $f$, where $E^{1,0}=f_*\Omega^1_{X|M}$ and $E^{0,1}=R^1f_*{{\mathcal O}}_{X}$. It is known that $(E,\theta)$ has a decomposition under the ${{\mathcal O}}_F\otimes
W$-action in the form $$(E,\theta)=(E_1,\theta_1)\oplus (E_2,\theta_2),$$ where $E^{1,0}_i={{\mathcal L}}_i$ and $E^{0,1}_i={{\mathcal L}}_i^{-1}$ for $i=1,2$. It is also known that either ${{\mathcal L}}_1|_{D}\simeq {{\mathcal O}}_{{{\mathbb P}}^1}(-1), \
{{\mathcal L}}_2|_{D}\simeq {{\mathcal O}}_{{{\mathbb P}}^1}(p)$ or ${{\mathcal L}}_1|_{D}\simeq {{\mathcal O}}_{{{\mathbb P}}^1}(p),
\ {{\mathcal L}}_2|_{D}\simeq {{\mathcal O}}_{{{\mathbb P}}^1}(-1)$ holds for any component $D$ in $S_h$. One has a description of the Higgs field of the Higgs bundle associated to the universal family restricted to $D$: in the former case, $\theta_2: {{\mathcal L}}_2|_{D}\to {{\mathcal L}}_2^{-1}|_{D}\otimes \Omega^1_{D}$ is zero for degree reasons, and $\theta_1: {{\mathcal L}}_1|_{D}\to {{\mathcal L}}_1^{-1}|_{D}\otimes \Omega^1_{D}$ can be shown to be an isomorphism (we shall not use this fact in the following argument). The properties of $\theta_1$ and $\theta_2$ are exactly exchanged in the latter case. Put the log structure on $Z_{0}=D+C$ given by its components and the trivial log structure on ${\mathrm{Spec}}k$. Let $\Omega^1_{\log}(Z_{0}/k)$ be the sheaf of log differentials (see (1.7) [@K]). It is a locally free ${{\mathcal O}}_{Z_0}$-module of rank one. The closed fiber $f_0$ of $f$ restricts to a family $f_0: Y_0\to Z_{0}$. With the pull-back logarithmic structure on $Y_0$, $f_0$ is log smooth. One thus forms the logarithmic de Rham bundle $(H,\nabla)$ of $f_0$, which is by definition the first hypercohomology of the relative logarithmic de Rham complex. The relative Hodge filtration on the complex degenerates at $E_1$, so one forms the logarithmic Higgs bundle over $Z_0$: $$\eta: F\to F\otimes \Omega^1_{\log}(Z_0/k),$$ where $F=F^{1,0}\oplus F^{0,1}$ with $F^{1,0}={{\mathcal L}}_1|_{Z_0}\oplus {{\mathcal L}}_2|_{Z_0}$ and $F^{0,1}={{\mathcal L}}_{1}^{-1}|_{Z_0}\oplus {{\mathcal L}}_2^{-1}|_{Z_0}$.
Assume $D\cdot D_{21}>0$. Consider $C_{0}:=D+D_{21}$. We give a local description for later reference: locally at the normal crossing point, we have the ring ${{\mathcal O}}_{C_{0}}=k[x,y]/(xy)$. Say $D=\{y=0\}$ and $D_{21}=\{x=0\}$. Then $$\Omega^1_{\log}(C_{0}/k)={{\mathcal O}}_{C_{0}}\{d\log x\}={{\mathcal O}}_{C_{0}}\{d\log
y\}, \ d\log x=-d\log y.$$ We also record the sheaf of differentials: $$\Omega^1_{C_{0}/k}={{\mathcal O}}_{C_{0}}\{dx,dy\}/{{\mathcal O}}_{C_{0}}\{ydx+xdy\}.$$ One has a natural (global) map $\Omega^1_{C_{0}/k}\to
\Omega^1_{\log}(C_{0}/k)$. The quotient $R_{C_{0}/k}:=\Omega^1_{\log}(C_{0}/k)/\Omega^1_{C_{0}/k}$ is called the residue sheaf (after Ogus). It is a skyscraper sheaf supported at the intersection points $D\cap D_{21}$. For $P\in D\cap D_{21}$, $R_{C_{0}/k,P}\simeq k(P)$, and the natural map (called the residue map) $\Omega^1_{\log}(C_{0}/k,P)\to R_{C_{0}/k,P}$ is given by $$f(x,y)d\log x\mapsto f(0,0).$$ The log structure on $C_{0}$ pulls back to the log structures on $D$ and $D_{21}$. We have for either $D_{i1}$ $$\Omega^1_{\log}(C_{0}/k)\otimes_{{{\mathcal O}}_{C_{0}}}{{\mathcal O}}_{D_{i1}}\simeq
\Omega^1_{\log}(D_{i1}/k)\simeq \Omega^1_{D_{i1}}(\log(D\cap
D_{21})).$$ Now assume that $Z_0$ lifts to a semistable curve $Z\subset M_{1}$ over $W_2$. We equip $Z$ with the log structure determined by the divisor $Z_0$, and ${\mathrm{Spec}}W_2$ with the one determined by ${\mathrm{Spec}}k$. We may assume that $${{\mathcal L}}_1|_{D}\simeq {{\mathcal O}}_{{{\mathbb P}}^1}(-1), \ {{\mathcal L}}_2|_{D}\simeq
{{\mathcal O}}_{{{\mathbb P}}^1}(p).$$ Over the open subset $D-D\cap C$, $\eta|_{D}$ coincides with the Higgs bundle coming from the universal family restricted to $D$. So by the above discussion, $\eta|_D({{\mathcal L}}_2|_{D})=0$. Consider the following coherent subsheaf of $F$: take the subsheaf $${{\mathcal L}}_2|_{D}\otimes {{\mathcal O}}_{D}(-D\cap C)\subset {{\mathcal L}}_2|_{D}$$ over $D$ and the zero sheaf over $C$, considered as a subsheaf of ${{\mathcal L}}_2|_{C}$. They glue into a subsheaf ${{\mathcal L}}$ of ${{\mathcal L}}_2|_{Z_0}\subset F$ over $Z_0$, since over a small open neighborhood of any point $P\in D\cap C$, ${{\mathcal L}}_2|_{D}\otimes {{\mathcal O}}_{D}(-D\cap C)$ has a local basis vanishing at $P$. Note that ${{\mathcal L}}$ is a Higgs subsheaf of $F$; in fact the Higgs field $\eta$ acts on ${{\mathcal L}}$ trivially by construction. Then the construction of §2 in the semistable case applies, so $({{\mathcal L}},0)\subset (F,\eta)$ gives rise to a de Rham subsheaf $H_{({{\mathcal L}},0)}\subset (H,\nabla)$. Note that $H_{({{\mathcal L}},0)}|_{D}$ is isomorphic to $F_{D}^*({{\mathcal L}}_2|_{D}\otimes {{\mathcal O}}_{D}(-D\cap C))$ and hence has degree $p(\deg {{\mathcal L}}_2|_{D}-D\cdot C)=p(p-D\cdot C)$. We have a short exact sequence from the Hodge filtration: $$0\to {{\mathcal L}}_1|_{D}\oplus {{\mathcal L}}_2|_{D}\to H|_{D} \to {{\mathcal L}}_1^{-1}|_{D}\oplus {{\mathcal L}}_2^{-1}|_{D}\to 0.$$ We first consider the composite $$H_{({{\mathcal L}},0)}|_{D}\subset H|_{D}\twoheadrightarrow {{\mathcal L}}_1^{-1}|_{D}\oplus {{\mathcal L}}_2^{-1}|_{D}.$$ As $\deg {{\mathcal L}}_1^{-1}|_{D} =1$ and $\deg {{\mathcal L}}_2^{-1}|_{D} =-p$ are both smaller than $\deg H_{({{\mathcal L}},0)}|_{D}$, the composite vanishes and one has the factorization $$H_{({{\mathcal L}},0)}|_{D}\subset {{\mathcal L}}_1|_{D}\oplus {{\mathcal L}}_2|_{D}\subset H|_{D}.$$ **Case 1:** $D\cdot C\leq p-2$. Again for degree reasons ($\deg H_{({{\mathcal L}},0)}|_{D}=p(p-D\cdot C)\geq 2p$, while $\deg {{\mathcal L}}_1|_{D}=-1$ and $\deg {{\mathcal L}}_2|_{D}=p$), the above nontrivial map is impossible.
**Case 2:** $D\cdot C= p-1$. In this case, one obtains the equality $$H_{({{\mathcal L}},0)}|_{D}={{\mathcal L}}_2|_{D}.$$ This is also impossible, because of the semilinearity of the relative Frobenius. For a small affine $U\subset Z$ whose modulo $p$ reduction is $U_0\subset D-D\cap C$ and a Frobenius lifting $F_{\hat U}$, the local operator $\frac{\Phi_{F_{\hat U}}}{[p]}$ maps a local section of ${{\mathcal L}}_2|_{D}(U_0)$ to a local section of ${{\mathcal L}}_1|_{D}(U_0)$. As ${{\mathcal L}}_2|_{D}\otimes {{\mathcal O}}_{D}(-D\cap C)\subset {{\mathcal L}}_2|_{D}$, it is impossible to have $H_{({{\mathcal L}},0)}(U_0)\subset {{\mathcal L}}_2|_{D}(U_0)$ by the construction of $H_{({{\mathcal L}},0)}$.
E. Bachmat, E.Z. Goren, On the non-ordinary locus in Hilbert-Blumenthal surfaces, Math. Ann. 313 (1999), no. 3, 475-506.
G. Faltings, Crystalline cohomology and Galois representations, in Algebraic Analysis, Geometry, and Number Theory, Proceedings of the JAMI Inaugural Conference (J. Igusa, ed.), Johns Hopkins Univ. Press, 1989.
G. Faltings, $F$-isocrystals on open varieties: results and conjectures, The Grothendieck Festschrift, Vol. II, 219-248, Progr. Math., 87, Birkhäuser Boston, Boston, MA, 1990.
G. Faltings, Crystalline cohomology of semistable curves, and $p$-adic Galois-representations, J. Algebraic Geom. 1 (1992), no. 1, 61-81.
G. Faltings, Integral crystalline cohomology over very ramified valuation rings, J. Amer. Math. Soc. 12 (1999), no. 1, 117-144.
E.Z. Goren, Hasse invariants for Hilbert modular varieties, Israel J. Math. 122 (2001), 157-174.
E.Z. Goren, F. Oort, Stratifications of Hilbert modular varieties, J. Algebraic Geom. 9 (2000), no. 1, 111-154.
K. Kato, Logarithmic structures of Fontaine-Illusie, Algebraic analysis, geometry, and number theory (Baltimore, MD, 1988), 191-224, Johns Hopkins Univ. Press, Baltimore, MD, 1989.
N. Katz, Nilpotent connections and the monodromy theorem: Applications of a result of Turrittin, Inst. Hautes Études Sci. Publ. Math. No. 39 (1970), 175-232.
N. Katz, Algebraic solutions of differential equations ($p$-curvature and the Hodge filtration), Invent. Math. 18 (1972), 1-118.
G.-T. Lan, M. Sheng, K. Zuo, An inverse Cartier transform via exponential in positive characteristic, arXiv: 1205.6599, 2012.
L. Moret-Bailly, Familles de courbes et de variétés abéliennes sur ${{\mathbb P}}^1$, Astérisque 86 (1981), 125-140.
A. Ogus, $F$-crystals, Griffiths transversality, and the Hodge decomposition, Astérisque No. 221 (1994).
A. Ogus and V. Vologodsky, Nonabelian Hodge theory in characteristic $p$, Publ. Math. Inst. Hautes Études Sci. 106 (2007), 1-138.
D. Schepler, Logarithmic nonabelian Hodge theory in characteristic $p$, Ph.D Thesis, University of California, Berkeley, 2005.
C. Simpson, Constructing variations of Hodge structure using Yang-Mills theory and applications to uniformization. J. Amer. Math. Soc. 1 (1988), no. 4, 867-918.
C. Simpson, Higgs bundles and local systems, Publ. Math. Inst. Hautes Études Sci. 75 (1992), 5-95.
M. Sheng, J.-J. Zhang, K. Zuo, Higgs bundles over the good reduction of a quaternionic Shimura curve, J. reine angew. Math., DOI 10.1515, 2011.
T. Tsuji, Syntomic complexes and $p$-adic vanishing cycles, J. Reine Angew. Math. 472 (1996), 69-138.
E. Viehweg, K. Zuo, On the isotriviality of families of projective manifolds over curves, J. Algebraic Geom. 10 (2001), no. 4, 781-799.
K. Zuo, On the negativity of kernels of Kodaira-Spencer maps on Hodge bundles and applications, Kodaira’s issue. Asian J. Math. 4 (2000), no. 1, 279-301.
[^1]: This work is supported by the SFB/TR 45 ‘Periods, Moduli Spaces and Arithmetic of Algebraic Varieties’ of the DFG and partially supported by the University of Science and Technology of China.
[^2]: A small affine open subset in the sense of Faltings in the $p$-adic comparison in the $p$-adic Hodge theory is required to be étale over ${{\mathbb G}}_m^d$. Since nowhere in this note the $p$-adic comparison is used, it suffices to take the above notion for a small affine subset.
[^3]: Faltings loc. cit. considered only the $p$-torsion objects. One obtains this by passing to the $p$-adic limit or by applying another result of Faltings (see the Remark on p. 124 of [@Fa1]).
Q:
Random Dice not reseeding
I created the following function to create random numbers for a dice game
#include <iostream>
#include <ctime>
#include <cstdlib>
#include "dice.h"
#include "display.h"
using namespace std;
int die[6][10];
void dice(int times, int dice){
    int r;
    for(int x=0;x<times;x++){
        for(int x=0;x<dice;x++){
            srand(time(0));
            r=(rand() % 5);
            die[r][x]+=1;
            cout<<"Die #"<<x+1<<" rolled a "<<r<<endl;
        }
    }
}
It doesn't reseed though. It just outputs the same number for each die. Does anyone know how I could fix it?
A:
You are not using the srand and rand functions correctly. You are supposed to 'seed' the random number generator once, and then use rand() to retrieve successive values from the RNG. Each seed results in a particular sequence of numbers that fits certain randomness criteria.
Instead, you seed the random number generator each time and then retrieve the first value of the random sequence. Since time() is called in such rapid succession that it returns the same seed each time, you are effectively resetting the random number generator back to the beginning of the same sequence, and therefore you get the same number you got before.
Even if the value returned by time() updated quickly enough that you got a new seed each time, you still would not be guaranteed good random numbers. The random number generator is designed to produce a sequence of numbers where that sequence has certain statistical properties. However there's no guarantee that the same properties hold over values chosen from different sequences.
So to use a deterministic random number generator you should seed the generator only once and then consume the sequence of values produced by that one seed.
Another point: the random number generators used to implement rand() have historically not been very good, rand() is not re-entrant or thread safe, and transforming the values produced by rand() into values with your desired distribution is not always straightforward.
In C++ you should prefer the <random> library which provides much better features. Here's an example of using <random>.
#include <random>
#include <iostream>

int main() {
    const int sides = 6;
    int groups = 10, dice_per_group = 3;

    // create an object that uses randomness from an external source
    // (provided later) to produce random values in the given (inclusive) range
    std::uniform_int_distribution<> distribution(1, sides);

    // create and seed the source of randomness
    std::random_device rd;
    std::seed_seq seed{rd(), rd(), rd(), rd(), rd(), rd(), rd(), rd()};
    std::mt19937 engine(seed);

    for (int i = 0; i < groups; ++i) {
        for (int j = 0; j < dice_per_group; ++j) {
            // use the distribution with the source of randomness
            int r = distribution(engine);
            std::cout << "Die #" << j + 1 << " rolled a " << r << '\n';
        }
        std::cout << '\n';
    }
}
A:
Calling srand() in a function that is called repeatedly, or inside a loop, is no good.
Put your call to srand() in main, and only call it once per program.
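A minimal sketch of that pattern (the six-sided range and ten rolls below are only illustrative):

#include <cstdlib>
#include <ctime>
#include <iostream>

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr))); // seed exactly once, at startup
    for (int x = 0; x < 10; ++x) {
        int r = std::rand() % 6 + 1; // a value in 1..6
        std::cout << "Die #" << x + 1 << " rolled a " << r << '\n';
    }
}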
Given an integer N, the task is to find the minimum number of digits required to generate a number having the sum of digits equal to N.
Examples:
Input: N = 18
Output: 2
Explanation:
The number with the smallest number of digits having sum of digits equal to 18 is 99.
Input: N = 28
Output: 4
Explanation:
4-digit numbers like 8884, 6877, etc. are the smallest in length having sum of digits equal to 28.
Approach: The problem can be solved based on the following observations:
- Initialize count with N / 9 (integer division). Now count is equal to the number of 9's in the shortest number. Reduce N to N % 9.
- Now, if N exceeds 0, increment count by 1.
- Finally, print count as the answer.
Below is the implementation of the above approach:
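(A minimal C++ version of the approach is sketched here; the driver values 25 and 18 are an assumption chosen to match the output shown below.)

#include <iostream>
using namespace std;

// Returns the minimum count of digits whose sum is N
int minDigits(int N)
{
    // Use as many 9's as possible
    int count = N / 9;

    // One extra digit covers any remainder
    if (N % 9 > 0)
        count++;

    return count;
}

// Driver code
int main()
{
    cout << minDigits(25) << " " << minDigits(18) << endl;
    return 0;
}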
Output:
3 2
Time Complexity: O(1)
Auxiliary Space: O(1)
Finding the latest Sophie Kinsella bestseller or Harry Potter hardback can be a tiresome task in Burma. Very few shops sell brand-new books in English. If you do happen to track one down, it might cost you somewhere around 30,000 kyat (US $40), more than 10 times the daily wage that a factory worker in Rangoon earns.
Of course, you can always find an old moth-eaten John Grisham novel on the bookshelves for three or four dollars, perhaps even a dusty George Orwell or a yellowing dog-eared copy of Jean-Paul Sartre’s Nausea.
In fact, chances are that if you do find a stall or a second-hand bookstore on your travels, it will be teeming with eager bookworms ransacking through piles of paperbacks, trying to quench their thirst for world literature.
That’s why it was just a joy to come upon 67-year-old Khin Maung Sein and his quaint alcove of bookshelves located under the staircase of an old colonial building in downtown Rangoon.
And let’s face it … where else could you find gems such as “An Account of An Embassy to the Kingdom of Ava,” written by British officer Michael Symes in the late 18th century?
Question: What drives you to work as a bookseller?
Answer: Literature is my profession as well as my greatest passion. I have been selling books for more than 30 years, and I cannot even imagine any other way to make a living.
Q: So how did it all come about?
A: It began in 1977. At that time I was a family man and a primary school teacher in Mandalay. The pay was not very good so I had to find a way to earn some extra income. I found out that some of my fellow teachers were selling books at the night market. As it happened, I had quite a large collection of books, mostly novels. I thought: ‘why not?’ and I joined them at the market after school was finished. It worked out.
At that time, my family was in Rangoon where my wife worked for a government office. I couldn’t travel to see them so often because I couldn’t afford it. Then suddenly I had a financial incentive to go to Rangoon—I knew good places where I could buy books. I would take the train down every Friday, spend the weekend with my family and replenish my stock.
Then in 1979 I was about to be posted in a high school in a remote corner of the country. Taking into account my children’s future, I resigned from school and moved back to Rangoon. My first bookstall was on the pavement near the passport office on Pansodan Street.
Q : Wasn’t that a hard transition? I mean, in so far as you went from a highly regarded and respectable position to being a street vendor.
A: Not really. I was happy to be working so directly with literature, and living with my wife and children again. I believe there’s no shame in making an honest living.
I was in Mandalay recently and some of my former students came to see me to pay their respects. Some gave me a big hug, and they all still addressed me as saya gyi [great teacher]. I must admit that I felt proud to be a teacher—but I was always happier to be a bookseller.
Q: Do you specialize in any particular subject?
A: No, I don’t. I’m a generalist. I deal with every subject apart from engineering. But I have a special love for books on Burma published between 1800 and 1910. They are the only ones internationally recognized as ‘rare books’ and were published in the early days of the British Raj, and were written in English by foreigners.
Some of my favorites include “An Account of An Embassy to the Kingdom of Ava” by Michael Symes, “Wanderings in Burma” by George W. Bird, and “Burma” by Max and Bertha Ferrars.
I put them on display in 2006, and sold some of them. But I still keep a few volumes for myself.
Q : They must be expensive. Who buys them?
A : There are some local collectors here in Rangoon. For first editions, generally I can name the price. But I always negotiate with true collectors. For those who can’t afford to buy an original, I can make a photocopied version.
Q : Doesn’t that infringe on copyright?
A : Not at all. Any work that is more than 100 years old is public domain.
Q : Are there any rare books published in Burmese?
A : We generally regard books published before World War II as rare books. For example, books from the Red Dragon Book Club. But most of them were destroyed during the war. So, for even myself, I still haven’t seen some of the books published by the club.
Q : Who are your everyday customers?
A : Local bookworms. Sometimes foreigners drop in to buy novels. University students and scholars working on their theses are frequent visitors.
When the Constitution was being drafted, ministers sent their junior staff to look for reference books. They were always in a rush, saying ‘Uncle, hurry up, please. I have to hand that book to “him” as soon as he gets off the plane.’
When Ne Win was in power, one of his ministers called me and said, ‘I want such-and-such a book. Send it to me by this evening.’ Just like that. So, of course, I had to do it. A few days later he gave me a ring to inform me that he liked the book and had decided to add it to his collection. He didn’t even pay for it.
Q : You’re not getting any younger, Uncle. What are you going to do with the thousands of books you have collected?
A : I haven’t decided yet. It pains me that I don’t have anyone to hand them over to. My wife has no interest in literature. My children have passed away.
I always used to feel so sad when I sold one of my favorite works. It’s like handing your heart over to a stranger. I’m trying my best to cut the bond to my books. After all, I can’t take them with me when I die. I think I’ll sell them all, including the books on Burma. I’ll keep photocopies as reminders that, once upon a time, I owned those precious books. | https://www.irrawaddy.com/in-person/interview/a-quiet-man-of-many-words.html |
The 3rd-seeded ’79-’80 Sabres would be charged with manslaughter after the way Games 1 and 2 went. They defeated the 6th-seeded ’97-’98 Sabres by a combined score of 17-2 over the first two games of the series, winning Game 1 by a score of 9-1 and Game 2 by a score of 8-1. In Game 1, Gilbert Perreault tallied six assists to lead them offensively, while Danny Gare and defenseman Jim Schoenfeld both posted five-point games: three goals and two assists, and two goals and three assists, respectively. Gare then posted two goals and three assists in Game 2, while Perreault and defenseman John Van Boxmeer both posted three-point games: two goals and one assist, and one goal and two assists, respectively. The rest of the series was respectable for the ’97-’98 Sabres. A PP goal from Derek Smith four and a half minutes into the third period was the game-winner in a 4-2 win for the ’79-’80 Sabres in Game 3. Then the ’97-’98 Sabres avoided the sweep with a 2-1 win in Game 4, with defenseman Jason Woolley scoring the game-winner halfway through the third. Perreault put up two goals and an assist, again, in Game 5 as the reigning representatives finished off the series with another 4-2 win.
GAME 1:
                 1st Period  2nd Period  3rd Period  Overtime  Total
’97-’98 Sabres       0           1           0           —        1
’79-’80 Sabres       1           4           4           —        9
GAME SUMMARY: ’79-’80 BUF – 9 ’97-’98 BUF – 1 (1-0 ’79-’80 BUF)
STARS OF THE GAME:
1st Star: RW – Danny Gare (’79-’80 BUF) – 3 G 2 A 5 P +4 12 S
2nd Star: D – Jim Schoenfeld (’79-’80 BUF) – 2 G 3 A 5 P +4 11 S
3rd Star: C – Gilbert Perreault (’79-’80 BUF) – 0 G 6 A 6 P +4 6 S 2 PIM
GAME 2:
                 1st Period  2nd Period  3rd Period  Overtime  Total
’97-’98 Sabres       0           1           0           —        1
’79-’80 Sabres       3           5           0           —        8
GAME SUMMARY: ’79-’80 BUF – 8 ’97-’98 BUF – 1 (2-0 ’79-’80 BUF)
STARS OF THE GAME:
1st Star: RW – Danny Gare (’79-’80 BUF) – 2 G 3 A 5 P +4 6 S
2nd Star: C – Gilbert Perreault (’79-’80 BUF) – 2 G 1 A 3 P +4 7 S 2 PIM
3rd Star: D – John Van Boxmeer (’79-’80 BUF) – 1 G 2 A 3 P +4 8 S 2 PIM
GAME 3:
                 1st Period  2nd Period  3rd Period  Overtime  Total
’79-’80 Sabres       0           2           2           —        4
’97-’98 Sabres       1           0           1           —        2
GAME SUMMARY: ’79-’80 BUF – 4 ’97-’98 BUF – 2 (3-0 ’79-’80 BUF)
STARS OF THE GAME:
1st Star: C – Derek Smith (’79-’80 BUF) – 2 G 0 A 2 P Even 3 S
2nd Star: G – Dominik Hasek (’97-’98 BUF) – L 4.00 GAA .915 SV% (43 SV’s)
3rd Star: D – Richie Dunn (’79-’80 BUF) – 0 G 2 A 2 P Even 2 S
GAME 4:
                 1st Period  2nd Period  3rd Period  Overtime  Total
’79-’80 Sabres       0           1           0           —        1
’97-’98 Sabres       0           1           1           —        2
GAME SUMMARY: ’97-’98 BUF – 2 ’79-’80 BUF – 1 (3-1 ’79-’80 BUF)
STARS OF THE GAME:
1st Star: G – Dominik Hasek (’97-’98 BUF) – W 1.00 GAA .973 SV% (36 SV’s)
2nd Star: D – Jason Woolley (’97-’98 BUF) – 1 G 0 A 1 P +2 2 S
3rd Star: LW – Paul Kruse (’97-’98 BUF) – 1 G 0 A 1 P Even 2 S
GAME 5:
                 1st Period  2nd Period  3rd Period  Overtime  Total
’97-’98 Sabres       1           0           1           —        2
’79-’80 Sabres       2           1           1           —        4
GAME SUMMARY: ’79-’80 BUF – 4 ’97-’98 BUF – 2 (4-1 ’79-’80 BUF)
STARS OF THE GAME:
1st Star: C – Gilbert Perreault (’79-’80 BUF) – 2 G 1 A 3 P +2 4 S
2nd Star: LW – Rick Martin (’79-’80 BUF) – 1 G 1 A 2 P +1 2 S
3rd Star: D – Darryl Shannon (’79-’80 BUF) – 0 G 2 A 2 P +2 0 S
That concludes the First Round of the 2015 Buffalo Sabres Qualifying Tournament!
Pamela regained strength and went from tropical storm to category 1 hurricane on the Saffir-Simpson scale, the National Meteorological Service of the National Water Commission (Conagua) reported. It is expected to make landfall later in Sinaloa.
It was located approximately 130 kilometers west-southwest of Mazatlán, Sinaloa, and 235 kilometers east of Cabo San Lucas, Baja California Sur, with maximum sustained winds of 120 kilometers per hour.
It is forecast to cause occasional torrential rains in areas of Durango, Nayarit and Sinaloa; very strong rains in parts of Jalisco; and strong rains in areas of Sonora. Wind gusts of 120 to 150 kilometers per hour with waves 4 to 6 meters high are also forecast on the coasts of Sinaloa; gusts of 80 to 100 kilometers per hour with waves of 3 to 5 meters off the coast of Nayarit; and gusts of 50 to 70 kilometers per hour with waves of 2 to 4 meters on the coasts of Baja California Sur and Jalisco.
The model used to determine the future vulnerability of Eastern Hemlock to the spread of Hemlock Woolly Adelgid consisted of two parts: a weighted multi-criteria evaluation (MCE) to determine the areas where Hemlock Woolly Adelgid is most likely to spread, and a multiplication of the spread result by hemlock density to determine which hemlocks are at risk of exposure to Hemlock Woolly Adelgid. This allowed for the inclusion of continuous data and of multiple criteria of varying significance. The MCE functioned by adding all weighted criteria together and multiplying the sum by the constraints:
MCE equation: Constraints × (Weight1 × Criteria1 + Weight2 × Criteria2 + Weight3 × Criteria3 + Weight4 × Criteria4)
There were other models considered for the analysis, such as binary models and process models, but the MCE was found to be the most appropriate. This is because binary models must satisfy all criteria fully, and therefore would not provide the likelihood of spread based on low and high values of each criterion. Process models were also not suitable for this type of analysis because they use existing knowledge in a set of equations that quantify the process, and this was not applicable to the research question.
The data layers contained different units and were therefore standardized so they could be compared against each other. All data were standardized to a range of 0-100, with higher values being more suitable. Two equations were used to achieve this:
x = 100 * ( xi - xmin) / (xmax - xmin) where higher scores are better
x = 100 * (1 - ( xi - xmin) / (xmax - xmin)) where lower scores are better
Where x represented the standardized score for the pixel, xi represented the initial pixel value, and xmax/xmin represented the maximum and minimum pixel value for the data layer.
Temperature, current Hemlock Woolly Adelgid distribution, distance to roads and road types were classified as criteria. To determine the weight of each factor, a pairwise comparison was used. Temperature was also classified as a constraint with a score of 0 being unsuitable and 1 suitable.
Table 2: Nine point pairwise comparison scale (Bonnycastle et al., 2017).
To calculate the suitability scores the following equation was used for each criterion: | https://www.uoguelph.ca/geography/4480-w19/4480w19-g1/objective-2-develop-gis-based-model-showing-current-spread-hemlock-wooly-adelgid-and |
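The per-criterion equation itself is not reproduced here; the following per-pixel sketch only illustrates the overall computation as described above (standardization followed by the weighted, constrained sum), with all criterion values and weights as hypothetical placeholders rather than values from the study:

#include <array>
#include <iostream>

// Standardize a raw pixel value to 0-100 using the two equations above.
double standardize(double xi, double xmin, double xmax, bool higherIsBetter)
{
    double s = 100.0 * (xi - xmin) / (xmax - xmin);
    return higherIsBetter ? s : 100.0 - s;
}

// Weighted MCE for one pixel: Constraints * (W1*C1 + W2*C2 + W3*C3 + W4*C4).
double mce(const std::array<double, 4>& criteria,
           const std::array<double, 4>& weights,
           double constraint) // 0 = unsuitable, 1 = suitable
{
    double sum = 0.0;
    for (std::size_t k = 0; k < criteria.size(); ++k)
        sum += weights[k] * criteria[k];
    return constraint * sum;
}

int main()
{
    // Hypothetical standardized scores for temperature, current HWA
    // distribution, distance to roads, and road type at one pixel.
    std::array<double, 4> c = {standardize(12.0, 0.0, 20.0, true), 70.0, 45.0, 60.0};
    // Hypothetical pairwise-comparison weights (must sum to 1).
    std::array<double, 4> w = {0.40, 0.30, 0.20, 0.10};
    std::cout << "Suitability: " << mce(c, w, 1.0) << '\n';
}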
Structurally, the distinctive difference between earthbag domes and brick domes is their higher tensile strength, derived from the installation of two strands of barbed wire per row. In essence, the added tensile strength combined with the woven polypropylene fabric helps unify the individual rows into a series of stacked rings. Each of these complete rings creates a mild tension-ring effect, offering tension under compression throughout the whole dome, not just at a single bond beam. Excellent results have been obtained from both load bearing and lateral exertion tests conducted in Hesperia, California (see Tom Harp and John Regner, "Sandbag/Superadobe/Superblock: A Code Official Perspective," Building Standards 62, no. 5 (1998), pg. 28). In Chapter 18 we give a condensed reprint of this article.

11.21: The further the bags (tubes) are cantilevered over the underlying bags, the more they flatten out. Photo credit: Keith G.
Functionally, the steeper the pitch is, the more quickly it sheds water. This is an added benefit in dry climates that get short bursts of violent storms. Rain has very little time to soak into an earthen clay or lime plastered roof covering.
Design-wise, the taller interior allows ample room for a second story or loft space to be included (or for a trapeze or trampoline for more athletic folk). A built-in loft can also double as a scaffold during construction. A dome also provides the most living area for the least surface area, compared to a comparably sized rectilinear structure.
Spiritually, the compass formula we use creates an arc within a perfect square. This is our personal opinion, but we have a strong feeling that the specific ratio of this shape acts as both a grounding device and an amplifier for whatever intentions are radiated within the structure. So pay attention to your thoughts.
Energy Consumption: as noted in Appendix D, a circle provides more area than a rectangle built with the same length of perimeter wall. A dome takes another step by increasing the efficiency of the ratio of interior cubic feet of space to exterior wall surface. Domes provide the greatest overall volume of interior space with the least amount of wall surface. A dome's smaller surface area to internal space ratio requires less energy to heat and to cool it. And because of the shape, air is able to flow unimpeded without ever getting stuck in a corner.
Climatically, earthen walls are natural indoor regulators. Earthen walls breathe. They also absorb interior moisture and allow it to escape through the walls to the outside, while at the same time help to regulate interior humidity. A hot, dry climate gains the benefit of walls that are capable of releasing moisturized air back into the living space. The same situation can benefit the dry interior created by wood burning stoves in a cold winter climate. Earth is a natural deodorizer and purifier of toxins. An earthen dome literally surrounds one with the benefits inherent in natural earth.
11.22: Experiment with small-scale projects before tackling a large one.
---
abstract: |
  In this paper, we test whether two datasets share a common clustering structure. As a leading example, we focus on comparing clustering structures in two independent random samples from two mixtures of multivariate Gaussian distributions. Mean parameters of these Gaussian distributions are treated as potentially unknown nuisance parameters and are allowed to differ. Assuming knowledge of mean parameters, we first determine the phase diagram of the testing problem over the entire range of signal-to-noise ratios by providing both lower bounds and tests that achieve them. When nuisance parameters are unknown, we propose tests that achieve the detection boundary adaptively as long as ambient dimensions of the datasets grow at a sub-linear rate with the sample size.
**Keywords.** Minimax testing error, Sparse mixture, Phase transition, High-dimensional statistics, Discrete structure inference
author:
- Chao Gao
- Zongming Ma
bibliography:
- 'reference.bib'
title: Testing Equivalence of Clustering
---
Introduction
============
Clustering analysis is one of the most important tasks in unsupervised learning. In the context of Gaussian mixture model, we have independent observations $X_1,\cdots,X_n\sim N(z_i\theta,I_p)$, where $z_i\in\{-1,1\}$ for each $i\in[n]$ and $\theta\in\mathbb{R}^p$. In this setting, clustering is equivalent to estimating the unknown label vector $z\in\{-1,1\}^n$. It is known that the minimax risk of estimating $z$ is given by $$\inf_{{\widehat}{z}}\sup_{z\in\{-1,1\}^n}\mathbb{E}_{(z,\theta)}\ell({\widehat}{z},z)=\exp\left(-(1+o(1))\frac{\|\theta\|^2}{2}\right), \label{eq:yu-lu-diao}$$ as long as $\|\theta\|^2\rightarrow\infty$. See, for instance, [@lu2016statistical]. Throughout the paper, the distance between two clustering structures is defined as $$\ell({\widehat}{z},z)=\left(\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{{\widehat}{z}_i\neq z_i}\right\}}}}\right)\wedge \left(\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{{\widehat}{z}_i\neq -z_i}\right\}}}}\right),$$ the normalized Hamming distance up to a label switching. Taking minimum over label switching is necessary since switching labels does not alter clustering structure. Here and after, $a\wedge b = \min(a,b)$ for any real numbers $a$ and $b$. Since the exponent is in the form of $\|\theta\|^2/2$, formula (\[eq:yu-lu-diao\]) suggests that more covariates help to increase the clustering accuracy as they increase $\|\theta\|^2$. To be concrete, suppose we additionally have independent observations $Y_1,\cdots,Y_n\sim N(z_i\eta, I_q)$ for some $\eta\in\mathbb{R}^q$. By combining the datasets $X$ and $Y$, the error rate can be improved from (\[eq:yu-lu-diao\]) to $$\exp\left(-(1+o(1))\frac{\|\theta\|^2+\|\eta\|^2}{2}\right). \label{eq:improved-clustering-rate}$$
In fact, integrating different sources of data to improve the performance of clustering analysis is a common practice in many areas. For example, in cancer genomics, researchers combine molecular features such as copy number variation and gene expression in integrative clustering to reveal novel tumor subtypes [@shen2009integrative; @curtis2012genomic; @mo2013pattern]. In collaborative filtering, combining different databases helps to better identify user types and thus makes better recommendations [@ma2008sorec; @ma2011recommender; @wang2013location]. In covariate-assisted network clustering [@binkiewicz2017covariate; @deshpande2018contextual], additional variables are collected to facilitate the clustering of nodes in a social network.
In the present paper, we investigate the hypothesis underpinning the foregoing practices: two datasets share a common clustering structure. To be concrete, let us consider independent samples $X_i\sim N(z_i\theta,I_p)$ and $Y_i\sim N(\sigma_i\eta,I_q)$ with some $z_i\in\{-1,1\}$ and $\sigma_i\in\{-1,1\}$ for all $i\in[n]$. If one uses $\{X_i\}_{1\le i\leq n}$ and $\{Y_i\}_{1\le i\leq n}$ to cluster the subjects $\{1,2,\dots,n\}$ into two disjoint subsets, it is implicitly assumed that $\ell(z,\sigma)=0$. From a statistical viewpoint, checking whether $\ell(z,\sigma)=0$ is equivalent to testing $$H_0:\ell(z,\sigma)=0 \quad \mbox{vs.} \quad H_1:\ell(z,\sigma)> \epsilon \label{eq:problem}$$ for some $\epsilon \geq 0$. Let $P^{(n)}_{(\theta,\eta,z,\sigma)}$ be the joint distribution of the two datasets $\{X_i\}_{1\le i\leq n}$ and $\{Y_i\}_{1\le i\leq n}$. Given any testing procedure $\psi$, we define its worst-case testing error by $$R_n(\psi,\theta,\eta,\epsilon)=\sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)=0}}P^{(n)}_{(\theta,\eta,z,\sigma)}\psi + \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi). \label{eq:def-worst-risk}$$ We call $\psi$ consistent if $R_n(\psi,\theta,\eta,\epsilon)\rightarrow 0$ as $n\rightarrow\infty$. The minimax testing error is defined by $$R_n(\theta,\eta,\epsilon)=\inf_{\psi}R_n(\psi,\theta,\eta,\epsilon).$$ We will find necessary and sufficient conditions under appropriate calibrations of $\theta$, $\eta$ and $\epsilon$ such that $R_n(\theta,\eta,\epsilon)\rightarrow 0$ as $n\rightarrow\infty$.
An intuitive approach to testing is to first estimate $z$ and $\sigma$ with ${\widehat}{z}$ and ${\widehat}{\sigma}$ by separately clustering $\{X_i\}_{1\le i\leq n}$ and $\{Y_i\}_{1\le i\leq n}$, and then reject the null hypothesis when $\ell({\widehat}{z},{\widehat}{\sigma})>\epsilon/2$. With the known minimax optimal estimation error rate for ${\widehat}{z}$ and ${\widehat}{\sigma}$ in (\[eq:yu-lu-diao\]), one can show that such a test is consistent as long as $$\|\theta\|^2\wedge\|\eta\|^2>2\log\left(\frac{1}{\epsilon}\right). \label{eq:very-very-bad}$$ However, we shall show that this condition is not optimal and that the required signal-to-noise ratio (SNR) for an optimal test to be consistent is much weaker.
Another natural test for (\[eq:problem\]) can be based on a reduction to a well studied sparse signal detection problem. For convenience of discussion, let us suppose $\ell(z,\sigma)=\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}$ so that there is no ambiguity due to label switching. Note that $$D(X_i,Y_i)=\frac{(\|\eta\|/\|\theta\|)\theta^TX_i-(\|\theta\|/\|\eta\|)\eta^TY_i}{\sqrt{\|\theta\|^2+\|\eta\|^2}}\sim N\left(\frac{\|\theta\|\|\eta\|(z_i-\sigma_i)}{\sqrt{\|\theta\|^2+\|\eta\|^2}},1\right).$$ Then, we have $D(X_i,Y_i)\sim N(0,1)$ for all $i\in[n]$ under the null hypothesis and there are at least an $\epsilon$ fraction of coordinates distributed by $N\left(\pm\frac{2\|\theta\|\|\eta\|}{\sqrt{\|\theta\|^2+\|\eta\|^2}},1\right)$ under the alternative hypothesis. This setting is the sparse signal detection problem studied by [@ingster1997some; @ingster2012nonparametric; @donoho2004higher] under the asymptotic setting of $\epsilon=n^{-\beta}$ and $\frac{2\|\theta\|\|\eta\|}{\sqrt{\|\theta\|^2+\|\eta\|^2}}=\sqrt{2r\log n}$ with some constants $\beta,r>0$. It was shown in [@donoho2004higher] that the higher criticism test is consistent as long as $$r > \begin{cases}
\beta-\frac{1}{2}, & \frac{1}{2}<\beta\leq \frac{3}{4}, \\
(1-\sqrt{1-\beta})^2, & \frac{3}{4} < \beta <1.
\end{cases}$$ Moreover, the condition (\[eq:r-beta-IDJ\]) cannot be improved if only the sequence $\{D(X_i,Y_i)\}_{1\leq i\leq n}$ is observed [@ingster1997some]. It can be checked that the condition (\[eq:r-beta-IDJ\]) is always weaker than condition (\[eq:very-very-bad\]) required by the test based on estimating $z$ and $\sigma$ first. However, we shall show later that one loses information by working only with the sequence $\{D(X_i,Y_i)\}_{1\leq i\leq n}$ for testing (\[eq:problem\]), and we can further improve condition (\[eq:r-beta-IDJ\]).
The main result of the present paper is an entire phase diagram of the testing problem (\[eq:problem\]) under a natural asymptotic setting comparable to that used in [@donoho2004higher]. It turns out this seemingly simple testing problem (\[eq:problem\]) has a complicated phase diagram parametrized by three parameters. The detection boundary of the diagram is characterized by five different functions over five disjoint regions in the space of signal-to-noise ratios. We also derive an asymptotically optimal test that achieves the detection boundary adaptively. At the heart of our construction of the optimal test is a precise likelihood ratio approximation. This leads to a sequence of asymptotically sufficient statistics, based on which a higher criticism type test can be proved to be optimal.
#### Related works
In addition to the literature on integrative clustering that we have previously mentioned, the testing problem is related to feature selection in clustering analysis. In the literature, this has mainly been investigated in the context of sparse clustering [@azizyan2013minimax; @jin2016influential; @jin2017phase; @verzelen2017detection; @cai2019chime], where it is assumed that only a small subset of covariates are useful for finding clusters, and so it is important to identify them. In comparison, the testing problem (\[eq:problem\]) can be interpreted as testing whether inclusion of an additional set of covariates $\{Y_i\}_{1\leq i\leq n}$ can potentially lead to smaller clustering errors than using $\{X_i\}_{1\leq i\leq n}$ alone. The major difference is that the additional set of covariates may admit a completely different clustering structure in our setting, while in sparse clustering, covariates that are not useful have no clustering structure.
In addition to testing whether clustering structures in multiple datasets are equal, it is of interest to approach the problem from the opposite direction. In other words, one could also test whether the clustering structures share anything in common. We refer the readers to [@gao2019clusterings] and [@gao2019testing] for studies along this line.
#### Paper organization
The rest of the paper is organized as follows. Section \[sec:balanced\] studies (\[eq:problem\]) with an additional equal SNR assumption. This simplified setting demonstrates the essence of the problem while reducing a lot of technicalities. The general version of the problem without the equal SNR assumption is studied in Section \[sec:general\]. In Section \[sec:equality\], we consider (\[eq:problem\]) with $\epsilon=0$, which is testing for exact equality. Optimal adaptive tests with unknown parameters for both $\epsilon > 0$ and $\epsilon = 0$ are discussed in Section \[sec:adaptive-t\]. Finally, technical proofs are given in Section \[sec:all-pf\].
#### Notation
For $d \in \mathbb{N}$, we write $[d] = \{1,\dotsc,d\}$. Given $a,b\in\mathbb{R}$, we write $a\vee b=\max(a,b)$, $a\wedge b=\min(a,b)$ and $a_+ = \max(a,0)$. For two positive sequences $\{a_n\}$ and $\{b_n\}$, we write $a_n\lesssim b_n$ to mean that there exists a constant $C > 0$ independent of $n$ such that $a_n\leq Cb_n$ for all $n$. Moreover, $a_n \asymp b_n$ means $a_n\lesssim b_n$ and $b_n\lesssim a_n$. For a set $S$, we use ${{\mathbf{1}_{\left\{{S}\right\}}}}$ and $|S|$ to denote its indicator function and cardinality respectively. For any matrix $A$, $A^T$ stands for its transpose. Any vector $v\in \mathbb{R}^d$ is by default a $d\times 1$ matrix. For a vector $v = (v_1,\ldots,v_d)^T \in\mathbb{R}^d$, we define ${\left\|{v} \right\|}^2=\sum_{\ell=1}^d v_\ell^2$. The trace inner product between two matrices $A,B\in\mathbb{R}^{d_1\times d_2}$ is defined as ${\left \langle A, B \right\rangle} =\sum_{\ell=1}^{d_1}\sum_{\ell'=1}^{d_2}A_{\ell \ell'}B_{\ell \ell'}$, while the Frobenius and operator norms of $A$ are given by ${\|A\|_{\rm F}}=\sqrt{{\left \langle A, A \right\rangle}}$ and ${\|A\|_{\rm op}}=s_{\max}(A)$ respectively, where $s_{\max}(\cdot)$ denotes the largest singular value. The notation $\mathbb{P}$ and $\mathbb{E}$ are generic probability and expectation operators whose distribution is determined by the context.
Testing with Equal Signal-to-Noise Ratios {#sec:balanced}
==============================================
Recall that we have two independent datasets $X_i\stackrel{ind}{\sim} N(z_i\theta,I_p)$ and $Y_i \stackrel{ind}{\sim} N(\sigma_i\eta,I_q)$ for $i\in[n]$. In this section, we first assume that the SNRs of the two datasets are equal, in other words, $\|\theta\|=\|\eta\|$. The general case of potentially unequal SNRs is more complicated and will be studied in Section \[sec:general\].
First, we show that we can apply dimension reduction to both datasets without losing any information for testing . Consider $\{X_i\}_{1\leq i\leq n}$. Since the clustering structure only appears in the direction of $\theta$, we can project all $X_i$’s to the one-dimensional subspace spanned by the unit vector $\theta/\|\theta\|$. After projection, we obtain $\theta^TX_i/\|\theta\|\sim N(z_i\|\theta\|,1)$ for $i\in[n]$. Moreover, for any vector $u\in\mathbb{R}^p$ such that $u^T\theta=0$, we have $u^TX_i\sim N(0,1)$ for $i\in[n]$. Therefore, we conclude that the projected dataset $\{\theta^T X_i/\|\theta\|\}_{1\leq i\leq n}$ preserves all clustering information. The same argument also applies to $\{Y_i\}_{1\leq i\leq n}$. In the rest of this section, we write $$\label{eq:proj-1d}
{\widetilde}{X}_i=\theta^TX_i/\|\theta\|\quad
\mbox{and} \quad
{\widetilde}{Y}_i=\eta^TY_i/\|\eta\|$$ for $i\in[n]$ and work with these one-dimensional random variables when constructing tests. On the other hand, we shall establish lower bounds of the testing problem directly in the original multi-dimensional setting.
A Connection to Sparse Signal Detection
---------------------------------------
#### A related sparse mixture detection problem
For simplicity, let us suppose for now that $\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}\leq \frac{1}{2}$, so that $\ell(z,\sigma)=\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}$. Under $H_0$ in (\[eq:problem\]), we have both ${\widetilde}{X}_i\sim N(z_i\|\theta\|,1)$ and ${\widetilde}{Y}_i\sim N(z_i\|\theta\|,1)$ for all $i\in[n]$, which motivates us to compute the scaled differences $({\widetilde}{X}_i-{\widetilde}{Y}_i)/{\sqrt{2}}$, $i\in [n]$.
Note that the distributions of $\{({\widetilde}{X}_i-{\widetilde}{Y}_i)/\sqrt{2}\}_{1\leq i\leq n}$ under $H_0$ and under $H_1$ in (\[eq:problem\]) are the same as those in a sparse signal detection problem. Indeed, $({\widetilde}{X}_i-{\widetilde}{Y}_i)/\sqrt{2} \stackrel{iid}{\sim} N(0,1)$ for $i\in[n]$ under the null, and at least an $\epsilon$ fraction of the statistics follow either $N(\sqrt{2}\|\theta\|,1)$ or $N(-\sqrt{2}\|\theta\|,1)$ under the alternative. A well studied Bayesian version of the sparse signal detection problem is given by the following form: $$\begin{aligned}
\label{eq:equal-sparse-null} H_0: && U_1,\cdots,U_n\stackrel{iid}{\sim} N(0,1), \\
\label{eq:equal-sparse-alt} H_1: && U_1,\cdots,U_n\stackrel{iid}{\sim} (1-\epsilon)N(0,1)+\frac{\epsilon}{2}N(-\sqrt{2}\|\theta\|,1)+\frac{\epsilon}{2}N(\sqrt{2}\|\theta\|,1).\end{aligned}$$ In what follows, we refer to (\[eq:equal-sparse-null\])-(\[eq:equal-sparse-alt\]) (and any such Bayesian version of the problem) as a sparse mixture detection problem. There are two noticeable differences between (\[eq:equal-sparse-alt\]) and the distribution of $\{({\widetilde}{X}_i-{\widetilde}{Y}_i)/\sqrt{2}\}_{1\leq i\leq n}$ under $H_1$ in (\[eq:problem\]):
1. The number of non-null signals in (\[eq:equal-sparse-alt\]) is a binomial random variable, while it is deterministic in (\[eq:problem\]);
2. The probabilities that a non-null signal is from $N(\sqrt{2}\|\theta\|,1)$ and from $N(-\sqrt{2}\|\theta\|,1)$ are equal in (\[eq:equal-sparse-alt\]), while there is no restriction on how many non-null signals follow either of the two distributions in (\[eq:problem\]).
However, these differences are inconsequential as long as our focus is on the phase diagrams of these testing problems with the calibration we now introduce.
For either the hypothesis testing problem (\[eq:equal-sparse-null\])-(\[eq:equal-sparse-alt\]) or (\[eq:problem\]) with $\|\theta\| = \|\eta\|$, we introduce the calibration $$\epsilon=n^{-\beta}\quad \mbox{and} \quad {\sqrt{2}}\,\|\theta\|=\sqrt{ {2} r\log n}. \label{eq:equal-cali}$$ For (\[eq:equal-sparse-null\])-(\[eq:equal-sparse-alt\])[^1], it was proved in [@ingster1997some; @ingster2012nonparametric] that the likelihood ratio test is consistent when $\beta<\beta_{\rm IDJ}^*(r)$[^2] and no test is consistent when $\beta>\beta_{\rm IDJ}^*(r)$, where the threshold function is $$\beta_{\rm IDJ}^*(r)=\begin{cases}
\frac{1}{2}+r, & 0<r\leq \frac{1}{4}, \\
1-(1-\sqrt{r})_+^2, & r>\frac{1}{4}.
\end{cases}\label{eq:equal-thresh}$$ Note that $\beta<\beta^*_{\rm IDJ}(r)$ is equivalent to (\[eq:r-beta-IDJ\]). Moreover, Donoho and Jin [@donoho2004higher] proposed a higher-criticism (HC) test that rejects $H_0$ when $$\sup_{t>0}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{|U_i|^2> t}\right\}}}}-n\mathbb{P}(\chi_1^2>t)\right|}{\sqrt{n\mathbb{P}(\chi_1^2>t)(1-\mathbb{P}(\chi_1^2>t))}}>\sqrt{2(1+\delta)\log\log n},$$ where $\chi_m^2$ denotes a chi-square distribution with $m$ degrees of freedom and $\delta>0$ is some arbitrary fixed constant. They proved that the HC test adaptively achieves consistency when $\beta<\beta_{\rm IDJ}^*(r)$. We refer interested readers to [@donoho2015higher; @jin2016rare] for more discussions on HC tests.
#### Result for testing equivalence of clustering
We now turn to (\[eq:problem\]). We need to slightly modify the HC test to accommodate the possibility of label switching in the clustering context. Define $$\begin{aligned}
T_n^- &=& \sup_{t>0}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{|{\widetilde}{X}_i-{\widetilde}{Y}_i|^2/2> t}\right\}}}}-n\mathbb{P}(\chi_1^2>t)\right|}{\sqrt{n\mathbb{P}(\chi_1^2>t)(1-\mathbb{P}(\chi_1^2>t))}}, \\
T_n^+ &=& \sup_{t>0}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{|{\widetilde}{X}_i+{\widetilde}{Y}_i|^2/2> t}\right\}}}}-n\mathbb{P}(\chi_1^2>t)\right|}{\sqrt{n\mathbb{P}(\chi_1^2>t)(1-\mathbb{P}(\chi_1^2>t))}}.\end{aligned}$$ Based on these two statistics, we define $$\psi={{\mathbf{1}_{\left\{{T_n^-\wedge T_n^+>\sqrt{2(1+\delta)\log\log n}}\right\}}}}, \label{eq:equal-test-diff}$$ where $\delta>0$ is an arbitrary fixed constant. Taking the minimum of $T_n^+$ and $T_n^-$ makes the test invariant to label switching.
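For concreteness, the suprema above can be evaluated numerically by scanning the observed squared values, using $\mathbb{P}(\chi_1^2>t)=\operatorname{erfc}(\sqrt{t/2})$. The following is only an illustrative sketch (not part of the formal development): it scans the statistic at the data points, assumes no ties, and requires $n$ large enough that $\log\log n>0$; here x and y hold the projected data from (\[eq:proj-1d\]).

#include <algorithm>
#include <cmath>
#include <vector>

// P(chi^2_1 > t) = 2(1 - Phi(sqrt(t))) = erfc(sqrt(t/2)).
double chi1_sf(double t) { return std::erfc(std::sqrt(t / 2.0)); }

// HC statistic sup_t |#{v_i > t} - n P(chi^2_1 > t)| / sqrt(n P (1 - P)),
// scanned at the observed values v_i (a careful implementation would also
// examine both sides of each jump of the empirical process).
double hc(std::vector<double> v)
{
    const std::size_t n = v.size();
    std::sort(v.begin(), v.end());
    double sup = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double p = chi1_sf(v[i]);
        if (p <= 0.0 || p >= 1.0) continue;
        double count = static_cast<double>(n - i - 1); // #{v_j > v_i}, assuming no ties
        double z = std::fabs(count - n * p) / std::sqrt(n * p * (1.0 - p));
        sup = std::max(sup, z);
    }
    return sup;
}

// The test psi: reject when min(T_n^-, T_n^+) exceeds sqrt(2(1+delta) log log n).
bool reject(const std::vector<double>& x, const std::vector<double>& y, double delta)
{
    const std::size_t n = x.size();
    std::vector<double> diff(n), sum(n);
    for (std::size_t i = 0; i < n; ++i) {
        diff[i] = (x[i] - y[i]) * (x[i] - y[i]) / 2.0; // |x_i - y_i|^2 / 2
        sum[i]  = (x[i] + y[i]) * (x[i] + y[i]) / 2.0; // |x_i + y_i|^2 / 2
    }
    double thresh = std::sqrt(2.0 * (1.0 + delta)
                              * std::log(std::log(static_cast<double>(n))));
    return std::min(hc(diff), hc(sum)) > thresh;
}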
\[prop:equal-diff\] For testing (\[eq:problem\]) with the assumption that $\|\theta\|=\|\eta\|$ and the calibration in (\[eq:equal-cali\]), the test (\[eq:equal-test-diff\]) satisfies $\lim_{n\rightarrow\infty}R_n(\psi,\theta,\eta,\epsilon)=0$ as long as $\beta<\beta_{\rm IDJ}^*(r)$.
Proposition \[prop:equal-diff\] shows that the test (\[eq:equal-test-diff\]) consistently distinguishes two clustering structures under the same condition that implies consistency in the sparse mixture detection problem (\[eq:equal-sparse-null\])-(\[eq:equal-sparse-alt\]). This being said, it is not clear at this point whether $\beta^*_{\rm IDJ}(r)$ is the detection boundary for (\[eq:problem\]) under the equal SNR assumption and the calibration (\[eq:equal-cali\]); if it were, no consistent test could exist when $\beta > \beta^*_{\rm IDJ}(r)$.
\[rem:est\] Another straightforward approach to testing (\[eq:problem\]) is to first estimate $z$ and $\sigma$ and then reject $H_0$ if the two estimators are not sufficiently close. Let ${\widehat}{z}$ and ${\widehat}{\sigma}$ be minimax optimal estimators of $z$ and $\sigma$ that satisfy the error bound (\[eq:yu-lu-diao\]). A natural test is then $\psi_{\rm estimation}={{\mathbf{1}_{\left\{{\ell({\widehat}{z},{\widehat}{\sigma})>\epsilon/2}\right\}}}}$. It can be shown that $\lim_{n\rightarrow\infty}R_n(\psi_{\rm estimation},\theta,\eta,\epsilon)=0$ when $\beta<r/2$ under the calibration (\[eq:equal-cali\]). Compared with the condition $\beta<\beta_{\rm IDJ}^*(r)$ required by the test (\[eq:equal-test-diff\]), $\psi_{\rm estimation}$ requires a stronger SNR to achieve consistency.
The Lost Information
--------------------
The natural follow-up question is whether the condition $\beta<\beta_{\rm IDJ}^*(r)$ in Proposition \[prop:equal-diff\] is necessary for consistently testing (\[eq:problem\]) with the equal SNR assumption and the calibration (\[eq:equal-cali\]). In order to address this lower bound question, let us continue to suppose $\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}\leq \frac{1}{2}$ so that we can ignore label switching temporarily. A key observation is that by reducing the data from $({\widetilde}{X}_i,{\widetilde}{Y}_i)$ to $({\widetilde}{X}_i-{\widetilde}{Y}_i)/{\sqrt{2}}$, we have thrown away all the information in $({\widetilde}{X}_i+{\widetilde}{Y}_i)/{\sqrt{2}}$. Therefore, we now study the sequence $\{({\widetilde}{X}_i+{\widetilde}{Y}_i)/\sqrt{2}\}_{1\leq i\leq n}$.
We note that whether $z_i=\sigma_i$ not only changes the distribution of $({\widetilde}{X}_i-{\widetilde}{Y}_i) / {\sqrt{2}}$, but also the distribution of $({\widetilde}{X}_i+{\widetilde}{Y}_i) / {\sqrt{2}}$. In fact, we have $$\frac{1}{\sqrt{2}}({\widetilde}{X}_i+{\widetilde}{Y}_i)\sim \begin{cases}
N(\pm\sqrt{2}\|\theta\|,1), & z_i=\sigma_i, \\
N(0,1), & z_i\neq \sigma_i.
\end{cases}$$ Since there is at least [an]{} $\epsilon$ fraction of clustering labels that do not match, [a natural corresponding sparse mixture detection problem is the following:]{} $$\begin{aligned}
\label{eq:equal-sum-null} H_0: && V_1,\cdots,V_n\stackrel{iid}{\sim} \frac{1}{2}N(-\sqrt{2}\|\theta\|,1)+\frac{1}{2}N(\sqrt{2}\|\theta\|,1), \\
\label{eq:equal-sum-alt} H_1: && V_1,\cdots,V_n\stackrel{iid}{\sim} \frac{1-\epsilon}{2}N(-\sqrt{2}\|\theta\|,1)+\frac{1-\epsilon}{2}N(\sqrt{2}\|\theta\|,1) + \epsilon N(0,1).\end{aligned}$$ Compared with (\[eq:equal-sparse-null\])-(\[eq:equal-sparse-alt\]), the roles of $N(0,1)$ and $\frac{1}{2}N(-\sqrt{2}\|\theta\|,1)+\frac{1}{2}N(\sqrt{2}\|\theta\|,1)$ are switched in (\[eq:equal-sum-null\])-(\[eq:equal-sum-alt\]). To the best of our knowledge, the testing problem (\[eq:equal-sum-null\])-(\[eq:equal-sum-alt\]) has not been studied in the literature before. With the same calibration (\[eq:equal-cali\]), its fundamental limit is given by the following theorem.
\[thm:V-equal\] Consider testing (\[eq:equal-sum-null\])-(\[eq:equal-sum-alt\]) with calibration (\[eq:equal-cali\]). Define $$\bar{\beta}^*(r)=1\wedge\frac{r+1}{2}. \label{eq:threshold-V-equal}$$ When $\beta<\bar{\beta}^*(r)$, the likelihood ratio test is consistent. When $\beta>\bar{\beta}^*(r)$, no test is consistent.
Theorem \[thm:V-equal\] shows that the optimal threshold (in terms of the calibration (\[eq:equal-cali\])) for the testing problem (\[eq:equal-sum-null\])-(\[eq:equal-sum-alt\]) is $\bar{\beta}^*(r)$. It is easy to check that $$\bar{\beta}^*(r)\leq\beta_{\rm IDJ}^*(r),
\qquad \mbox{for all $r>0$}.$$ This indicates that the sequence $\{({\widetilde}{X}_i+{\widetilde}{Y}_i)/\sqrt{2}\}_{1\leq i\leq n}$ does contain information, but not as much as that in $\{({\widetilde}{X}_i-{\widetilde}{Y}_i)/\sqrt{2}\}_{1\leq i\leq n}$. Similar to (\[eq:equal-test-diff\]), we can also design an HC-type test motivated by (\[eq:equal-sum-null\])-(\[eq:equal-sum-alt\]). Define $$\begin{aligned}
\nonumber \bar{T}_n^+ &=& \sup_{t>0}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{({\widetilde}{X}_i+{\widetilde}{Y}_i)^2/2\leq t}\right\}}}}-\mathbb{P}(\chi_{1,2\|\theta\|^2}^2\leq t)\right|}{\sqrt{n\mathbb{P}(\chi_{1,2\|\theta\|^2}^2\leq t)(1-\mathbb{P}(\chi_{1,2\|\theta\|^2}^2\leq t))}}, \\
\nonumber \bar{T}_n^- &=& \sup_{t>0}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{({\widetilde}{X}_i-{\widetilde}{Y}_i)^2/2\leq t}\right\}}}}-\mathbb{P}(\chi_{1,2\|\theta\|^2}^2\leq t)\right|}{\sqrt{n\mathbb{P}(\chi_{1,2\|\theta\|^2}^2\leq t)(1-\mathbb{P}(\chi_{1,2\|\theta\|^2}^2\leq t) )}}.\end{aligned}$$ In addition to $\bar{T}_n^+$, we need $\bar{T}_n^-$ to accommodate the possibility of $\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}> \frac{1}{2}$. The overall test for our original problem is then $$\bar{\psi}={{\mathbf{1}_{\left\{{\bar{T}_n^-\wedge \bar{T}_n^+>\sqrt{2(1+\delta)\log\log n}}\right\}}}}, \label{eq:equal-test-sum}$$ where $\delta>0$ is an arbitrary fixed constant.
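A minimal Python sketch of $\bar{\psi}$ follows the same pattern as before, except that it counts small values against the noncentral chi-square CDF; the helper and argument names are our own, and we again evaluate the supremum at the observed order statistics.

```python
import numpy as np
from scipy.stats import ncx2

def hc_lower_tail(w, cdf):
    """HC-type statistic counting small values:
    sup_t |#{i : w_i <= t} - n*F(t)| / sqrt(n*F(t)*(1-F(t)))."""
    n = len(w)
    u = np.sort(cdf(np.asarray(w)))                # F(w_(1)) <= ... <= F(w_(n))
    u = np.clip(u, 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return np.max(np.sqrt(n) * np.abs(i / n - u) / np.sqrt(u * (1 - u)))

def psi_bar(X_tilde, Y_tilde, theta_norm, delta=0.1):
    """The test bar-psi in (eq:equal-test-sum); the null CDF is chi^2_{1, 2||theta||^2}."""
    n = len(X_tilde)
    cdf = lambda w: ncx2.cdf(w, df=1, nc=2 * theta_norm ** 2)
    T_plus = hc_lower_tail((X_tilde + Y_tilde) ** 2 / 2, cdf)
    T_minus = hc_lower_tail((X_tilde - Y_tilde) ** 2 / 2, cdf)
    return min(T_minus, T_plus) > np.sqrt(2 * (1 + delta) * np.log(np.log(n)))
```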
\[thm:equal-sum\] For testing (\[eq:problem\]) with the assumption that $\|\theta\|=\|\eta\|$ and the calibration in (\[eq:equal-cali\]), the test (\[eq:equal-test-sum\]) satisfies $\lim_{n\rightarrow\infty}R_n(\bar{\psi},\theta,\eta,\epsilon)=0$ as long as $\beta<\bar{\beta}^*(r)$.
Combining the Two Views {#sec:combine-equal}
-----------------------
Proposition \[prop:equal-diff\] and Theorem \[thm:equal-sum\] show that the original testing problem (\[eq:problem\]) is connected to both (\[eq:equal-sparse-null\])-(\[eq:equal-sparse-alt\]) and (\[eq:equal-sum-null\])-(\[eq:equal-sum-alt\]). The two views are complementary. Both are non-trivial and lead to tests for the original problem (\[eq:problem\]) that achieve consistency under appropriate conditions. However, to achieve optimality in the original testing problem (\[eq:problem\]) under the equal SNR assumption with the calibration (\[eq:equal-cali\]), we need to combine the two views. In what follows, we first explain how this can be done in sparse mixture detection. This is followed by our main result for testing equivalence of clustering as in (\[eq:problem\]) with the equal SNR assumption. Interestingly, @tony2019covariate discovered a similar phenomenon, that one achieves additional power by using a complementary sequence, in a different context, namely two-sample multiple testing.
#### Sparse mixture detection
We now study the combination of the two views (\[eq:equal-sparse-null\])-(\[eq:equal-sparse-alt\]) and (\[eq:equal-sum-null\])-(\[eq:equal-sum-alt\]), which can be formulated as testing $$\begin{aligned}
\label{eq:equal-comb-null} H_0:~& (U_i,V_i)\stackrel{iid}{\sim} \frac{1}{2}N(0,1)\otimes N(-\sqrt{2}\|\theta\|,1)+\frac{1}{2}N(0,1)\otimes N(\sqrt{2}\|\theta\|,1),\,\, i\in[n], \,\, \mbox{vs.} \\
\label{eq:equal-comb-alt} H_1:~& (U_i,V_i)\stackrel{iid}{\sim} \frac{1-\epsilon}{2}N(0,1)\otimes N(-\sqrt{2}\|\theta\|,1)+\frac{1-\epsilon}{2}N(0,1)\otimes N(\sqrt{2}\|\theta\|,1) \\
\nonumber & \quad\quad\qquad+\frac{\epsilon}{2}N(-\sqrt{2}\|\theta\|,1)\otimes N(0,1)+\frac{\epsilon}{2}N(\sqrt{2}\|\theta\|,1)\otimes N(0,1),\,\, i\in[n].\end{aligned}$$ The thresholds $\beta_{\rm IDJ}^*(r)$ and $\bar{\beta}^*(r)$ can now be viewed as detection boundaries for testing (\[eq:equal-comb-null\])-(\[eq:equal-comb-alt\]) when only $\{U_i\}_{1\leq i\leq n}$ and $\{V_i\}_{1\leq i\leq n}$ are used, respectively. The two components $U_i$ and $V_i$ behave very differently under the null and the alternative. The value of $|U_i|$ tends to be smaller under $H_0$ and larger under $H_1$, while the value of $|V_i|$ behaves in the opposite way. This motivates us to combine the two pieces of information by working with $|U_i|-|V_i|$, which tends to be smaller under $H_0$ and larger under $H_1$.
![An example of the distributions of $|U_i|$, $|V_i|$, and $|U_i|-|V_i|$ under $H_0$ and $H_1$. Each histogram plot is overlapped with the red density curve of the distribution under the other hypothesis.[]{data-label="fig:histogram"}](histogram){width="\textwidth"}
We illustrate this combination of the information in Figure \[fig:histogram\]. We observe that the differences between the distributions of $|U_i|$ and $|V_i|$ under the two hypotheses are greatly magnified by those of $|U_i|-|V_i|$. Since there is on average an $\epsilon$ fraction of non-nulls under $H_1$, we may reject $H_0$ if $\sum_{i=1}^n{{\mathbf{1}_{\left\{{|U_i|-|V_i|>t}\right\}}}}$ is above some threshold. This intuition motivates us to consider the following HC-type test. Define the survival function $$S_{\|\theta\|}(t)=\mathbb{P}_{(U^2,V^2)\sim \chi_1^2\otimes \chi_{1,2\|\theta\|^2}^2}\left(|U|-|V|>t\right).$$ We reject $H_0$ when $$\sup_{t\in\mathbb{R}}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{|U_i|-|V_i|>t}\right\}}}}-nS_{\|\theta\|}(t)\right|}{\sqrt{nS_{\|\theta\|}(t)(1-S_{\|\theta\|}(t))}}>\sqrt{2(1+\delta)\log\log n},\label{eq:HC-comb}$$ where $\delta>0$ is an arbitrary fixed constant.
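The survival function $S_{\|\theta\|}(t)$ has no simple closed form, but it is easy to approximate by Monte Carlo. The sketch below (our own, with hypothetical names) estimates $S_{\|\theta\|}$ once from a large null sample and evaluates the statistic in (\[eq:HC-comb\]) at the observed values of $|U_i|-|V_i|$, where the supremum is attained.

```python
import numpy as np

def hc_comb(U, V, theta_norm, delta=0.1, mc=10**6, seed=0):
    """The HC-type test (eq:HC-comb). S_{||theta||}(t) is estimated by Monte Carlo:
    under the null, U ~ N(0,1) and V ~ N(sqrt(2)*||theta||, 1) independently, so that
    U^2 ~ chi^2_1 and V^2 ~ chi^2_{1, 2||theta||^2}."""
    rng = np.random.default_rng(seed)
    n = len(U)
    w = np.sort(np.abs(U) - np.abs(V))                   # observed |U_i| - |V_i|, ascending
    U0 = rng.standard_normal(mc)
    V0 = rng.standard_normal(mc) + np.sqrt(2) * theta_norm
    null = np.sort(np.abs(U0) - np.abs(V0))              # null sample of |U| - |V|
    S = 1 - np.searchsorted(null, w, side="right") / mc  # estimated S_{||theta||}(w_(i))
    S = np.clip(S, 1 / mc, 1 - 1 / mc)
    counts = n - np.arange(1, n + 1)                     # #{j : w_j > w_(i)}, ties ignored
    stat = np.max(np.abs(counts - n * S) / np.sqrt(n * S * (1 - S)))
    return stat > np.sqrt(2 * (1 + delta) * np.log(np.log(n)))
```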
\[thm:U-V-equal\] Consider testing (\[eq:equal-comb-null\])-(\[eq:equal-comb-alt\]) with calibration (\[eq:equal-cali\]). Define $$\beta^*(r)=\begin{cases}
\frac{1}{2}({1+3r}), & 0<r\leq \frac{1}{5}, \\
\sqrt{1-(1-2r)_+^2}, & r>\frac{1}{5}.
\end{cases}\label{eq:freestyle}$$ When $\beta<\beta^*(r)$, the likelihood ratio test and the HC-type test (\[eq:HC-comb\]) are consistent. When $\beta>\beta^*(r)$, no test is consistent.
We plot the three threshold functions (a.k.a. detection boundaries) $\bar{\beta}^*(r)$ (red), $\beta_{\rm IDJ}^*(r)$ (orange), and $\beta^*(r)$ (blue) in Figure \[fig:phase-equal\].
![Comparison of three detection boundaries.[]{data-label="fig:phase-equal"}](phase-equal){width="60.00000%"}
Since $\bar{\beta}^*(r)\leq\beta_{\rm IDJ}^*(r)\leq\beta^*(r)$ for all $r>0$, in view of the discussion following (\[eq:equal-comb-null\])-(\[eq:equal-comb-alt\]) we can conclude that pooling the information in $\{U_i\}_{1\leq i\leq n}$ and $\{V_i\}_{1\leq i\leq n}$ leads to a more powerful test than using either single sequence.
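For reference, a short plotting sketch for Figure \[fig:phase-equal\] is given below. The formulas for $\bar{\beta}^*(r)$ and $\beta^*(r)$ are transcribed from (\[eq:threshold-V-equal\]) and (\[eq:freestyle\]); for $\beta^*_{\rm IDJ}(r)$ we use the standard form of the Ingster–Donoho–Jin boundary, which we take as an assumption here since its definition appears earlier in the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def beta_bar(r):                      # eq:threshold-V-equal
    return np.minimum(1.0, (r + 1) / 2)

def beta_idj(r):                      # standard IDJ boundary (assumed form)
    return np.where(r <= 1/4, 1/2 + r,
                    np.where(r <= 1, 1 - (1 - np.sqrt(r))**2, 1.0))

def beta_star(r):                     # eq:freestyle
    return np.where(r <= 1/5, (1 + 3*r) / 2,
                    np.sqrt(1 - np.clip(1 - 2*r, 0, None)**2))

r = np.linspace(1e-3, 1.2, 500)
plt.plot(r, beta_bar(r), "r", label=r"$\bar\beta^*(r)$")
plt.plot(r, beta_idj(r), "orange", label=r"$\beta^*_{\rm IDJ}(r)$")
plt.plot(r, beta_star(r), "b", label=r"$\beta^*(r)$")
plt.xlabel("r"); plt.ylabel("beta"); plt.legend(); plt.show()
```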
#### Testing equivalence of clustering
We are now in a position to show that $\beta^*(r)$ in (\[eq:freestyle\]) is also the detection boundary for testing (\[eq:problem\]) under the equal SNR assumption and the calibration (\[eq:equal-cali\]). Motivated by (\[eq:HC-comb\]) and taking into account possible label switching, we define $$\begin{aligned}
\check{T}_n^- &=& \sup_{t\in\mathbb{R}}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{|{\widetilde}{X}_i-{\widetilde}{Y}_i|-|{\widetilde}{X}_i+{\widetilde}{Y}_i|>\sqrt{2}t}\right\}}}}-nS_{\|\theta\|}(t)\right|}{\sqrt{nS_{\|\theta\|}(t)(1-S_{\|\theta\|}(t))}}, \\
\check{T}_n^+ &=& \sup_{t\in\mathbb{R}}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{|{\widetilde}{X}_i+{\widetilde}{Y}_i|-|{\widetilde}{X}_i-{\widetilde}{Y}_i|>\sqrt{2}t}\right\}}}}-nS_{\|\theta\|}(t)\right|}{\sqrt{nS_{\|\theta\|}(t)(1-S_{\|\theta\|}(t))}},\end{aligned}$$ and $$\check{\psi}={{\mathbf{1}_{\left\{{\check{T}_n^-\wedge \check{T}_n^+>\sqrt{2(1+\delta)\log\log n}}\right\}}}}, \label{eq:equal-test-comb}$$ where $\delta>0$ is an arbitrary fixed constant.
\[thm:main-equal\] For testing (\[eq:problem\]) with the assumption that $\|\theta\|=\|\eta\|$ and the calibration in (\[eq:equal-cali\]), the test (\[eq:equal-test-comb\]) satisfies $\lim_{n\rightarrow\infty}R_n(\check{\psi},\theta,\eta,\epsilon)=0$ as long as $\beta<{\beta}^*(r)$. Moreover, we have $\liminf_{n\rightarrow\infty}R_n(\theta,\eta,\epsilon)>0$, that is, no test is consistent, when $\beta>{\beta}^*(r)$.
We conclude this section with three remarks on Theorem \[thm:main-equal\]. First, the theorem shows that the two-dimensional sparse mixture testing problem (\[eq:equal-comb-null\])-(\[eq:equal-comb-alt\]) contains the mathematical essence of the original problem (\[eq:problem\]) of testing equivalence of clustering, because the two problems share the same detection boundary. In addition, it shows that either the view of (\[eq:equal-sparse-null\])-(\[eq:equal-sparse-alt\]) or that of (\[eq:equal-sum-null\])-(\[eq:equal-sum-alt\]) alone results in a suboptimal solution (see Figure \[fig:phase-equal\]); the testing problem (\[eq:problem\]) is thus fundamentally different from the sparse mixture detection problem (\[eq:equal-sparse-null\])-(\[eq:equal-sparse-alt\]) that has been well studied in the literature. Furthermore, it suffices to work with the one-dimensional projected datasets $\{(\theta^TX_i/\|\theta\|,\eta^TY_i/\|\eta\|)\}_{1\leq i\leq n}$ when constructing tests, as the upper and the lower bounds match in Theorem \[thm:main-equal\].
The General Phase Diagram {#sec:general}
=========================
In this section, we study the general case of testing (\[eq:problem\]) where $\|\theta\|$ and $\|\eta\|$ are not necessarily equal. This is a more complicated problem than the equal SNR case studied in Section \[sec:balanced\]. For the general case, we adopt the following calibration: $$\epsilon=n^{-\beta},\quad \frac{2\|\theta\|\|\eta\|}{\sqrt{\|\theta\|^2+\|\eta\|^2}} = \sqrt{2r\log n}, \quad \frac{|\|\theta\|^2-\|\eta\|^2|}{\sqrt{\|\theta\|^2+\|\eta\|^2}} = \sqrt{2s\log n}. \label{eq:general-cali}$$ With this calibration, $(r,s)$ can take any value in $(0,\infty)\times [0,\infty)$. Although there are other ways to parametrize $\|\theta\|$ and $\|\eta\|$, we find (\[eq:general-cali\]) convenient and interpretable. In (\[eq:general-cali\]), $r$ characterizes the overall signal strength and $s$ quantifies the level of difference between the SNRs of the two samples. When $s=0$, (\[eq:general-cali\]) reduces to (\[eq:equal-cali\]). With this natural reduction, all results in Section \[sec:balanced\] can be obtained by setting $s = 0$ in the results for the general case, which we derive in this section. Furthermore, the following expressions can be derived from (\[eq:general-cali\]): $$\begin{aligned}
{\|\theta\|^2+\|\eta\|^2} &=& {2(r+s)\log n}, \\
\|\theta\|^2\vee\|\eta\|^2 &=& \left(r+s+\sqrt{s}\sqrt{r+s}\right)\log n, \\
\label{eq:min-sig} \|\theta\|^2\wedge\|\eta\|^2 &=& \left(r+s-\sqrt{s}\sqrt{r+s}\right)\log n.\end{aligned}$$
A Related Sparse Mixture Detection Problem
------------------------------------------
With ${\widetilde}{X}_i\sim N(z_i\|\theta\|,1)$ and ${\widetilde}{Y}_i\sim N(\sigma_i\|\eta\|,1)$ as before, it is natural to consider $$\frac{\|\eta\|{\widetilde}{X}_i-\|\theta\|{\widetilde}{Y}_i}{\sqrt{\|\theta\|^2+\|\eta\|^2}}\sim N\left(\frac{\|\theta\|\|\eta\|(z_i-\sigma_i)}{\sqrt{\|\theta\|^2+\|\eta\|^2}},1\right). \label{eq:seq1}$$ Moreover, to avoid information loss, we also consider the following complementary sequence to (\[eq:seq1\]), $$\frac{\|\theta\|{\widetilde}{X}_i+\|\eta\|{\widetilde}{Y}_i}{\sqrt{\|\theta\|^2+\|\eta\|^2}}\sim N\left(\frac{\|\theta\|^2z_i+\|\eta\|^2\sigma_i}{\sqrt{\|\theta\|^2+\|\eta\|^2}},1\right). \label{eq:seq2}$$ The sequences (\[eq:seq1\]) and (\[eq:seq2\]) are mutually independent. Since $({\widetilde}{X}_i,{\widetilde}{Y}_i)$ and $\left(\frac{\|\eta\|{\widetilde}{X}_i-\|\theta\|{\widetilde}{Y}_i}{\sqrt{\|\theta\|^2+\|\eta\|^2}},\frac{\|\theta\|{\widetilde}{X}_i+\|\eta\|{\widetilde}{Y}_i}{\sqrt{\|\theta\|^2+\|\eta\|^2}}\right)$ are in one-to-one correspondence, there is no information loss.
Without loss of generality[^3], let us further assume $\|\theta\|\geq\|\eta\|$. We note that when $z_i=\sigma_i$, the two sequences have means $0$ and $\pm\sqrt{\|\theta\|^2+\|\eta\|^2}$, respectively. When $z_i\neq \sigma_i$, they have means $\pm \frac{2\|\theta\|\|\eta\|}{\sqrt{\|\theta\|^2+\|\eta\|^2}}$ and $\pm \frac{|\|\theta\|^2-\|\eta\|^2|}{\sqrt{\|\theta\|^2+\|\eta\|^2}}$, respectively. Therefore, a natural sparse mixture detection problem corresponding to (\[eq:problem\]) is $$\begin{aligned}
\label{eq:general-comb-null} H_0: \quad (U_i,V_i) &\stackrel{iid}{\sim}& \frac{1}{2}N(0,1)\otimes N(-\sqrt{2(r+s)\log n},1) \\
\nonumber && +\frac{1}{2}N(0,1)\otimes N(\sqrt{2(r+s)\log n},1),\quad i\in[n], \\
\label{eq:general-comb-alt} H_1: \quad (U_i,V_i) &\stackrel{iid}{\sim}& \frac{1-\epsilon}{2}N(0,1)\otimes N(-\sqrt{2(r+s)\log n},1) \\
\nonumber && +\frac{1-\epsilon}{2}N(0,1)\otimes N(\sqrt{2(r+s)\log n},1) \\
\nonumber && +\frac{\epsilon}{2}N(\sqrt{2r\log n},1)\otimes N(\sqrt{2s\log n},1) \\
\nonumber && +\frac{\epsilon}{2}N(-\sqrt{2r\log n},1)\otimes N(-\sqrt{2s\log n},1), \quad i\in[n].\end{aligned}$$ When $s=0$, the testing problem (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]) reduces to (\[eq:equal-comb-null\])-(\[eq:equal-comb-alt\]).
Similar to Section \[sec:balanced\], as a first step, we derive the detection boundaries of tests that only use $\{U_i\}_{1\leq i\leq n}$ or $\{V_i\}_{1\leq i\leq n}$.
\[thm:general-separate\] Consider testing (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]) with $\epsilon=n^{-\beta}$. Define $$\bar{\beta}^*(r,s)=\begin{cases}
\frac{1}{2}+r-2\sqrt{s}(\sqrt{r+s}-\sqrt{s}), & 3s>r {~and~} (\sqrt{r+s}-\sqrt{s})^2\leq\frac{1}{4}, \\
\frac{1+r-s}{2}, & 3s\leq r {~and~} r+s\leq 1, \\
r-2(\sqrt{r+s}-\sqrt{s})(\sqrt{r+s}-1), & r+s > 1 {~and~} \frac{1}{4}<(\sqrt{r+s}-\sqrt{s})^2\leq 1, \\
1, & (\sqrt{r+s}-\sqrt{s})^2 > 1.
\end{cases}$$ For any fixed constant $\delta>0$, we have the following two conclusions:
1. When $\beta<\beta^*_{\rm IDJ}(r)$, the test with rejection region $$\sup_{t>0}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{|U_i|^2>t}\right\}}}}-n\mathbb{P}(\chi_1^2>t)\right|}{\sqrt{n\mathbb{P}(\chi_1^2>t) (1-\mathbb{P}(\chi_1^2>t) )}}>\sqrt{2(1+\delta)\log\log n}$$ is consistent. When $\beta>\beta^*_{\rm IDJ}(r)$, no test that only uses $\{U_i\}_{1\leq i\leq n}$ is consistent.
2. When $\beta<\bar{\beta}^*(r,s)$, the test with rejection region $$\sup_{t>0}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{|V_i|^2\leq t}\right\}}}}-n\mathbb{P}(\chi_{1,2(r+s)\log n}^2\leq t )\right|}{\sqrt{n\mathbb{P}(\chi_{1,2(r+s)\log n}^2\leq t ) (1-\mathbb{P}(\chi_{1,2(r+s)\log n}^2\leq t ) )}}>\sqrt{2(1+\delta)\log\log n}$$ is consistent. When $\beta>\bar{\beta}^*(r,s)$, no test that only uses $\{V_i\}_{1\leq i\leq n}$ is consistent.
The first conclusion of Theorem \[thm:general-separate\] is obvious, since the marginal distributions of $\{U_i\}_{1\leq i\leq n}$ under (\[eq:general-comb-null\]) and (\[eq:general-comb-alt\]) are exactly the same as those under (\[eq:equal-sparse-null\]) and (\[eq:equal-sparse-alt\]), respectively. In contrast, the second conclusion shows an intricate behavior of the two-dimensional threshold function $\bar{\beta}^*(r,s)$. We note that $\bar{\beta}^*(r,s)$ can be viewed as an extension of $\bar{\beta}^*(r)$ defined in (\[eq:threshold-V-equal\]) in the sense that setting $s=0$ gives $\bar{\beta}^*(r,0)=\bar{\beta}^*(r)$. The definition of $\bar{\beta}^*(r,s)$ involves four disjoint regions in $(0,\infty) \times [0,\infty)$. When $s=0$, the second and the third cases become degenerate. Moreover, we also have the relation $\bar{\beta}^*(r,s)\leq \bar{\beta}^*(r)$ for all $r,s>0$, which suggests that the testing problem becomes harder as $\|\theta\|$ and $\|\eta\|$ become more different. Finally, as $s\rightarrow\infty$, we have $\bar{\beta}^*(r,s)\rightarrow\frac{1}{2}$.
Which Event Shall We Count? {#sec:which-count}
---------------------------
Now let us try to solve the testing problem (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]) by considering both $\{U_i\}_{1\leq i\leq n}$ and $\{V_i\}_{1\leq i\leq n}$. In order to derive the sharp detection boundary of (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]), and also of the original problem (\[eq:problem\]), we need to first find the optimal testing statistic. By Theorem \[thm:general-separate\], the detection boundary of either single sequence can be achieved by an appropriate HC-type test. For $\{U_i\}_{1\leq i\leq n}$ the test counts the number of large $|U_i|$’s via $\sum_{i=1}^n{{\mathbf{1}_{\left\{{|U_i|^2>t}\right\}}}}$, and for $\{V_i\}_{1\leq i\leq n}$ the corresponding test counts the number of small $|V_i|$’s via $\sum_{i=1}^n{{\mathbf{1}_{\left\{{|V_i|^2\leq t}\right\}}}}$. These tests suggest that for testing (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]) we should count the event that either $|U_i|$ is large or $|V_i|$ is small. When the SNRs are equal, we have used $\sum_{i=1}^n{{\mathbf{1}_{\left\{{|U_i|-|V_i|>t}\right\}}}}$ in Section \[sec:combine-equal\] for this purpose. However, such an event may no longer be appropriate when $\|\theta\|\neq \|\eta\|$.
In order to find the appropriate event to count, we present the following heuristic argument from a more general perspective. Let us consider the following abstract sparse mixture testing problem: $$\begin{aligned}
\label{eq:abstract-null} H_0: && W_1,\cdots,W_n\stackrel{iid}{\sim} P, \quad \mbox{vs.}\\
\label{eq:abstract-alt} H_1: && W_1,\cdots,W_n\stackrel{iid}{\sim} (1-\epsilon)P+\epsilon Q,\end{aligned}$$ where $\epsilon=n^{-\beta}$ for some constant $\beta\in(0,1)$. Then, the general HC-type testing statistic can be written as $$\sup_{A\in\mathcal{A}}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{W_i\in A}\right\}}}}-nP(A)\right|}{\sqrt{nP(A)(1-P(A))}}, \label{eq:HC-abstract}$$ where $\mathcal{A}$ is some collection of events. [As we shall show,]{} the reason to take supreme over [the collection]{} $\mathcal{A}$ is [mostly]{} for the sake of adaptation. When [one has knowledge of $P$ and $Q$,]{} [let us start with]{} the statistic $$T_n(A)=\frac{\sum_{i=1}^n{{\mathbf{1}_{\left\{{W_i\in A}\right\}}}}-nP(A)}{\sqrt{nP(A)(1-P(A))}}.$$ Now the question becomes how to choose $A$. Since the mean and the variance of $T_n(A)$ are $0$ and $1$ under $H_0$, the test ${{\mathbf{1}_{\left\{{|T_n(A)|>c_n}\right\}}}}$ for some slowly diverging sequence $c_n$ will be consistent if $\frac{(\mathbb{E}_{H_1}T_n(A))^2}{\Var_{H_1}(T_n(A))}\rightarrow\infty$ by [applying]{} Chebyshev’s inequality. A direct calculation gives $$\frac{(\mathbb{E}_{H_1}T_n(A))^2}{\Var_{H_1}(T_n(A))}=\frac{\left(n\epsilon(Q(A)-P(A))\right)^2}{n\left((1-\epsilon)P(A)+\epsilon Q(A)\right)\left(1-\left((1-\epsilon)P(A)+\epsilon Q(A)\right)\right)}.$$ By [symmetry of the righthand side]{}, we may consider $(1-\epsilon)P(A)+\epsilon Q(A)\leq \frac{1}{2}$ without loss of generality. This leads to the simplification $$\frac{(\mathbb{E}_{H_1}T_n(A))^2}{\Var_{H_1}(T_n(A))}\asymp \frac{\left(n\epsilon(Q(A)-P(A))\right)^2}{nP(A)+n\epsilon Q(A)}. \label{eq:ratio-mean-var}$$ In order that this ratio tends to infinity, we require either $\frac{(n\epsilon P(A))^2}{n P(A)+n\epsilon Q(A)}\rightarrow\infty$ or $\frac{(n\epsilon Q(A))^2}{n P(A)+n\epsilon Q(A)}\rightarrow\infty$. Suppose $\frac{(n\epsilon P(A))^2}{n P(A)+n\epsilon Q(A)}\rightarrow\infty$ holds, and then we have $n\epsilon^2 P(A)\rightarrow\infty$, which requires $\beta<\frac{1}{2}$, [which is too strong a condition to be of our interest.]{} Therefore, we require $\frac{(n\epsilon Q(A))^2}{n P(A)+n\epsilon Q(A)}\rightarrow\infty$, which can be equivalently written as two conditions $$\frac{n\epsilon^2 Q(A)^2}{P(A)}\rightarrow\infty\quad\text{and}\quad n\epsilon Q(A)\rightarrow\infty.$$ [With the calibration $\epsilon=n^{-\beta}$,]{} these two conditions [are equivalent to]{} $$\beta<\frac{1}{2}+\frac{\log Q(A)}{\log n}+\frac{1}{2}\min\left(1,\frac{\log\frac{1}{P(A)}}{\log n}\right). \label{eq:beta-abstract}$$ To maximize the detection region, we shall consider some event $A$ that makes the righthand side of (\[eq:beta-abstract\]) as large as possible. Since the righthand side of (\[eq:beta-abstract\]) is increasing in $Q(A)$ and decreasing in $P(A)$, the maximum is achieved by $A=\{\frac{dQ}{dP}(W)>t \}$ for some [appropriate choice of]{} $t$ according to the Neyman–Pearson lemma. This fact naturally motivates the choice $$\mathcal{A}=\left\{ \Big\{\frac{dQ}{dP}(W)>t \Big\}: t>0\right\}$$ in (\[eq:HC-abstract\]), [which]{} results in the HC-type statistic $$\sup_{t>0}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{(dQ/dP)(W_i)>t}\right\}}}}-nP((dQ/dP)(W)>t)\right|}{\sqrt{nP((dQ/dP)(W)>t)P((dQ/dP)(W)\leq t)}}. \label{eq:HC-abstract-optimal}$$
Likelihood Ratio Approximation
------------------------------
The heuristic argument in Section \[sec:which-count\] suggests that we use the statistic $\sum_{i=1}^n{{\mathbf{1}_{\left\{{(dQ/dP)(W_i)>t}\right\}}}}$. Specifying $P$ and $Q$ to the setting of (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]), we obtain $$\frac{dQ}{dP}(W_i)=\frac{q(U_i,V_i)}{p(U_i,V_i)},$$ where $$\begin{aligned}
\label{eq:general-P-den} p(u,v) &=& \frac{1}{2}\phi(u)\phi(v-\sqrt{2(r+s)\log n}) + \frac{1}{2}\phi(u)\phi(v+\sqrt{2(r+s)\log n}), \\
\label{eq:general-Q-den} q(u,v) &=& \frac{1}{2}\phi(u-\sqrt{2r\log n})\phi(v-\sqrt{2s\log n}) \\
\nonumber && + \frac{1}{2}\phi(u+\sqrt{2r\log n})\phi(v+\sqrt{2s\log n}).\end{aligned}$$ Here $\phi(\cdot)$ is the probability density function of $N(0,1)$. The following key lemma greatly simplifies the calculation of the likelihood ratio statistic.
\[lem:LR-approx\] For $p(u,v)$ and $q(u,v)$ defined above, we have $$\sup_{r,s>0}\sup_{u,v\in\mathbb{R}}\left|\log\frac{q(u,v)}{p(u,v)}-\sqrt{2\log n}\left(|\sqrt{r}u+\sqrt{s}v|-\sqrt{r+s}|v|\right)\right|\leq \log 2.$$
By Lemma \[lem:LR-approx\], $\sqrt{2\log n}\left(|\sqrt{r}u+\sqrt{s}v|-\sqrt{r+s}|v|\right)$ is the leading term of $\log\frac{q(u,v)}{p(u,v)}$ as $n\rightarrow\infty$. Therefore, from an asymptotic viewpoint, we can simply focus on the sequence $$\{|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i|\}_{1\leq i\leq n},$$ which combines the information of $\{U_i\}_{1\leq i\leq n}$ and $\{V_i\}_{1\leq i\leq n}$. When $s=0$, it reduces to $\{\sqrt{r}(|U_i|-|V_i|)\}_{1\leq i\leq n}$, which further justifies the optimality of the test (\[eq:HC-comb\]) when $\|\theta\|=\|\eta\|$. As $s\rightarrow\infty$, we have $\sqrt{r+s}-\sqrt{s}=\frac{r}{\sqrt{r+s}+\sqrt{s}}\rightarrow 0$, and it can be shown that the sequence behaves like $\{\sqrt{r}U_i{\mathop{\sf sign}}(V_i)\}_{1\leq i\leq n}$. In other words, asymptotically only the sign information of the sequence $\{V_i\}_{1\leq i\leq n}$ matters as $s\rightarrow\infty$.
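Lemma \[lem:LR-approx\] is also easy to check numerically: after factoring out $\phi(u)\phi(v)$ and using $a^2+b^2=c^2$ for $a=\sqrt{2r\log n}$, $b=\sqrt{2s\log n}$, $c=\sqrt{2(r+s)\log n}$, both $\log q$ and $\log p$ reduce to log-cosh terms, and $|\log 2\cosh x - |x||\leq \log 2$. The following sketch (our own check, not part of the formal proof) evaluates the gap on a grid of $(u,v)$:

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

def lr_gap(r, s, n, grid=np.linspace(-8, 8, 321)):
    """Max over a (u,v) grid of |log(q/p) - sqrt(2 log n)(|sqrt(r)u+sqrt(s)v| - sqrt(r+s)|v|)|;
    Lemma lem:LR-approx asserts this never exceeds log 2."""
    a, b = np.sqrt(2 * r * np.log(n)), np.sqrt(2 * s * np.log(n))
    c = np.sqrt(2 * (r + s) * np.log(n))
    U, V = np.meshgrid(grid, grid)
    logp = logsumexp([norm.logpdf(U) + norm.logpdf(V - c),
                      norm.logpdf(U) + norm.logpdf(V + c)], axis=0) - np.log(2)
    logq = logsumexp([norm.logpdf(U - a) + norm.logpdf(V - b),
                      norm.logpdf(U + a) + norm.logpdf(V + b)], axis=0) - np.log(2)
    lead = np.sqrt(2 * np.log(n)) * (np.abs(np.sqrt(r) * U + np.sqrt(s) * V)
                                     - np.sqrt(r + s) * np.abs(V))
    return np.max(np.abs(logq - logp - lead))

print(lr_gap(0.5, 0.3, 1000))   # should be at most log(2) ~ 0.693
```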
The Three-Dimensional Phase Diagram
-----------------------------------
We now move on to determine the detection boundaries for (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]) and for (\[eq:problem\]) in general.
#### Sparse mixture detection
Consider the sparse mixture detection problem (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]) first. Inspired by Lemma \[lem:LR-approx\], we consider the following HC-type test with rejection region $$\sup_{t\in\mathbb{R}}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i|>t}\right\}}}}-nS_{(r,s)}(t)\right|}{\sqrt{n S_{(r,s)}(t)(1-S_{(r,s)}(t))}}>\sqrt{2(1+\delta)\log\log n}, \label{eq:HC-comb-general}$$ where $\delta>0$ is some arbitrary fixed constant, and $S_{(r,s)}(t)$ is the survival function of $|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i|$ under the null distribution, defined by $$S_{(r,s)}(t)=
\mathbb{P}_{ {H_0}}
\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\right), \label{eq:survival-r-s}$$ where $H_0$ is the null distribution in (\[eq:general-comb-null\]). By Lemma \[lem:LR-approx\] and the heuristic argument in Section \[sec:which-count\], the test statistic in (\[eq:HC-comb-general\]) is asymptotically equivalent to (\[eq:HC-abstract-optimal\]). Indeed, the test with rejection region (\[eq:HC-comb-general\]) achieves the optimal detection boundary of the testing problem (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]), as summarized by the following theorem.
\[thm:general-HC\] Consider testing (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]) with $\epsilon=n^{-\beta}$. Define $$\beta^*(r,s)=\begin{cases}
\frac{1}{2}+2(r+s-\sqrt{s}\sqrt{r+s}), & 3s>r \mathrm{~and~} r+s-\sqrt{s}\sqrt{r+s}\leq \frac{1}{8}, \\
\frac{1}{2}({1+3r-s}), & 3s\leq r \mathrm{~and~} 5r+s\leq 1, \\
2\sqrt{r}\sqrt{1-r-s}, & 5r+s>1, ~\frac{1}{8}<r+s-\sqrt{s}\sqrt{r+s}\leq \frac{1}{2}, \\
& ~~\mathrm{~and~}2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})>r, \\
\big[2\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})} & 5r+s>1, ~\frac{1}{8}<r+s-\sqrt{s}\sqrt{r+s}\leq \frac{1}{2}, \\
~~~ -2(r+s-\sqrt{s}\sqrt{r+s})\big], & ~~\mathrm{~and~} 2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})\leq r, \\
1, & r+s-\sqrt{s}\sqrt{r+s} > \frac{1}{2}.
\end{cases}$$ When $\beta<\beta^*(r,s)$, the test with rejection region (\[eq:HC-comb-general\]) is consistent. When $\beta>\beta^*(r,s)$, no test is consistent.
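For reference, the following is a direct transcription of $\beta^*(r,s)$ into code. It assumes, as asserted by the theorem, that the five cases partition the domain $(0,\infty)\times[0,\infty)$.

```python
import numpy as np

def beta_star(r, s):
    """Detection boundary beta*(r,s) of Theorem thm:general-HC (direct transcription)."""
    m = r + s - np.sqrt(s) * np.sqrt(r + s)        # the quantity r+s - sqrt(s)sqrt(r+s)
    if m > 1 / 2:
        return 1.0
    if 3 * s > r and m <= 1 / 8:
        return 1 / 2 + 2 * m
    if 3 * s <= r and 5 * r + s <= 1:
        return (1 + 3 * r - s) / 2
    # remaining region: 5r+s > 1 and 1/8 < m <= 1/2
    if 2 * (1 - r - s) * m > r:
        return 2 * np.sqrt(r) * np.sqrt(1 - r - s)
    return 2 * np.sqrt(2 * m) - 2 * m
```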
#### Testing equivalence of clustering
We now turn to the original testing problem (\[eq:problem\]). Note that the two sequences (\[eq:seq1\]) and (\[eq:seq2\]) play the same roles as $\{U_i\}_{1\leq i\leq n}$ and $\{V_i\}_{1\leq i\leq n}$ do in sparse mixture detection. In view of the parameterization in (\[eq:general-cali\])-(\[eq:min-sig\]), we define $$\begin{aligned}
C^-(X_i,Y_i,\theta,\eta) &=& |\theta^TX_i-\eta^TY_i|-|\theta^TX_i+\eta^TY_i|,
\label{eq:cminus}
\\
C^+(X_i,Y_i,\theta,\eta) &=& |\theta^TX_i+\eta^TY_i|-|\theta^TX_i-\eta^TY_i|.
\label{eq:cplus}\end{aligned}$$ For testing (\[eq:problem\]), we need both $\{C^-(X_i,Y_i,\theta,\eta)\}_{1\leq i\leq n}$ and $\{C^+(X_i,Y_i,\theta,\eta)\}_{1\leq i\leq n}$ to accommodate the possibility of label switching. Then, the HC-type statistics for testing (\[eq:problem\]) can be defined as $$\begin{aligned}
\label{eq:stat-t-dot--} \dot{T}_n^- &=& \sup_{t\in\mathbb{R}}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{C^-(X_i,Y_i,\theta,\eta)>t\sqrt{2\log n}}\right\}}}}-nS_{(r,s)}(t)\right|}{\sqrt{n S_{(r,s)}(t)(1-S_{(r,s)}(t))}}, \\
\label{eq:stat-t-dot-+} \dot{T}_n^+ &=& \sup_{t\in\mathbb{R}}\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{C^+(X_i,Y_i,\theta,\eta)>t\sqrt{2\log n}}\right\}}}}-nS_{(r,s)}(t)\right|}{\sqrt{n S_{(r,s)}(t)(1-S_{(r,s)}(t))}}.\end{aligned}$$ They lead to the test $$\dot{\psi}={{\mathbf{1}_{\left\{{\dot{T}_n^-\wedge \dot{T}_n^+>\sqrt{2(1+\delta)\log\log n}}\right\}}}}, \label{eq:general-test-comb}$$ for some arbitrary fixed constant $\delta>0$.
\[thm:main-general\] For testing (\[eq:problem\]) with calibration (\[eq:general-cali\]), the test (\[eq:general-test-comb\]) satisfies $\lim_{n\rightarrow\infty}R_n(\dot{\psi},\theta,\eta,\epsilon)=0$ as long as $\beta<{\beta}^*(r,s)$. Moreover, when $\beta>{\beta}^*(r,s)$, we have $\liminf_{n\rightarrow\infty}R_n(\theta,\eta,\epsilon)>0$.
![3D plot of the detection boundary $\beta^*(r,s)$.[]{data-label="fig:3d"}](3d){width="\textwidth"}
With Theorem \[thm:main-general\], we completely characterize the detection boundary of the testing problem (\[eq:problem\]) by the function $\beta^*(r,s)$. To help understand the behavior of $\beta^*(r,s)$, Figure \[fig:3d\] shows its 3D plot from various angles. In addition, we plot the five regions that divide the domain of $\beta^*(r,s)$, that is $(0,\infty)\times [0,\infty)$, on the left panel of Figure \[fig:combined\]. Furthermore, we fix $s$ and study the behavior of $\beta_s^*(r)=\beta^*(r,s)$ as a function of $r$.
![The five regions of the $(r,s)$-plane with the contour of $\beta^*(r,s)$ (Left Panel). The detection boundaries $\beta_s^*(r)=\beta^*(r,s)$ with $s$ fixed (Right Panel). The curve moves to the right as the fixed value of $s$ increases. The five colors of the two plots correspond to the five regions of $\beta^*(r,s)$ in the order of green, blue, cyan, magenta, and yellow.[]{data-label="fig:combined"}](combined){width="\textwidth"}
We start with $s=0$. In this case, the problem reduces to the equal SNR situation, and we recover $\beta_s^*(r)=\beta^*(r)$, where $\beta^*(r)$ is defined in (\[eq:freestyle\]). For any fixed $s\in \left(0,\frac{1}{16}\right)$, the definition of $\beta_s^*(r)$ involves all five areas in the left panel of Figure \[fig:combined\], and we have $$\beta_s^*(r)=\begin{cases}
\frac{1}{2}+2(r+s-\sqrt{s}\sqrt{r+s}), & 0<r<3s, \\
\frac{1}{2}(1+3r-s), & 3s\leq r< \frac{1-s}{5}, \\
2\sqrt{r}\sqrt{1-r-s}, & \frac{1-s}{5}\leq r < {\mathop{\sf root}}(s), \\
2\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-2(r+s-\sqrt{s}\sqrt{r+s}), & {\mathop{\sf root}}(s) \leq r <\left(\sqrt{\frac{1}{2}+\frac{s}{4}}+\sqrt{\frac{s}{4}}\right)^2-s, \\
1, & r>\left(\sqrt{\frac{1}{2}+\frac{s}{4}}+\sqrt{\frac{s}{4}}\right)^2-s.
\end{cases}$$ Here $r={\mathop{\sf root}}(s)$ is a root of the equation $2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})=r$. We note that when $s\in \left(0,\frac{1}{16}\right)$, the equation has a unique real root between $\frac{3}{16}$ and $\frac{1}{2}$. Next, we consider any fixed $s\geq\frac{1}{16}$. In this case, two of the five regions become degenerate, and we have $$\beta^*_s(r)=\begin{cases}
\frac{1}{2}+2(r+s-\sqrt{s}\sqrt{r+s}), & 0<r< \left(\sqrt{\frac{1}{8}+\frac{s}{4}}+\sqrt{\frac{s}{4}}\right)^2-s,\\
\big[2\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})} & \left(\sqrt{\frac{1}{8}+\frac{s}{4}}+\sqrt{\frac{s}{4}}\right)^2-s \leq r < \left(\sqrt{\frac{1}{2}+\frac{s}{4}}+\sqrt{\frac{s}{4}}\right)^2-s, \\
~~~ -2(r+s-\sqrt{s}\sqrt{r+s})\big], \\
1, & r\geq \left(\sqrt{\frac{1}{2}+\frac{s}{4}}+\sqrt{\frac{s}{4}}\right)^2-s.
\end{cases}$$ Last but not least, we would like to point out that when $s=\infty$, we obtain the Ingster–Donoho–Jin threshold $\beta_s^*(r)=\beta^*_{\rm IDJ}(r)$. This agrees with the intuition that the sequence $\{V_i\}_{1\leq i\leq n}$ is asymptotically non-informative for the testing problem (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]) as $s\rightarrow\infty$. The functions $\{\beta_s^*(r)\}$ with various choices of $s$ are shown on the right panel of Figure \[fig:combined\], and all the curves are between $\beta^*(r)$ and $\beta_{\rm IDJ}^*(r)$ (also see Figure \[fig:phase-equal\]). It is clear that for a fixed $s$, a larger $r$ makes the testing problem easier. On the other hand, increasing $s$ always makes the problem harder in the sense that $\beta^*_{s_1}(r)\geq\beta^*_{s_2}(r)$ [for all $r>0$]{} when $s_1<s_2$.
Testing for Exact Equality {#sec:equality}
==========================
The most stringent version of the testing problem (\[eq:problem\]) is whether or not the two clustering structures are exactly equal. This can be formulated as the following hypothesis testing problem: $$H_0:\ell(z,\sigma)=0\quad \mbox{ {vs.}}\quad
H_1:\ell(z,\sigma)> 0. \label{eq:problem-exact}$$ Since the loss function $\ell(z,\sigma)$ only takes values in the set $\{0,n^{-1},2n^{-1},\cdots\}$, the alternative hypothesis of (\[eq:problem-exact\]) is equivalent to $\ell(z,\sigma)\geq n^{-1}$. Therefore, the testing problem (\[eq:problem-exact\]) is a special case of (\[eq:problem\]) with $\beta=1$. However, Theorem \[thm:main-general\] only covers $\beta<1$. Since the lower bound proof of Theorem \[thm:main-general\] is based on the connection between (\[eq:problem\]) and (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]), which requires $n\epsilon\rightarrow\infty$, the boundary case $\beta=1$ is thus excluded.
In this section, we rigorously study the testing problem (\[eq:problem-exact\]). Given a testing procedure $\psi$, we define its worst-case testing error by $$R_n^{\rm exact}(\psi,\theta,\eta)=\sup_{\substack{z\in\{-1,1\}^n\\ \sigma\in\{z,-z\}~\,}}P^{(n)}_{(\theta,\eta,z,\sigma)}\psi + \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> 0}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi).$$ The minimax testing error is then defined by $$R_n^{\rm exact}(\theta,\eta)=\inf_{\psi}R_n^{\rm exact}(\psi,\theta,\eta).$$ Our first result gives a necessary and sufficient condition for the existence of a consistent test.
\[thm:exact\] Consider testing (\[eq:problem-exact\]) with calibration (\[eq:general-cali\]). When $r+s-\sqrt{s}\sqrt{r+s}<\frac{1}{2}$, we have $\liminf_{n\rightarrow\infty}R_n^{\rm exact}(\theta,\eta)>0$. When $r+s-\sqrt{s}\sqrt{r+s}>\frac{1}{2}$, the HC-type test $\dot{\psi}$ defined in (\[eq:general-test-comb\]) satisfies $\lim_{n\rightarrow\infty}R_n^{\rm exact}(\dot{\psi},\theta,\eta)=0$.
Theorem \[thm:exact\] shows that whether $r+s-\sqrt{s}\sqrt{r+s}$ is above or below $\frac{1}{2}$ determines the existence of a consistent test. This is compatible with the last regime of the threshold function $\beta^*(r,s)$; see the yellow area in the left panel of Figure \[fig:combined\]. Given the relation (\[eq:min-sig\]), it is required that both $\|\theta\|^2$ and $\|\eta\|^2$ be greater than $\frac{1}{2}\log n$ for separating the null and the alternative hypotheses. Moreover, the same optimal HC-type test in Theorem \[thm:main-general\] continues to work for testing exact equality.
In addition to the HC-type test, we introduce a Bonferroni-type test that is also optimal for (\[eq:problem-exact\]). To this end, define $$t^*(r,s)=\begin{cases}
\sqrt{r(1-r-s)}, & 2(r+s)(r+s+\sqrt{s}\sqrt{r+s})\leq r, \\
\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s}), & 2(r+s)(r+s+\sqrt{s}\sqrt{r+s})> r.
\end{cases}$$ The following lemma shows that $t^*(r,s)$ characterizes the largest element of the sequence $\{|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i|\}_{1\leq i\leq n}$ under the null distribution.
\[lem:max-order\] Suppose $\{(U_i,V_i)\}_{1\leq i\leq n}$ are generated according to (\[eq:general-comb-null\]). Then, we have $$\frac{\max_{1\leq i\leq n}\left(|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i|\right)}{\sqrt{2\log n}}\rightarrow t^*(r,s),$$ in probability.
Lemma \[lem:max-order\] shows that the largest element of the sequence $\{|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i|\}_{1\leq i\leq n}$ is asymptotically $t^*(r,s)\sqrt{2\log n}$ under $H_0$. It is therefore natural to reject $H_0$ when $\max_{1\leq i\leq n}\left(|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i| \right)$ is larger than $t^*(r,s)\sqrt{2\log n}$. In view of the connection between sparse mixture detection and testing the equivalence of clustering, applying this idea to the sequences $\{C^-(X_i,Y_i,\theta,\eta)\}_{1\leq i\leq n}$ and $\{C^+(X_i,Y_i,\theta,\eta)\}_{1\leq i\leq n}$, we obtain the following testing procedure, $$\psi_{\rm Bonferroni}={{\mathbf{1}_{\left\{{\left(\max_{1\leq i\leq n}C^-(X_i,Y_i,\theta,\eta)\right)\wedge \left(\max_{1\leq i\leq n}C^+(X_i,Y_i,\theta,\eta)\right)>2t^*(r,s)\log n}\right\}}}}. \label{eq:test-exact-Bonf}$$
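The Bonferroni-type test is particularly simple to implement; the following sketch (our own, with hypothetical names) transcribes $t^*(r,s)$ and (\[eq:test-exact-Bonf\]), again using $C^+=-C^-$.

```python
import numpy as np

def t_star(r, s):
    """The threshold t*(r,s) characterized by Lemma lem:max-order (direct transcription)."""
    m = r + s - np.sqrt(s) * np.sqrt(r + s)
    if 2 * (r + s) * (r + s + np.sqrt(s) * np.sqrt(r + s)) <= r:
        return np.sqrt(r * (1 - r - s))
    return np.sqrt(2 * m) - m

def psi_bonferroni(X, Y, theta, eta, r, s):
    """The Bonferroni-type test (eq:test-exact-Bonf): compare the maxima of C^- and C^+
    against 2 t*(r,s) log n."""
    n = X.shape[0]
    xs, ys = X @ theta, Y @ eta
    Cm = np.abs(xs - ys) - np.abs(xs + ys)        # C^-; note C^+ = -C^-
    return min(Cm.max(), (-Cm).max()) > 2 * t_star(r, s) * np.log(n)
```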
\[thm:Bonf\] Consider testing (\[eq:problem-exact\]) with calibration (\[eq:general-cali\]). When $r+s-\sqrt{s}\sqrt{r+s}>\frac{1}{2}$, we have $\lim_{n\rightarrow\infty}R_n^{\rm exact}(\psi_{\rm Bonferroni},\theta,\eta)=0$.
Adaptive Tests {#sec:adaptive-t}
==============
In this section, we investigate how to test (\[eq:problem\]) and (\[eq:problem-exact\]) when the model parameters $\theta\in\mathbb{R}^p$ and $\eta\in\mathbb{R}^q$ are unknown. We will show that both the HC-type test and the Bonferroni test can be modified into adaptive procedures, as long as some mild growth rate conditions on the dimensions $p$ and $q$ are satisfied.
Adaptive Bonferroni Test
------------------------
We start with testing (\[eq:problem-exact\]). When designing the adaptive procedures, we adopt a random data splitting scheme. We first draw $d_1,\cdots, d_n\stackrel{iid}{\sim}\text{Bernoulli}(\frac{1}{2})$, and then define $\mathcal{D}_0=\{i\in[n]: d_i=0\}$ and $\mathcal{D}_1=\{i\in[n]:d_i=1\}$. Then, $\{\mathcal{D}_0,\mathcal{D}_1\}$ forms a random partition of $[n]$. Given some algorithms ${\widehat}{\theta}(\cdot)$ and ${\widehat}{\eta}(\cdot)$ that compute estimators of $\theta$ and $\eta$, we define ${\widehat}{\theta}^{(m)}={\widehat}{\theta}(\{(X_i,Y_i)\}_{i\in\mathcal{D}_m})$ and ${\widehat}{\eta}^{(m)}={\widehat}{\eta}(\{(X_i,Y_i)\}_{i\in\mathcal{D}_m})$ for $m = 0$ and $1$. For $m = 0$ and $1$, by plugging ${\widehat}{\theta}^{(m)}$ and ${\widehat}{\eta}^{(m)}$ into the relation (\[eq:general-cali\]), we obtain ${\widehat}{r}^{(m)}$ and ${\widehat}{s}^{(m)}$. Given these estimators of $\theta$ and $\eta$, we can modify (\[eq:test-exact-Bonf\]) into an adaptive procedure. We replace $\max_{1\leq i\leq n}C^-(X_i,Y_i,\theta,\eta)$ and $\max_{1\leq i\leq n}C^+(X_i,Y_i,\theta,\eta)$ by $$\begin{aligned}
{\widehat}{C}_m^- &=& \max_{i\in\mathcal{D}_m}C^-(X_i,Y_i,{\widehat}{\theta}^{(1-m)},{\widehat}{\eta}^{(1-m)}),\quad m=0,1, \\
{\widehat}{C}_m^+ &=& \max_{i\in\mathcal{D}_m}C^+(X_i,Y_i,{\widehat}{\theta}^{(1-m)},{\widehat}{\eta}^{(1-m)}),\quad m=0,1.\end{aligned}$$ Then, we combine these statistics by $$\begin{aligned}
\label{eq:levivsbeast} {\widehat}{C}^- &=& \begin{cases}
{\widehat}{C}_0^- \vee {\widehat}{C}_1^-, & {{\mathbf{1}_{\left\{{\|{\widehat}{\theta}^{(0)}-{\widehat}{\theta}^{(1)}\|\leq 1, \|{\widehat}{\eta}^{(0)}-{\widehat}{\eta}^{(1)}\|\leq 1}\right\}}}} + {{\mathbf{1}_{\left\{{\|{\widehat}{\theta}^{(0)}-{\widehat}{\theta}^{(1)}\|> 1, \|{\widehat}{\eta}^{(0)}-{\widehat}{\eta}^{(1)}\|> 1}\right\}}}}=1, \\
{\widehat}{C}_0^- \vee {\widehat}{C}_1^+, & {{\mathbf{1}_{\left\{{\|{\widehat}{\theta}^{(0)}-{\widehat}{\theta}^{(1)}\|> 1, \|{\widehat}{\eta}^{(0)}-{\widehat}{\eta}^{(1)}\|\leq 1}\right\}}}} + {{\mathbf{1}_{\left\{{\|{\widehat}{\theta}^{(0)}-{\widehat}{\theta}^{(1)}\|\leq 1, \|{\widehat}{\eta}^{(0)}-{\widehat}{\eta}^{(1)}\|> 1}\right\}}}}=1,
\end{cases} \\
\label{eq:elvinisgreat} {\widehat}{C}^+ &=& \begin{cases}
{\widehat}{C}_0^+ \vee {\widehat}{C}_1^+, & {{\mathbf{1}_{\left\{{\|{\widehat}{\theta}^{(0)}-{\widehat}{\theta}^{(1)}\|\leq 1, \|{\widehat}{\eta}^{(0)}-{\widehat}{\eta}^{(1)}\|\leq 1}\right\}}}} + {{\mathbf{1}_{\left\{{\|{\widehat}{\theta}^{(0)}-{\widehat}{\theta}^{(1)}\|> 1, \|{\widehat}{\eta}^{(0)}-{\widehat}{\eta}^{(1)}\|> 1}\right\}}}}=1, \\
{\widehat}{C}_0^+ \vee {\widehat}{C}_1^-, & {{\mathbf{1}_{\left\{{\|{\widehat}{\theta}^{(0)}-{\widehat}{\theta}^{(1)}\|> 1, \|{\widehat}{\eta}^{(0)}-{\widehat}{\eta}^{(1)}\|\leq 1}\right\}}}} + {{\mathbf{1}_{\left\{{\|{\widehat}{\theta}^{(0)}-{\widehat}{\theta}^{(1)}\|\leq 1, \|{\widehat}{\eta}^{(0)}-{\widehat}{\eta}^{(1)}\|> 1}\right\}}}}=1.
\end{cases}\end{aligned}$$ The adaptive Bonferroni test is defined by $$\psi_{\rm ada-Bonferroni}={{\mathbf{1}_{\left\{{{\widehat}{C}^-\wedge {\widehat}{C}^+>2\left(1+\frac{1}{\sqrt{\log n}}\right){\widehat}{t}\log n}\right\}}}},$$ where $${\widehat}{t}=\frac{t^*({\widehat}{r}^{(0)},{\widehat}{s}^{(0)})+t^*({\widehat}{r}^{(1)},{\widehat}{s}^{(1)})}{2}.$$ The additional factor $\left(1+\frac{1}{\sqrt{\log n}}\right)$ accommodates the error caused by the estimators of $\theta$ and $\eta$. Before stating the theorem that gives the desired theoretical guarantee for $\psi_{\rm ada-Bonferroni}$, let us define the loss functions $$L({\widehat}{\theta},\theta)=\|{\widehat}{\theta}-\theta\|\wedge\|{\widehat}{\theta}+\theta\|,\quad L({\widehat}{\eta},\eta)=\|{\widehat}{\eta}-\eta\|\wedge\|{\widehat}{\eta}+\eta\|.$$ Though $\theta$ and $\eta$ can be of different dimensions, we use the same notation $L(\cdot,\cdot)$ for the two loss functions for simplicity.
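The alignment rule in (\[eq:levivsbeast\])-(\[eq:elvinisgreat\]) above may look intricate, but its logic is simple: if exactly one of the two cross-fold estimator pairs is sign-flipped, then $C^-$ on one fold plays the role of $C^+$ on the other. A minimal sketch (our own function and interface) of this combination step:

```python
import numpy as np

def combine_split_maxima(Cm, Cp, th, et):
    """The rule (eq:levivsbeast)-(eq:elvinisgreat). Cm[m], Cp[m]: maxima of C^- and C^+
    on D_m computed with the fold-(1-m) estimators; th[m], et[m]: the two estimators of
    theta and eta. Returns (hat-C^-, hat-C^+)."""
    same_theta = np.linalg.norm(th[0] - th[1]) <= 1
    same_eta = np.linalg.norm(et[0] - et[1]) <= 1
    if same_theta == same_eta:            # both pairs aligned, or both flipped
        return max(Cm[0], Cm[1]), max(Cp[0], Cp[1])
    return max(Cm[0], Cp[1]), max(Cp[0], Cm[1])   # exactly one pair flipped
```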
\[thm:ada-Bonf\] We consider the testing problem (\[eq:problem-exact\]) with the calibration (\[eq:general-cali\]). Assume that there is some constant $\gamma>0$, such that $$\lim_{n\rightarrow\infty}\sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n}}P^{(n)}_{(\theta,\eta,z,\sigma)}\left(L({\widehat}{\theta}^{(0)},\theta)\vee L({\widehat}{\theta}^{(1)},\theta)\vee L({\widehat}{\eta}^{(0)},\eta)\vee L({\widehat}{\eta}^{(1)},\eta)>n^{-\gamma}\right) =0. \label{eq:estimation-error-weak}$$ When $r+s-\sqrt{s}\sqrt{r+s}>\frac{1}{2}$, we have $\lim_{n\rightarrow\infty}R_n^{\rm exact}(\psi_{\rm ada-Bonferroni},\theta,\eta)=0$.
The condition (\[eq:estimation-error-weak\]) may seem abstract at first sight. In Section \[sec:para-est\], we give concrete estimators for which it is met under a mild growth condition on $p$ and $q$; see Corollary \[cor:adaptive-dim\] for full details.
Adaptive HC-Type Test
---------------------
Modifying (\[eq:general-test-comb\]) into an adaptive procedure is more involved. This is because we not only need to estimate the statistics $\{C^-(X_i,Y_i,\theta,\eta)\}_{1\leq i\leq n}$ and $\{C^+(X_i,Y_i,\theta,\eta)\}_{1\leq i\leq n}$, but also need to estimate the survival function $S_{(r,s)}(t)$ defined in (\[eq:survival-r-s\]). Our proposed strategy starts with a random data splitting step. This time we split the data into three parts instead of two. Draw $d_1,\cdots, d_n\stackrel{iid}{\sim}\text{Uniform}(\{0,1,2\})$, and then define $\mathcal{D}_m=\{i\in[n]:d_i=m\}$ for $m\in\{0,1,2\}$.
Given some algorithms ${\widehat}{\theta}(\cdot)$ and ${\widehat}{\eta}(\cdot)$ that compute estimators of $\theta$ and $\eta$, we first define ${\widehat}{\theta}={\widehat}{\theta}(\{(X_i,Y_i)\}_{i\in\mathcal{D}_0})$ and ${\widehat}{\eta}={\widehat}{\eta}(\{(X_i,Y_i)\}_{i\in\mathcal{D}_0})$. We then use ${\widehat}{\theta}$ and ${\widehat}{\eta}$ as projection directions and compute ${\widehat}{X}_i={\widehat}{\theta}^TX_i/\|{\widehat}{\theta}\|$ and ${\widehat}{Y}_i={\widehat}{\eta}^TY_i/\|{\widehat}{\eta}\|$ for all $i\in\mathcal{D}_1\cup\mathcal{D}_2$. Note that conditioning on $\{d_i\}_{1\leq i\leq n}$ and $\{(X_i,Y_i)\}_{i\in\mathcal{D}_0}$, ${\widehat}{X}_i$ and ${\widehat}{Y}_i$ are distributed according to $N(z_ia,1)$ and $N(\sigma_ib,1)$, respectively, where $a={\widehat}{\theta}^T\theta/\|{\widehat}{\theta}\|$ and $b={\widehat}{\eta}^T\eta/\|{\widehat}{\eta}\|$. Given the projected data, we will use those in $\mathcal{D}_1$ to estimate the one-dimensional parameters $|a|$ and $|b|$, and those in $\mathcal{D}_2$ to construct the test statistic. Define $${\widehat}{a}=\sqrt{\left(\frac{1}{|\mathcal{D}_1|}\sum_{i\in\mathcal{D}_1}{\widehat}{X}_i^2-1\right)_+}\quad\text{and}\quad{\widehat}{b}=\sqrt{\left(\frac{1}{|\mathcal{D}_1|}\sum_{i\in\mathcal{D}_1}{\widehat}{Y}_i^2-1\right)_+}.$$ With ${\widehat}{a}$ and ${\widehat}{b}$, we define $${\widehat}{r}=\frac{(2|{\widehat}{a}||{\widehat}{b}|)^2}{(2\log n)({\widehat}{a}^2+{\widehat}{b}^2)}\quad\text{and}\quad {\widehat}{s}=\frac{|{\widehat}{a}^2-{\widehat}{b}^2|^2}{(2\log n)({\widehat}{a}^2+{\widehat}{b}^2)}.$$ Then, the adaptive HC-type statistics are $$\begin{aligned}
{\widehat}{T}_n^- &=& \sup_{|t|\leq \log n}\frac{\left|\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}}\right\}}}}-|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)\right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}}, \\
{\widehat}{T}_n^+ &=& \sup_{|t|\leq \log n}\frac{\left|\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{C^+({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}}\right\}}}}-|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)\right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}}.\end{aligned}$$ This leads to the adaptive test $${\widehat}{\psi}_{\rm ada-HC}={{\mathbf{1}_{\left\{{{\widehat}{T}_n^-\wedge{\widehat}{T}_n^+>(\log n)^3}\right\}}}}.$$ Compared with (\[eq:stat-t-dot--\]) and (\[eq:stat-t-dot-+\]), the adaptive versions ${\widehat}{T}_n^-$ and ${\widehat}{T}_n^+$ restrict the supremum to the range $|t|\leq \log n$ and do not have an estimator of $1-S_{(r,s)}(t)$ in the denominator. Moreover, the test uses the threshold $(\log n)^3$ instead of the smaller $\sqrt{2(1+\delta)\log\log n}$. These changes are adopted to accommodate the additional errors caused by estimating the unknown parameters.
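Putting the pieces together, the following end-to-end sketch (our own; the estimator callables and all names are assumptions, and $S_{({\widehat}{r},{\widehat}{s})}$ is again approximated by Monte Carlo) illustrates the three-fold construction of ${\widehat}{\psi}_{\rm ada-HC}$.

```python
import numpy as np

def adaptive_hc(X, Y, estimate_theta, estimate_eta, mc=10**6, seed=0):
    """Sketch of hat-psi_{ada-HC}: three-part random split, plug-in projections,
    one-dimensional SNR estimates, and restricted HC statistics on D_2."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    d = rng.integers(0, 3, size=n)                      # random three-part split
    D0, D1, D2 = (d == 0), (d == 1), (d == 2)
    th = estimate_theta(X[D0]); et = estimate_eta(Y[D0])
    Xh = X @ th / np.linalg.norm(th)                    # projected data hat-X_i
    Yh = Y @ et / np.linalg.norm(et)                    # projected data hat-Y_i
    a = np.sqrt(max(np.mean(Xh[D1] ** 2) - 1, 0.0))     # hat-a
    b = np.sqrt(max(np.mean(Yh[D1] ** 2) - 1, 0.0))     # hat-b
    r = (2 * a * b) ** 2 / (2 * np.log(n) * (a ** 2 + b ** 2))
    s = (a ** 2 - b ** 2) ** 2 / (2 * np.log(n) * (a ** 2 + b ** 2))
    # Monte Carlo null sample for S_{(hat-r, hat-s)}
    U0 = rng.standard_normal(mc)
    V0 = rng.standard_normal(mc) + np.sqrt(2 * (r + s) * np.log(n))
    null = np.sort(np.abs(np.sqrt(r) * U0 + np.sqrt(s) * V0)
                   - np.sqrt(r + s) * np.abs(V0))
    Cm = np.abs(a * Xh[D2] - b * Yh[D2]) - np.abs(a * Xh[D2] + b * Yh[D2])
    def hc(C):
        m = len(C)
        t = np.sort(C) / np.sqrt(2 * np.log(n))         # observed thresholds
        keep = np.abs(t) <= np.log(n)                   # restrict to |t| <= log n
        S = 1 - np.searchsorted(null, t, side="right") / mc
        S = np.clip(S, 1 / mc, 1.0)
        stat = np.abs((m - np.arange(1, m + 1)) - m * S) / np.sqrt(m * S)
        return stat[keep].max()
    return min(hc(Cm), hc(-Cm)) > np.log(n) ** 3
```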
Alternatively, one can modify (\[eq:general-test-comb\]) into an adaptive procedure based on the two-part split $\{\mathcal{D}_0,\mathcal{D}_1\}$ and the cross-fitted estimators ${\widehat}{\theta}^{(m)}$ and ${\widehat}{\eta}^{(m)}$ introduced for the adaptive Bonferroni test. Define $${\widehat}{C}^-(X_i,Y_i)=\begin{cases}
C^-(X_i,Y_i,{\widehat}{\theta}^{(1)},{\widehat}{\eta}^{(1)}), & i\in\mathcal{D}_0, \\
C^-(X_i,Y_i,{\widehat}{\theta}^{(0)},{\widehat}{\eta}^{(0)}), & i\in\mathcal{D}_1,
\end{cases}$$ and $${\widehat}{C}^+(X_i,Y_i)=\begin{cases}
C^+(X_i,Y_i,{\widehat}{\theta}^{(1)},{\widehat}{\eta}^{(1)}), & i\in\mathcal{D}_0, \\
C^+(X_i,Y_i,{\widehat}{\theta}^{(0)},{\widehat}{\eta}^{(0)}), & i\in\mathcal{D}_1.
\end{cases}$$ Then, the sequences $\{{\widehat}{C}^-(X_i,Y_i)\}_{1\leq i\leq n}$ and $\{{\widehat}{C}^+(X_i,Y_i)\}_{1\leq i\leq n}$ are fully data-driven, and can be used as estimators of $\{C^-(X_i,Y_i,\theta,\eta)\}_{1\leq i\leq n}$ and $\{C^+(X_i,Y_i,\theta,\eta)\}_{1\leq i\leq n}$.
An additional problem for the HC-type test is that the null distribution depends on the unknown parameters. This is very different from various forms of sparse mixture detection problems studied in the literature. We therefore also need to estimate the survival function $S_{(r,s)}(t)$ defined in (\[eq:survival-r-s\]). So we define ${\widehat}{S}^{(0)}(t)=S_{({\widehat}{r}^{(0)},{\widehat}{s}^{(0)})}(t)$ and ${\widehat}{S}^{(1)}(t)=S_{({\widehat}{r}^{(1)},{\widehat}{s}^{(1)})}(t)$. Now we are ready to construct the adaptive HC-type statistics. We define $$\begin{aligned}
{\widehat}{T}_{(0),n}^{-} &=& \sup_{t\in\mathbb{R}}\frac{\left|\sum_{i\in\mathcal{D}_0}{{\mathbf{1}_{\left\{{{\widehat}{C}^-(X_i,Y_i)>t\sqrt{\log n}}\right\}}}}-n{\widehat}{S}^{(1)}(t)\right|}{\sqrt{n {\widehat}{S}^{(1)}(t)(1-{\widehat}{S}^{(1)}(t))}}, \\
{\widehat}{T}_{(1),n}^{-} &=& \sup_{t\in\mathbb{R}}\frac{\left|\sum_{i\in\mathcal{D}_1}{{\mathbf{1}_{\left\{{{\widehat}{C}^-(X_i,Y_i)>t\sqrt{\log n}}\right\}}}}-n{\widehat}{S}^{(0)}(t)\right|}{\sqrt{n {\widehat}{S}^{(0)}(t)(1-{\widehat}{S}^{(0)}(t))}}, \\
{\widehat}{T}_{(0),n}^{+} &=& \sup_{t\in\mathbb{R}}\frac{\left|\sum_{i\in\mathcal{D}_0}{{\mathbf{1}_{\left\{{{\widehat}{C}^+(X_i,Y_i)>t\sqrt{\log n}}\right\}}}}-n{\widehat}{S}^{(1)}(t)\right|}{\sqrt{n {\widehat}{S}^{(1)}(t)(1-{\widehat}{S}^{(1)}(t))}}, \\
{\widehat}{T}_{(1),n}^{+} &=& \sup_{t\in\mathbb{R}}\frac{\left|\sum_{i\in\mathcal{D}_1}{{\mathbf{1}_{\left\{{{\widehat}{C}^+(X_i,Y_i)>t\sqrt{\log n}}\right\}}}}-n{\widehat}{S}^{(0)}(t)\right|}{\sqrt{n {\widehat}{S}^{(0)}(t)(1-{\widehat}{S}^{(0)}(t))}}.\end{aligned}$$
\[thm:HC-adaptive\] Consider testing (\[eq:problem\]) with calibration (\[eq:general-cali\]). Assume that there is some constant $\gamma>0$, such that $$\lim_{n\rightarrow\infty}\sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n}}P^{(n)}_{(\theta,\eta,z,\sigma)}\left(L({\widehat}{\theta},\theta)\vee L({\widehat}{\eta},\eta)>n^{-\gamma}\right) =0. \label{eq:estimation-error-strong}$$ When $\beta<\beta^*(r,s)$, we have $\lim_{n\rightarrow\infty}R_n({\widehat}{\psi}_{\rm ada-HC},\theta,\eta)=0$.
As before, see Section \[sec:para-est\] for concrete estimators fulfilling condition (\[eq:estimation-error-strong\]).
Randomly splitting the data into three parts is needed for technical reasons in the proofs. In order for our proof to go through, we need to estimate the statistics $\{C^-(X_i,Y_i,\theta,\eta)\}_{1\leq i\leq n}$ and $\{C^+(X_i,Y_i,\theta,\eta)\}_{1\leq i\leq n}$ and the survival function $S_{(r,s)}(t)$ at different levels of accuracy. In particular, we require the estimation error of $S_{(r,s)}(t)$ to be at most $\frac{(\log n)^{O(1)}}{\sqrt{n}}$, independently of the dimensions $p$ and $q$. Therefore, in addition to the two-part data splitting strategy used in building the adaptive Bonferroni test, we need an additional part to estimate the projection directions of the two one-dimensional subspaces.
One may wonder whether the adaptive HC-type test can also achieve the optimal detection boundary for the problem (\[eq:problem-exact\]). The answer is no for the current definition of ${\widehat}{\psi}_{\rm ada-HC}$, because, under $H_1$, with a non-trivial probability $\mathcal{D}_2$ does not contain the coordinate that carries the signal. However, a modification of ${\widehat}{\psi}_{\rm ada-HC}$ can resolve this issue. The modification requires rotating the roles of the three datasets $\mathcal{D}_0$, $\mathcal{D}_1$, and $\mathcal{D}_2$, so that analogous versions of the HC-type statistics ${\widehat}{T}_n^-$ and ${\widehat}{T}_n^+$ can be defined on $\mathcal{D}_0$ and $\mathcal{D}_1$. These statistics can then be combined in a way similar to (\[eq:levivsbeast\]) and (\[eq:elvinisgreat\]). We omit the details.
#### Computation
We now discuss the computation of ${\widehat}{\psi}_{\rm ada-HC}$. Note that both ${\widehat}{T}_n^-$ and ${\widehat}{T}_n^+$ can be computed efficiently using the $p$-value interpretation of the HC statistic in [@donoho2004higher]. In the ideal situation where $\theta$ and $\eta$ are known, the two sets of $p$-values are $\left\{S_{(r,s)}\left(\frac{C^-(X_i,Y_i,\theta,\eta)}{\sqrt{2\log n}}\right)\right\}_{1\leq i\leq n}$ and $\left\{S_{(r,s)}\left(\frac{C^+(X_i,Y_i,\theta,\eta)}{\sqrt{2\log n}}\right)\right\}_{1\leq i\leq n}$, which are involved in the computation of the test (\[eq:general-test-comb\]). When $\theta$ and $\eta$ are unknown, the following proposition suggests a similar computational strategy.
\[prop:comp-p-value\] Define ${\widehat}{p}_i^-=S_{({\widehat}{r},{\widehat}{s})}\left(\frac{C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})}{\sqrt{2\log n}}\right)$ and ${\widehat}{p}_i^+=S_{({\widehat}{r},{\widehat}{s})}\left(\frac{C^+({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})}{\sqrt{2\log n}}\right)$ for $i\in\mathcal{D}_2$. Then, with probability tending to $1$, we have $$\begin{aligned}
\label{eq:haha} {\widehat}{T}_n^- &=& \max_{1\leq i\leq |\mathcal{D}_2|}\frac{\sqrt{|\mathcal{D}_2|}\left|\frac{i}{|\mathcal{D}_2|}-{\widehat}{p}_{(i,\mathcal{D}_2)}^-\right|}{\sqrt{{\widehat}{p}_{(i,\mathcal{D}_2)}^-}}, \\
\label{eq:hehe} {\widehat}{T}_n^+ &=& \max_{1\leq i\leq |\mathcal{D}_2|}\frac{\sqrt{|\mathcal{D}_2|}\left|\frac{i}{|\mathcal{D}_2|}-{\widehat}{p}_{(i,\mathcal{D}_2)}^+\right|}{\sqrt{{\widehat}{p}_{(i,\mathcal{D}_2)}^+}},\end{aligned}$$ where the subscript $(i,\mathcal{D}_2)$ indicates the $i$th order statistic within the set $\mathcal{D}_2$.
The statistics ${\widehat}{p}_i^-$ and ${\widehat}{p}_i^+$ can be regarded as estimators of $p$-values, which gives a useful interpretation of ${\widehat}{\psi}_{\rm ada-HC}$. Since the formulas (\[eq:haha\]) and (\[eq:hehe\]) hold with high probability, $\lim_{n\rightarrow\infty}R_n({\widehat}{\psi}_{\rm ada-HC},\theta,\eta)=0$ continues to hold when $\beta<\beta^*(r,s)$ if (\[eq:haha\]) and (\[eq:hehe\]) are used in the computation of ${\widehat}{\psi}_{\rm ada-HC}$.
Parameter Estimation {#sec:para-est}
--------------------
We close this section by presenting a simple estimator of $\theta$ and $\eta$. Since $X_i\sim N(z_i\theta,I_p)$, the empirical second moment $\frac{1}{n}\sum_{i=1}^nX_iX_i^T$ is a consistent estimator of its population counterpart $\theta\theta^T+I_p$. Applying an eigenvalue decomposition, we obtain $\frac{1}{n}\sum_{i=1}^nX_iX_i^T=\sum_{j=1}^p{\widehat}{\lambda}_j{\widehat}{u}_j{\widehat}{u}_j^T$, and a natural estimator of $\theta$ is then ${\widehat}{\theta}=\sqrt{{\widehat}{\lambda}_1-1}\,{\widehat}{u}_1$. This simple estimator enjoys the following property.
\[prop:parameter-estimation\] Consider independent observations $X_1,\cdots, X_n\sim N(z_i\theta,I_p)$ with some $z_i\in\{-1,1\}$ for all $i\in[n]$. Assume $p\leq n$, and then there exist universal constants $C,C'>0$, such that $$L({\widehat}{\theta},\theta)\leq C\sqrt{\frac{p}{n}},$$ with probability at least $1-e^{-C'p}$ uniformly over all $z\in\{-1,1\}^n$ and all $\theta\in\mathbb{R}^p$ that satisfies $\|\theta\|\geq 1$.
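The spectral estimator above is a few lines of code; the sketch below (our own) takes the positive part $({\widehat}{\lambda}_1-1)_+$ as a practical safeguard for finite samples, and the sign of ${\widehat}{u}_1$ is arbitrary, which is exactly why the loss $L({\widehat}{\theta},\theta)$ is defined up to a global sign.

```python
import numpy as np

def estimate_theta(X):
    """Spectral estimator: top eigenpair (hat-lambda_1, hat-u_1) of (1/n) X^T X,
    returning sqrt((hat-lambda_1 - 1)_+) * hat-u_1 (sign arbitrary)."""
    n = X.shape[0]
    vals, vecs = np.linalg.eigh(X.T @ X / n)   # eigenvalues in ascending order
    return np.sqrt(max(vals[-1] - 1.0, 0.0)) * vecs[:, -1]
```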
\[cor:adaptive-dim\] Consider the calibration (\[eq:general-cali\]). Suppose $p\vee q<n^{1-\delta}$ for some constant $\delta\in(0,1)$; then there exists some constant $\gamma>0$ depending on $\delta$ such that the conditions (\[eq:estimation-error-weak\]) and (\[eq:estimation-error-strong\]) hold.
Combining Theorem \[thm:ada-Bonf\], Theorem \[thm:HC-adaptive\], and Corollary \[cor:adaptive-dim\], we conclude that the optimal detection boundaries of the testing problems (\[eq:problem\]) and (\[eq:problem-exact\]) can be achieved adaptively without the knowledge of $(\theta,\eta)$, as long as the dimensions do not grow too fast in the sense that $p\vee q<n^{1-\delta}$.
In a more general setting, one may have $X_i\sim N(z_i\theta,\sigma^2I_p)$ with both $\theta$ and $\sigma^2$ unknown. The proposed algorithm still works for estimating $\theta$. To estimate $\sigma^2$, one can use the estimator ${\widehat}{\sigma}^2=\frac{1}{p}{\mathop{\sf Tr}}\left(\frac{1}{n}\sum_{i=1}^nX_iX_i^T-{\widehat}{\theta}{\widehat}{\theta}^T\right)$. The theoretical analysis can be easily generalized to this case, and we omit the details here.
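A minimal sketch of this variant (our own transcription of the displayed formula for ${\widehat}{\sigma}^2$; function names are assumptions):

```python
import numpy as np

def estimate_theta_sigma(X):
    """Variant for X_i ~ N(z_i theta, sigma^2 I_p): the same spectral estimator for theta,
    plus hat-sigma^2 = Tr((1/n) X^T X - hat-theta hat-theta^T) / p."""
    n, p = X.shape
    M = X.T @ X / n
    vals, vecs = np.linalg.eigh(M)
    theta_hat = np.sqrt(max(vals[-1] - 1.0, 0.0)) * vecs[:, -1]
    sigma2_hat = (np.trace(M) - theta_hat @ theta_hat) / p
    return theta_hat, sigma2_hat
```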
Last but not least, we remark that the condition $p\vee q<n^{1-\delta}$ can be relaxed if additional sparsity assumptions on $\theta$ and $\eta$ are imposed. This is related to the sparse clustering setting studied in the literature [@azizyan2013minimax; @jin2016influential; @jin2017phase], and a sparse PCA algorithm [@johnstone2009consistency; @ma2013sparse; @birnbaum2013minimax; @cai2013sparse; @vu2013minimax] can be applied to estimate $\theta$ and $\eta$.
Discussion {#sec:disc}
==========
Suboptimality of Estimating the Clustering Labels
-------------------------------------------------
Perhaps the most obvious way to solve the testing problems (\[eq:problem\]) and (\[eq:problem-exact\]) is to first estimate $z$ and $\sigma$ and then reject $H_0$ if the two estimators are not close to each other. We formulate this idea and argue that this strategy is suboptimal. It is known that the minimax rates for estimating $z$ and $\sigma$ are given by $$\begin{aligned}
\inf_{{\widehat}{z}}\sup_{z\in\{-1,1\}^n}\mathbb{E}_{(z,\theta)}\ell({\widehat}{z},z) &=& \exp\left(-(1+o(1))\frac{\|\theta\|^2}{2}\right), \\
\inf_{{\widehat}{\sigma}}\sup_{\sigma\in\{-1,1\}^n}\mathbb{E}_{(\sigma,\eta)}\ell({\widehat}{\sigma},\sigma) &=& \exp\left(-(1+o(1))\frac{\|\eta\|^2}{2}\right),\end{aligned}$$ as long as $\|\theta\|^2\wedge\|\eta\|^2\rightarrow\infty$. See, for example, [@lu2016statistical]. Moreover, when $\|\theta\|^2\wedge\|\eta\|^2>2\log n$, we have $$\inf_{{\widehat}{z},{\widehat}{\sigma}}\sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n}}P_{(\theta,\eta,z,\sigma)}^{(n)}\left(\ell({\widehat}{z},z)\vee\ell({\widehat}{\sigma},\sigma)>0\right)\rightarrow 0.$$ Let ${\widehat}{z}$ and ${\widehat}{\sigma}$ be minimax optimal estimators of $z$ and $\sigma$. Then, we consider the tests $$\begin{aligned}
\psi_{\rm estimation} &=& {{\mathbf{1}_{\left\{{\ell({\widehat}{z},{\widehat}{\sigma})>\epsilon/2}\right\}}}}, \\
\psi_{\rm estimation}^{\rm exact} &=& {{\mathbf{1}_{\left\{{\ell({\widehat}{z},{\widehat}{\sigma})>0}\right\}}}}.\end{aligned}$$ The theoretical properties of the two tests are given by the following proposition.
\[prop:suboptimal-estimation\] We consider the testing problems (\[eq:problem\]) and (\[eq:problem-exact\]) with the calibration (\[eq:general-cali\]). When $\beta<\frac{1}{2}(r+s-\sqrt{s}\sqrt{r+s})$, we have $\lim_{n\rightarrow\infty}R_n(\psi_{\rm estimation},\theta,\eta)=0$. Moreover, when $r+s-\sqrt{s}\sqrt{r+s}>2$, we have $\lim_{n\rightarrow\infty}R_n^{\rm exact}(\psi_{\rm estimation}^{\rm exact},\theta,\eta)=0$.
Compared with Theorem \[thm:main-general\] and Theorem \[thm:exact\], the tests based on estimating the clustering labels require much stronger signal conditions to be consistent, and are clearly suboptimal.
Some Future Directions
----------------------
There are a few important open problems that we would like to discuss. The immediate extension of our setting would be testing two clustering structures that can have more than two groups. More generally, we can consider testing whether two clustering structures with at most $k$ groups are close, and we expect the number of clusters $k$ to play an important role in the optimal detection boundary. Another important question is whether the Gaussian assumption can be relaxed. More interestingly, we can consider testing the closeness of two clustering structures of data with different types. For example, in covariate-assisted network clustering [@deshpande2018contextual], it is implicitly assumed that the clustering structure of a stochastic block model is the same as that of the covariates, usually modeled by a Gaussian mixture distribution. One can check whether such an assumption is correct by testing the equivalence of the two clustering structures. Last but not least, an interesting application of our results is the technique of adaptive clustering. Suppose one has a set of additional variables that are potentially important for identifying an underlying clustering structure of a group of covariates. Then whether or not to include the additional variables for clustering can be determined by the testing procedure proposed in our paper. This can potentially lead to a safe but adaptive clustering algorithm that exploits the information of as many variables as possible.
Proofs {#sec:all-pf}
======
We give proofs of all the results of the paper in this section. We first state some technical lemmas in Section \[sec:pf-lemma-tech\], and then organize the proofs of the main results in Sections \[sec:pf-org1\]-\[sec:pf-org2\]. The technical lemmas stated in Section \[sec:pf-lemma-tech\] will be proved in Section \[sec:pf-last\].
Some Technical Lemmas {#sec:pf-lemma-tech}
---------------------
\[prop:standard-normal\] Let $\phi(\cdot)$ and $\Phi(\cdot)$ be the density function and the cumulative distribution function of $N(0,1)$. The following facts hold.
1. For any $t>0$, $$(1-t^{-2})\frac{e^{-t^2/2}}{t\sqrt{2\pi}}<1-\Phi(t)<(1-t^{-2}+3t^{-4})\frac{e^{-t^2/2}}{t\sqrt{2\pi}}.\label{eq:Gaussian-tail}$$
2. For any $t_1,t_2\in\mathbb{R}$ such that $|t_1-t_2|\leq 1$, we have $\left|\int_{t_1}^{t_2}\phi(x)dx\right|\leq 2|t_1-t_2|\left(\phi(t_1)\vee\phi(t_2)\right)$.
3. We have $\sup_{t\in\mathbb{R}}\frac{\phi(t)/(1\vee t)}{1-\Phi(t)}\leq 20$.
4. For any constant $c>0$, we have $\sup_{\substack{|t_1|,|t_2|\leq (\log n)^2 \\ |t_1-t_2|\leq n^{-c}}}\frac{\phi(t_1)}{\phi(t_2)}\leq 2$ for a sufficiently large $n$.
5. For any constant $c>0$, we have $\sup_{\substack{|t_1|,|t_2|\leq (\log n)^2 \\ |t_1-t_2|\leq n^{-c}}}\frac{1-\Phi(t_1)}{1-\Phi(t_2)}\leq 2$ for a sufficiently large $n$.
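Fact 1 of Proposition \[prop:standard-normal\] is easy to confirm numerically; the following quick check (our illustration) compares both bounds of (\[eq:Gaussian-tail\]) with the exact tail computed by `scipy`.

```python
import numpy as np
from scipy.stats import norm

# check (1 - t^{-2}) phi(t)/t < 1 - Phi(t) < (1 - t^{-2} + 3 t^{-4}) phi(t)/t
for t in [1.5, 2.0, 3.0, 5.0, 10.0]:
    base = np.exp(-t**2 / 2) / (t * np.sqrt(2 * np.pi))   # phi(t)/t
    lower = (1 - t**-2) * base
    upper = (1 - t**-2 + 3 * t**-4) * base
    tail = norm.sf(t)                                     # 1 - Phi(t)
    assert lower < tail < upper
    print(f"t={t:4.1f}  lower={lower:.3e}  tail={tail:.3e}  upper={upper:.3e}")
```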
\[lem:easy-tail\] Consider independent $U^2\sim \chi_{1,2r\log n}^2$ and $V^2\sim \chi_{1,2s\log n}^2$. We have $$\mathbb{P}\left(U^2\leq 2t\log n\right)\asymp
\begin{cases}
\frac{1}{\sqrt{\log n}}n^{-(\sqrt{r}-\sqrt{t})^2}, & 0<t<r, \\
1, & t\geq r,
\end{cases}$$ and $$\mathbb{P}\left(|U|-|V|>t\sqrt{2\log n}\right)\asymp
\begin{cases}
\frac{1}{\log n}n^{-\left[(t-\sqrt{r})^2+s\right]}, & t>\sqrt{r}+\sqrt{s}, \\
\frac{1}{\sqrt{\log n}}n^{-\frac{1}{2}(t-\sqrt{r}+\sqrt{s})^2}, & \sqrt{r}-\sqrt{s}<t\leq\sqrt{r}+\sqrt{s}, \\
1, & t\leq \sqrt{r}-\sqrt{s}.
\end{cases}$$
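The first display of Lemma \[lem:easy-tail\] can also be checked numerically. In the sketch below (our illustration; the choice of $r$ and $t$ is arbitrary), the exact noncentral $\chi^2$ probability is compared with the claimed order $\frac{1}{\sqrt{\log n}}n^{-(\sqrt{r}-\sqrt{t})^2}$; the ratio of the two should stabilize as $n$ grows.

```python
import numpy as np
from scipy.stats import ncx2

r, t = 1.0, 0.25                    # any 0 < t < r
for n in [10**3, 10**4, 10**5, 10**6]:
    ln = np.log(n)
    exact = ncx2.cdf(2 * t * ln, df=1, nc=2 * r * ln)   # P(U^2 <= 2 t log n)
    order = n ** (-(np.sqrt(r) - np.sqrt(t)) ** 2) / np.sqrt(ln)
    print(f"n={n:>8}  exact={exact:.3e}  order={order:.3e}  ratio={exact/order:.3f}")
```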
\[lem:comp-tail-0\] Consider independent $U\sim N(0,1)$ and $V\sim N(\sqrt{2(r+s)\log n},1)$. We have $$\begin{aligned}
&& \mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right) \\
&\asymp& \mathbb{P}\left(\sqrt{r}|U|-(\sqrt{r+s}-\sqrt{s})|V|>t\sqrt{2\log n}\right) \\
&\asymp& \begin{cases}
\frac{1}{\log n}n^{-\frac{t^2+r(r+s)}{r}}, & t>r+s+\sqrt{s}\sqrt{r+s}, \\
\frac{1}{\sqrt{\log n}}n^{-\frac{(t+r+s-\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}},& -(r+s)+\sqrt{s}\sqrt{r+s}<t\leq r+s+\sqrt{s}\sqrt{r+s}, \\
1, & t\leq -(r+s)+\sqrt{s}\sqrt{r+s}.
\end{cases}\end{aligned}$$
\[lem:comp-tail-1\] Consider independent $U\sim N(\sqrt{2r\log n},1)$ and $V\sim N(\sqrt{2s\log n},1)$. We have $$\begin{aligned}
&& \mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right) \\
&\asymp& \begin{cases}
\frac{1}{\log n}n^{-\frac{(t-r)^2+rs}{r}}, & t>r+s+\sqrt{s}\sqrt{r+s}, \\
\frac{1}{\sqrt{\log n}}n^{-\frac{(t-(r+s)+\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}}, & r+s-\sqrt{s}\sqrt{r+s}<t \leq r+s+\sqrt{s}\sqrt{r+s}, \\
1, & t\leq r+s-\sqrt{s}\sqrt{r+s}.
\end{cases}\end{aligned}$$ Moreover, for $t<r+s-\sqrt{s}\sqrt{r+s}$, we have $$\mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right)\rightarrow 1.$$
\[lem:Laplace\] Let $f:\mathcal{X}\rightarrow\mathbb{R}$ be a measurable function on the measurable space $(\mathcal{X},\mathcal{F},\nu)$ that satisfies $\int_{\mathcal{X}}e^{n_0f}d\nu<\infty$ for some $n_0>0$. Then, we have $$\lim_{n\rightarrow\infty}\frac{1}{n}\log\int_{\mathcal{X}}e^{n f}d\nu={\mathop{\rm ess}}\sup_{x\in\mathcal{X}}f(x),$$ where ${\mathop{\rm ess}}\sup_{x\in\mathcal{X}}f(x)=\inf\{a\in\mathbb{R}:\nu(\{x:f(x)>a\})=0\}$ is the essential supremum.
\[lem:Yihong-Wu\] For any $t\geq 0$, $(\sqrt{2}-1)^2 t\wedge t^2\leq (\sqrt{1+t}-1)^2\leq t\wedge t^2$. For any $t\geq -1$, $\sqrt{1+t}\geq 1+\frac{t}{2}-t^2$.
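Both inequalities in Lemma \[lem:Yihong-Wu\] are elementary; a grid check (our illustration) is immediate:

```python
import numpy as np

t = np.linspace(0.0, 50.0, 200_001)
lhs = (np.sqrt(1.0 + t) - 1.0) ** 2
low = (np.sqrt(2.0) - 1.0) ** 2 * np.minimum(t, t ** 2)
assert np.all(low <= lhs + 1e-12) and np.all(lhs <= np.minimum(t, t ** 2) + 1e-12)

u = np.linspace(-1.0, 50.0, 200_001)
assert np.all(np.sqrt(1.0 + u) >= 1.0 + u / 2.0 - u ** 2 - 1e-12)
print("both inequalities hold on the grid")
```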
Proofs of Theorem \[thm:V-equal\] and Theorem \[thm:U-V-equal\] {#sec:pf-org1}
---------------------------------------------------------------
According to the Neyman-Pearson lemma, the optimal test for the testing problem (\[eq:equal-sum-null\])-(\[eq:equal-sum-alt\]) is the likelihood ratio test. So it is sufficient to prove the consistency of any test, and then the consistency of the likelihood ratio test is implied. We define the statistic $$T_n(t)=\frac{\sum_{i=1}^n{{\mathbf{1}_{\left\{{V_i^2\leq 2t\log n}\right\}}}}-n\mathbb{P}(\chi_{1,2r\log n}^2\leq 2t\log n)}{\sqrt{n\mathbb{P}(\chi_{1,2r\log n}^2\leq 2t\log n)\left(1-\mathbb{P}(\chi_{1,2r\log n}^2\leq 2t\log n)\right)}},$$ with $t\in(0,r)$ to be chosen later. Since $T_n(t)$ has mean $0$ and variance $1$ under $H_0$, the test ${{\mathbf{1}_{\left\{{|T_n(t)|>\sqrt{\log n}}\right\}}}}$ has a vanishing Type-I error by applying Chebyshev’s inequality. To analyze the Type-II error, we need to study the expectation and the variance of $T_n(t)$ under $H_1$. By Lemma \[lem:easy-tail\], we have $$\mathbb{P}(\chi_{1,2r\log n}^2\leq 2t\log n)\asymp \frac{1}{\sqrt{\log n}}n^{-(\sqrt{r}-\sqrt{t})^2},\quad\text{and}\quad \mathbb{P}(\chi_1^2\leq 2t\log n)\asymp 1,$$ which implies $$\begin{aligned}
&& \mathbb{E}_{H_1}\left(\sum_{i=1}^n{{\mathbf{1}_{\left\{{V_i^2\leq 2t\log n}\right\}}}}-n\mathbb{P}(\chi_{1,2r\log n}^2\leq 2t\log n)\right) \\
&=& n\epsilon\left(\mathbb{P}(\chi_1^2\leq 2t\log n)-\mathbb{P}(\chi_{1,2r\log n}^2\leq 2t\log n)\right) \\
&\asymp& n^{1-\beta},\end{aligned}$$ and $$\begin{aligned}
&& \Var_{H_1}\left(\sum_{i=1}^n{{\mathbf{1}_{\left\{{V_i^2\leq 2t\log n}\right\}}}}-n\mathbb{P}(\chi_{1,2r\log n}^2\leq 2t\log n)\right) \\
&\asymp& n(1-\epsilon)\mathbb{P}(\chi_{1,2r\log n}^2\leq 2t\log n) + n\epsilon \mathbb{P}(\chi_1^2\leq 2t\log n) \\
&\asymp& \frac{1}{\sqrt{\log n}}n^{1-(\sqrt{r}-\sqrt{t})^2} + n^{1-\beta}.\end{aligned}$$ Hence, $$\frac{(\mathbb{E}_{H_1}T_n(t))^2}{\Var_{H_1}(T_n(t))}\asymp n^{1-\beta} \wedge (\sqrt{\log n})n^{1-2\beta+(\sqrt{r}-\sqrt{t})^2}.\label{eq:E-Var-ratio-V-equal}$$ Therefore, as long as $1-\beta>0$ and $1-2\beta+r>0$, we can choose a sufficiently small constant $t\in(0,r)$, such that $\frac{(\mathbb{E}_{H_1}T_n(t))^2}{\Var_{H_1}(T_n(t))}$ diverges to infinity at some polynomial rate. This implies a vanishing Type-II error by Chebyshev’s inequality. Finally, note that the conditions $1-\beta>0$ and $1-2\beta+r>0$ can be equivalently written as $\beta<1\wedge\frac{r+1}{2}$, and the proof is complete.
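The moment-based test used in this proof is easy to simulate. The following sketch (our illustration; the model is simplified so that $V_i^2\sim\chi_{1,2r\log n}^2$ under $H_0$, with the calibration $\epsilon=n^{-\beta}$ and $\beta<1\wedge\frac{r+1}{2}$; all names are ours):

```python
import numpy as np
from scipy.stats import ncx2

def moment_test(V, r, t, n):
    # T_n(t): standardized count of coordinates with V_i^2 <= 2 t log n;
    # reject when |T_n(t)| > sqrt(log n), as in the proof
    ln = np.log(n)
    p0 = ncx2.cdf(2 * t * ln, df=1, nc=2 * r * ln)   # null probability
    count = np.sum(V**2 <= 2 * t * ln)
    T = (count - n * p0) / np.sqrt(n * p0 * (1 - p0))
    return abs(T) > np.sqrt(ln)

rng = np.random.default_rng(1)
n, r, beta, t = 100_000, 1.0, 0.6, 0.1               # beta < 1 and beta < (r+1)/2
mu = np.sqrt(2 * r * np.log(n))
V0 = rng.choice([-1, 1], size=n) * mu + rng.standard_normal(n)   # H0 sample
keep = rng.random(n) > n ** (-beta)
V1 = np.where(keep, V0, rng.standard_normal(n))      # H1: eps-fraction central
print(moment_test(V0, r, t, n), moment_test(V1, r, t, n))  # typically False, True
```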
Let us write the null distribution (\[eq:equal-sum-null\]) and the alternative distribution (\[eq:equal-sum-alt\]) as $P$ and $(1-\epsilon)P+\epsilon Q$, where the densities of $P$ and $Q$ are given by $$p(v)=\frac{1}{2}\phi(v-\sqrt{2r\log n})+\frac{1}{2}\phi(v+\sqrt{2r\log n}),\quad\text{and}\quad q(v)=\phi(v).$$ We use $\phi(\cdot)$ for the density function of $N(0,1)$. It suffices to prove that $H(P,(1-\epsilon)P+\epsilon Q)^2=o(n^{-1})$ so that no consistent test exists (see [@cai2014optimal]). We adapt an argument in [@tony2011optimal] to upper bound the Hellinger distance. We have for any measurable set $D$, $$\begin{aligned}
\nonumber H(P,(1-\epsilon)P+\epsilon Q)^2 &=& 1-\int\sqrt{p((1-\epsilon) p+\epsilon q)} \\
\nonumber &=& 1-\mathbb{E}_P\sqrt{1+\epsilon\left(\frac{q}{p}-1\right)} \\
\nonumber &\leq& 1-\mathbb{E}_P\sqrt{1+\epsilon\left(\frac{q}{p}\mathbf{1}_D-1\right)} \\
\label{eq:used-Jeng} &\leq& -\frac{1}{2}\epsilon\mathbb{E}_P\left(\frac{q}{p}\mathbf{1}_D-1\right) + \epsilon^2\mathbb{E}_P\left(\frac{q}{p}\mathbf{1}_D-1\right)^2 \\
\nonumber &\leq& \frac{1}{2}\epsilon\int_{D^c}q + 2\epsilon^2\int_D\frac{q^2}{p} + 2\epsilon^2,\end{aligned}$$ where the inequality (\[eq:used-Jeng\]) is by Lemma \[lem:Yihong-Wu\] and the fact that $\epsilon\left(\frac{q}{p}\mathbf{1}_D-1\right)\geq -\epsilon\geq -1$. Since $\beta>1\wedge\frac{r+1}{2}>\frac{1}{2}$ implies $\epsilon^2=n^{-2\beta}=o(n^{-1})$, it suffices to prove $$\epsilon\int_{D^c}q=o(n^{-1}),\quad\text{and}\quad \epsilon^2\int_D\frac{q^2}{p}=o(n^{-1}). \label{eq:two-conditions-D}$$ To this end, we choose $D=\mathbb{R}$, which implies $\epsilon\int_{D^c}q=0$. For the second term, we have $$\begin{aligned}
\epsilon^2\int\frac{q^2}{p} &=& 4\epsilon^2\int_0^{\infty}\frac{\phi(v)^2}{\phi(v-\sqrt{2r\log n})+\phi(v+\sqrt{2r\log n})}dv \\
&\leq& 4\epsilon^2\int_0^{\infty}\frac{\phi(v)^2}{\phi(v-\sqrt{2r\log n})}dv \\
&=& 4\epsilon^2n^{2r}\int_0^{\infty}\phi(v+\sqrt{2r\log n})dv \\
&\leq& 4\epsilon^2n^{r},\end{aligned}$$ where the last inequality is a standard Gaussian tail bound (\[eq:Gaussian-tail\]). Therefore, when $\beta>\frac{r+1}{2}$, we have $\epsilon^2\int\frac{q^2}{p}\leq 4\epsilon^2n^{r}=4n^{-2\beta+r}=o(n^{-1})$.
When $\beta>1$, we can choose $D=\varnothing$ in (\[eq:two-conditions-D\]), which implies $\epsilon^2\int_D\frac{q^2}{p}=0$. Then we have $\epsilon\int_{D^c}q=\epsilon=n^{-\beta}=o(n^{-1})$. The proof is completed by combining the two cases.
Let us write the null distribution (\[eq:equal-comb-null\]) and the alternative distribution (\[eq:equal-comb-alt\]) by $P$ and $(1-\epsilon)P+\epsilon Q$, respectively, where the density functions of $P$ and $Q$ are given by $$\begin{aligned}
\label{eq:P-up} p(u,v) &=& \frac{1}{2}\phi(u)\phi(v-\sqrt{2r\log n}) + \frac{1}{2}\phi(u)\phi(v+\sqrt{2r\log n}), \\
\label{eq:Q-up} q(u,v) &=& \frac{1}{2}\phi(u-\sqrt{2r\log n})\phi(v) + \frac{1}{2}\phi(u+\sqrt{2r\log n})\phi(v).\end{aligned}$$ We only need to prove consistency of (\[eq:HC-comb\]), and then consistency of the likelihood ratio test is implied by the Neyman–Pearson lemma. We can equivalently write the test (\[eq:HC-comb\]) as ${{\mathbf{1}_{\left\{{\sup_{t\in\mathbb{R}}|T_n(t)|>\sqrt{2(1+\delta)\log\log n}}\right\}}}}$, where $$T_n(t)=\frac{\sum_{i=1}^n{{\mathbf{1}_{\left\{{|U_i|-|V_i|>t\sqrt{2\log n}}\right\}}}}-nS_{\|\theta\|}(t\sqrt{2\log n})}{\sqrt{nS_{\|\theta\|}(t\sqrt{2\log n})(1-S_{\|\theta\|}(t\sqrt{2\log n}))}}.$$ By [@shorack2009empirical] and a standard argument in [@donoho2004higher], $\frac{\sup_{t\in\mathbb{R}}|T_n(t)|}{\sqrt{2\log\log n}}$ converges to $1$ in probability under $H_0$, which then implies a vanishing Type-I error. The Type-II error can be bounded by $$\mathbb{P}_{H_1}\left(\sup_{t\in\mathbb{R}}|T_n(t)|\leq\sqrt{2(1+\delta)\log\log n}\right)\leq \mathbb{P}_{H_1}\left(|T_n(\bar{t})|\leq\sqrt{2(1+\delta)\log\log n}\right),$$ for some $\bar{t}\in\mathbb{R}$ to be chosen appropriately. So it suffices to choose a $\bar{t}$ so that $\frac{(\mathbb{E}_{H_1}T_n(\bar{t}))^2}{\Var_{H_1}(T_n(\bar{t}))}$ diverges to infinity at some polynomial rate. By Lemma \[lem:easy-tail\], we have $$\begin{aligned}
P\left(|U|-|V|>\bar{t}\sqrt{2\log n}\right) &\asymp& \frac{1}{\log n}n^{-(\bar{t}^2+r)}, \\
Q\left(|U|-|V|>\bar{t}\sqrt{2\log n}\right) &\asymp& \frac{1}{\log n}n^{-(\bar{t}-\sqrt{r})^2},\end{aligned}$$ for any constant $\bar{t}>\sqrt{r}$. Therefore, $$\begin{aligned}
&& \mathbb{E}_{H_1}\left(\sum_{i=1}^n{{\mathbf{1}_{\left\{{|U_i|-|V_i|>\bar{t}\sqrt{2\log n}}\right\}}}}-nS_{\|\theta\|}(\bar{t}\sqrt{2\log n})\right) \\
&=& n\epsilon\left(Q\left(|U|-|V|>\bar{t}\sqrt{2\log n}\right)-P\left(|U|-|V|>\bar{t}\sqrt{2\log n}\right)\right) \\
&\asymp& \frac{1}{\log n}n^{1-\beta-(\bar{t}-\sqrt{r})^2},\end{aligned}$$ and $$\begin{aligned}
&& \Var_{H_1}\left(\sum_{i=1}^n{{\mathbf{1}_{\left\{{|U_i|-|V_i|>\bar{t}\sqrt{2\log n}}\right\}}}}-nS_{\|\theta\|}(\bar{t}\sqrt{2\log n})\right) \\
&\asymp& n(1-\epsilon)P\left(|U|-|V|>\bar{t}\sqrt{2\log n}\right) + n\epsilon Q\left(|U|-|V|>\bar{t}\sqrt{2\log n}\right) \\
&\asymp& \frac{1}{\log n}n^{1-(\bar{t}^2+r)} + \frac{1}{\log n}n^{1-\beta-(\bar{t}-\sqrt{r})^2},\end{aligned}$$ which implies $$\frac{(\mathbb{E}_{H_1}T_n(\bar{t}))^2}{\Var_{H_1}(T_n(\bar{t}))}\asymp \frac{1}{\log n}\left(n^{1-2\beta-2(\bar{t}-\sqrt{r})^2+\bar{t}^2+r}\wedge n^{1-\beta-(\bar{t}-\sqrt{r})^2}\right).$$ We choose $\bar{t}=2\sqrt{r}\wedge(\sqrt{r}+\sqrt{1-\beta^*(r)})$, and then $\frac{(\mathbb{E}_{H_1}T_n(\bar{t}))^2}{\Var_{H_1}(T_n(\bar{t}))}\rightarrow\infty$ at some polynomial rate as long as $\beta<\beta^*(r)$. This implies a vanishing Type-II error by Chebyshev’s inequality. The proof is complete.
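The supremum $\sup_{t\in\mathbb{R}}|T_n(t)|$ appearing in this proof can be computed exactly by scanning the sorted sample, since the empirical count is a step function of $t$. A minimal sketch (our illustration; the null survival function $S$ must be supplied, e.g., by numerical integration of the null density, and ties are ignored):

```python
import numpy as np

def higher_criticism(W, null_sf):
    # sup over t of |T_n(t)| for the statistics W_i = |U_i| - |V_i|;
    # null_sf(w) returns the null survival probability S(w) = P(|U| - |V| > w)
    n = len(W)
    w = np.sort(W)
    S = null_sf(w)                          # decreasing in w
    counts = n - np.arange(1, n + 1)        # #{i : W_i > w_(k)}, ignoring ties
    T = (counts - n * S) / np.sqrt(np.maximum(n * S * (1 - S), 1e-12))
    return np.max(np.abs(T))

# reject H0 when higher_criticism(W, S) exceeds sqrt(2 (1 + delta) log log n)
```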
Recall the notation $p$ and $q$ in (\[eq:P-up\]) and (\[eq:Q-up\]). By the same argument as in the proof of Theorem \[thm:V-equal\] (lower bound), it suffices to show (\[eq:two-conditions-D\]) for some set $D$. To this end, we choose $D=\{(u,v):|u|\leq (\sqrt{r}+\sqrt{1-\beta^*(r)})\sqrt{2\log n}\}$. Then, $$\begin{aligned}
\epsilon\int_{D^c}q &=& \epsilon \mathbb{P}\left(|N(\sqrt{2r\log n},1)|>(\sqrt{r}+\sqrt{1-\beta^*(r)})\sqrt{2\log n}\right) \\
&\leq& 2\epsilon \mathbb{P}\left(N(0,1)>\sqrt{1-\beta^*(r)}\sqrt{2\log n}\right) \\
&\leq& 2\epsilon n^{-(1-\beta^*(r))},\end{aligned}$$ which implies $\epsilon\int_{D^c}q=o(n^{-1})$ when $\beta>\beta^*(r)$. We also have $$\begin{aligned}
\nonumber \epsilon^2\int_D\frac{q^2}{p} &=& 4\epsilon^2 \int_{D\cap\{(u,v):u>0,v>0\}}\frac{q(u,v)^2}{p(u,v)}dudv \\
\label{eq:bi-da-xiao} &\leq& 8\epsilon^2\int_{D\cap\{(u,v):u>0,v>0\}}\frac{\phi(u-\sqrt{2r\log n})^2\phi(v)^2}{\phi(u)\phi(v-\sqrt{2r\log n})}dudv \\
\nonumber &=& 8\epsilon^2\int_0^{(\sqrt{r}+\sqrt{1-\beta^*(r)})\sqrt{2\log n}}\frac{\phi(u-\sqrt{2r\log n})^2}{\phi(u)}du\int_0^{\infty}\frac{\phi(v)^2}{\phi(v-\sqrt{2r\log n})}dv \\
\nonumber &\leq& 8\epsilon^2n^{3r}\int_0^{(\sqrt{r}+\sqrt{1-\beta^*(r)})\sqrt{2\log n}}\phi(u-2\sqrt{2r\log n})du \\
\nonumber &=& 8\epsilon^2n^{3r}\mathbb{P}\left(N(0,1)\leq -(\sqrt{r}-\sqrt{1-\beta^*(r)})\sqrt{2\log n}\right) \\
\nonumber &\leq& \begin{cases}
8n^{-2\beta+3r-(\sqrt{r}-\sqrt{1-\beta^*(r)})^2}, & r>1-\beta^*(r), \\
8n^{-2\beta+3r}, & r\leq 1-\beta^*(r),
\end{cases} \\
\nonumber &=& \begin{cases}
8n^{-2\beta+3r-(\sqrt{r}-\sqrt{1-\beta^*(r)})^2}, & r>\frac{1}{5}, \\
8n^{-2\beta+3r}, & r\leq \frac{1}{5},
\end{cases}\end{aligned}$$ where we have used the fact that $\phi(u-\sqrt{2r\log n})>\phi(u+\sqrt{2r\log n})$ when $u>0$ in (\[eq:bi-da-xiao\]). When $r\leq \frac{1}{5}$, we have $-2\beta+3r< -2\beta^*(r)+3r=-1$. When $\frac{1}{5}<r< \frac{1}{2}$, we have $-2\beta+3r-(\sqrt{r}-\sqrt{1-\beta^*(r)})^2<-2\beta^*(r)+3r-(\sqrt{r}-\sqrt{1-\beta^*(r)})^2\leq -1$ by the definition of $\beta^*(r)$. Therefore, we have $\epsilon^2\int_D\frac{q^2}{p}=o(n^{-1})$ and thus (\[eq:two-conditions-D\]) holds whenever $r<\frac{1}{2}$. When $r\geq\frac{1}{2}$, we have $\beta^*(r)=1$, and we need to establish (\[eq:two-conditions-D\]) for $\beta>1$. This can be done by choosing $D=\varnothing$ in (\[eq:two-conditions-D\]), which implies $\epsilon^2\int_D\frac{q^2}{p}=0$. Then, when $\beta>1$, we have $\epsilon\int_{D^c}q=\epsilon=n^{-\beta}=o(n^{-1})$. The proof is complete.
Proofs of Proposition \[prop:equal-diff\], Theorem \[thm:equal-sum\], and Theorem \[thm:main-equal\]
----------------------------------------------------------------------------------------------------
We first bound the Type-I error. For any $z,\sigma\in\{-1,1\}^n$ such that $\ell(z,\sigma)=0$, we either have $z=\sigma$ or $z=-\sigma$. By a union bound, the Type-I error can be bounded from above by $$\sup_{z\in\{-1,1\}^n}P_{(\theta,\eta,z,z)}^{(n)}\psi + \sup_{z\in\{-1,1\}^n}P_{(\theta,\eta,z,-z)}^{(n)}\psi.\label{eq:type-1-3.1}$$ By the definition of $\psi$ in (\[eq:equal-test-diff\]), the first term of (\[eq:type-1-3.1\]) satisfies $$\sup_{z\in\{-1,1\}^n}P_{(\theta,\eta,z,z)}^{(n)}\psi\leq \sup_{z\in\{-1,1\}^n}P_{(\theta,\eta,z,z)}^{(n)}\left(T_n^->\sqrt{2(1+\delta)\log \log n}\right)\rightarrow 0, \label{eq:type-1-3.1-part1}$$ because $T_n^-/\sqrt{2\log\log n}\rightarrow 1$ in $P_{(\theta,\eta,z,z)}^{(n)}$-probability for any $\theta,\eta,z$ [@donoho2004higher; @shorack2009empirical]. Similarly, for the second term in (\[eq:type-1-3.1\]), we have $$\sup_{z\in\{-1,1\}^n}P_{(\theta,\eta,z,-z)}^{(n)}\psi\leq \sup_{z\in\{-1,1\}^n}P_{(\theta,\eta,z,-z)}^{(n)}\left(T_n^+>\sqrt{2(1+\delta)\log \log n}\right)\rightarrow 0, \label{eq:type-1-3.1-part2}$$ and thus the Type-I error is vanishing.
To analyze the Type-II error, we notice that by the definition of $\ell(z,z^*)$, we have $$\begin{aligned}
\nonumber && \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi) \\
\nonumber &\leq& \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\ \frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}\left(T_n^-\leq\sqrt{2(1+\delta)\log \log n}\right) \\
\label{eq:type-2-3.1} && + \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\ \frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq -\sigma_i}\right\}}}}> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}\left(T_n^+\leq\sqrt{2(1+\delta)\log \log n}\right).\end{aligned}$$ By symmetry, the analyses of the two terms in the above are the same, and thus we only analyze the first term. For any $z,\sigma\in\{-1,1\}^n$ that satisfy $\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}>\epsilon$, we have $$P^{(n)}_{(\theta,\eta,z,\sigma)}\left(T_n^-\leq\sqrt{2(1+\delta)\log \log n}\right)\leq P^{(n)}_{(\theta,\eta,z,\sigma)}\left(T_n^-(4r\wedge 1)\leq\sqrt{2(1+\delta)\log \log n}\right), \label{eq:type-2-3.1-part}$$ where for any $t>0$, we use the notation $$T_n^-(t)=\frac{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{|{\widetilde}{X}_i-{\widetilde}{Y}_i|^2/2> 2t\log n}\right\}}}}-n\mathbb{P}(\chi_1^2>2t\log n)\right|}{\sqrt{n\mathbb{P}(\chi_1^2>2t\log n)(1-\mathbb{P}(\chi_1^2>2t\log n))}}.$$ We can follow the same analysis in [@donoho2004higher; @cai2014optimal] and show that $$\frac{\Var(T_n^-(4r\wedge 1))}{(\mathbb{E}T_n^-(4r\wedge 1))^2}\rightarrow 0$$ at some polynomial rate as $n\rightarrow\infty$ whenever $\beta<\beta_{\rm IDJ}^*(r)$, where the variance and expectation above are under $P^{(n)}_{(\theta,\eta,z,\sigma)}$. This implies a vanishing Type-II error by Chebyshev’s inequality, and thus the proof is complete.
The Type-I error is vanishing by the same arguments as used in (\[eq:type-1-3.1\])-(\[eq:type-1-3.1-part2\]). For the Type-II error, we follow (\[eq:type-2-3.1\]) and (\[eq:type-2-3.1-part\]), and thus it suffices to prove $$P^{(n)}_{(\theta,\eta,z,\sigma)}\left(\bar{T}_n^+(t)\leq\sqrt{2(1+\delta)\log \log n}\right)\rightarrow 0, \label{eq:want-to-prove-eq-sum}$$ uniformly over any $z,\sigma\in\{-1,1\}^n$ that satisfy $\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}>\epsilon$. The $\bar{T}_n^+(t)$ in (\[eq:want-to-prove-eq-sum\]) is defined by $$\bar{T}_n^+(t)=\frac{\sum_{i=1}^n{{\mathbf{1}_{\left\{{({\widetilde}{X}_i+{\widetilde}{Y}_i)^2/2\leq 2t\log n}\right\}}}}-n\mathbb{P}(\chi_{1,2\|\theta\|^2}^2\leq 2t\log n)}{\sqrt{n\mathbb{P}(\chi_{1,2\|\theta\|^2}^2\leq 2t\log n)\left(1-\mathbb{P}(\chi_{1,2\|\theta\|^2}^2\leq 2t\log n)\right)}}.$$ The mean and variance of $\bar{T}_n^+(t)$ can be analyzed by following the same argument in the proof of Theorem \[thm:V-equal\], and thus we can obtain (\[eq:E-Var-ratio-V-equal\]) with $T_n(t)$ replaced by $\bar{T}_n^+(t)$. With an appropriate choice of $t$ and an application of Chebyshev’s inequality, we can show the Type-II error is vanishing, and thus the proof is complete.
Similar to the proof of Theorem \[thm:equal-sum\], the upper bound conclusion directly follows the arguments used in the proofs of Proposition \[prop:equal-diff\] and Theorem \[thm:U-V-equal\]. Thus, we only prove the lower bound. For the first term of (\[eq:def-worst-risk\]), we have $$\sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)=0}}P^{(n)}_{(\theta,\eta,z,\sigma)}\psi\geq \sup_{z\in\{-1,1\}^n}P^{(n)}_{(\theta,\eta,z,z)}\psi. \label{eq:remove-label-switch}$$ To analyze the second term of (\[eq:def-worst-risk\]), we note that the condition $\beta>\beta^*(r)$ implies that there exists some small constant $\delta>0$ such that $\beta>\beta^*(r)+\delta$. We use the notation $\bar{\epsilon}=n^{-(\beta-\delta)}$ so that $\bar{\epsilon}>\epsilon$. Now we use ${\widetilde}{P}^{(n)}_{(\theta,\eta,\bar{\epsilon})}$ to denote a joint distribution over $z,\sigma,X,Y$, the sampling process of which is described below:
1. Draw $z_i$ uniformly from $\{-1,1\}$ independently over all $i\in[n]$.
2. Conditioning on $z$, draw $\sigma_i$ independently over all $i\in[n]$ so that $\sigma_i=z_i$ with probability $1-\bar{\epsilon}$ and $\sigma_i=-z_i$ with probability $\bar{\epsilon}$.
3. Conditioning on $z$ and $\sigma$, independently sample $X_i|z_i\sim N(z_i\theta,I_p)$ and $Y_i|\sigma_i\sim N(\sigma_i\eta,I_q)$ for all $i\in[n]$.
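This sampling process is straightforward to implement; a direct transcription (our illustration) reads:

```python
import numpy as np

def sample_joint(n, theta, eta, eps_bar, rng):
    # one draw of (z, sigma, X, Y) from the joint distribution
    # P~^{(n)}_{(theta, eta, eps_bar)} defined by steps 1-3 above
    z = rng.choice([-1, 1], size=n)                                  # step 1
    sigma = np.where(rng.random(n) < eps_bar, -z, z)                 # step 2
    X = z[:, None] * theta + rng.standard_normal((n, len(theta)))    # step 3
    Y = sigma[:, None] * eta + rng.standard_normal((n, len(eta)))
    return z, sigma, X, Y
```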
Consider the event $$G=\left\{\left|\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}-n\bar{\epsilon}\right|\leq\sqrt{n\bar{\epsilon}\log n}\right\}.$$ We can check by Chebyshev’s inequality that ${\widetilde}{P}^{(n)}_{(\theta,\eta,\bar{\epsilon})}(G^c)\rightarrow 0$ as $n\rightarrow\infty$. We also have $$G\subset\left\{\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}\in(\epsilon,1-\epsilon)\right\},$$ as long as $n$ is sufficiently large by the definition of $\bar{\epsilon}$. Now we can lower bound the second term of (\[eq:def-worst-risk\]) by $$\begin{aligned}
\sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi) &=& \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}\in(\epsilon,1-\epsilon)}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi) \\
&\geq& {\widetilde}{P}^{(n)}_{(\theta,\eta,\bar{\epsilon})}(1-\psi){{\mathbf{1}_{\left\{{G}\right\}}}} \\
&\geq& {\widetilde}{P}^{(n)}_{(\theta,\eta,\bar{\epsilon})}(1-\psi)-{\widetilde}{P}^{(n)}_{(\theta,\eta,\bar{\epsilon})}(G^c).\end{aligned}$$ Together with (\[eq:remove-label-switch\]), this implies $$\begin{aligned}
&& \inf_{\psi}\left(\sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)=0}}P^{(n)}_{(\theta,\eta,z,\sigma)}\psi+\sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi)\right) \\
&\geq& \inf_{\psi}\left({\widetilde}{P}^{(n)}_{(\theta,\eta,0)}\psi+{\widetilde}{P}^{(n)}_{(\theta,\eta,\bar{\epsilon})}(1-\psi)\right)-{\widetilde}{P}^{(n)}_{(\theta,\eta,\bar{\epsilon})}(G^c).\end{aligned}$$ Since the second term in the above bound is vanishing, it is sufficient to lower bound $\inf_{\psi}\left({\widetilde}{P}^{(n)}_{(\theta,\eta,0)}\psi+{\widetilde}{P}^{(n)}_{(\theta,\eta,\bar{\epsilon})}(1-\psi)\right)$ by a constant. Define $U_i=\frac{1}{\sqrt{2}}(\theta^TX_i/\|\theta\|-\eta^TY_i/\|\eta\|)$, $V_i=\frac{1}{\sqrt{2}}(\theta^TX_i/\|\theta\|+\eta^TY_i/\|\eta\|)$, and $W_i=R^T\begin{pmatrix}
X_i \\
Y_i
\end{pmatrix}$ for all $i\in[n]$, where $R\in\mathbb{R}^{(p+q)\times (p+q-2)}$ is a matrix the columns of which form an orthonormal basis of $\mathbb{R}^{p+q}$ together with $\frac{1}{\sqrt{2}}\begin{pmatrix}
\theta/\|\theta\| \\
-\eta/\|\eta\|
\end{pmatrix}$ and $\frac{1}{\sqrt{2}}\begin{pmatrix}
\theta/\|\theta\| \\
\eta/\|\eta\|
\end{pmatrix}$. We note that the distributions of $\{W_i\}_{i\in[n]}$ under ${\widetilde}{P}^{(n)}_{(\theta,\eta,0)}$ and ${\widetilde}{P}^{(n)}_{(\theta,\eta,\bar{\epsilon})}$ are the same. Moreover, $\{W_i\}_{i\in[n]}$ is independent of both $\{U_i\}_{i\in [n]}$ and $\{V_i\}_{i\in[n]}$ under both ${\widetilde}{P}^{(n)}_{(\theta,\eta,0)}$ and ${\widetilde}{P}^{(n)}_{(\theta,\eta,\bar{\epsilon})}$. Therefore, by the connection between testing error and total variation distance, we have $$\begin{aligned}
&& \inf_{\psi}\left({\widetilde}{P}^{(n)}_{(\theta,\eta,0)}\psi+{\widetilde}{P}^{(n)}_{(\theta,\eta,\bar{\epsilon})}(1-\psi)\right) \\
&=& 1-\frac{1}{2}\int |p_0(x,y)-p_1(x,y)| \\
&=& 1-\frac{1}{2}\int|p_0(u,v)p_0(w)-p_1(u,v)p_1(w)| \\
&=& 1-\frac{1}{2}|p_0(u,v)-p_1(u,v)|,\end{aligned}$$ where we abuse the notation $p_0$ and $p_1$ for the density functions of $X,Y,U,V,W$ under ${\widetilde}{P}^{(n)}_{(\theta,\eta,0)}$ and ${\widetilde}{P}^{(n)}_{(\theta,\eta,\bar{\epsilon})}$, respectively. The last equality above uses the fact that $p_0(w)=p_1(w)$. Note that $1-\frac{1}{2}|p_0(u,v)-p_1(u,v)|$ is exactly the testing error of (\[eq:equal-comb-null\])-(\[eq:equal-comb-alt\]) with $\epsilon$ replaced by $\bar{\epsilon}$. Since $\beta-\delta>\beta^*(r)$, we can apply Theorem \[thm:U-V-equal\] and get $$\liminf_{n\rightarrow\infty}\left(1-\frac{1}{2}|p_0(u,v)-p_1(u,v)|\right)>c,$$ for some constant $c>0$, and this completes the proof.
Proofs of Theorem \[thm:general-separate\], Theorem \[thm:general-HC\], and Theorem \[thm:main-general\]
--------------------------------------------------------------------------------------------------------
To facilitate the proofs of these theorems, we first state and prove several propositions. Each solves a small optimization problem that will be used in the arguments.
\[prop:opt1\] We have $$\max_u\left(2\sqrt{r}|u|-\frac{u^2}{2}\right)=2r,$$ and the maximum is achieved at $|u|=2\sqrt{r}$.
Obvious.
\[prop:opt2\] We have $$\max_v\left(-2(\sqrt{r+s}-\sqrt{s})|v+\sqrt{r+s}|-\frac{v^2}{2}\right)=
\begin{cases}
-\frac{r+s}{2}, & 3s\leq r, \\
-2(\sqrt{r+s}-\sqrt{s})\sqrt{s}, & 3s> r.
\end{cases}$$ The maximum is achieved at $v=-\sqrt{r+s}$ and at $v=-2(\sqrt{r+s}-\sqrt{s})$ in the two cases respectively.
We write the objective function as $f(v)$, and then $$f'(v)=\begin{cases}
2(\sqrt{r+s}-\sqrt{s})-v, & v<-\sqrt{r+s}, \\
-2(\sqrt{r+s}-\sqrt{s})-v, & v\geq-\sqrt{r+s}.
\end{cases}$$ It is easy to see that $f'(v)$ is a decreasing function, and it goes from positive to negative as its argument goes from $-\infty$ to $\infty$. This implies that $f(v)$ is first increasing and then decreasing. When $3s\leq r$, we have $$-\sqrt{r+s}\geq-2(\sqrt{r+s}-\sqrt{s}),$$ so that the point where $f(v)$ changes from increasing to decreasing is $-\sqrt{r+s}$, and thus the maximum is $$f(-\sqrt{r+s})=-\frac{r+s}{2}.$$ When $3s> r$, we have $$-\sqrt{r+s}<-2(\sqrt{r+s}-\sqrt{s}),$$ which implies that the point where $f(v)$ changes from increasing to decreasing is $-2(\sqrt{r+s}-\sqrt{s})$, and thus the maximum is $$f(-2(\sqrt{r+s}-\sqrt{s}))=-2(\sqrt{r+s}-\sqrt{s})\sqrt{s}.$$
\[prop:opt3\] We have $$\max_u\left(2\sqrt{r}|u|-u^2\right)=r,$$ and the maximum is achieved at $|u|=\sqrt{r}$.
Obvious.
\[prop:opt4\] We have $$\max_v\left(-2(\sqrt{r+s}-\sqrt{s})|v+\sqrt{r+s}|-v^2\right)=-r,$$ and the maximum is achieved at $v=-(\sqrt{r+s}-\sqrt{s})$.
We write the objective function as $f(v)$, and then $$f'(v)=\begin{cases}
2(\sqrt{r+s}-\sqrt{s})-2v, & v<-\sqrt{r+s}, \\
-2(\sqrt{r+s}-\sqrt{s})-2v, & v\geq-\sqrt{r+s}.
\end{cases}$$ Since $f'(v)$ goes from positive to negative as its argument goes from $-\infty$ to $\infty$, $f(v)$ is first increasing and then decreasing. The point where it changes from increasing to decreasing is at $v=-(\sqrt{r+s}-\sqrt{s})$, and thus the maximum is $f(-(\sqrt{r+s}-\sqrt{s}))=-r$.
\[prop:opt5\] We have $$\begin{aligned}
&& \max_{u^2+v^2=1}\left(2\sqrt{r}|u|-2(\sqrt{r+s}-\sqrt{s})|v+\sqrt{r+s}|\right) \\
&=& \begin{cases}
2\sqrt{r}\sqrt{1-r-s}, & 2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})>r, \\
\Big[2\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})} & \\
~~~ -2(r+s-\sqrt{s}\sqrt{r+s})\Big], & 2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})\leq r.
\end{cases}\end{aligned}$$
The constraint $u^2+v^2=1$ implies $|u|=\sqrt{1-v^2}$. Then, we can equivalently write the optimization problem as $$\max_{|v|\leq 1}\left(2\sqrt{r}\sqrt{1-v^2}-2(\sqrt{r+s}-\sqrt{s})|v+\sqrt{r+s}|\right).$$ Denote the above objective function as $f(v)$, and we have $$f'(v)=\begin{cases}
-2\sqrt{r}\frac{v}{\sqrt{1-v^2}} + 2(\sqrt{r+s}-\sqrt{s}), & v<-\sqrt{r+s}, \\
-2\sqrt{r}\frac{v}{\sqrt{1-v^2}} - 2(\sqrt{r+s}-\sqrt{s}), & v\geq-\sqrt{r+s}. \\
\end{cases}$$ We observe that $f'(v)$ is a decreasing function on $(-1,1)$. Moreover, $f'(v)$ goes from positive to negative as its argument goes from $-\infty$ to $\infty$. This implies $f(v)$ is first increasing and then decreasing on $(-1,1)$, and we just need to find the point at which the derivative changes sign. First, the solution to the equation $$-2\sqrt{r}\frac{v}{\sqrt{1-v^2}} - 2(\sqrt{r+s}-\sqrt{s})=0$$ is $$v=-\frac{\sqrt{r+s}-\sqrt{s}}{\sqrt{r+(\sqrt{r+s}-\sqrt{s})^2}}.$$ There are two cases. In the first case, $$-\frac{\sqrt{r+s}-\sqrt{s}}{\sqrt{r+(\sqrt{r+s}-\sqrt{s})^2}}<-\sqrt{r+s},$$ which is equivalent to $$2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})>r,$$ and then $-\sqrt{r+s}$ is the point where $f'(v)$ changes its sign. Thus, the maximum is $$f(-\sqrt{r+s})=2\sqrt{r}\sqrt{1-r-s}.$$ In the second case, $$-\frac{\sqrt{r+s}-\sqrt{s}}{\sqrt{r+(\sqrt{r+s}-\sqrt{s})^2}}\geq-\sqrt{r+s},$$ which is equivalent to $$2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})\leq r,$$ and then $-\frac{\sqrt{r+s}-\sqrt{s}}{\sqrt{r+(\sqrt{r+s}-\sqrt{s})^2}}$ is the point where $f'(v)$ changes its sign. Thus, the maximum is $$f\left(-\frac{\sqrt{r+s}-\sqrt{s}}{\sqrt{r+(\sqrt{r+s}-\sqrt{s})^2}}\right)=2\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-2(r+s-\sqrt{s}\sqrt{r+s}).$$
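Each of these closed-form maxima can be cross-checked by a brute-force grid search. The sketch below (our illustration; function names and test points are ours) does so for Proposition \[prop:opt5\], and the same pattern applies to Propositions \[prop:opt1\]-\[prop:opt4\].

```python
import numpy as np

def opt5_closed_form(r, s):
    A = r + s - np.sqrt(s) * np.sqrt(r + s)
    if 2 * (1 - r - s) * A > r:
        return 2 * np.sqrt(r) * np.sqrt(1 - r - s)
    return 2 * np.sqrt(2 * A) - 2 * A

def opt5_grid(r, s, m=1_000_001):
    v = np.linspace(-1.0, 1.0, m)
    u = np.sqrt(1.0 - v**2)                 # |u| on the circle u^2 + v^2 = 1
    f = 2*np.sqrt(r)*u - 2*(np.sqrt(r+s) - np.sqrt(s))*np.abs(v + np.sqrt(r+s))
    return f.max()

for r, s in [(0.3, 0.1), (0.2, 0.05), (0.4, 0.2)]:
    print(r, s, opt5_closed_form(r, s), opt5_grid(r, s))  # the two should agree
```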
Conclusion 1 is the result in [@donoho2004higher]. We only need to prove Conclusion 2. We define the statistic $$T_n(t)=\frac{\sum_{i=1}^n{{\mathbf{1}_{\left\{{|V_i|^2\leq 2t\log n}\right\}}}}-n\mathbb{P}\left(\chi_{1,2(r+s)\log n}^2\leq 2t\log n\right)}{\sqrt{n\mathbb{P}\left(\chi_{1,2(r+s)\log n}^2\leq 2t\log n\right)\left(1-\mathbb{P}\left(\chi_{1,2(r+s)\log n}^2\leq 2t\log n\right)\right)}},$$ and then we can write the test as ${{\mathbf{1}_{\left\{{\sup_{t>0}|T_n(t)|>\sqrt{2(1+\delta)\log\log n}}\right\}}}}$. By [@shorack2009empirical], $\frac{\sup_{t>0}|T_n(t)|}{\sqrt{2\log\log n}}$ converges to $1$ in probability under $H_0$, which then implies a vanishing Type-I error. The Type-II error can be bounded by $$\mathbb{P}_{H_1}\left(\sup_{t>0}|T_n(t)|\leq\sqrt{2(1+\delta)\log\log n}\right)\leq \inf_{t>0}\mathbb{P}_{H_1}\left(|T_n(t)|\leq\sqrt{2(1+\delta)\log\log n}\right).$$ To control the right-hand side of the last display, it suffices to show there exists some $t>0$ so that $\frac{(\mathbb{E}_{H_1}T_n(t))^2}{\Var_{H_1}(T_n(t))}$ diverges to infinity at a polynomial rate. By Lemma \[lem:easy-tail\], we have $$\begin{aligned}
\label{eq:g-s-l81} \mathbb{P}\left(\chi_{1,2(r+s)\log n}^2\leq 2t\log n\right) &\asymp& \begin{cases}
\frac{1}{\sqrt{\log n}}n^{-(\sqrt{r+s}-\sqrt{t})^2}, & 0<t<r+s, \\
1, & t \geq r+s,
\end{cases} \\
\label{eq:g-s-l82}
\mathrm{~~and~~}\mathbb{P}\left(\chi_{1,2s\log n}^2\leq 2t\log n\right) &\asymp& \begin{cases}
\frac{1}{\sqrt{\log n}}n^{-(\sqrt{s}-\sqrt{t})^2}, & 0<t<s, \\
1, & t \geq s.
\end{cases}\end{aligned}$$ Since $$\frac{(\mathbb{E}_{H_1}T_n(t))^2}{\Var_{H_1}(T_n(t))}\asymp \frac{n^2\epsilon^2\mathbb{P}\left(\chi_{1,2s\log n}^2\leq 2t\log n\right)^2}{n\mathbb{P}\left(\chi_{1,2(r+s)\log n}^2\leq 2t\log n\right) + n\epsilon \mathbb{P}\left(\chi_{1,2s\log n}^2\leq 2t\log n\right)},$$ the divergence $\frac{(\mathbb{E}_{H_1}T_n(t))^2}{\Var_{H_1}(T_n(t))}\rightarrow\infty$ is equivalent to $\frac{n\epsilon^2\mathbb{P}\left(\chi_{1,2s\log n}^2\leq 2t\log n\right)^2}{\mathbb{P}\left(\chi_{1,2(r+s)\log n}^2\leq 2t\log n\right)}\rightarrow\infty$ and $n\epsilon \mathbb{P}\left(\chi_{1,2s\log n}^2\leq 2t\log n\right)\rightarrow\infty$. By (\[eq:g-s-l81\]) and (\[eq:g-s-l82\]), we have $$\frac{n\epsilon^2\mathbb{P}\left(\chi_{1,2s\log n}^2\leq 2t\log n\right)^2}{\mathbb{P}\left(\chi_{1,2(r+s)\log n}^2\leq 2t\log n\right)}\asymp \begin{cases}
\frac{1}{\sqrt{\log n}}n^{1-2\beta-2(\sqrt{s}-\sqrt{t})^2+(\sqrt{r+s}-\sqrt{t})^2}, & 0<t<s, \\
(\sqrt{\log n})n^{1-2\beta+(\sqrt{r+s}-\sqrt{t})^2}, & s\leq t<r+s, \\
n^{1-2\beta}, & t>r+s,
\end{cases}$$ and $$n\epsilon \mathbb{P}\left(\chi_{1,2s\log n}^2\leq 2t\log n\right)\asymp \begin{cases}
\frac{1}{\sqrt{\log n}}n^{1-\beta-(\sqrt{s}-\sqrt{t})^2}, & 0<t<s, \\
n^{1-\beta}, & t \geq s.
\end{cases}$$ Therefore, a sufficient condition for the existence of $t>0$ such that $\frac{(\mathbb{E}_{H_1}T_n(t))^2}{\Var_{H_1}(T_n(t))}$ diverges to infinity at a polynomial rate is that $$\beta<\sup_{t\in T_1}(f_1(t)\wedge g_1(t)) \vee \sup_{t\in T_2}(1\wedge g_2(t)) \vee \frac{1}{2}, \label{eq:min-max-1/2-V}$$ where $$\begin{aligned}
f_1(t) &=& 1-(\sqrt{s}-\sqrt{t})^2, \\
g_1(t) &=& \frac{1}{2}-(\sqrt{s}-\sqrt{t})^2 + \frac{1}{2}(\sqrt{r+s}-\sqrt{t})^2, \\
g_2(t) &=& \frac{1}{2}+(\sqrt{r+s}-\sqrt{t})^2,\end{aligned}$$ and $T_1=(0,s)$ and $T_2=[s,r+s)$. We need to show that $\beta<\bar{\beta}^*(r,s)$ is a sufficient condition for (\[eq:min-max-1/2-V\]) by calculating $\sup_{t\in T_1}(f_1(t)\wedge g_1(t))$. Note that the maximizers of $f_1(t)$ and $g_1(t)$ are $\sqrt{t}=\sqrt{s}$ and $\sqrt{t}=2\sqrt{s}-\sqrt{r+s}$, respectively. Let us consider the following four cases.
*Case 1.* $3s\leq r$ and $r+s\leq 1$. Since $r+s\leq 1$, we have $f_1(t)\wedge g_1(t)=g_1(t)$. Moreover, the condition $3s\leq r$ guarantees that $g_1(t)$ is decreasing on $T_1$. Therefore, $$\sup_{t\in T_1}(f_1(t)\wedge g_1(t))=g_1(0)=\frac{1+r-s}{2}.$$
*Case 2.* $3s>r$ and $(\sqrt{r+s}-\sqrt{s})^2\leq\frac{1}{4}$. Note that $f_1(t)\wedge g_1(t)=f_1(t)$ for $\sqrt{t}\leq \sqrt{r+s}-1$ and $f_1(t)\wedge g_1(t)=g_1(t)$ for $\sqrt{r+s}-1<\sqrt{t}<\sqrt{s}$. The condition $(\sqrt{r+s}-\sqrt{s})^2\leq\frac{1}{4}$ implies that $\sqrt{r+s}-1\leq 2\sqrt{s}-\sqrt{r+s}<\sqrt{s}$, and the condition $3s>r$ guarantees that $(2\sqrt{s}-\sqrt{r+s})^2\in T_1$. Therefore, $f_1(t)\wedge g_1(t)$ is increasing when $\sqrt{t}\leq 2\sqrt{s}-\sqrt{r+s}$ and decreasing when $2\sqrt{s}-\sqrt{r+s}<\sqrt{t}<\sqrt{s}$. We thus have $$\sup_{t\in T_1}(f_1(t)\wedge g_1(t))=g_1((2\sqrt{s}-\sqrt{r+s})^2)=\frac{1}{2}+r-2\sqrt{s}(\sqrt{r+s}-\sqrt{s}).$$
*Case 3.* $r+s>1$ and $\frac{1}{4}<(\sqrt{r+s}-\sqrt{s})^2\leq 1$. The condition $\frac{1}{4}<(\sqrt{r+s}-\sqrt{s})^2\leq 1$ implies that $2\sqrt{s}-\sqrt{r+s}<\sqrt{r+s}-1\leq\sqrt{s}$, and the condition $r+s>1$ guarantees that $\sqrt{r+s}-1\in (0,\sqrt{s}]$. Therefore, $f_1(t)\wedge g_1(t)=f_1(t)$ for $\sqrt{t}\leq \sqrt{r+s}-1$ and is increasing. We also have $f_1(t)\wedge g_1(t)=g_1(t)$ for $\sqrt{r+s}-1<\sqrt{t}<\sqrt{s}$ and is decreasing. Hence, $$\sup_{t\in T_1}(f_1(t)\wedge g_1(t))=g_1((\sqrt{r+s}-1)^2)=r-2(\sqrt{r+s}-\sqrt{s})(\sqrt{r+s}-1).$$
*Case 4.* $(\sqrt{r+s}-\sqrt{s})^2> 1$. This condition implies $\sqrt{s}<\sqrt{r+s}-1$, and thus $f_1(t)\wedge g_1(t)=f_1(t)$ for all $t\in T_1$, which leads to $$\sup_{t\in T_1}(f_1(t)\wedge g_1(t))=f_1(s)=1.$$
Combining the four cases above, we conclude that $\beta<\bar{\beta}^*(r,s)$ is a sufficient condition for (\[eq:min-max-1/2-V\]), which completes the proof.
Conclusion 1 is a result of [@ingster1997some]. We only need to prove Conclusion 2. If we only use $\{V_i\}_{1\leq i\leq n}$, the testing problem (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]) becomes $$H_0: V_i \stackrel{iid}{\sim} P, \quad i \in[n],\qquad H_1: V_i \stackrel{iid}{\sim} (1-\epsilon)P+\epsilon Q, \quad i\in[n],$$ where the densities of $P$ and $Q$ are given by $$p(v)=\frac{1}{2}\phi(v-\sqrt{2(r+s)\log n})+\frac{1}{2}\phi(v+\sqrt{2(r+s)\log n}),$$ and $$q(v)=\frac{1}{2}\phi(v-\sqrt{2s\log n})+\frac{1}{2}\phi(v+\sqrt{2s\log n}).$$ Suppose $v\geq 0$, and then we have $\phi(v-\sqrt{2(r+s)\log n})\geq \phi(v+\sqrt{2(r+s)\log n})$ and $\phi(v-\sqrt{2s\log n})\geq \phi(v+\sqrt{2s\log n})$. These two inequalities imply $$p(v)\leq \phi(v-\sqrt{2(r+s)\log n})\leq 2p(v),$$ and $$q(v)\leq \phi(v-\sqrt{2s\log n})\leq 2q(v).$$ Thus, we have $$\frac{q(v)}{2p(v)}\leq e^{-(\sqrt{r+s}-\sqrt{s})v\sqrt{2\log n}}n^r\leq \frac{2q(v)}{p(v)},$$ which is due to the fact that $\frac{\phi(v-\sqrt{2s\log n})}{\phi(v-\sqrt{2(r+s)\log n})}=e^{-(\sqrt{r+s}-\sqrt{s})v\sqrt{2\log n}}n^r$. By symmetry, we obtain $$\frac{q(v)}{2p(v)}\leq e^{-(\sqrt{r+s}-\sqrt{s})|v|\sqrt{2\log n}}n^r\leq \frac{2q(v)}{p(v)}, \label{eq:lh-ratio-V}$$ for all $v\in\mathbb{R}$.
Now we proceed to bound the Hellinger distance, and it is sufficient to show that $H(P,(1-\epsilon)P+\epsilon Q)^2=o(n^{-1})$. By direct calculation, we have $$\begin{aligned}
\nonumber H(P,(1-\epsilon)P+\epsilon Q)^2 &=& \mathbb{E}_P\left(\sqrt{1+\epsilon\left(\frac{q}{p}-1\right)}-1\right)^2 \\
\label{eq:Hell-V1} &=& \mathbb{E}_P\left[\left(\sqrt{1+\epsilon\left(\frac{q}{p}-1\right)}-1\right)^2{{\mathbf{1}_{\left\{{q\leq p}\right\}}}}\right] \\
\label{eq:Hell-V2} && + \mathbb{E}_P\left[\left(\sqrt{1+\epsilon\left(\frac{q}{p}-1\right)}-1\right)^2{{\mathbf{1}_{\left\{{ {q>p}}\right\}}}}\right].\end{aligned}$$ By Equation (88) of [@cai2014optimal], the first term (\[eq:Hell-V1\]) can be bounded by $n^{-2\beta}$, which is $o(n^{-1})$ as long as $\beta>\frac{1}{2}$. For (\[eq:Hell-V2\]), we have $$\begin{aligned}
\nonumber && \mathbb{E}_P\left[\left(\sqrt{1+\epsilon\left(\frac{q}{p}-1\right)}-1\right)^2{{\mathbf{1}_{\left\{{ {q > p}}\right\}}}}\right] \\
\nonumber &\leq& \mathbb{E}_P\left(\sqrt{1+\epsilon\frac{q}{p}}-1\right)^2 \\
\label{eq:He-bound-V-CW} &\leq& \mathbb{E}_{V\sim P}\left(\sqrt{1+2\epsilon e^{-(\sqrt{r+s}-\sqrt{s})|V|\sqrt{2\log n}}n^r}-1\right)^2,\end{aligned}$$ where the last inequality uses (\[eq:lh-ratio-V\]). Let us define the function $$\alpha(v)=-2(\sqrt{r+s}-\sqrt{s})|v+\sqrt{r+s}|+r,$$ and then we can rewrite (\[eq:He-bound-V-CW\]) as $\mathbb{E}\left(\sqrt{1+2n^{-\beta+\alpha(V)}}-1\right)^2$, where $V\sim N(0,(2\log n)^{-1})$. By Lemma \[lem:Yihong-Wu\], we have $$\begin{aligned}
\nonumber \mathbb{E}\left(\sqrt{1+2n^{-\beta+\alpha(V)}}-1\right)^2 &\leq& 4\mathbb{E}n^{(\alpha(V)-\beta)\wedge(2\alpha(V)-2\beta)} \\
\label{eq:intetral-V-lower} &=& 4\sqrt{\frac{\log n}{\pi}}\int n^{(\alpha(v)-\beta)\wedge(2\alpha(v)-2\beta)-v^2}dv.\end{aligned}$$ Then, by Lemma \[lem:Laplace\], a sufficient condition for (\[eq:intetral-V-lower\]) to be $o(n^{-1})$ is $$\max_v\left[(\alpha(v)-\beta)\wedge(2\alpha(v)-2\beta)-v^2\right] < -1. \label{eq:half-baked}$$
In the rest of this proof, we show that condition (\[eq:half-baked\]) is equivalent to $\beta>\bar{\beta}^*(r,s)$. First, we show (\[eq:half-baked\]) is equivalent to $$\beta > \frac{1}{2} + \max_v\left[\alpha(v)-v^2+\frac{v^2\wedge 1}{2}\right]. \label{eq:almost-done}$$ Suppose (\[eq:almost-done\]) is true. Then, for any $v\in\mathbb{R}$, either $\beta>\frac{1}{2}+\alpha(v)-v^2+\frac{v^2}{2}$, which is equivalent to $$2\alpha(v)-2\beta-v^2<-1, \label{eq:eren}$$ or $\beta>\frac{1}{2}+\alpha(v)-v^2+\frac{1}{2}$, which is equivalent to $$\alpha(v) - \beta -v^2 < -1. \label{eq:armin}$$ Since one of the two inequalities (\[eq:eren\]) and (\[eq:armin\]) must hold, we have $(\alpha(v)-\beta)\wedge(2\alpha(v)-2\beta)-v^2<-1$. Taking the maximum over $v\in\mathbb{R}$, we obtain (\[eq:half-baked\]). For the other direction, suppose (\[eq:half-baked\]) is true. Then, for any $v\in\mathbb{R}$, we have either (\[eq:eren\]) or (\[eq:armin\]), which is equivalent to either $\beta>\frac{1}{2}+\alpha(v)-v^2+\frac{v^2}{2}$ or $\beta>\frac{1}{2}+\alpha(v)-v^2+\frac{1}{2}$. This implies $\beta>\frac{1}{2}+\alpha(v)-v^2+\frac{v^2\wedge 1}{2}$. Taking the maximum over $v\in\mathbb{R}$, we obtain (\[eq:almost-done\]). So we have established the equivalence between (\[eq:half-baked\]) and (\[eq:almost-done\]). To solve the right-hand side of (\[eq:almost-done\]), let us write $$\begin{aligned}
&& \frac{1}{2} + \max_v\left[\alpha(v)-v^2+\frac{v^2\wedge 1}{2}\right] \\
&=& \left(\frac{1}{2} + \max_{|v|\leq 1}\left[\alpha(v)-\frac{v^2}{2}\right]\right)\vee\left(1 + \max_{|v|\geq 1}\left[\alpha(v)-v^2\right]\right).\end{aligned}$$ By Proposition \[prop:opt2\], when $3s\leq r$ and $r+s\leq 1$, we have $$\frac{1}{2} + \max_{|v|\leq 1}\left[\alpha(v)-\frac{v^2}{2}\right]=\frac{1+r-s}{2}.$$ When $3s>r$ and $4(\sqrt{r+s}-\sqrt{s})^2\leq 1$, we have $$\frac{1}{2} + \max_{|v|\leq 1}\left[\alpha(v)-\frac{v^2}{2}\right]=\frac{1}{2}+r-2\sqrt{s}(\sqrt{r+s}-\sqrt{s}).$$ By Proposition \[prop:opt4\], when $(\sqrt{r+s}-\sqrt{s})^2>1$, we have $$1 + \max_{|v|\geq 1}\left[\alpha(v)-v^2\right]=1.$$ Finally, we also have $$\begin{aligned}
\frac{1}{2} + \max_{|v|= 1}\left[\alpha(v)-\frac{v^2}{2}\right] &=& 1 + \max_{|v|=1}\left[\alpha(v)-v^2\right] \\
&=& r-2(\sqrt{r+s}-\sqrt{s})(\sqrt{r+s}-1).\end{aligned}$$ Organizing the above cases, we obtain that $$\frac{1}{2} + \max_v\left[\alpha(v)-v^2+\frac{v^2\wedge 1}{2}\right]=\bar{\beta}^*(r,s),$$ which implies (\[eq:almost-done\]) is equivalent to $\beta>\bar{\beta}^*(r,s)$, and the proof is complete.
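The equality just established can be verified numerically by comparing a grid evaluation of the right-hand side of (\[eq:almost-done\]) with the case values computed above. A sketch of such a check (our illustration; `beta_bar_cases` is our transcription of the four cases, tried in the order of the proof):

```python
import numpy as np

def alpha(v, r, s):
    return -2*(np.sqrt(r+s) - np.sqrt(s))*np.abs(v + np.sqrt(r+s)) + r

def beta_bar_grid(r, s, m=1_000_001, vmax=10.0):
    v = np.linspace(-vmax, vmax, m)
    g = alpha(v, r, s) - v**2 + np.minimum(v**2, 1.0) / 2
    return 0.5 + g.max()

def beta_bar_cases(r, s):
    # the four case values derived in this proof, tried in order
    if 3*s <= r and r + s <= 1:
        return (1 + r - s) / 2
    if 3*s > r and (np.sqrt(r+s) - np.sqrt(s))**2 <= 0.25:
        return 0.5 + r - 2*np.sqrt(s)*(np.sqrt(r+s) - np.sqrt(s))
    if (np.sqrt(r+s) - np.sqrt(s))**2 > 1:
        return 1.0
    return r - 2*(np.sqrt(r+s) - np.sqrt(s))*(np.sqrt(r+s) - 1)

for r, s in [(0.6, 0.1), (0.3, 0.2), (2.0, 0.1), (1.2, 0.5)]:
    print(r, s, round(beta_bar_grid(r, s), 4), round(beta_bar_cases(r, s), 4))
```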
Define the statistic $$T_n(t)=\frac{\sum_{i=1}^n{{\mathbf{1}_{\left\{{|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i|>t\sqrt{2\log n}}\right\}}}}-nS_{(r,s)}(t\sqrt{2\log n})}{\sqrt{n S_{(r,s)}(t\sqrt{2\log n})(1-S_{(r,s)}(t\sqrt{2\log n}))}},$$ and then we can write the test as ${{\mathbf{1}_{\left\{{\sup_{t\in\mathbb{R}}|T_n(t)|>\sqrt{2(1+\delta)\log\log n}}\right\}}}}$. By [@shorack2009empirical], $\frac{\sup_{t\in\mathbb{R}}|T_n(t)|}{\sqrt{2\log\log n}}$ converges to $1$ in probability under $H_0$, which then implies a vanishing Type-I error. The Type-II error can be bounded by $$\mathbb{P}_{H_1}\left(\sup_{t\in\mathbb{R}}|T_n(t)|\leq\sqrt{2(1+\delta)\log\log n}\right)\leq \min_{t\in\mathbb{R}}\mathbb{P}_{H_1}\left(|T_n(t)|\leq\sqrt{2(1+\delta)\log\log n}\right).$$ So it suffices to show there exists some $t$ so that $\frac{(\mathbb{E}_{H_1}T_n(t))^2}{\Var_{H_1}(T_n(t))}$ diverges to infinity at a polynomial rate. Let us write the null distribution (\[eq:general-comb-null\]) and the alternative distribution (\[eq:general-comb-alt\]) as $P$ and $(1-\epsilon)P+\epsilon Q$, respectively. Then, $\frac{(\mathbb{E}_{H_1}T_n(t))^2}{\Var_{H_1}(T_n(t))}$ is of the same order as $$\frac{n^2\epsilon^2Q\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right)^2}{nP\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right) + n\epsilon Q\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right)}.$$ By Lemma \[lem:comp-tail-0\] and Lemma \[lem:comp-tail-1\], we have $$\begin{aligned}
&& n\epsilon Q\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right) \\
&\asymp& \begin{cases}
\frac{1}{\log n}n^{1-\beta-\frac{(t-r)^2+rs}{r}}, & t\in T_1, \\
\frac{1}{\sqrt{\log n}}n^{1-\beta-\frac{(t-(r+s)+\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}}, & t\in T_2, \\
n^{1-\beta}, & t\in T_3\cup T_4,
\end{cases}\end{aligned}$$ and $$\begin{aligned}
&& \frac{n\epsilon^2Q\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right)^2}{P\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right)} \\
&\asymp& \begin{cases}
\frac{1}{\log n}n^{1-2\beta-\frac{2(t-r)^2+2rs}{r}+\frac{t^2+r(r+s)}{r}}, & t\in T_1, \\
\frac{1}{\sqrt{\log n}}n^{1-2\beta-\frac{(t-r-s-\sqrt{s}\sqrt{r+s})^2}{r+s-\sqrt{s}\sqrt{r+s}}+\frac{(t+r+s-\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}}, & t\in T_2, \\
(\sqrt{\log n})n^{1-2\beta+\frac{(t+r+s-\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}}, & t\in T_3, \\
1, & t\in T_4,
\end{cases}\end{aligned}$$ where $$\begin{aligned}
T_1 &=& (r+s+\sqrt{s}\sqrt{r+s},\infty), \\
T_2 &=& (r+s-\sqrt{s}\sqrt{r+s},r+s+\sqrt{s}\sqrt{r+s}], \\
T_3 &=& (-r-s+\sqrt{s}\sqrt{r+s},r+s-\sqrt{s}\sqrt{r+s}], \\
T_4 &=& (-\infty, -r-s+\sqrt{s}\sqrt{r+s}].\end{aligned}$$ Therefore, in order that there exists some $t$ such that $\frac{(\mathbb{E}_{H_1}T_n(t))^2}{\Var_{H_1}(T_n(t))}$ diverges to infinity at a polynomial rate, it is sufficient to require $$\beta < \max_{t\in T_1}(f_1(t)\wedge g_1(t))\vee\max_{t\in T_2}(f_2(t)\wedge g_2(t))\vee\max_{t\in T_3}(1\wedge g_3(t))\vee\frac{1}{2}, \label{eq:general-upper-HC-abstract}$$ where $$\begin{aligned}
f_1(t) &=& 1 -\frac{(t-r)^2+rs}{r}, \\
g_1(t) &=& \frac{1}{2} -\frac{(t-r)^2+rs}{r}+\frac{t^2+r(r+s)}{2r}, \\
f_2(t) &=& 1-\frac{(t-(r+s)+\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}, \\
g_2(t) &=& \frac{1}{2}-\frac{(t-(r+s)+\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})} + \frac{(t+r+s-\sqrt{s}\sqrt{r+s})^2}{4(r+s-\sqrt{s}\sqrt{r+s})}, \\
g_3(t) &=& \frac{1}{2} + \frac{(t+r+s-\sqrt{s}\sqrt{r+s})^2}{4(r+s-\sqrt{s}\sqrt{r+s})}.\end{aligned}$$ Now we need to show that $\beta<\beta^*(r,s)$ is a sufficient condition for (\[eq:general-upper-HC-abstract\]). According to the definition of $\beta^*(r,s)$, we discuss the five cases separately, and in each case we show $\beta^*(r,s)\leq \max_{t\in T_1}(f_1(t)\wedge g_1(t))\vee\max_{t\in T_2}(f_2(t)\wedge g_2(t))\vee\max_{t\in T_3}(1\wedge g_3(t))\vee\frac{1}{2}$.
*Case 1.* $3s> r$ and $r+s-\sqrt{s}\sqrt{r+s}\leq \frac{1}{8}$. Note that $3s> r$ is equivalent to $$3(r+s-\sqrt{s}\sqrt{r+s})< r+s+\sqrt{s}\sqrt{r+s},$$ and $r+s-\sqrt{s}\sqrt{r+s}\leq \frac{1}{8}$ is equivalent to $$3(r+s-\sqrt{s}\sqrt{r+s})\leq\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s}).$$ It is easy to see that $f_2(t)$ is a quadratic function maximized at $t=r+s-\sqrt{s}\sqrt{r+s}$, and $g_2(t)$ is a quadratic function maximized at $t=3(r+s-\sqrt{s}\sqrt{r+s})$. Moreover, when $t\leq \sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s})$ and $t\in T_2$, $f_2(t)\wedge g_2(t)=g_2(t)$ achieves its maximum at $t=3(r+s-\sqrt{s}\sqrt{r+s})$. When $t> \sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s})$ and $t\in T_2$, $f_2(t)\wedge g_2(t)=f_2(t)$ is decreasing. Therefore, $$\max_{t\in T_2}(f_2(t)\wedge g_2(t))=g_2(3(r+s-\sqrt{s}\sqrt{r+s}))=\frac{1}{2}+2(r+s-\sqrt{s}\sqrt{r+s})=\beta^*(r,s),$$ and we can conclude that $\beta<\beta^*(r,s)$ implies (\[eq:general-upper-HC-abstract\]).
*Case 2.* $3s\leq r$ and $5r+s\leq 1$. Note that we can equivalently write the condition as $r+s+\sqrt{s}\sqrt{r+s}\leq 2r\leq \sqrt{r(1-r-s)}$. It can be checked that $$f_1(t)\wedge g_1(t)=\begin{cases}
f_1(t), & t> \sqrt{r(1-r-s)},\\
g_1(t), & r+s+\sqrt{s}\sqrt{r+s}<t\leq \sqrt{r(1-r-s)}.
\end{cases}$$ Moreover, when $r+s+\sqrt{s}\sqrt{r+s}<t\leq \sqrt{r(1-r-s)}$, $g_1(t)$ is maximized at $t=2r$, and when $t> \sqrt{r(1-r-s)}$, $f_1(t)$ is decreasing. Therefore, $$\max_{t\in T_1}(f_1(t)\wedge g_1(t))=g_1(2r)=\frac{1-s+3r}{2}=\beta^*(r,s),$$ and we can conclude that $\beta<\beta^*(r,s)$ implies (\[eq:general-upper-HC-abstract\]).
*Case 3.* $5r+s> 1$, $\frac{1}{8}<r+s-\sqrt{s}\sqrt{r+s}\leq \frac{1}{2}$ and $2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})>r$. Let us first show that the condition $2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})>r$ implies $\sqrt{r(1-r-s)}>r+s+\sqrt{s}\sqrt{r+s}$. By $2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})>r$, we have $$r+s<1-\frac{r}{2(r+s-\sqrt{s}\sqrt{r+s})}=\frac{r}{2(r+s+\sqrt{s}\sqrt{r+s})}.$$ The inequality $r+s<\frac{r}{2(r+s+\sqrt{s}\sqrt{r+s})}$ can be rearranged into $$\frac{r}{2(r+s-\sqrt{s}\sqrt{r+s})}>\frac{(r+s+\sqrt{s}\sqrt{r+s})^2}{r}.$$ Then, from $2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})>r$, we get $$1-r-s>\frac{r}{2(r+s-\sqrt{s}\sqrt{r+s})}>\frac{(r+s+\sqrt{s}\sqrt{r+s})^2}{r},$$ which leads to the desired inequality $\sqrt{r(1-r-s)}>r+s+\sqrt{s}\sqrt{r+s}$. Moreover, we can easily check that the condition $5r+s> 1$ implies $\sqrt{r(1-r-s)}<2r$, and thus we have $$r+s+\sqrt{s}\sqrt{r+s}<\sqrt{r(1-r-s)}<2r. \label{eq:greatly-simplified}$$ Now we analyze $\max_{t\in T_1}(f_1(t)\wedge g_1(t))$ under (\[eq:greatly-simplified\]). Since $f_1(t)$ is a quadratic function that achieves maximum at $t=r$ and $g_1(t)$ is a quadratic function that achieves maximum at $t=2r$, we know that when $r+s+\sqrt{s}\sqrt{r+s}<t\leq \sqrt{r(1-r-s)}$, $f_1(t)\wedge g_1(t)=g_1(t)$ is increasing, and when $t>\sqrt{r(1-r-s)}$, $f_1(t)\wedge g_1(t)=f_1(t)$ is decreasing. Thus, $$\max_{t\in T_1}(f_1(t)\wedge g_1(t))=f_1(\sqrt{r(1-r-s)})=2\sqrt{r}\sqrt{1-r-s}=\beta^*(r,s),$$ and we can conclude that $\beta<\beta^*(r,s)$ implies (\[eq:general-upper-HC-abstract\]).
*Case 4.* $5r+s> 1$, $\frac{1}{8}<r+s-\sqrt{s}\sqrt{r+s}\leq \frac{1}{2}$ and $2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})\leq r$. Let us first show that the condition $2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})\leq r$ implies $$r+s+\sqrt{s}\sqrt{r+s} \geq \sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s}). \label{eq:happy-to-figure-out}$$ By $2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})\leq r$, we have $$\begin{aligned}
r+s &\geq& 1-\frac{r}{2(r+s-\sqrt{s}\sqrt{r+s})} \\
&=& \frac{r}{2(r+s+\sqrt{s}\sqrt{r+s})} \\
&=& \frac{1}{2}\left(1-\sqrt{\frac{s}{r+s}}\right),\end{aligned}$$ and thus we have $r+s\geq \frac{1}{2}\left(1-\sqrt{\frac{s}{r+s}}\right)$. Multiply both sides by $4(r+s)$ to obtain $4(r+s)^2\geq 2(r+s-\sqrt{s}\sqrt{r+s})$. We then take the square root of both sides to obtain $2(r+s)\geq \sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}$, which can then be rearranged into (\[eq:happy-to-figure-out\]). In addition to (\[eq:happy-to-figure-out\]), by $\frac{1}{8}<r+s-\sqrt{s}\sqrt{r+s}\leq \frac{1}{2}$, we also have $$r+s-\sqrt{s}\sqrt{r+s}\leq \sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s}) < 3(r+s-\sqrt{s}\sqrt{r+s}). \label{eq:this-one-is-easy}$$ Now we will analyze $\max_{t\in T_2}(f_2(t)\wedge g_2(t))$ under (\[eq:happy-to-figure-out\]) and (\[eq:this-one-is-easy\]). Since $f_2(t)$ is a quadratic function that achieves maximum at $t=r+s-\sqrt{s}\sqrt{r+s}$ and $g_2(t)$ is a quadratic function that achieves maximum at $t=3(r+s-\sqrt{s}\sqrt{r+s})$, by (\[eq:happy-to-figure-out\]) and (\[eq:this-one-is-easy\]), we know that for $t\in T_2$, $f_2(t)\wedge g_2(t)=g_2(t)$ is increasing to the left of $\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s})$, and $f_2(t)\wedge g_2(t)=f_2(t)$ is decreasing to the right of it. Therefore, $$\begin{aligned}
\max_{t\in T_2}(f_2(t)\wedge g_2(t)) &=& g_2\left(\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s})\right) \\
&=& 2\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-2(r+s-\sqrt{s}\sqrt{r+s}) \\
&=& \beta^*(r,s),\end{aligned}$$ and we can conclude that $\beta<\beta^*(r,s)$ implies (\[eq:general-upper-HC-abstract\]).
*Case 5.* $r+s-\sqrt{s}\sqrt{r+s}>\frac{1}{2}$. In this case, we have $$\max_{t\in T_3}(1\wedge g_3(t))=1\wedge \max_{t\in T_3}g_3(t)=1\wedge g_3(r+s-\sqrt{s}\sqrt{r+s})=1,$$ under the condition $r+s-\sqrt{s}\sqrt{r+s}>\frac{1}{2}$. Since we also have $\beta^*(r,s)=1$, we can conclude that $\beta<\beta^*(r,s)$ implies (\[eq:general-upper-HC-abstract\]).
Combining the results of the five cases above, we conclude that $\beta<\beta^*(r,s)$ is a sufficient condition for (\[eq:general-upper-HC-abstract\]), and thus the proof is complete.
Let us write the null distribution (\[eq:general-comb-null\]) and the alternative distribution (\[eq:general-comb-alt\]) as $P$ and $(1-\epsilon)P+\epsilon Q$, where the densities of $P$ and $Q$ are given by $p(u,v)$ and $q(u,v)$ defined in (\[eq:general-P-den\]) and (\[eq:general-Q-den\]). It is sufficient to show that $H(P,(1-\epsilon)P+\epsilon Q)^2=o(n^{-1})$. By the same argument used in the lower bound proof of Theorem \[thm:general-separate\], $H(P,(1-\epsilon)P+\epsilon Q)^2$ can be written as the sum of (\[eq:Hell-V1\]) and (\[eq:Hell-V2\]). By Equation (88) of [@cai2014optimal], the first term (\[eq:Hell-V1\]) can be bounded by $n^{-2\beta}$, which is $o(n^{-1})$ as long as $\beta>\frac{1}{2}$. For (\[eq:Hell-V2\]), we have $$\begin{aligned}
\nonumber && \mathbb{E}_P\left[\left(\sqrt{1+\epsilon\left(\frac{q}{p}-1\right)}-1\right)^2{{\mathbf{1}_{\left\{{q> p}\right\}}}}\right] \\
\nonumber &\leq& \mathbb{E}_P\left(\sqrt{1+\epsilon\frac{q}{p}}-1\right)^2 \\
\label{eq:He-bound-U-V-CW} &\leq& \mathbb{E}_{(U,V)\sim P}\left(\sqrt{1+2\epsilon e^{(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|)\sqrt{2\log n}}}-1\right)^2,\end{aligned}$$ where the last inequality is by Lemma \[lem:LR-approx\]. Let us define the function $$\alpha(u,v)=2|\sqrt{r}u+\sqrt{s}v+\sqrt{s}\sqrt{r+s}|-2\sqrt{r+s}|v+\sqrt{r+s}|,$$ and then we can write (\[eq:He-bound-U-V-CW\]) as $\mathbb{E}\left(\sqrt{1+2n^{-\beta+\alpha(U,V)}}-1\right)^2$, where $U,V\stackrel{iid}{\sim} N(0,(2\log n)^{-1})$. By Lemma \[lem:Yihong-Wu\], we have $$\begin{aligned}
\mathbb{E}\left(\sqrt{1+2n^{-\beta+\alpha(U,V)}}-1\right)^2 &\leq& 4\mathbb{E}n^{(\alpha(U,V)-\beta)\wedge(2\alpha(U,V)-2\beta)} \\
\label{eq:intetral-U-V-lower} &=& \frac{4\log n}{\pi}\int\int n^{(\alpha(u,v)-\beta)\wedge(2\alpha(u,v)-2\beta)-u^2-v^2}dudv.\end{aligned}$$ Then, by Lemma \[lem:Laplace\], a sufficient condition for (\[eq:He-bound-U-V-CW\]) to be $o(n^{-1})$ is $$\max_{u,v}\left[(\alpha(u,v)-\beta)\wedge(2\alpha(u,v)-2\beta)-u^2-v^2\right] < -1. \label{eq:half-baked-U-V}$$ By the same argument that leads to the equivalence between (\[eq:half-baked\]) and (\[eq:almost-done\]), (\[eq:half-baked-U-V\]) is also equivalent to $$\beta > \frac{1}{2} + \max_{u,v}\left[\alpha(u,v)-u^2-v^2+\frac{(u^2+v^2)\wedge 1}{2}\right]. \label{eq:almost-done-U-V}$$ We also define $$\bar{\alpha}(u,v)=2\sqrt{r}|u|-2(\sqrt{r+s}-\sqrt{s})|v+\sqrt{r+s}|.$$ Since $\alpha(u,v)\leq \bar{\alpha}(u,v)$, a sufficient condition of (\[eq:almost-done-U-V\]) is $$\beta > \frac{1}{2} + \max_{u,v}\left[\bar{\alpha}(u,v)-u^2-v^2+\frac{(u^2+v^2)\wedge 1}{2}\right]. \label{eq:almost-done-U-V-bar}$$ Now it suffices to show (\[eq:almost-done-U-V-bar\]) is equivalent to $\beta>\beta^*(r,s)$. To solve the righthand side of (\[eq:almost-done-U-V-bar\]), we write $$\begin{aligned}
\nonumber && \frac{1}{2} + \max_{u,v}\left[\bar{\alpha}(u,v)-u^2-v^2+\frac{(u^2+v^2)\wedge 1}{2}\right] \\
\label{eq:how-to-combine} &=& \left(\frac{1}{2}+\max_{u^2+v^2\leq 1}\left[\bar{\alpha}(u,v)-\frac{u^2+v^2}{2}\right]\right)\vee\left(1+\max_{u^2+v^2\geq 1}\left[\bar{\alpha}(u,v)-u^2-v^2\right]\right).\end{aligned}$$ By Proposition \[prop:opt1\] and Proposition \[prop:opt2\], when $3s>r$ and $r+s-\sqrt{s}\sqrt{r+s}\leq\frac{1}{8}$, we have $$\frac{1}{2}+\max_{u^2+v^2\leq 1}\left[\bar{\alpha}(u,v)-\frac{u^2+v^2}{2}\right]=\frac{1}{2}+2(r+s-\sqrt{s}\sqrt{r+s}),$$ and when $3s\leq r$ and $5r+s\leq 1$, we have $$\frac{1}{2}+\max_{u^2+v^2\leq 1}\left[\bar{\alpha}(u,v)-\frac{u^2+v^2}{2}\right]=\frac{1+3r-s}{2}.$$ We then use Proposition \[prop:opt3\] and Proposition \[prop:opt4\]. When $r+s-\sqrt{s}\sqrt{r+s}>\frac{1}{2}$, we have $$1+\max_{u^2+v^2\geq 1}\left[\bar{\alpha}(u,v)-u^2-v^2\right]=1.$$ Finally, by Proposition \[prop:opt5\], when $5r+s> 1$, $\frac{1}{8}<r+s-\sqrt{s}\sqrt{r+s}\leq \frac{1}{2}$ and $2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})>r$, we have $$\max_{u^2+v^2=1}\bar{\alpha}(u,v)=2\sqrt{r}\sqrt{1-r-s},$$ and when $5r+s> 1$, $\frac{1}{8}<r+s-\sqrt{s}\sqrt{r+s}\leq \frac{1}{2}$ and $2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})\leq r$, we have $$\max_{u^2+v^2=1}\bar{\alpha}(u,v)=2\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-2(r+s-\sqrt{s}\sqrt{r+s}).$$ Combining the above cases through (\[eq:how-to-combine\]), we obtain that $$\frac{1}{2} + \max_{u,v}\left[\bar{\alpha}(u,v)-u^2-v^2+\frac{(u^2+v^2)\wedge 1}{2}\right]=\beta^*(r,s),$$ and therefore, $\beta>\beta^*(r,s)$ implies (\[eq:almost-done-U-V\]), which completes the proof.
Similar to the proof of Theorem \[thm:equal-sum\], the upper bound conclusion directly follows the arguments used in the proofs of Proposition \[prop:equal-diff\] and Theorem \[thm:general-HC\]. For the lower bound, we can use the same argument as in the proof of Theorem \[thm:main-equal\] to reduce to the testing problem (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]) with $\epsilon$ replaced by $n^{-(\beta-\delta)}$ for some sufficiently small constant $\delta>0$ that satisfies $\delta<\beta-\beta^*(r,s)$. Then, applying Theorem \[thm:general-HC\], we obtain the desired conclusion.
Proofs of Theorem \[thm:exact\] and Theorem \[thm:Bonf\]
--------------------------------------------------------
Let us first state a proposition on the properties of $t^*(r,s)$ defined in Section \[sec:equality\].
\[prop:t-star\] The following properties of $t^*(r,s)$ are satisfied.
1. $t^*(r,s)\leq \sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s})$.
2. If $r+s-\sqrt{s}\sqrt{r+s}>\frac{1}{2}$, then $t^*(r,s)<r+s-\sqrt{s}\sqrt{r+s}$.
3. If $r+s-\sqrt{s}\sqrt{r+s}<\frac{1}{2}$, then $t^*(r,s)>r+s-\sqrt{s}\sqrt{r+s}$.
4. If $t^*(r,s)\geq 3(r+s-\sqrt{s}\sqrt{r+s})$, then $r+s-\sqrt{s}\sqrt{r+s}\leq\frac{1}{8}$.
Define $$f(t)=\begin{cases}
f_1(t), & t>r+s+\sqrt{s}\sqrt{r+s}, \\
f_2(t), & -(r+s)+\sqrt{s}\sqrt{r+s} < t\leq r+s+\sqrt{s}\sqrt{r+s},
\end{cases}$$ where $f_1(t)=\frac{t^2+r(r+s)}{r}$ and $f_2(t)=\frac{(t+r+s-\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}$. Then, $t^*(r,s)$ is a solution to the equation $f(t)=1$. In particular, $\sqrt{r(1-r-s)}$ is a solution to $f_1(t)=1$ and $\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s})$ is a solution to $f_2(t)=1$. It is not hard to check that $f(t)$ is an increasing function, and the condition $2(r+s)(r+s+\sqrt{s}\sqrt{r+s})\leq r$ is equivalent to $f(r+s+\sqrt{s}\sqrt{r+s})\leq 1$. To prove the first conclusion, it suffices to show $\sqrt{r(1-r-s)}\leq \sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s})$ when $f(r+s+\sqrt{s}\sqrt{r+s})\leq 1$. It is direct to check that $f_1(r+s+\sqrt{s}\sqrt{r+s})=f_2(r+s+\sqrt{s}\sqrt{r+s})$ and $f_1'(r+s+\sqrt{s}\sqrt{r+s})=f_2'(r+s+\sqrt{s}\sqrt{r+s})$. Moreover, since $r+s-\sqrt{s}\sqrt{r+s}=\frac{\sqrt{r+s}}{\sqrt{r+s}+\sqrt{s}}r\geq\frac{r}{2}$, we have $f_1''(t)=\frac{2}{r}\geq \frac{1}{r+s-\sqrt{s}\sqrt{r+s}}=f_2''(t)$. This means $f_1$ grows faster than $f_2$ for $t\geq r+s+\sqrt{s}\sqrt{r+s}$, and thus $f_1(t)$ will reach $1$ no later than $f_2(t)$ does, which implies $\sqrt{r(1-r-s)}\leq \sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s})$.
To prove the second conclusion, we notice that $t^*(r,s)=\sqrt{r(1-r-s)}$ when $r\geq 2(r+s)(r+s+\sqrt{s}\sqrt{r+s})$. This condition can be equivalently written as $$2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})\geq r, \label{eq:oh-again}$$ because of the identity $\frac{r}{2(r+s+\sqrt{s}\sqrt{r+s})}=1-\frac{r}{2(r+s-\sqrt{s}\sqrt{r+s})}$. Following the same argument used to derive (\[eq:happy-to-figure-out\]) from $2(1-r-s)(r+s-\sqrt{s}\sqrt{r+s})\leq r$, we can also show that (\[eq:oh-again\]) implies $$r+s+\sqrt{s}\sqrt{r+s} \leq \sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s}). \label{eq:to-be-contradicted}$$ However, under the condition that $r+s-\sqrt{s}\sqrt{r+s}>\frac{1}{2}$, we have $$\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s})< r+s-\sqrt{s}\sqrt{r+s}, \label{eq:tired}$$ which contradicts (\[eq:to-be-contradicted\]). Therefore, the condition $r+s-\sqrt{s}\sqrt{r+s}>\frac{1}{2}$ implies that we can only have $$t^*(r,s)=\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s}). \label{eq:t-star-exact-recov}$$ Hence, $t^*(r,s)< r+s-\sqrt{s}\sqrt{r+s}$ holds because of (\[eq:tired\]) and (\[eq:t-star-exact-recov\]).
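The identity used above can be verified directly: writing $A=r+s-\sqrt{s}\sqrt{r+s}$ and $B=r+s+\sqrt{s}\sqrt{r+s}$, we have $AB=r(r+s)$, so $\frac{r}{2B}=\frac{A}{2(r+s)}$, and the identity $\frac{A}{2(r+s)}=1-\frac{r}{2A}$ rearranges to $$A^2-2(r+s)A+r(r+s)=0,$$ which holds because $A=(r+s)-\sqrt{s(r+s)}$ is a root of the quadratic $x^2-2(r+s)x+r(r+s)$.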
Now we prove the third conclusion. When $2(r+s)(r+s+\sqrt{s}\sqrt{r+s})>r$, since $t^*(r,s)=\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s})$, $t^*(r,s)>r+s-\sqrt{s}\sqrt{r+s}$ is equivalent to $r+s-\sqrt{s}\sqrt{r+s}<\frac{1}{2}$. Now consider the other case $2(r+s)(r+s+\sqrt{s}\sqrt{r+s})\leq r$, and then $t^*(r,s)=\sqrt{r(1-r-s)}$. Note that the condition $2(r+s)(r+s+\sqrt{s}\sqrt{r+s})\leq r$ can be equivalently written as (\[eq:oh-again\]), and thus we have $1-r-s\geq \frac{r}{2(r+s-\sqrt{s}\sqrt{r+s})}$. Then, by $r+s-\sqrt{s}\sqrt{r+s}<\frac{1}{2}$, we get $1-r-s>r$, which then implies $\sqrt{r(1-r-s)}>r$. Since $r+s-\sqrt{s}\sqrt{r+s}=\sqrt{r+s}(\sqrt{r+s}-\sqrt{s})=\frac{\sqrt{r+s}}{\sqrt{r+s}+\sqrt{s}}r\leq r$, we get $\sqrt{r(1-r-s)}>r+s-\sqrt{s}\sqrt{r+s}$. In both cases, we conclude that $t^*(r,s)>r+s-\sqrt{s}\sqrt{r+s}$ holds.
Finally, by the first conclusion, $t^*(r,s)\geq 3(r+s-\sqrt{s}\sqrt{r+s})$ implies $$\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s})\geq 3(r+s-\sqrt{s}\sqrt{r+s}).$$ This directly leads to $r+s-\sqrt{s}\sqrt{r+s}\leq\frac{1}{8}$.
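In detail, writing $A=r+s-\sqrt{s}\sqrt{r+s}$, the display reads $\sqrt{2A}\geq 4A$, and squaring both sides gives $2A\geq 16A^2$, that is, $A\leq\frac{1}{8}$.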
For the test $\dot{\psi}$ defined in (\[eq:general-test-comb\]), its Type-I error is vanishing by the same arguments used in (\[eq:type-1-3.1\])-(\[eq:type-1-3.1-part2\]). For the Type-II error, we follow (\[eq:type-2-3.1\]) and (\[eq:type-2-3.1-part\]), and thus it suffices to prove $$P^{(n)}_{(\theta,\eta,z,\sigma)}\left(\dot{T}_n^-(t^*(r,s))\leq\sqrt{2(1+\delta)\log \log n}\right)\rightarrow 0, \label{eq:want-to-prove-exact-equal}$$ uniformly over all $z,\sigma\in\{-1,1\}^n$ that satisfy $\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}>0$. For any $t\in\mathbb{R}$, the $\dot{T}_n^-(t)$ in (\[eq:want-to-prove-exact-equal\]) is defined by $$\dot{T}_n^-(t)=\frac{\sum_{i=1}^n{{\mathbf{1}_{\left\{{C^-(X_i,Y_i,\theta,\eta)>2t\log n}\right\}}}}-nS_{(r,s)}(t\sqrt{2\log n})}{\sqrt{n S_{(r,s)}(t\sqrt{2\log n})(1-S_{(r,s)}(t\sqrt{2\log n}))}}.$$ By Lemma \[lem:comp-tail-0\] and the definition of $t^*(r,s)$, we have $$\frac{1}{\log n}n^{-1}\lesssim\mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t^*(r,s)\sqrt{2\log n}\right)\lesssim \frac{1}{\sqrt{\log n}}n^{-1}, \label{eq:range-of-S}$$ and thus $$\frac{1}{\log n}\lesssim nS_{(r,s)}(t^*(r,s)\sqrt{2\log n})\lesssim \frac{1}{\sqrt{\log n}}.$$ Moreover, since $\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}>0$, there exists some $i_0\in[n]$ such that $z_{i_0}\neq \sigma_{i_0}$. By Lemma \[lem:comp-tail-1\], we have $$\begin{aligned}
&& \mathbb{P}\left(C^-(X_{i_0},Y_{i_0},\theta,\eta)>2t^*(r,s)\log n\right) \\
&=& \mathbb{P}_{(U^2,V^2)\sim \chi_{1,2r\log n}^2\otimes \chi_{1,2s\log n}^2}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t^*(r,s)\sqrt{2\log n}\right) \\
&\rightarrow& 1,\end{aligned}$$ where the last line uses the fact that $t^*(r,s)< r+s-\sqrt{s}\sqrt{r+s}$, which is implied by $r+s-\sqrt{s}\sqrt{r+s}>\frac{1}{2}$ according to Proposition \[prop:t-star\]. This implies that $${{\mathbf{1}_{\left\{{C^-(X_{i_0},Y_{i_0},\theta,\eta)>2t^*(r,s)\log n}\right\}}}}=1,$$ with probability tending to $1$. Finally, by (\[eq:range-of-S\]), we have $$\dot{T}_n^-(t^*(r,s))\geq \frac{{{\mathbf{1}_{\left\{{C^-(X_{i_0},Y_{i_0},\theta,\eta)>2t^*(r,s)\log n}\right\}}}}-nS_{(r,s)}(t^*(r,s)\sqrt{2\log n})}{\sqrt{nS_{(r,s)}(t^*(r,s)\sqrt{2\log n})(1-S_{(r,s)}(t^*(r,s)\sqrt{2\log n}))}}\gtrsim (\log n)^{1/4}, \label{eq:ipad-os}$$ with probability tending to $1$. Here the numerator of (\[eq:ipad-os\]) tends to $1$ since $nS_{(r,s)}(t^*(r,s)\sqrt{2\log n})=o(1)$, while the denominator is $O((\log n)^{-1/4})$ by (\[eq:range-of-S\]), which gives the rate $(\log n)^{1/4}$. By the fact that $\sqrt{\log\log n}=o((\log n)^{1/4})$, (\[eq:want-to-prove-exact-equal\]) holds, and the proof is complete.
Define $P_0$ to be the joint distribution of $\{(X_i,Y_i)\}_{1\leq i\leq n}$, under which we have $$(X_i,Y_i)\stackrel{iid}{\sim} \frac{1}{2}N(\theta,I_p)\otimes N(\eta,I_q)+\frac{1}{2}N(-\theta,I_p)\otimes N(-\eta,I_q),\quad i\in[n].$$ For each $i\in[n]$, we define $P_i$ to be the product measure identical to $P_0$ except that its $i$th coordinate follows $$\frac{1}{2}N(\theta,I_p)\otimes N(-\eta,I_q)+\frac{1}{2}N(-\theta,I_p)\otimes N(\eta,I_q).$$ Then, for any testing function $\psi$, we have $$\sup_{\substack{z\in\{-1,1\}^n\\ \sigma\in\{z,-z\}}}P^{(n)}_{(\theta,\eta,z,\sigma)}\psi\geq P_0\psi,$$ and $$\sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> 0}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi)\geq \frac{1}{n}\sum_{i=1}^n P_i(1-\psi).$$ Let us use $p_i$ for the density function of $P_i$ for all $i\in[n]\cup\{0\}$, and we have $$\begin{aligned}
R_n^{\rm exact}(\theta,\eta) &\geq& \inf_{\psi}\left(P_0\psi + \frac{1}{n}\sum_{i=1}^nP_i(1-\psi)\right) \\
&=& \int p_0\wedge\left(\frac{1}{n}\sum_{i=1}^np_i\right) \\
&\geq& \int_{\frac{1}{n}\sum_{i=1}^np_i>p_0/2}\frac{1}{2}p_0 \\
&=& \frac{1}{2}P_0\left(\frac{1}{n}\sum_{i=1}^n\frac{p_i(X,Y)}{p_0(X,Y)}>\frac{1}{2}\right).\end{aligned}$$ Define $$U_i = \frac{\frac{\|\eta\|}{\|\theta\|}\theta^TX_i-\frac{\|\theta\|}{\|\eta\|}\eta^TY_i}{\sqrt{\|\theta\|^2+\|\eta\|^2}}, \quad
V_i = \frac{\theta^TX_i+\eta^TY_i}{\sqrt{\|\theta\|^2+\|\eta\|^2}}.$$ Then, under $P_0$, we have $$(U_i,V_i)\stackrel{iid}{\sim} \frac{1}{2}N(0,1)\otimes N(-\sqrt{2(r+s)\log n},1)+\frac{1}{2}N(0,1)\otimes N(\sqrt{2(r+s)\log n},1).$$ We also have $$\begin{aligned}
\theta^TX_i-\eta^TY_i &=& \sqrt{2r\log n}U_i + \sqrt{2s\log n}V_i, \\
\theta^TX_i+\eta^TY_i &=& \sqrt{2(r+s)\log n}V_i.\end{aligned}$$ This implies for each $i\in[n]$, $$\begin{aligned}
\frac{p_i(X,Y)}{p_0(X,Y)} &=& \frac{e^{-\frac{1}{2}\|X_i-\theta\|^2-\frac{1}{2}\|Y_i+\eta\|^2}+e^{-\frac{1}{2}\|X_i+\theta\|^2-\frac{1}{2}\|Y_i-\eta\|^2}}{e^{-\frac{1}{2}\|X_i-\theta\|^2-\frac{1}{2}\|Y_i-\eta\|^2}+e^{-\frac{1}{2}\|X_i+\theta\|^2-\frac{1}{2}\|Y_i+\eta\|^2}} \\
&=& \frac{e^{\theta^TX_i-\eta^TY_i}+e^{-\theta^TX_i+\eta^TY_i}}{e^{\theta^TX_i+\eta^TY_i}+e^{-\theta^TX_i-\eta^TY_i}} \\
&=& \frac{e^{\sqrt{2r\log n}U_i + \sqrt{2s\log n}V_i}+e^{-\sqrt{2r\log n}U_i - \sqrt{2s\log n}V_i}}{e^{\sqrt{2(r+s)\log n}V_i}+e^{-\sqrt{2(r+s)\log n}V_i}} \\
&=& \frac{\phi(U_i-\sqrt{2r\log n})\phi(V_i-\sqrt{2s\log n})+\phi(U_i+\sqrt{2r\log n})\phi(V_i+\sqrt{2s\log n})}{\phi(U_i)\phi(V_i-\sqrt{2(r+s)\log n})+\phi(U_i)\phi(V_i+\sqrt{2(r+s)\log n})},\end{aligned}$$ where $\phi(\cdot)$ is the density function of $N(0,1)$. Use the notation $r(U_i,V_i)=\frac{p_i(X,Y)}{p_0(X,Y)}$, and it suffices to study the statistic $\frac{1}{n}\sum_{i=1}^nr(U_i,V_i)$ under $P_0$.
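In the chain of equalities above, the second line cancels the common factor $e^{-\frac{1}{2}(\|X_i\|^2+\|Y_i\|^2+\|\theta\|^2+\|\eta\|^2)}$ from the numerator and the denominator, and the last line multiplies both the numerator and the denominator by $\frac{1}{2\pi}e^{-\frac{1}{2}(U_i^2+V_i^2)-(r+s)\log n}$ and completes the squares, using $\|\theta\|^2+\|\eta\|^2=2(r+s)\log n$ under the calibration used here.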
To this end, we define the event $G=\cap_{i=1}^nG_i$, where $$G_i=\left\{\sqrt{r}|U_i|-(\sqrt{r+s}-\sqrt{s})|V_i|\leq t^*(r,s)\sqrt{2\log n}\right\}.$$ Then, by Lemma \[lem:comp-tail-0\] and the definition of $t^*(r,s)$, we have $$\begin{aligned}
P_0(G) &\geq& 1-\sum_{i=1}^nP_0(G_i^c) \\
&=& 1-n\mathbb{P}_{(U^2,V^2)\sim \chi_1^2\otimes \chi_{1,2(r+s)\log n}^2}\left(\sqrt{r}|U|-(\sqrt{r+s}-\sqrt{s})|V|>t^*(r,s)\sqrt{2\log n}\right) \\
&\geq& 1-O\left(\frac{1}{\sqrt{\log n}}\right),\end{aligned}$$ which means the event $G$ holds with high probability under $P_0$. Therefore, $$\frac{1}{n}\sum_{i=1}^nr(U_i,V_i)=\frac{1}{n}\sum_{i=1}^nr(U_i,V_i){{\mathbf{1}_{\left\{{G_i}\right\}}}}, \label{eq:just-trunc}$$ with high probability, and we can analyze the truncated version $\frac{1}{n}\sum_{i=1}^nr(U_i,V_i){{\mathbf{1}_{\left\{{G_i}\right\}}}}$ instead. We first calculate the mean, $$\begin{aligned}
&& \mathbb{E}_{P_0}\left(\frac{1}{n}\sum_{i=1}^nr(U_i,V_i){{\mathbf{1}_{\left\{{G_i}\right\}}}}\right) \\
&=& \mathbb{E}_{P_0}\left(\frac{1}{n}\sum_{i=1}^n\frac{p_i(X,Y)}{p_0(X,Y)}{{\mathbf{1}_{\left\{{G_i}\right\}}}}\right) \\
&=& \frac{1}{n}\sum_{i=1}^nP_i(G_i) \\
&=& \mathbb{P}_{(U,V)\sim N(\sqrt{2r\log n},1)\otimes N(\sqrt{2s\log n},1)}\left(\sqrt{r}|U|-(\sqrt{r+s}-\sqrt{s})|V|\leq t^*(r,s)\sqrt{2\log n}\right).\end{aligned}$$ Write $U=Z_1+\sqrt{2r\log n}$ and $V=Z_2+\sqrt{2s\log n}$ with independent $Z_1,Z_2\sim N(0,1)$, and we can write the above probability as $$\begin{aligned}
&& \mathbb{P}\left(\sqrt{r}|Z_1+\sqrt{2r\log n}|-(\sqrt{r+s}-\sqrt{s})|Z_2+\sqrt{2s\log n}|\leq t^*(r,s)\sqrt{2\log n}\right) \\
&\geq& \mathbb{P}\left(\frac{\sqrt{r}|Z_1|+(\sqrt{r+s}-\sqrt{s})|Z_2|}{\sqrt{2\log n}}+r+s-\sqrt{s}\sqrt{r+s}\leq t^*(r,s)\right) \\
&\rightarrow& 1.\end{aligned}$$ The inequality in the second line uses $|Z_1+\sqrt{2r\log n}|\leq |Z_1|+\sqrt{2r\log n}$ and $|Z_2+\sqrt{2s\log n}|\geq \sqrt{2s\log n}-|Z_2|$, together with the identity $r-(\sqrt{r+s}-\sqrt{s})\sqrt{s}=r+s-\sqrt{s}\sqrt{r+s}$. The last line uses the inequality $t^*(r,s)>r+s-\sqrt{s}\sqrt{r+s}$ and the fact that $\frac{\sqrt{r}|Z_1|+(\sqrt{r+s}-\sqrt{s})|Z_2|}{\sqrt{2\log n}}=o_{\mathbb{P}}(1)$. Note that $t^*(r,s)>r+s-\sqrt{s}\sqrt{r+s}$ is implied by $r+s-\sqrt{s}\sqrt{r+s}<\frac{1}{2}$ according to Proposition \[prop:t-star\]. We therefore have $$\mathbb{E}_{P_0}\left(\frac{1}{n}\sum_{i=1}^nr(U_i,V_i){{\mathbf{1}_{\left\{{G_i}\right\}}}}\right) \rightarrow 1, \label{eq:mean-trunc}$$ as $n\rightarrow \infty$.
Next, we calculate the variance of $\frac{1}{n}\sum_{i=1}^nr(U_i,V_i){{\mathbf{1}_{\left\{{G_i}\right\}}}}$. We have $$\begin{aligned}
\Var_{P_0}\left(\frac{1}{n}\sum_{i=1}^nr(U_i,V_i){{\mathbf{1}_{\left\{{G_i}\right\}}}}\right) &=& \frac{1}{n}\Var_{P_0}\left(r(U_1,V_1){{\mathbf{1}_{\left\{{G_1}\right\}}}}\right) \\
&\leq& \frac{1}{n}\mathbb{E}_{P_0}\left(r(U_1,V_1){{\mathbf{1}_{\left\{{G_1}\right\}}}}\right)^2.\end{aligned}$$ By the definition of $r(U_1,V_1)$, we have $$\begin{aligned}
&& \mathbb{E}_{P_0}\left(r(U_1,V_1){{\mathbf{1}_{\left\{{G_1}\right\}}}}\right)^2 \\
&=& \int_{\frac{\sqrt{r}|u|-(\sqrt{r+s}-\sqrt{s})|v|}{t^*(r,s)}\leq \sqrt{2\log n}}f(u,v)dudv \\
&\leq& \int_{\frac{\sqrt{r}|u|-(\sqrt{r+s}-\sqrt{s})|v|}{t^*(r,s)}\leq \sqrt{2\log n}}\left(f(u,v)+f(-u,v)\right)dudv,\end{aligned}$$ where $$f(u,v) = \frac{\left[\phi(u-\sqrt{2r\log n})\phi(v-\sqrt{2s\log n})+\phi(u+\sqrt{2r\log n})\phi(v+\sqrt{2s\log n})\right]^2}{2[\phi(u)\phi(v-\sqrt{2(r+s)\log n})+\phi(u)\phi(v+\sqrt{2(r+s)\log n})]}.$$ Note that the $f(u,v)+f(-u,v)$ is a function of $(|u|,|v|)$ because of symmetry. Moreover, when $u\geq 0$ and $v\geq 0$, we have $$\begin{aligned}
\phi(u-\sqrt{2r\log n}) &\geq& \phi(u+\sqrt{2r\log n}), \\
\phi(v-\sqrt{2s\log n}) &\geq& \phi(v+\sqrt{2s\log n}).\end{aligned}$$ These inequalities imply $$f(u,v)+f(-u,v)\leq \frac{4\phi(u-\sqrt{2r\log n})^2\phi(v-\sqrt{2s\log n})^2}{\phi(u)\phi(v-\sqrt{2(r+s)\log n})}.$$ Hence, $$\begin{aligned}
&& \int_{\frac{\sqrt{r}|u|-(\sqrt{r+s}-\sqrt{s})|v|}{t^*(r,s)}\leq \sqrt{2\log n}}\left(f(u,v)+f(-u,v)\right)dudv \\
&=& 4\int_{\frac{\sqrt{r}|u|-(\sqrt{r+s}-\sqrt{s})|v|}{t^*(r,s)}\leq \sqrt{2\log n},u\geq 0,v\geq 0}\left(f(u,v)+f(-u,v)\right)dudv \\
&\leq& 16\int_{\frac{\sqrt{r}|u|-(\sqrt{r+s}-\sqrt{s})|v|}{t^*(r,s)}\leq \sqrt{2\log n},u\geq 0,v\geq 0} \frac{\phi(u-\sqrt{2r\log n})^2\phi(v-\sqrt{2s\log n})^2}{\phi(u)\phi(v-\sqrt{2(r+s)\log n})}dudv \\
&\leq& \frac{16}{2\pi}n^{4(r+s-\sqrt{s}\sqrt{r+s})}\int_{\frac{\sqrt{r}u-(\sqrt{r+s}-\sqrt{s})v}{t^*(r,s)}\leq \sqrt{2\log n}} e^{-\frac{(u-2\sqrt{2r\log n})^2}{2}-\frac{(v-(2\sqrt{2s\log n}-\sqrt{2(r+s)\log n}))^2}{2}}dudv \\
&=& 16n^{4(r+s-\sqrt{s}\sqrt{r+s})}\mathbb{P}\left(\sqrt{r}Z_1-(\sqrt{r+s}-\sqrt{s})Z_2\leq \left(t^*(r,s)-3(r+s-\sqrt{s}\sqrt{r+s})\right)\sqrt{2\log n}\right),\end{aligned}$$ where $Z_1,Z_2\stackrel{iid}{\sim} N(0,1)$. In the second-to-last line, completing the square produces the factors $e^{2r\log n}$ (in $u$) and $e^{2(\sqrt{r+s}-\sqrt{s})^2\log n}$ (in $v$), and $2r+2(\sqrt{r+s}-\sqrt{s})^2=4(r+s-\sqrt{s}\sqrt{r+s})$; in the last line, the recentering comes from evaluating $\sqrt{r}u-(\sqrt{r+s}-\sqrt{s})v$ at the centers $u=2\sqrt{2r\log n}$ and $v=2\sqrt{2s\log n}-\sqrt{2(r+s)\log n}$, which gives $3(r+s-\sqrt{s}\sqrt{r+s})\sqrt{2\log n}$. We consider two cases. In the first case, we have $t^*(r,s)\geq 3(r+s-\sqrt{s}\sqrt{r+s})$, and by Proposition \[prop:t-star\], this implies $r+s-\sqrt{s}\sqrt{r+s}\leq \frac{1}{8}$. Thus, we have $$\Var_{P_0}\left(\frac{1}{n}\sum_{i=1}^nr(U_i,V_i){{\mathbf{1}_{\left\{{G_i}\right\}}}}\right)\leq 16n^{4(r+s-\sqrt{s}\sqrt{r+s})-1}\rightarrow 0.$$ In the second case, we have $t^*(r,s)< 3(r+s-\sqrt{s}\sqrt{r+s})$, and then a standard Gaussian tail bound (\[eq:Gaussian-tail\]) gives $$\begin{aligned}
&& \mathbb{P}\left(\sqrt{r}Z_1-(\sqrt{r+s}-\sqrt{s})Z_2\leq \left(t^*(r,s)-3(r+s-\sqrt{s}\sqrt{r+s})\right)\sqrt{2\log n}\right) \\
&\lesssim& n^{-\frac{(3(r+s-\sqrt{s}\sqrt{r+s})-t^*(r,s))^2}{2(r+s-\sqrt{s}\sqrt{r+s})}} \\
&\leq& n^{-\frac{(4(r+s-\sqrt{s}\sqrt{r+s})-\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}},\end{aligned}$$ where the last inequality uses the property $t^*(r,s)\leq \sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}-(r+s-\sqrt{s}\sqrt{r+s})$ established by Proposition \[prop:t-star\]. Let us write $A=r+s-\sqrt{s}\sqrt{r+s}$, and then we have $$\Var_{P_0}\left(\frac{1}{n}\sum_{i=1}^nr(U_i,V_i){{\mathbf{1}_{\left\{{G_i}\right\}}}}\right)\lesssim n^{4A-1-\frac{(4A-\sqrt{2A})^2}{2A}}=n^{-2(\sqrt{2A}-1)^2}\rightarrow 0,$$ as long as $A=r+s-\sqrt{s}\sqrt{r+s}<\frac{1}{2}$. Here the exponent identity follows from $\frac{(4A-\sqrt{2A})^2}{2A}=8A-4\sqrt{2A}+1$, and the exponent $-2(\sqrt{2A}-1)^2$ is strictly negative since $\sqrt{2A}<1$ when $A<\frac{1}{2}$. Combine the two cases, and we conclude that $$\Var_{P_0}\left(\frac{1}{n}\sum_{i=1}^nr(U_i,V_i){{\mathbf{1}_{\left\{{G_i}\right\}}}}\right) \rightarrow 0, \label{eq:var-trunc}$$ as $n\rightarrow \infty$. By (\[eq:mean-trunc\]) and (\[eq:var-trunc\]), $\frac{1}{n}\sum_{i=1}^nr(U_i,V_i){{\mathbf{1}_{\left\{{G_i}\right\}}}}\rightarrow 1$ in probability. Finally, by (\[eq:just-trunc\]), we also have $\frac{1}{n}\sum_{i=1}^nr(U_i,V_i)\rightarrow 1$ in probability, and thus $P_0\left(\frac{1}{n}\sum_{i=1}^n\frac{p_i(X,Y)}{p_0(X,Y)}>\frac{1}{2}\right)\rightarrow 1$, which completes the proof.
Similar to (\[eq:type-1-3.1\]), we need to bound $\sup_{z\in\{-1,1\}^n}P_{(\theta,\eta,z,z)}^{(n)}\psi_{\rm Bonferroni}$ and $\sup_{z\in\{-1,1\}^n}P_{(\theta,\eta,z,-z)}^{(n)}\psi_{\rm Bonferroni}$ in order to control the Type-I error. For the first term, we have $$\begin{aligned}
&& \sup_{z\in\{-1,1\}^n}P_{(\theta,\eta,z,z)}^{(n)}\psi_{\rm Bonferroni} \\
&\leq& \sup_{z\in\{-1,1\}^n}P_{(\theta,\eta,z,z)}^{(n)}\left(\max_{1\leq i\leq n}C^-(X_i,Y_i,\theta,\eta)>2t^*(r,s)\log n\right) \\
&\leq& \sup_{z\in\{-1,1\}^n}\sum_{i=1}^nP_{(\theta,\eta,z,z)}^{(n)}\left(C^-(X_i,Y_i,\theta,\eta)>2t^*(r,s)\log n\right) \\
&=& n\mathbb{P}_{(U^2,V^2)\sim \chi_{1}^2\otimes \chi_{1,2(r+s)\log n}^2}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t^*(r,s)\sqrt{2\log n}\right) \\
&\lesssim& \frac{1}{\sqrt{\log n}}\rightarrow 0,\end{aligned}$$ where the last inequality is by Lemma \[lem:comp-tail-0\] and the definition of $t^*(r,s)$. The same argument also applies to $\sup_{z\in\{-1,1\}^n}P_{(\theta,\eta,z,-z)}^{(n)}\psi_{\rm Bonferroni}$, and therefore the Type-I error is vanishing.
For the Type-II error, we have $$\begin{aligned}
\nonumber && \sup_{\substack{z\in\{-1,1\}^n \\ \sigma\in\{-1,1\}^n \\ \ell(z,\sigma)>0}}P_{(\theta,\eta,z,\sigma)}^{(n)}(1-\psi_{\rm Bonferroni}) \\
\label{eq:bonf-t2-1} &\leq& \sup_{\substack{z\in\{-1,1\}^n \\ \sigma\in\{-1,1\}^n \\ \frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}>0}}P_{(\theta,\eta,z,\sigma)}^{(n)}\left(\max_{1\leq i\leq n}C^-(X_i,Y_i,\theta,\eta)\leq 2t^*(r,s)\log n\right) \\
\label{eq:bonf-t2-2} && + \sup_{\substack{z\in\{-1,1\}^n \\ \sigma\in\{-1,1\}^n \\ \frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq -\sigma_i}\right\}}}}>0}}P_{(\theta,\eta,z,\sigma)}^{(n)}\left(\max_{1\leq i\leq n}C^+(X_i,Y_i,\theta,\eta)\leq 2t^*(r,s)\log n\right).\end{aligned}$$ We give a bound for (\[eq:bonf-t2-1\]). For any $z,\sigma\in\{-1,1\}^n$ such that $\frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}>0$, there exists some $i_0\in[n]$ such that $z_{i_0}\neq \sigma_{i_0}$. Then, $$\begin{aligned}
&& P_{(\theta,\eta,z,\sigma)}^{(n)}\left(\max_{1\leq i\leq n}C^-(X_i,Y_i,\theta,\eta)\leq 2t^*(r,s)\log n\right) \\
&\leq& P_{(\theta,\eta,z,\sigma)}^{(n)}\left(C^-(X_{i_0},Y_{i_0},\theta,\eta)\leq 2t^*(r,s)\log n\right) \\
&=& \mathbb{P}_{(U^2,V^2)\sim \chi_{1,2r\log n}^2\otimes \chi_{1,2s\log n}^2}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|\leq t^*(r,s)\sqrt{2\log n}\right) \\
&\rightarrow& 0,\end{aligned}$$ where the last line is by Lemma \[lem:comp-tail-1\] and the condition $t^*(r,s)< r+s-\sqrt{s}\sqrt{r+s}$. Note that $t^*(r,s)< r+s-\sqrt{s}\sqrt{r+s}$ is implied by $r+s-\sqrt{s}\sqrt{r+s}>\frac{1}{2}$ according to Proposition \[prop:t-star\]. The same analysis also applies to (\[eq:bonf-t2-2\]), and we thus conclude that the Type-II error is vanishing. The proof is complete.
Proofs of Theorem \[thm:ada-Bonf\] and Theorem \[thm:HC-adaptive\]
------------------------------------------------------------------
Let us first introduce some notation. Define $$\begin{aligned}
{C}_0^- &=& \max_{i\in\mathcal{D}_0}C^-(X_i,Y_i,\theta,\eta), \\
{C}_1^- &=& \max_{i\in\mathcal{D}_1}C^-(X_i,Y_i,\theta,\eta), \\
{C}_0^+ &=& \max_{i\in\mathcal{D}_0}C^+(X_i,Y_i,\theta,\eta), \\
{C}_1^+ &=& \max_{i\in\mathcal{D}_1}C^+(X_i,Y_i,\theta,\eta).\end{aligned}$$ Then, the Bonferroni test defined by (\[eq:test-exact-Bonf\]) can be written as $$\psi_{\rm Bonferroni}={{\mathbf{1}_{\left\{{(C_0^-\vee C_1^-)\wedge (C_0^+\vee C_1^+)>2t^*(r,s)\log n}\right\}}}}.$$ Our primary goal in the proof is to bound the difference between ${\widehat}{C}^-\wedge {\widehat}{C}^+$ and $(C_0^-\vee C_1^-)\wedge (C_0^+\vee C_1^+)$. Define $$M_n^{(0)}=\max_{i\in\mathcal{D}_0}\frac{|({\widehat}{\theta}^{(1)}-\theta)^TX_i|}{\|{\widehat}{\theta}^{(1)}-\theta\|}\vee \max_{i\in\mathcal{D}_0}\frac{|({\widehat}{\theta}^{(1)}+\theta)^TX_i|}{\|{\widehat}{\theta}^{(1)}+\theta\|}\vee\max_{i\in\mathcal{D}_0}\frac{|({\widehat}{\eta}^{(1)}-\eta)^TY_i|}{\|{\widehat}{\eta}^{(1)}-\eta\|}\vee\max_{i\in\mathcal{D}_0}\frac{|({\widehat}{\eta}^{(1)}+\eta)^TY_i|}{\|{\widehat}{\eta}^{(1)}+\eta\|},$$ $$M_n^{(1)}=\max_{i\in\mathcal{D}_1}\frac{|({\widehat}{\theta}^{(0)}-\theta)^TX_i|}{\|{\widehat}{\theta}^{(0)}-\theta\|}\vee \max_{i\in\mathcal{D}_1}\frac{|({\widehat}{\theta}^{(0)}+\theta)^TX_i|}{\|{\widehat}{\theta}^{(0)}+\theta\|}\vee\max_{i\in\mathcal{D}_1}\frac{|({\widehat}{\eta}^{(0)}-\eta)^TY_i|}{\|{\widehat}{\eta}^{(0)}-\eta\|}\vee\max_{i\in\mathcal{D}_1}\frac{|({\widehat}{\eta}^{(0)}+\eta)^TY_i|}{\|{\widehat}{\eta}^{(0)}+\eta\|},$$ and $M_n=M_n^{(0)}\vee M_n^{(1)}$.
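The rewriting of $\psi_{\rm Bonferroni}$ above uses that $\mathcal{D}_0$ and $\mathcal{D}_1$ together cover $[n]$, so that $\max_{1\leq i\leq n}C^{\pm}(X_i,Y_i,\theta,\eta)=C_0^{\pm}\vee C_1^{\pm}$.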
Since $|C^-(X_i,Y_i,{\widehat}{\theta},{\widehat}{\eta})-C^-(X_i,Y_i,\theta,\eta)|\leq 2|({\widehat}{\theta}-\theta)^TX_i|+2|({\widehat}{\eta}-\eta)^TY_i|$, we have $$\begin{aligned}
|{\widehat}{C}_0^--C_0^-| &\leq& 2\|{\widehat}{\theta}^{(1)}-\theta\|\max_{i\in\mathcal{D}_0}\frac{|({\widehat}{\theta}^{(1)}-\theta)^TX_i|}{\|{\widehat}{\theta}^{(1)}-\theta\|} + 2\|{\widehat}{\eta}^{(1)}-\eta\|\max_{i\in\mathcal{D}_0}\frac{|({\widehat}{\eta}^{(1)}-\eta)^TY_i|}{\|{\widehat}{\eta}^{(1)}-\eta\|} \\
&\leq& 2M_n\left(\|{\widehat}{\theta}^{(1)}-\theta\|+\|{\widehat}{\eta}^{(1)}-\eta\|\right).\end{aligned}$$ Note that we can write $C^-(X_i,Y_i,\theta,\eta)=C^-(X_i,Y_i,-\theta,-\eta)$, and thus we also have $$|{\widehat}{C}_0^--C_0^-|\leq 2M_n\left(\|{\widehat}{\theta}^{(1)}+\theta\|+\|{\widehat}{\eta}^{(1)}+\eta\|\right).$$ Combining the two bounds above, we have $$|{\widehat}{C}_0^--C_0^-|\leq 2M_n\left[\left(\|{\widehat}{\theta}^{(1)}-\theta\|+\|{\widehat}{\eta}^{(1)}-\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(1)}+\theta\|+\|{\widehat}{\eta}^{(1)}+\eta\|\right)\right].$$ Using the same argument, we also get $$|{\widehat}{C}_1^--C_1^-|\leq 2M_n\left[\left(\|{\widehat}{\theta}^{(0)}-\theta\|+\|{\widehat}{\eta}^{(0)}-\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(0)}+\theta\|+\|{\widehat}{\eta}^{(0)}+\eta\|\right)\right].$$ With the above two bounds, we have $$\begin{aligned}
|{\widehat}{C}_0^-\vee {\widehat}{C}_1^--C_0^-\vee C_1^-| &\leq& |{\widehat}{C}_0^--C_0^-|\vee|{\widehat}{C}_1^--C_1^-| \\
&\leq& 2M_n\left[\left(\|{\widehat}{\theta}^{(1)}-\theta\|+\|{\widehat}{\eta}^{(1)}-\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(1)}+\theta\|+\|{\widehat}{\eta}^{(1)}+\eta\|\right)\right] \\
&& + 2M_n\left[\left(\|{\widehat}{\theta}^{(0)}-\theta\|+\|{\widehat}{\eta}^{(0)}-\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(0)}+\theta\|+\|{\widehat}{\eta}^{(0)}+\eta\|\right)\right].\end{aligned}$$ A similar argument leads to $$\begin{aligned}
|{\widehat}{C}_0^+\vee {\widehat}{C}_1^+-C_0^+\vee C_1^+| &\leq& 2M_n\left[\left(\|{\widehat}{\theta}^{(1)}-\theta\|+\|{\widehat}{\eta}^{(1)}-\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(1)}+\theta\|+\|{\widehat}{\eta}^{(1)}+\eta\|\right)\right] \\
&& + 2M_n\left[\left(\|{\widehat}{\theta}^{(0)}-\theta\|+\|{\widehat}{\eta}^{(0)}-\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(0)}+\theta\|+\|{\widehat}{\eta}^{(0)}+\eta\|\right)\right].\end{aligned}$$ Observe the property that $C^+(X_i,Y_i,{\widehat}{\theta},{\widehat}{\eta})=C^-(X_i,Y_i,-{\widehat}{\theta},{\widehat}{\eta})=C^-(X_i,Y_i,{\widehat}{\theta},-{\widehat}{\eta})$. Using this property repeatedly, we get $$\begin{aligned}
|{\widehat}{C}_0^-\vee {\widehat}{C}_1^--C_0^+\vee C_1^+| &\leq& 2M_n\left[\left(\|{\widehat}{\theta}^{(1)}-\theta\|+\|{\widehat}{\eta}^{(1)}+\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(1)}+\theta\|+\|{\widehat}{\eta}^{(1)}-\eta\|\right)\right] \\
&& + 2M_n\left[\left(\|{\widehat}{\theta}^{(0)}-\theta\|+\|{\widehat}{\eta}^{(0)}+\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(0)}+\theta\|+\|{\widehat}{\eta}^{(0)}-\eta\|\right)\right],\end{aligned}$$ and $$\begin{aligned}
|{\widehat}{C}_0^+\vee {\widehat}{C}_1^+-C_0^-\vee C_1^-| &\leq& 2M_n\left[\left(\|{\widehat}{\theta}^{(1)}-\theta\|+\|{\widehat}{\eta}^{(1)}+\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(1)}+\theta\|+\|{\widehat}{\eta}^{(1)}-\eta\|\right)\right] \\
&& + 2M_n\left[\left(\|{\widehat}{\theta}^{(0)}-\theta\|+\|{\widehat}{\eta}^{(0)}+\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(0)}+\theta\|+\|{\widehat}{\eta}^{(0)}-\eta\|\right)\right].\end{aligned}$$ By combining the four bounds above, we obtain $$\begin{aligned}
\nonumber && |({\widehat}{C}_0^-\vee {\widehat}{C}_1^-)\wedge ({\widehat}{C}_0^+\vee {\widehat}{C}_1^+) - (C_0^-\vee C_1^-)\wedge (C_0^+\vee C_1^+)| \\
\nonumber &\leq& \left(|{\widehat}{C}_0^-\vee {\widehat}{C}_1^--C_0^-\vee C_1^-| \vee |{\widehat}{C}_0^+\vee {\widehat}{C}_1^+-C_0^+\vee C_1^+|\right) \\
\nonumber && \wedge \left(|{\widehat}{C}_0^-\vee {\widehat}{C}_1^--C_0^+\vee C_1^+| \vee |{\widehat}{C}_0^+\vee {\widehat}{C}_1^+-C_0^-\vee C_1^-|\right) \\
\label{eq:very-disgusting1} &\leq& \left(2M_n\left[\left(\|{\widehat}{\theta}^{(1)}-\theta\|+\|{\widehat}{\eta}^{(1)}-\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(1)}+\theta\|+\|{\widehat}{\eta}^{(1)}+\eta\|\right)\right]\right. \\
\nonumber && \left.+ 2M_n\left[\left(\|{\widehat}{\theta}^{(0)}-\theta\|+\|{\widehat}{\eta}^{(0)}-\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(0)}+\theta\|+\|{\widehat}{\eta}^{(0)}+\eta\|\right)\right]\right) \\
\nonumber && \wedge\left(2M_n\left[\left(\|{\widehat}{\theta}^{(1)}-\theta\|+\|{\widehat}{\eta}^{(1)}+\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(1)}+\theta\|+\|{\widehat}{\eta}^{(1)}-\eta\|\right)\right]\right. \\
\nonumber && \left.+ 2M_n\left[\left(\|{\widehat}{\theta}^{(0)}-\theta\|+\|{\widehat}{\eta}^{(0)}+\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(0)}+\theta\|+\|{\widehat}{\eta}^{(0)}-\eta\|\right)\right]\right).\end{aligned}$$ A symmetric argument leads to $$\begin{aligned}
\nonumber && \left|({\widehat}{C}_0^-\vee{\widehat}{C}_1^+)\wedge({\widehat}{C}_0^+\vee{\widehat}{C}_1^-)-(C_0^-\vee C_1^-)\wedge (C_0^+\vee C_1^+)\right| \\
\nonumber &\leq& \left(\left|{\widehat}{C}_0^-\vee{\widehat}{C}_1^+-C_0^-\vee C_1^-\right|\vee\left|{\widehat}{C}_0^+\vee{\widehat}{C}_1^--C_0^+\vee C_1^+\right|\right) \\
\nonumber && \wedge \left(\left|{\widehat}{C}_0^-\vee{\widehat}{C}_1^+-C_0^+\vee C_1^+\right|\vee\left|{\widehat}{C}_0^+\vee{\widehat}{C}_1^--C_0^-\vee C_1^-\right|\right) \\
\label{eq:very-disgusting2} &\leq& \left(2M_n\left[\left(\|{\widehat}{\theta}^{(1)}-\theta\|+\|{\widehat}{\eta}^{(1)}-\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(1)}+\theta\|+\|{\widehat}{\eta}^{(1)}+\eta\|\right)\right]\right. \\
\nonumber && \left.+ 2M_n\left[\left(\|{\widehat}{\theta}^{(0)}+\theta\|+\|{\widehat}{\eta}^{(0)}-\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(0)}-\theta\|+\|{\widehat}{\eta}^{(0)}+\eta\|\right)\right]\right) \\
\nonumber && \wedge\left(2M_n\left[\left(\|{\widehat}{\theta}^{(1)}+\theta\|+\|{\widehat}{\eta}^{(1)}-\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(1)}-\theta\|+\|{\widehat}{\eta}^{(1)}+\eta\|\right)\right]\right. \\
\nonumber && \left.+ 2M_n\left[\left(\|{\widehat}{\theta}^{(0)}+\theta\|+\|{\widehat}{\eta}^{(0)}+\eta\|\right)\wedge \left(\|{\widehat}{\theta}^{(0)}-\theta\|+\|{\widehat}{\eta}^{(0)}-\eta\|\right)\right]\right).\end{aligned}$$ The condition (\[eq:estimation-error-weak\]) indicates that $$L({\widehat}{\theta}^{(0)},\theta)\vee L({\widehat}{\theta}^{(1)},\theta)\vee L({\widehat}{\eta}^{(0)},\eta)\vee L({\widehat}{\eta}^{(1)},\eta)\leq n^{-\gamma}, \label{eq:msorrow}$$ with high probability. Due to sample splitting, we have $M_n^{(0)}\vee M_n^{(1)}\leq 8\sqrt{2\log n}$ with high probability. Let $\gamma'$ be a constant that satisfies $0<\gamma'<\gamma$. Then, with (\[eq:msorrow\]), either the right-hand side of (\[eq:very-disgusting1\]) or the right-hand side of (\[eq:very-disgusting2\]) is bounded by $n^{-\gamma'}$. In the first case, we have ${{\mathbf{1}_{\left\{{\|{\widehat}{\theta}^{(0)}-{\widehat}{\theta}^{(1)}\|\leq 1, \|{\widehat}{\eta}^{(0)}-{\widehat}{\eta}^{(1)}\|\leq 1}\right\}}}} + {{\mathbf{1}_{\left\{{\|{\widehat}{\theta}^{(0)}-{\widehat}{\theta}^{(1)}\|> 1, \|{\widehat}{\eta}^{(0)}-{\widehat}{\eta}^{(1)}\|> 1}\right\}}}}=1$, and then $${\widehat}{C}^-\wedge{\widehat}{C}^+=({\widehat}{C}_0^-\vee {\widehat}{C}_1^-)\wedge ({\widehat}{C}_0^+\vee {\widehat}{C}_1^+).$$ In the second case, we have ${{\mathbf{1}_{\left\{{\|{\widehat}{\theta}^{(0)}-{\widehat}{\theta}^{(1)}\|> 1, \|{\widehat}{\eta}^{(0)}-{\widehat}{\eta}^{(1)}\|\leq 1}\right\}}}} + {{\mathbf{1}_{\left\{{\|{\widehat}{\theta}^{(0)}-{\widehat}{\theta}^{(1)}\|\leq 1, \|{\widehat}{\eta}^{(0)}-{\widehat}{\eta}^{(1)}\|> 1}\right\}}}}=1$, and then $${\widehat}{C}^-\wedge{\widehat}{C}^+=({\widehat}{C}_0^-\vee{\widehat}{C}_1^+)\wedge({\widehat}{C}_0^+\vee{\widehat}{C}_1^-).$$ Therefore, in both cases, we have $$\left|{\widehat}{C}^-\wedge{\widehat}{C}^+-(C_0^-\vee C_1^-)\wedge (C_0^+\vee C_1^+)\right|\leq n^{-\gamma'}, \label{eq:cao-ni-ma}$$ with high probability. The definition of ${\widehat}{t}$ also implies $$|{\widehat}{t}-t^*(r,s)|\leq n^{-\gamma'}, \label{eq:MLGB}$$ with high probability.
To this end, we define $G$ to be the intersection of the events (\[eq:cao-ni-ma\]) and (\[eq:MLGB\]). Then, $$\begin{aligned}
&& \sup_{\substack{z\in\{-1,1\}^n\\ \sigma\in\{z,-z\}}}P^{(n)}_{(\theta,\eta,z,\sigma)}\psi_{\rm ada-Bonferroni} + \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> 0}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi_{\rm ada-Bonferroni}) \\
&\leq& \sup_{\substack{z\in\{-1,1\}^n\\ \sigma\in\{z,-z\}}}P^{(n)}_{(\theta,\eta,z,\sigma)}\psi_{\rm ada-Bonferroni}{{\mathbf{1}_{\left\{{G}\right\}}}} + \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> 0}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi_{\rm ada-Bonferroni}){{\mathbf{1}_{\left\{{G}\right\}}}} \\
&& + \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n}}P^{(n)}_{(\theta,\eta,z,\sigma)}(G^c).\end{aligned}$$ The last term is vanishing because both (\[eq:cao-ni-ma\]) and (\[eq:MLGB\]) hold with high probability. Since $$\begin{aligned}
&& P^{(n)}_{(\theta,\eta,z,\sigma)}\psi_{\rm ada-Bonferroni}{{\mathbf{1}_{\left\{{G}\right\}}}} \\
&\leq& P^{(n)}_{(\theta,\eta,z,\sigma)}\left((C_0^-\vee C_1^-)\wedge (C_0^+\vee C_1^+)>2t^*(r,s)\log n\right),\end{aligned}$$ and $$\begin{aligned}
&& P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi_{\rm ada-Bonferroni}){{\mathbf{1}_{\left\{{G}\right\}}}} \\
&\leq& P^{(n)}_{(\theta,\eta,z,\sigma)}\left((C_0^-\vee C_1^-)\wedge (C_0^+\vee C_1^+)\leq 2(t^*(r,s)+\delta)\log n\right),\end{aligned}$$ for some arbitrarily small constant $\delta>0$, the same argument in the proof of Theorem \[thm:Bonf\] leads to the desired conclusion.
Let us introduce some notation. For any $a_1,b_1,a_2,b_2,t\in\mathbb{R}$, define $$P(a_1,b_1,a_2,b_2,t)=\mathbb{P}\left(C^-(Z_1+a_1,Z_2+b_1,a_2,b_2)>t\sqrt{2\log n}\right),$$ where $Z_1,Z_2\stackrel{iid}{\sim}N(0,1)$. We first give a bound for the difference between $P(a_1,b_1,a_2,b_2,t)$ and $P(a_2,b_2,a_2,b_2,t)$. Note that $$\begin{aligned}
\nonumber && P(a_1,b_1,a_2,b_2,t) \\
\label{eq:only-useful-later} &=& \mathbb{P}\left(\left\{-2b_2(Z_2+b_1)>t\sqrt{2\log n}\text{ and }2a_2(Z_1+a_1)>t\sqrt{2\log n}\right\}\right. \\
\nonumber && \left.\text{ or }\left\{-2a_2(Z_1+a_1)>t\sqrt{2\log n}\text{ and }2b_2(Z_2+b_1)>t\sqrt{2\log n}\right\}\right) \\
\nonumber &=& \mathbb{P}\left(-2b_2(Z_2+b_1)>t\sqrt{2\log n}\right)\mathbb{P}\left(2a_2(Z_1+a_1)>t\sqrt{2\log n}\right) \\
\nonumber && + \mathbb{P}\left(-2a_2(Z_1+a_1)>t\sqrt{2\log n}\right)\mathbb{P}\left(2b_2(Z_2+b_1)>t\sqrt{2\log n}\right) \\
\nonumber && - \mathbb{P}\left(|2b_2(Z_2+b_1)|<-t\sqrt{2\log n}\right)\mathbb{P}\left(|2a_2(Z_1+a_1)|<-t\sqrt{2\log n}\right).\end{aligned}$$ The last equality is inclusion-exclusion combined with the independence of $Z_1$ and $Z_2$: the two events in the union intersect if and only if $|2a_2(Z_1+a_1)|<-t\sqrt{2\log n}$ and $|2b_2(Z_2+b_1)|<-t\sqrt{2\log n}$, an intersection that is empty unless $t<0$. We therefore have $$\begin{aligned}
\nonumber && \left|P(a_1,b_1,a_2,b_2,t)-P(a_2,b_2,a_2,b_2,t)\right| \\
\label{eq:est-prob-diff-1} &\leq& \left|\mathbb{P}\left(-2b_2(Z_2+b_1)>t\sqrt{2\log n}\right)\mathbb{P}\left(2a_2(Z_1+a_1)>t\sqrt{2\log n}\right)\right. \\
\nonumber && \left.- \mathbb{P}\left(-2b_2(Z_2+b_2)>t\sqrt{2\log n}\right)\mathbb{P}\left(2a_2(Z_1+a_2)>t\sqrt{2\log n}\right)\right| \\
\label{eq:est-prob-diff-2} && +\left|\mathbb{P}\left(-2a_2(Z_1+a_1)>t\sqrt{2\log n}\right)\mathbb{P}\left(2b_2(Z_2+b_1)>t\sqrt{2\log n}\right)\right. \\
\nonumber && \left.- \mathbb{P}\left(-2a_2(Z_1+a_2)>t\sqrt{2\log n}\right)\mathbb{P}\left(2b_2(Z_2+b_2)>t\sqrt{2\log n}\right)\right| \\
\label{eq:est-prob-diff-3} && +\left|\mathbb{P}\left(|2b_2(Z_2+b_1)|<-t\sqrt{2\log n}\right)\mathbb{P}\left(|2a_2(Z_1+a_1)|<-t\sqrt{2\log n}\right)\right. \\
\nonumber && \left.- \mathbb{P}\left(|2b_2(Z_2+b_2)|<-t\sqrt{2\log n}\right)\mathbb{P}\left(|2a_2(Z_1+a_2)|<-t\sqrt{2\log n}\right)\right|.\end{aligned}$$ We demonstrate how to bound (\[eq:est-prob-diff-1\]). The bounds for (\[eq:est-prob-diff-2\]) and (\[eq:est-prob-diff-3\]) follow the same argument, and thus we omit the details. By triangle inequality, (\[eq:est-prob-diff-1\]) can be bounded by $$\begin{aligned}
&& \mathbb{P}\left(-2b_2(Z_2+b_2)>t\sqrt{2\log n}\right) \\
&& \times \left|\mathbb{P}\left(2a_2(Z_1+a_1)>t\sqrt{2\log n}\right)-\mathbb{P}\left(2a_2(Z_1+a_2)>t\sqrt{2\log n}\right)\right| \\
&& +\mathbb{P}\left(2a_2(Z_1+a_2)>t\sqrt{2\log n}\right) \\
&& \times \left|\mathbb{P}\left(-2b_2(Z_2+b_1)>t\sqrt{2\log n}\right)-\mathbb{P}\left(-2b_2(Z_2+b_2)>t\sqrt{2\log n}\right)\right| \\
&& + \left|\mathbb{P}\left(2a_2(Z_1+a_1)>t\sqrt{2\log n}\right)-\mathbb{P}\left(2a_2(Z_1+a_2)>t\sqrt{2\log n}\right)\right| \\
&& \times \left|\mathbb{P}\left(-2b_2(Z_2+b_1)>t\sqrt{2\log n}\right)-\mathbb{P}\left(-2b_2(Z_2+b_2)>t\sqrt{2\log n}\right)\right|.\end{aligned}$$ Then, for any $1\leq |a_1|, |a_2|, |b_1|, |b_2|\leq \log n$ and $|t|\leq \log n$, apply Lemma \[prop:standard-normal\], and we have $$\begin{aligned}
&& \left|\mathbb{P}\left(2a_2(Z_1+a_1)>t\sqrt{2\log n}\right)-\mathbb{P}\left(2a_2(Z_1+a_2)>t\sqrt{2\log n}\right)\right| \\
&=& \left|\int_{\frac{t\sqrt{2\log n}-2a_1a_2}{2|a_2|}}^{\infty}\phi(x)dx-\int_{\frac{t\sqrt{2\log n}-2a_2^2}{2|a_2|}}^{\infty}\phi(x)dx\right| \\
&\leq& 2|a_1-a_2|\left(\phi\left(\frac{t\sqrt{2\log n}-2a_1a_2}{2|a_2|}\right)\vee\phi\left(\frac{t\sqrt{2\log n}-2a_2^2}{2|a_2|}\right)\right) \\
&\leq& 4|a_1-a_2|\phi\left(\frac{t\sqrt{2\log n}-2a_2^2}{2|a_2|}\right) \\
&\leq& (\log n)^2|a_1-a_2|\mathbb{P}\left(2a_2(Z_1+a_2)>t\sqrt{2\log n}\right).\end{aligned}$$ With a similar argument, we also have $$\begin{aligned}
&& \left|\mathbb{P}\left(-2b_2(Z_2+b_1)>t\sqrt{2\log n}\right)-\mathbb{P}\left(-2b_2(Z_2+b_2)>t\sqrt{2\log n}\right)\right| \\
&\leq& (\log n)^2|b_1-b_2|\mathbb{P}\left(-2b_2(Z_2+b_2)>t\sqrt{2\log n}\right),\end{aligned}$$ and we therefore obtain the following bound for (\[eq:est-prob-diff-1\]), $$\begin{aligned}
&& \left((\log n)^2\left(|a_1-a_2|+|b_1-b_2|\right) + (\log n)^4|a_1-a_2||b_1-b_2|\right) \\
&& \times \mathbb{P}\left(2a_2(Z_1+a_2)>t\sqrt{2\log n}\right)\mathbb{P}\left(-2b_2(Z_2+b_2)>t\sqrt{2\log n}\right).\end{aligned}$$ With similar bounds obtained for (\[eq:est-prob-diff-2\]) and (\[eq:est-prob-diff-3\]), we have $$\begin{aligned}
&& \left|P(a_1,b_1,a_2,b_2,t)-P(a_2,b_2,a_2,b_2,t)\right| \\
&\leq& \left((\log n)^2\left(|a_1-a_2|+|b_1-b_2|\right) + (\log n)^4|a_1-a_2||b_1-b_2|\right) \\
&& \times \left(\mathbb{P}\left(2a_2(Z_1+a_2)>t\sqrt{2\log n}\right)\mathbb{P}\left(-2b_2(Z_2+b_2)>t\sqrt{2\log n}\right) \right. \\
&& \left. + \mathbb{P}\left(-2a_2(Z_1+a_2)>t\sqrt{2\log n}\right)\mathbb{P}\left(2b_2(Z_2+b_2)>t\sqrt{2\log n}\right)\right) \\
&\leq& 2\left((\log n)^2\left(|a_1-a_2|+|b_1-b_2|\right) + (\log n)^4|a_1-a_2||b_1-b_2|\right)P(a_2,b_2,a_2,b_2,t),\end{aligned}$$ where the last inequality uses the fact that $\mathbb{P}(A)+\mathbb{P}(B)\leq 2\mathbb{P}(A\cup B)$. We summarize the bound into the following inequality, $$\begin{aligned}
\nonumber && \frac{\left|P(a_1,b_1,a_2,b_2,t)-P(a_2,b_2,a_2,b_2,t)\right|}{P(a_2,b_2,a_2,b_2,t)} \\
&\leq& 2\left((\log n)^2\left(|a_1-a_2|+|b_1-b_2|\right) + (\log n)^4|a_1-a_2||b_1-b_2|\right), \label{eq:good-interface-1}\end{aligned}$$ which holds uniformly over $1\leq |a_1|, |a_2|, |b_1|, |b_2|\leq \log n$ and $|t|\leq \log n$.
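In particular, if $|a_1-a_2|\vee|b_1-b_2|\leq n^{-c}$ for some constant $c>0$, the right-hand side of (\[eq:good-interface-1\]) is of order $(\log n)^2n^{-c}+(\log n)^4n^{-2c}=o(1)$, so that $$P(a_1,b_1,a_2,b_2,t)=(1+o(1))P(a_2,b_2,a_2,b_2,t),$$ uniformly over the stated range; this is the form in which (\[eq:good-interface-1\]) is applied below.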
Next, we study the ratio between $P(a_1,b_1,a_1,b_1,t)$ and $P(a_2,b_2,a_2,b_2,t)$. By a union bound argument, we have $$\begin{aligned}
&& P(a_1,b_1,a_1,b_1,t) \\
&\leq& \mathbb{P}\left(-2b_1(Z_2+b_1)>t\sqrt{2\log n}, 2a_1(Z_1+a_1)>t\sqrt{2\log n}\right) \\
&& + \mathbb{P}\left(-2a_1(Z_1+a_1)>t\sqrt{2\log n},2b_1(Z_2+b_1)>t\sqrt{2\log n}\right) \\
&=& \mathbb{P}\left(-2b_1(Z_2+b_1)>t\sqrt{2\log n}\right)\mathbb{P}\left(2a_1(Z_1+a_1)>t\sqrt{2\log n}\right) \\
&& + \mathbb{P}\left(-2a_1(Z_1+a_1)>t\sqrt{2\log n}\right)\mathbb{P}\left(2b_1(Z_2+b_1)>t\sqrt{2\log n}\right).\end{aligned}$$ By $\mathbb{P}(A)+\mathbb{P}(B)\leq 2\mathbb{P}(A\cup B)$ we have $$\begin{aligned}
&& P(a_2,b_2,a_2,b_2,t) \\
&\geq& \frac{1}{2}\mathbb{P}\left(-2b_2(Z_2+b_2)>t\sqrt{2\log n}\right)\mathbb{P}\left(2a_2(Z_1+a_2)>t\sqrt{2\log n}\right) \\
&& + \frac{1}{2}\mathbb{P}\left(-2a_2(Z_1+a_2)>t\sqrt{2\log n}\right)\mathbb{P}\left(2b_2(Z_2+b_2)>t\sqrt{2\log n}\right).\end{aligned}$$ For any $1\leq |a_1|, |a_2|, |b_1|, |b_2|\leq \log n$ and $|t|\leq \log n$ that satisfy $|a_1-a_2|\leq n^{-c}$ and $|b_1-b_2|\leq n^{-c}$ with some constant $c>0$, we apply Lemma \[prop:standard-normal\], and obtain $$\begin{aligned}
\frac{\mathbb{P}\left(-2b_1(Z_2+b_1)>t\sqrt{2\log n}\right)}{\mathbb{P}\left(-2b_2(Z_2+b_2)>t\sqrt{2\log n}\right)} &\leq& 2, \\
\frac{\mathbb{P}\left(2a_1(Z_1+a_1)>t\sqrt{2\log n}\right)}{\mathbb{P}\left(2a_2(Z_1+a_2)>t\sqrt{2\log n}\right)} &\leq& 2, \\
\frac{\mathbb{P}\left(-2a_1(Z_1+a_1)>t\sqrt{2\log n}\right)}{\mathbb{P}\left(-2a_2(Z_1+a_2)>t\sqrt{2\log n}\right)} &\leq& 2, \\
\frac{\mathbb{P}\left(2b_1(Z_2+b_1)>t\sqrt{2\log n}\right)}{\mathbb{P}\left(2b_2(Z_2+b_2)>t\sqrt{2\log n}\right)} &\leq& 2.\end{aligned}$$ Hence, we have $$\frac{P(a_1,b_1,a_1,b_1,t)}{P(a_2,b_2,a_2,b_2,t)} \leq 4, \label{eq:good-interface-2}$$ uniformly over all $1\leq |a_1|, |a_2|, |b_1|, |b_2|\leq \log n$ and $|t|\leq \log n$ that satisfy $|a_1-a_2|\leq n^{-c}$ and $|b_1-b_2|\leq n^{-c}$.
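Since the argument is symmetric in $(a_1,b_1)$ and $(a_2,b_2)$, the same proof also gives $$\frac{P(a_2,b_2,a_2,b_2,t)}{P(a_1,b_1,a_1,b_1,t)} \leq 4,$$ under the same conditions, so the two probabilities agree up to a constant factor; this two-sided comparison is what justifies the $\asymp$ relations used later in the proof.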
Use Hoeffding’s inequality, and we have $$|\mathcal{D}_0|\wedge|\mathcal{D}_1|\wedge|\mathcal{D}_2|\geq \frac{n}{4},\label{eq:sample-size-large}$$ with high probability. The condition (\[eq:estimation-error-strong\]), together with (\[eq:sample-size-large\]), implies $L({\widehat}{\theta},\theta)\vee L({\widehat}{\eta},\eta)\leq (n/4)^{-\gamma}$ with high probability. By the definitions of $a$ and $b$, we then have $$(|a-\|\theta\||\wedge|a+\|\theta\||)\vee(|b-\|\eta\||\wedge|b+\|\eta\||)\leq (n/4)^{-\gamma}, \label{eq:prob-a-b}$$ with high probability. With some standard calculation and the sample size bound (\[eq:sample-size-large\]), we also have $$(|{\widehat}{a}-a|\wedge|{\widehat}{a}+a|)\vee(|{\widehat}{b}-b|\wedge|{\widehat}{b}+b|)\leq \sqrt{\frac{\log n}{n}}, \label{eq:prob-a-b-hat}$$ with high probability. Finally, by definitions of ${\widehat}{r},{\widehat}{s},r,s$ and (\[eq:sample-size-large\])-(\[eq:prob-a-b-hat\]), we have $$|{\widehat}{r}-r|\vee|{\widehat}{s}-s|\leq n^{-{\widetilde}{\gamma}}, \label{eq:prob-r-s}$$ with high probability for some constant ${\widetilde}{\gamma}>0$. Let us define $G$ to be the intersection of the events (\[eq:sample-size-large\]), (\[eq:prob-a-b\]), (\[eq:prob-a-b-hat\]), and (\[eq:prob-r-s\]). Then, we have $$\sup_{z,\sigma\in\{-1,1\}^n}P_{(\theta,\eta,z,\sigma)}^{(n)}(G)\rightarrow 1. \label{eq:good-event-adaptive}$$
With the help of (\[eq:good-interface-1\]), (\[eq:good-interface-2\]), and (\[eq:good-event-adaptive\]), we are ready to analyze $R_n(\psi_{\rm ada-HC},\theta,\eta)$. It is easy to check by the definition that $$\begin{aligned}
\label{eq:pp1} S_{(r,s)}(t) &=& P(\|\theta\|,\|\eta\|,\|\theta\|,\|\eta\|,t), \\
\label{eq:pp2} S_{({\widehat}{r},{\widehat}{s})}(t) &=& P({\widehat}{a},{\widehat}{b},{\widehat}{a},{\widehat}{b},t).\end{aligned}$$ By the calibration (\[eq:general-cali\]), we know that under the event $G$, we have $1\leq |a|, |b|, |{\widehat}{a}|, |{\widehat}{b}|, \|\theta\|, \|\eta\|\leq \log n$ for sufficiently large values of $n$. Moreover, the definitions of ${\widehat}{T}_n^-$ and ${\widehat}{T}_n^+$ imply that it is sufficient to consider $|t|\leq \log n$. With $X_i\sim N(z_i\theta,I_p)$ and $Y_i\sim N(\sigma_i\eta,I_q)$, for any $i\in\mathcal{D}_2$, we have $$\begin{aligned}
\label{eq:pp3} \mathbb{P}\left(C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}\Big|
{\{d_j\}_{j\in[n]},\{(X_j,Y_j)\}_{j\in\mathcal{D}_0\cup\mathcal{D}_1}}\right) &=& P(z_ia,\sigma_ib,{\widehat}{a},{\widehat}{b},t), \\
\label{eq:pp4} \mathbb{P}\left(C^+({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}\Big|
{\{d_j\}_{j\in[n]},\{(X_j,Y_j)\}_{j\in\mathcal{D}_0\cup\mathcal{D}_1}}\right) &=& P(z_ia,\sigma_ib,{\widehat}{a},-{\widehat}{b},t).\end{aligned}$$ In addition, due to the symmetry in the definition, we have $$\begin{aligned}
\label{eq:pp5} && P(a,b,{\widehat}{a},{\widehat}{b},t) = P(-a,-b,{\widehat}{a},{\widehat}{b},t), \\
\label{eq:pp6} && P(a,b,{\widehat}{a},-{\widehat}{b},t) = P(a,-b,{\widehat}{a},{\widehat}{b},t) = P(-a,b,{\widehat}{a},{\widehat}{b},t).\end{aligned}$$ Now we analyze the Type-I error. By triangle inequality, we have $$\begin{aligned}
\label{eq:bound-T-ada1} {\widehat}{T}_n^- &\leq& \sup_{|t|\leq \log n}\frac{\left|\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}}\right\}}}}-|\mathcal{D}_2|P(a,b,{\widehat}{a},{\widehat}{b},t) \right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}} \\
\nonumber && + \sup_{|t|\leq\log n}\frac{|\mathcal{D}_2|\left|P(a,b,{\widehat}{a},{\widehat}{b},t)-S_{({\widehat}{r},{\widehat}{s})}(t)\right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}}, \\
\label{eq:bound-T-ada2} {\widehat}{T}_n^- &\leq& \sup_{|t|\leq \log n}\frac{\left|\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}}\right\}}}}-|\mathcal{D}_2|P(-a,b,{\widehat}{a},{\widehat}{b},t) \right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}} \\
\nonumber && + \sup_{|t|\leq\log n}\frac{|\mathcal{D}_2|\left|P(-a,b,{\widehat}{a},{\widehat}{b},t)-S_{({\widehat}{r},{\widehat}{s})}(t)\right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}}, \\\end{aligned}$$ $$\begin{aligned}
\label{eq:bound-T-ada3} {\widehat}{T}_n^+ &\leq& \sup_{|t|\leq \log n}\frac{\left|\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{C^+({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}}\right\}}}}-|\mathcal{D}_2|P(a,b,{\widehat}{a},{\widehat}{b},t) \right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}} \\
\nonumber && + \sup_{|t|\leq\log n}\frac{|\mathcal{D}_2|\left|P(a,b,{\widehat}{a},{\widehat}{b},t)-S_{({\widehat}{r},{\widehat}{s})}(t)\right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}}, \\
\label{eq:bound-T-ada4} {\widehat}{T}_n^+ &\leq& \sup_{|t|\leq \log n}\frac{\left|\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{C^+({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}}\right\}}}}-|\mathcal{D}_2|P(-a,b,{\widehat}{a},{\widehat}{b},t) \right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}} \\
\nonumber && + \sup_{|t|\leq\log n}\frac{|\mathcal{D}_2|\left|P(-a,b,{\widehat}{a},{\widehat}{b},t)-S_{({\widehat}{r},{\widehat}{s})}(t)\right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}}.\end{aligned}$$ Therefore, we can use any one of the four bounds above to upper bound ${\widehat}{T}_n^-\wedge {\widehat}{T}_n^+$. Under the null hypothesis, we either have $z=\sigma$ or $z=-\sigma$. Assume $z=\sigma$ without loss of generality, and then by (\[eq:pp3\])-(\[eq:pp6\]), we should use the smaller bound between (\[eq:bound-T-ada1\]) and (\[eq:bound-T-ada4\]). By (\[eq:prob-a-b-hat\]), ${\widehat}{a}$ and ${\widehat}{b}$ estimate $a$ and $b$ up to their signs. Let us suppose $|{\widehat}{a}-a|\vee|{\widehat}{b}-b|\leq\sqrt{\frac{\log n}{n}}$, and then by (\[eq:pp2\]), we should use (\[eq:bound-T-ada1\]) instead of (\[eq:bound-T-ada4\]) to bound ${\widehat}{T}_n^-\wedge {\widehat}{T}_n^+$. By (\[eq:good-interface-1\]), the first term of (\[eq:bound-T-ada1\]) can be bounded by $$\begin{aligned}
&& (1+o_{\mathbb{P}}(1))\sup_{|t|\leq \log n}\frac{\left|\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}}\right\}}}}-|\mathcal{D}_2|P(a,b,{\widehat}{a},{\widehat}{b},t) \right|}{\sqrt{|\mathcal{D}_2|P(a,b,{\widehat}{a},{\widehat}{b},t)}} \\
&\leq& (1+o_{\mathbb{P}}(1))\sup_{|t|\leq \log n}\frac{\left|\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}}\right\}}}}-|\mathcal{D}_2|P(a,b,{\widehat}{a},{\widehat}{b},t) \right|}{\sqrt{|\mathcal{D}_2|P(a,b,{\widehat}{a},{\widehat}{b},t)(1-P(a,b,{\widehat}{a},{\widehat}{b},t))}} \\
&=& (1+o_{\mathbb{P}}(1))\sqrt{2\log\log|\mathcal{D}_2|}=(1+o_{\mathbb{P}}(1))\sqrt{2\log\log n},\end{aligned}$$ where the last line is by [@shorack2009empirical; @donoho2004higher] and (\[eq:sample-size-large\]). By (\[eq:good-interface-1\]), the second term of (\[eq:bound-T-ada1\]) can be bounded by $$\begin{aligned}
&& \sup_{|t|\leq\log n}\frac{|\mathcal{D}_2|\left|P(a,b,{\widehat}{a},{\widehat}{b},t)-S_{({\widehat}{r},{\widehat}{s})}(t)\right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}} \\
&\lesssim& \sup_{|t|\leq\log n} \frac{|\mathcal{D}_2|\frac{(\log n)^{5/2}}{\sqrt{n}}S_{({\widehat}{r},{\widehat}{s})}(t)}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}} \lesssim (\log n)^{5/2}.\end{aligned}$$ Thus, we have ${\widehat}{T}_n^-\wedge {\widehat}{T}_n^+\lesssim (\log n)^{5/2}$. In the case when $z=-\sigma$, $|{\widehat}{a}+a|\leq\sqrt{\frac{\log n}{n}}$, or $|{\widehat}{b}+b|\leq\sqrt{\frac{\log n}{n}}$, we can use one of the four bounds (\[eq:bound-T-ada1\])-(\[eq:bound-T-ada4\]) to get the same conclusion. To summarize, whenever $G$ holds, ${\widehat}{T}_n^-\wedge {\widehat}{T}_n^+\lesssim (\log n)^{5/2}$ with high probability under the null, and the Type-I error can be bounded by $$\sup_{\substack{z\in\{-1,1\}^n\\ \sigma\in\{z,-z\}}}P^{(n)}_{(\theta,\eta,z,\sigma)}\psi_{\rm ada-HC}\leq \sup_{\substack{z\in\{-1,1\}^n\\ \sigma\in\{z,-z\}}}P^{(n)}_{(\theta,\eta,z,\sigma)}\psi_{\rm ada-HC}{{\mathbf{1}_{\left\{{G}\right\}}}}+\sup_{\substack{z\in\{-1,1\}^n\\ \sigma\in\{z,-z\}}}P^{(n)}_{(\theta,\eta,z,\sigma)}(G^c)\rightarrow 0.$$
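Note that the test $\psi_{\rm ada-HC}$ rejects when ${\widehat}{T}_n^-\wedge{\widehat}{T}_n^+$ exceeds the threshold $(\log n)^3$ that appears in (\[eq:xiaren-1\]) and (\[eq:xiaren-2\]) below, and $(\log n)^{5/2}=o((\log n)^3)$, which is why $\psi_{\rm ada-HC}{{\mathbf{1}_{\left\{{G}\right\}}}}$ vanishes under the null in the last display.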
To analyze the Type-II error, we define $G_{++}=G\cap\{a\geq 0, b\geq 0\}$, $G_{+-}=G\cap\{a\geq 0, b< 0\}$, $G_{-+}=G\cap\{a< 0, b\geq 0\}$, and $G_{--}=G\cap\{a< 0, b< 0\}$. Then, $$\begin{aligned}
&& \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi_{\rm ada-HC}) \\
&\leq& \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi_{\rm ada-HC}){{\mathbf{1}_{\left\{{G_{++}}\right\}}}} + \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi_{\rm ada-HC}){{\mathbf{1}_{\left\{{G_{+-}}\right\}}}} \\
&& + \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi_{\rm ada-HC}){{\mathbf{1}_{\left\{{G_{-+}}\right\}}}} + \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi_{\rm ada-HC}){{\mathbf{1}_{\left\{{G_{--}}\right\}}}} \\
&& +\sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}(G^c).\end{aligned}$$ The first four terms in the bound can be bounded in the same way, and we only show how to bound the first term. By the definition of $\ell(z,z^*)$, we have $$\begin{aligned}
\nonumber & \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\\ell(z,\sigma)> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}(1-\psi_{\rm ada-HC}){{\mathbf{1}_{\left\{{G_{++}}\right\}}}} \\
\label{eq:xiaren-1} \leq& \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\ \frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq \sigma_i}\right\}}}}> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}\left({\widehat}{T}_n^-\leq (\log n)^3,G_{++}\right) \\
\label{eq:xiaren-2} & + \sup_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n\\ \frac{1}{n}\sum_{i=1}^n{{\mathbf{1}_{\left\{{z_i\neq -\sigma_i}\right\}}}}> \epsilon}}P^{(n)}_{(\theta,\eta,z,\sigma)}\left({\widehat}{T}_n^+\leq (\log n)^3,G_{++}\right).\end{aligned}$$ We then bound (\[eq:xiaren-1\]). The bound for (\[eq:xiaren-2\]) follows a similar argument and thus we omit the details. We have $$P^{(n)}_{(\theta,\eta,z,\sigma)}\left({\widehat}{T}_n^-\leq (\log n)^3,G_{++}\right)\leq \inf_{|t|\leq \log n}P^{(n)}_{(\theta,\eta,z,\sigma)}\left(|{\widehat}{T}_n^-(t)|\leq (\log n)^3,G_{++}\right),$$ where $${\widehat}{T}_n^-(t)=\frac{\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}}\right\}}}}-|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}}.$$ Note that under $G_{++}$, we have $$\begin{aligned}
\label{eq:prob-a-b-hat-sign} |{\widehat}{a}-a|\vee|{\widehat}{b}-b| &\leq& \sqrt{\frac{\log n}{n}}, \\
\label{eq:prob-a-b-sign} |a-\|\theta\||\vee|b-\|\eta\|| &\leq& (n/4)^{-\gamma}.\end{aligned}$$ Define $m_0=\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{z_i=\sigma_i}\right\}}}}$ and $m_1=\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{z_i\neq\sigma_i}\right\}}}}$. Then, we can write ${\widehat}{T}_n^-(t)$ as $${\widehat}{T}_n^-(t)=R_n(t)+\frac{m_0P(a,b,{\widehat}{a},{\widehat}{b},t)-m_0P({\widehat}{a},{\widehat}{b},{\widehat}{a},{\widehat}{b},t)}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}},$$ where $$R_n(t)=\frac{\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}}\right\}}}}-m_0P(a,b,{\widehat}{a},{\widehat}{b},t)-m_1P({\widehat}{a},{\widehat}{b},{\widehat}{a},{\widehat}{b},t)}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}}.$$ The difference between ${\widehat}{T}_n^-(t)$ and $R_n(t)$ can be ignored compared with the threshold $(\log n)^3$ because $$\sup_{|t|\leq\log n}\frac{\left|m_0P(a,b,{\widehat}{a},{\widehat}{b},t)-m_0P({\widehat}{a},{\widehat}{b},{\widehat}{a},{\widehat}{b},t)\right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}} \lesssim (\log n)^{5/2},$$ by (\[eq:good-interface-1\]), (\[eq:sample-size-large\]), and (\[eq:prob-a-b-hat-sign\]). Now we study the mean and the variance of $R_n(t)$ conditioning on $\{d_i\}_{i\in[n]}$ and $\{(X_i,Y_i)\}_{i\in\mathcal{D}_0\cup\mathcal{D}_1}$. The conditional mean of the numerator of $R_n(t)$ is given by $$m_1P(a,-b,{\widehat}{a},{\widehat}{b},t)-m_1P({\widehat}{a},{\widehat}{b},{\widehat}{a},{\widehat}{b},t),$$ and the conditional variance of the numerator of $R_n(t)$ is bounded by $$m_0P(a,b,{\widehat}{a},{\widehat}{b},t)+m_1P(a,-b,{\widehat}{a},{\widehat}{b},t).$$ By Chebyshev’s inequality, $m_1\geq \frac{1}{6}n^{1-\beta}$ with high probability. Use (\[eq:good-interface-1\]), (\[eq:good-interface-2\]), (\[eq:prob-a-b-hat-sign\]), and (\[eq:prob-a-b-sign\]), and we have $$\begin{aligned}
&& P(a,-b,{\widehat}{a},{\widehat}{b},t) \asymp P({\widehat}{a},-{\widehat}{b},{\widehat}{a},{\widehat}{b},t) \asymp P(a,-b,a,b,t) \asymp P(\|\theta\|,-\|\eta\|,\|\theta\|,\|\eta\|,t) \\
&& P(a,b,{\widehat}{a},{\widehat}{b},t) \asymp P({\widehat}{a},{\widehat}{b},{\widehat}{a},{\widehat}{b},t) \asymp P(a,b,a,b,t) \asymp P(\|\theta\|,\|\eta\|,\|\theta\|,\|\eta\|,t).\end{aligned}$$ Therefore, following the same argument as in the proof of Theorem \[thm:general-HC\], we have $$\frac{(\mathbb{E}(R_n(t)|\{d_i\}_{i\in[n]},\{(X_i,Y_i)\}_{i\in\mathcal{D}_0\cup\mathcal{D}_1}))^2}{\Var(R_n(t)|\{d_i\}_{i\in[n]},\{(X_i,Y_i)\}_{i\in\mathcal{D}_0\cup\mathcal{D}_1})}\rightarrow\infty,$$ at a polynomial rate with high probability as long as $\beta<\beta^*(r,s)$, which implies (\[eq:xiaren-1\]) is vanishing, and thus we have $\lim_{n\rightarrow \infty}R_n(\psi_{\rm ada-HC},\theta,\eta)=0$.
Proofs of Lemma \[lem:LR-approx\] and Lemma \[lem:max-order\]
-------------------------------------------------------------
By the definitions of $p(u,v)$ and $q(u,v)$, we have $$\begin{aligned}
\frac{q(u,v)}{p(u,v)} &=& \frac{\phi(u-\sqrt{2r\log n})\phi(v-\sqrt{2s\log n})+\phi(u+\sqrt{2r\log n})\phi(v+\sqrt{2s\log n})}{\phi(u)\phi(v-\sqrt{2(r+s)\log n})+\phi(u)\phi(v+\sqrt{2(r+s)\log n})} \\
&=& \frac{e^{u\sqrt{2r\log n}+v\sqrt{2s\log n}}+e^{-u\sqrt{2r\log n}-v\sqrt{2s\log n}}}{e^{v\sqrt{2(r+s)\log n}}+e^{-v\sqrt{2(r+s)\log n}}}.\end{aligned}$$ Apply the inequality $e^{|x|}\leq e^x+e^{-x}\leq 2e^{|x|}$ to both the numerator and the denominator, and we have $$\frac{1}{2}\leq\frac{q(u,v)/p(u,v)}{e^{\sqrt{2\log n}\left(|\sqrt{r}u+\sqrt{s}v|-\sqrt{r+s}|v|\right)}}\leq 2.$$ This proves the first conclusion. For the second conclusion, we first consider $u\geq 0$ and $v\geq 0$. Then, we have $$\begin{aligned}
\phi(v-\sqrt{2(r+s)\log n}) &\geq& \phi(v+\sqrt{2(r+s)\log n}), \\
\phi(u-\sqrt{2r\log n}) &\geq& \phi(u+\sqrt{2r\log n}), \\
\phi(v-\sqrt{2s\log n}) &\geq& \phi(v+\sqrt{2s\log n}).\end{aligned}$$ These inequalities imply $$p(u,v)\leq \phi(u)\phi(v-\sqrt{2(r+s)\log n}) \leq 2p(u,v),$$ and $$q(u,v)\leq \phi(u-\sqrt{2r\log n})\phi(v-\sqrt{2s\log n})\leq 4q(u,v).$$ Thus, $$\frac{q(u,v)}{2p(u,v)}\leq \frac{\phi(u-\sqrt{2r\log n})\phi(v-\sqrt{2s\log n})}{\phi(u)\phi(v-\sqrt{2(r+s)\log n})}\leq \frac{4q(u,v)}{p(u,v)}.$$ Note that $$\frac{\phi(u-\sqrt{2r\log n})\phi(v-\sqrt{2s\log n})}{\phi(u)\phi(v-\sqrt{2(r+s)\log n})}=e^{\left(\sqrt{r}u-(\sqrt{r+s}-\sqrt{s})v\right)\sqrt{2\log n}},$$ and therefore $$\left|\log\frac{q(u,v)}{p(u,v)}-\left(\sqrt{r}u-(\sqrt{r+s}-\sqrt{s})v\right)\sqrt{2\log n}\right|\leq \log 4,$$ for all $u\geq 0$ and $v\geq 0$. Using the same argument, we can also show $$\left|\log\frac{q(u,v)}{p(u,v)}-\left(\sqrt{r}u+(\sqrt{r+s}-\sqrt{s})v\right)\sqrt{2\log n}\right|\leq \log 4,$$ for all $u\geq 0$ and $v< 0$, $$\left|\log\frac{q(u,v)}{p(u,v)}-\left(-\sqrt{r}u-(\sqrt{r+s}-\sqrt{s})v\right)\sqrt{2\log n}\right|\leq \log 4,$$ for all $u< 0$ and $v\geq 0$, and $$\left|\log\frac{q(u,v)}{p(u,v)}-\left(-\sqrt{r}u+(\sqrt{r+s}-\sqrt{s})v\right)\sqrt{2\log n}\right|\leq \log 4,$$ for all $u< 0$ and $v< 0$. Since in each of the four cases the centering term equals $\left(\sqrt{r}|u|-(\sqrt{r+s}-\sqrt{s})|v|\right)\sqrt{2\log n}$, the four bounds combine into the single estimate $$\left|\log\frac{q(u,v)}{p(u,v)}-\left(\sqrt{r}|u|-(\sqrt{r+s}-\sqrt{s})|v|\right)\sqrt{2\log n}\right|\leq \log 4$$ for all $(u,v)\in\mathbb{R}^2$, which completes the proof.
We note that by Lemma \[lem:comp-tail-0\] and the definition of $t^*(r,s)$, we have $$\mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t^*(r,s)\sqrt{2\log n}\right)\lesssim \frac{1}{\sqrt{\log n}}n^{-1}.$$ Therefore, using a union bound argument, we have $$\begin{aligned}
&& \mathbb{P}\left(\frac{\max_{1\leq i\leq n}\left(|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i|\right)}{\sqrt{2\log n}}> t^*(r,s)\right) \\
&\leq& \sum_{i=1}^n\mathbb{P}\left(|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i|>t^*(r,s)\sqrt{2\log n}\right) \\
&\lesssim& \frac{1}{\sqrt{\log n}}\rightarrow 0.\end{aligned}$$ Apply Lemma \[lem:comp-tail-0\] and the definition of $t^*(r,s)$ again, and we have for any constant $\delta>0$, there exists some $\delta'>0$, such that $$\mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>(t^*(r,s)-\delta)\sqrt{2\log n}\right)\gtrsim n^{-(1-\delta')}.$$ Then, we have $$\begin{aligned}
&& \mathbb{P}\left(\frac{\max_{1\leq i\leq n}\left(|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i|\right)}{\sqrt{2\log n}}\leq t^*(r,s)-\delta\right) \\
&\leq& \prod_{i=1}^n\mathbb{P}\left(\frac{|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i|}{\sqrt{2\log n}}\leq t^*(r,s)-\delta\right) \\
&=& \prod_{i=1}^n\left(1-\mathbb{P}\left(\frac{|\sqrt{r}U_i+\sqrt{s}V_i|-\sqrt{r+s}|V_i|}{\sqrt{2\log n}}> t^*(r,s)-\delta\right)\right) \\
&\leq& \left(1-n^{-(1-\delta')}\right)^n \rightarrow 0.\end{aligned}$$ The last step uses $1-x\leq e^{-x}$, so that $\left(1-n^{-(1-\delta')}\right)^n\leq e^{-n^{\delta'}}\rightarrow 0$. The proof is complete.
Proofs of Proposition \[prop:comp-p-value\] and Proposition \[prop:parameter-estimation\] {#sec:pf-org2}
-----------------------------------------------------------------------------------------
By the definition of $C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})$, we have $$\max_{i\in\mathcal{D}_2}|C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})| \leq 2|{\widehat}{a}|\max_{i\in\mathcal{D}_2}|{\widehat}{X}_i| + 2|{\widehat}{b}|\max_{i\in\mathcal{D}_2}|{\widehat}{Y}_i|.$$ Note that $|{\widehat}{a}|\leq\|\theta\|=O(\sqrt{\log n})$ and $|{\widehat}{b}|\leq\|\eta\|=O(\sqrt{\log n})$. Due to sample splitting, we can write ${\widehat}{X}_i=z_i{\widehat}{\theta}^T\theta/\|{\widehat}{\theta}\|+W_i$ for each $i\in\mathcal{D}_2$ with $W_i\sim N(0,1)$ independent of $\mathcal{D}_2$. Then, $$\max_{i\in\mathcal{D}_2}|{\widehat}{X}_i|\leq \|\theta\|+\max_{i\in\mathcal{D}_2}|W_i|,$$ where we have $\|\theta\|=O(\sqrt{\log n})$ and $\max_{i\in\mathcal{D}_2}|W_i|\leq C\sqrt{\log n}$ with probability tending to $1$ for some sufficiently large constant $C>0$. The same argument also applies to $\max_{i\in\mathcal{D}_2}|{\widehat}{Y}_i|$. Therefore, $\max_{i\in\mathcal{D}_2}\frac{|C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})|}{\sqrt{2\log n}}=O(\sqrt{\log n})$ with high probability, which implies that $$\inf_{\substack{z\in\{-1,1\}^n\\\sigma\in\{-1,1\}^n}}P^{(n)}_{(\theta,\eta,z,\sigma)}\left(\max_{i\in\mathcal{D}_2}\frac{|C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})|}{\sqrt{2\log n}}\leq\log n\right)\rightarrow 1.$$ Hence, with high probability, we have $$\begin{aligned}
{\widehat}{T}_n^- &=& \sup_{|t|\leq \log n}\frac{\left|\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}}\right\}}}}-|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)\right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}} \\
&=& \sup_{t\in\mathbb{R}}\frac{\left|\sum_{i\in\mathcal{D}_2}{{\mathbf{1}_{\left\{{C^-({\widehat}{X}_i,{\widehat}{Y}_i,{\widehat}{a},{\widehat}{b})>t\sqrt{2\log n}}\right\}}}}-|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)\right|}{\sqrt{|\mathcal{D}_2|S_{({\widehat}{r},{\widehat}{s})}(t)}} \\
&=& \max_{1\leq i\leq |\mathcal{D}_2|}\frac{\sqrt{|\mathcal{D}_2|}\left|\frac{i}{|\mathcal{D}_2|}-{\widehat}{p}_{(i,\mathcal{D}_2)}^-\right|}{\sqrt{{\widehat}{p}_{(i,\mathcal{D}_2)}^-}}.\end{aligned}$$ Here the last equality holds because the empirical count is a step function of $t$, so the supremum of the standardized difference is attained at its jump points, which are indexed by the order statistics ${\widehat}{p}_{(i,\mathcal{D}_2)}^-$. The same conclusion also applies to ${\widehat}{T}_n^+$ by the same argument, and the proof is complete.
For $X_i\sim N(z_i\theta,I_p)$, we can write $X_i=z_i\theta+W_i$ with $W_i\sim N(0,I_p)$ independently for all $i\in[n]$. Then, $$\begin{aligned}
\left\|\frac{1}{n}\sum_{i=1}^nX_iX_i^T-\theta\theta^T-I_p\right\|_{\rm op} &\leq& \left\|\frac{1}{n}\sum_{i=1}^nW_iW_i^T-I_p\right\|_{\rm op} + 2\left\|\theta\left(\frac{1}{n}\sum_{i=1}^nz_iW_i^T\right)\right\|_{\rm op} \\
&\leq& \left\|\frac{1}{n}\sum_{i=1}^nW_iW_i^T-I_p\right\|_{\rm op} + 2\|\theta\|\left\|\frac{1}{n}\sum_{i=1}^nz_iW_i\right\|.\end{aligned}$$ By a standard covariance matrix concentration bound [@koltchinskii2014concentration], $$\left\|\frac{1}{n}\sum_{i=1}^nW_iW_i^T-I_p\right\|_{\rm op}\leq C\sqrt{\frac{p}{n}},$$ with probability at least $1-e^{-C'p}$. Since $\left\|\frac{1}{\sqrt{n}}\sum_{i=1}^nz_iW_i\right\|^2\sim \chi_p^2$, a chi-square tail bound [@laurent2000adaptive] gives $$\left\|\frac{1}{n}\sum_{i=1}^nz_iW_i\right\|\leq C\sqrt{\frac{p}{n}},$$ with probability at least $1-e^{-C'p}$. Therefore, $$\left\|\frac{1}{n}\sum_{i=1}^nX_iX_i^T-\theta\theta^T-I_p\right\|_{\rm op} \leq C(1+2\|\theta\|)\sqrt{\frac{p}{n}},\label{eq:trivial-cov-bound}$$ with probability at least $1-e^{-C'p}$. By (\[eq:trivial-cov-bound\]) and Davis-Kahan theorem, we have $$\|{\widehat}{u}_1-\theta/\|\theta\|\|\wedge\|{\widehat}{u}_1+\theta/\|\theta\|\|\leq C_1\frac{1+\|\theta\|}{\|\theta\|^2}\sqrt{\frac{p}{n}}.$$ Weyl’s inequality and (\[eq:trivial-cov-bound\]) give $|{\widehat}{\lambda}_1 {-1}-\|\theta\|^2|\leq C(1+2\|\theta\|)\sqrt{\frac{p}{n}}$, which leads to $$\left|\sqrt{{\widehat}{\lambda}_1 {-1}}-\|\theta\| \right|\leq C\frac{1+2\|\theta\|}{\|\theta\|}\sqrt{\frac{p}{n}}.$$ Then, by triangle inequality, we have $$L({\widehat}{\theta},\theta)\leq \left|\sqrt{{\widehat}{\lambda}_1 {-1}}-\|\theta\| \right|+\|\theta\|\left(\|{\widehat}{u}_1-\theta/\|\theta\|\|\wedge\|{\widehat}{u}_1+\theta/\|\theta\|\|\right).$$ Combining the bounds, we obtain the desired result.
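As a quick numerical illustration of this spectral estimator (not needed for the proof), the following sketch forms the sample second-moment matrix, extracts its top eigenpair, and returns $\sqrt{({\widehat}{\lambda}_1-1)\vee 0}\,{\widehat}{u}_1$; the sample size, dimension, and signal strength below are hypothetical.

```python
import numpy as np

def spectral_estimate(X):
    """Estimate theta from X_i ~ N(z_i * theta, I_p) using the top
    eigenpair of the sample second-moment matrix (1/n) sum_i X_i X_i^T."""
    n, p = X.shape
    M = X.T @ X / n
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    lam1, u1 = eigvals[-1], eigvecs[:, -1]
    return np.sqrt(max(lam1 - 1.0, 0.0)) * u1

rng = np.random.default_rng(0)
n, p = 2000, 50
theta = np.full(p, 0.5)                    # hypothetical signal
z = rng.choice([-1.0, 1.0], size=n)        # unknown labels
X = z[:, None] * theta[None, :] + rng.standard_normal((n, p))

theta_hat = spectral_estimate(X)
# The loss is defined up to a global sign flip of the estimate.
loss = min(np.linalg.norm(theta_hat - theta), np.linalg.norm(theta_hat + theta))
print(loss)
```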
For any $z,\sigma\in\{-1,1\}^n$ such that $\ell(z,\sigma)=0$, we have $$\begin{aligned}
P_{(\theta,\eta,z,\sigma)}^{(n)}\left(\ell({\widehat}{z},{\widehat}{\sigma})>\epsilon/2\right) &\leq& 2\mathbb{E}\frac{\ell({\widehat}{z},{\widehat}{\sigma})}{\epsilon} \\
&\leq& 2\mathbb{E}\frac{\ell({\widehat}{z},z)+\ell({\widehat}{\sigma},\sigma)}{\epsilon} \\
&=& \epsilon^{-1}\exp\left(-(1+o(1))(\|\theta\|^2\wedge\|\eta\|^2)/2\right) \\
&=& n^{\beta-(1+o(1))(r+s-\sqrt{s}\sqrt{r+s})/2},\end{aligned}$$ which is vanishing when $\beta<\frac{1}{2}(r+s-\sqrt{s}\sqrt{r+s})$. For any $z,\sigma\in\{-1,1\}^n$ such that $\ell(z,\sigma)>\epsilon$, we have $$\begin{aligned}
P_{(\theta,\eta,z,\sigma)}^{(n)}\left(\ell({\widehat}{z},{\widehat}{\sigma})\leq\epsilon/2\right) &\leq& P_{(\theta,\eta,z,\sigma)}^{(n)}\left(\ell(z,\sigma)-\ell({\widehat}{z},z)-\ell({\widehat}{\sigma},\sigma)\leq\epsilon/2\right) \\
&\leq& P_{(\theta,\eta,z,\sigma)}^{(n)}\left(\ell({\widehat}{z},z)+\ell({\widehat}{\sigma},\sigma)>\epsilon/2\right),\end{aligned}$$ and then following the same application of Markov’s inequality, we get the desired conclusion that $P_{(\theta,\eta,z,\sigma)}^{(n)}\left(\ell({\widehat}{z},{\widehat}{\sigma})\leq\epsilon/2\right)$ is vanishing, which then implies $\lim_{n\rightarrow\infty}R_n(\psi_{\rm estimation},\theta,\eta)=0$.
For the exact equality test, since when $r+s-\sqrt{s}\sqrt{r+s}>2$, $\ell({\widehat}{z},z)=0$ and $\ell({\widehat}{\sigma},\sigma)=0$ with high probability, both Type-I and Type-II errors are vanishing, and thus the proof is complete.
Proofs of Technical Lemmas {#sec:pf-last}
--------------------------
Conclusion 1 is a standard Gaussian tail estimate from Page 116 of [@ross2007second]. For the second conclusion, we consider $t_1\leq t_2$ without loss of generality. If $t_1$ and $t_2$ are on the same side of $0$, we will have $\left|\int_{t_1}^{t_2}\phi(x)dx\right|\leq |t_1-t_2|\sup_{x\in[t_1,t_2]}\phi(x)\leq |t_1-t_2|\left(\phi(t_1)\vee\phi(t_2)\right)$. Otherwise, $\left|\int_{t_1}^{t_2}\phi(x)dx\right|\leq |t_1-t_2|\phi(0)$. The condition $|t_1-t_2|\leq 1$ implies $\phi(0)\leq 2\left(\phi(t_1)\vee\phi(t_2)\right)$, which leads to the desired conclusion. For Conclusion 3, we first consider $t\leq 2$, and then $\frac{\phi(t)/(1\vee t)}{1-\Phi(t)}\leq \frac{\phi(0)}{1-\Phi(2)}\leq 20$. For $t>2$, we use the first conclusion and then we have $\frac{\phi(t)/(1\vee t)}{1-\Phi(t)}\leq (1-t^{-2})^{-1}\leq 20$. For Conclusion 4, we have $\phi(t_1)/\phi(t_2)\leq e^{\frac{1}{2}|t_1-t_2||t_1+t_2|}\leq 2$, since $|t_1-t_2||t_1+t_2|\rightarrow 0$ when $|t_1|,|t_2|\leq (\log n)^2$ and $|t_1-t_2|\leq n^{-c}$. Finally, we prove Conclusion 5. We have $\frac{1-\Phi(t_1)}{1-\Phi(t_2)}\leq \frac{1-\Phi(t_1)}{1-\Phi(t_1)-2|t_1-t_2|\left(\phi(t_1)\vee\phi(t_2)\right)}$, where the inequality is by Conclusion 2. By Conclusions 3 and 4, $2|t_1-t_2|\left(\phi(t_1)\vee\phi(t_2)\right)/(1-\Phi(t_1))\rightarrow 0$ when $|t_1|,|t_2|\leq (\log n)^2$ and $|t_1-t_2|\leq n^{-c}$. This implies $\frac{1-\Phi(t_1)}{1-\Phi(t_2)}\leq 2$, and the proof is complete.
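The conclusions above are easy to sanity-check numerically. Assuming Conclusion 1 is the usual two-sided estimate $\phi(t)(t^{-1}-t^{-3})\leq 1-\Phi(t)\leq \phi(t)t^{-1}$ for $t>0$, the following sketch verifies it on a few points; it is a check only, not part of the proof.

```python
from scipy.stats import norm

for t in [0.5, 1.0, 2.0, 4.0, 8.0]:
    tail = norm.sf(t)                      # 1 - Phi(t)
    upper = norm.pdf(t) / t                # phi(t)/t
    lower = norm.pdf(t) * (1/t - 1/t**3)   # phi(t)(1/t - 1/t^3); negative for t < 1
    assert lower <= tail <= upper
    print(t, lower, tail, upper)
```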
Let $Z\sim N(0,1)$, and then we can write $$\mathbb{P}\left(U^2\leq 2t\log n\right)=\mathbb{P}\left(-\sqrt{2\log n}(\sqrt{t}+\sqrt{r})\leq Z\leq -\sqrt{2\log n}(\sqrt{r}-\sqrt{t})\right).$$ An application of (\[eq:Gaussian-tail\]) leads to the first result.
For the second result, note that we can write $U=Z_1+\sqrt{2r\log n}$ and $V=Z_2+\sqrt{2s\log n}$ with independent $Z_1,Z_2\sim N(0,1)$. Then, we have $$\begin{aligned}
\nonumber && \mathbb{P}\left(|U|-|V|>t\sqrt{2\log n}\right) \\
\label{eq:union-sharp} &\asymp& \mathbb{P}\left(U>|V|+t\sqrt{2\log n}\right) + \mathbb{P}\left(U<-|V|-t\sqrt{2\log n}\right) \\
\nonumber &=& \mathbb{P}\left(V<U-t\sqrt{2\log n}, V>-U+t\sqrt{2\log n}\right) \\
\nonumber && + \mathbb{P}\left(V<-U-t\sqrt{2\log n}, V>U+t\sqrt{2\log n}\right) \\
\nonumber &=& \mathbb{P}\left(Z_2-Z_1<\sqrt{2\log n}(\sqrt{r}-\sqrt{s}-t), Z_2+Z_1> \sqrt{2\log n}(t-\sqrt{r}-\sqrt{s})\right) \\
\nonumber && + \mathbb{P}\left(Z_2+Z_1<\sqrt{2\log n}(-\sqrt{r}-\sqrt{s}-t), Z_2-Z_1>\sqrt{2\log n}(\sqrt{r}-\sqrt{s}+t)\right) \\
\label{eq:independent-sharp} &=& \mathbb{P}\left(Z_2-Z_1<\sqrt{2\log n}(\sqrt{r}-\sqrt{s}-t)\right)\mathbb{P}\left(Z_2+Z_1> \sqrt{2\log n}(t-\sqrt{r}-\sqrt{s})\right) \\
\nonumber && + \mathbb{P}\left(Z_2+Z_1<\sqrt{2\log n}(-\sqrt{r}-\sqrt{s}-t)\right)\mathbb{P}\left(Z_2-Z_1>\sqrt{2\log n}(t+\sqrt{r}-\sqrt{s})\right),\end{aligned}$$ where we have used $(\mathbb{P}(A)+\mathbb{P}(B))/2\leq \mathbb{P}(A\cup B)\leq \mathbb{P}(A)+\mathbb{P}(B)$ in (\[eq:union-sharp\]) and the fact that $Z_2-Z_1$ and $Z_2+Z_1$ are independent in (\[eq:independent-sharp\]). Use (\[eq:Gaussian-tail\]), and we have $$\mathbb{P}\left(Z_2-Z_1<\sqrt{2\log n}(\sqrt{r}-\sqrt{s}-t)\right)\asymp\begin{cases}
\frac{1}{\sqrt{\log n}}n^{-\frac{1}{2}(t-\sqrt{r}+\sqrt{s})^2}, & t>\sqrt{r}-\sqrt{s}, \\
1, & t\leq \sqrt{r}-\sqrt{s},
\end{cases}$$ and $$\mathbb{P}\left(Z_2+Z_1> \sqrt{2\log n}(t-\sqrt{r}-\sqrt{s})\right)\asymp \begin{cases}
\frac{1}{\sqrt{\log n}}n^{-\frac{1}{2}(t-\sqrt{r}-\sqrt{s})^2}, & t>\sqrt{r}+\sqrt{s}, \\
1, & t\leq \sqrt{r}+\sqrt{s}.
\end{cases}$$ The product of the above two probabilities is of order $$\begin{cases}
\frac{1}{\log n}n^{-\left[(t-\sqrt{r})^2+s\right]}, & t>\sqrt{r}+\sqrt{s}, \\
\frac{1}{\sqrt{\log n}}n^{-\frac{1}{2}(t-\sqrt{r}+\sqrt{s})^2}, & \sqrt{r}-\sqrt{s}<t\leq\sqrt{r}+\sqrt{s}, \\
1, & t\leq \sqrt{r}-\sqrt{s}.
\end{cases}$$ The quantity $\mathbb{P}\left(Z_2+Z_1<\sqrt{2\log n}(-\sqrt{r}-\sqrt{s}-t)\right)\mathbb{P}\left(Z_2-Z_1>\sqrt{2\log n}(t+\sqrt{r}-\sqrt{s})\right)$ can be analyzed in the same way, and it is of a smaller order. Thus, the proof is complete.
We first study $\mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right)$. Consider independent random variables $W_1\sim N(0,r+s-\sqrt{s}\sqrt{r+s})$, $W_2\sim N(0,r+s-\sqrt{s}\sqrt{r+s})$, and $W_3\sim N(0,\sqrt{s}\sqrt{r+s})$. It is easy to check that $$(\sqrt{r}U+\sqrt{s}V,\sqrt{r+s}V)\stackrel{d}{=}(W_1+W_3+\sqrt{s}\sqrt{r+s}\sqrt{2\log n}, W_2+W_3+(r+s)\sqrt{2\log n}).$$ Therefore, $$\begin{aligned}
&& \mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right) \\
&=& \mathbb{P}\left(|W_1+W_3+\sqrt{s}\sqrt{r+s}\sqrt{2\log n}|-|W_2+W_3+(r+s)\sqrt{2\log n}|>t\sqrt{2\log n}\right) \\
&\asymp& \mathbb{P}\left(W_1+W_3+\sqrt{s}\sqrt{r+s}\sqrt{2\log n}>t\sqrt{2\log n}+|W_2+W_3+(r+s)\sqrt{2\log n}|\right) \\
&& + \mathbb{P}\left(-W_1-W_3-\sqrt{s}\sqrt{r+s}\sqrt{2\log n}>t\sqrt{2\log n}+|W_2+W_3+(r+s)\sqrt{2\log n}|\right) \\
&=& \mathbb{P}\left(W_1+W_3+\sqrt{s}\sqrt{r+s}\sqrt{2\log n}>t\sqrt{2\log n}+W_2+W_3+(r+s)\sqrt{2\log n},\right. \\
&& \left.W_1+W_3+\sqrt{s}\sqrt{r+s}\sqrt{2\log n}>t\sqrt{2\log n}-W_2-W_3-(r+s)\sqrt{2\log n}\right) \\
&& + \mathbb{P}\left(-W_1-W_3-\sqrt{s}\sqrt{r+s}\sqrt{2\log n}>t\sqrt{2\log n}+W_2+W_3+(r+s)\sqrt{2\log n},\right. \\
&& \left.-W_1-W_3-\sqrt{s}\sqrt{r+s}\sqrt{2\log n}>t\sqrt{2\log n}-W_2-W_3-(r+s)\sqrt{2\log n}\right) \\
&=& \mathbb{P}\left(W_1-W_2>(t+r+s-\sqrt{s}\sqrt{r+s})\sqrt{2\log n}\right) \\
&& \times \mathbb{P}\left(W_1+W_2+2W_3>(t-(r+s)-\sqrt{s}\sqrt{r+s})\sqrt{2\log n}\right) \\
&& + \mathbb{P}\left(-W_1-W_2-2W_3>(t+r+s+\sqrt{s}\sqrt{r+s})\sqrt{2\log n}\right) \\
&& \times \mathbb{P}\left(-W_1+W_2>(t-(r+s)+\sqrt{s}\sqrt{r+s})\sqrt{2\log n}\right),\end{aligned}$$ where we have used the fact that $W_1-W_2$, $W_1+W_2$, and $W_3$ are independent. For the four probabilities above, we have $$\begin{aligned}
&& \mathbb{P}\left(W_1-W_2>(t+r+s-\sqrt{s}\sqrt{r+s})\sqrt{2\log n}\right) \\
&=& \mathbb{P}\left(N(0,1)>\frac{(t+r+s-\sqrt{s}\sqrt{r+s})\sqrt{2\log n}}{\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}}\right) \\
&\asymp& \begin{cases}
\frac{1}{\sqrt{\log n}}n^{-\frac{(t+r+s-\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}}, & t>-r-s+\sqrt{s}\sqrt{r+s}, \\
1, & t\leq -r-s+\sqrt{s}\sqrt{r+s},
\end{cases}\end{aligned}$$ $$\begin{aligned}
&& \mathbb{P}\left(W_1+W_2+2W_3>(t-(r+s)-\sqrt{s}\sqrt{r+s})\sqrt{2\log n}\right) \\
&=& \mathbb{P}\left(N(0,1)>\frac{(t-(r+s)-\sqrt{s}\sqrt{r+s})\sqrt{2\log n}}{\sqrt{2(r+s+\sqrt{s}\sqrt{r+s})}}\right) \\
&\asymp& \begin{cases}
\frac{1}{\sqrt{\log n}}n^{-\frac{(t-(r+s)-\sqrt{s}\sqrt{r+s})^2}{2(r+s+\sqrt{s}\sqrt{r+s})}}, & t>r+s+\sqrt{s}\sqrt{r+s}, \\
1, & t\leq r+s+\sqrt{s}\sqrt{r+s},
\end{cases}\end{aligned}$$ and $$\begin{aligned}
&& \mathbb{P}\left(-W_1+W_2>(t-(r+s)+\sqrt{s}\sqrt{r+s})\sqrt{2\log n}\right) \\
&=& \mathbb{P}\left(N(0,1)>\frac{(t-(r+s)+\sqrt{s}\sqrt{r+s})\sqrt{2\log n}}{\sqrt{2(r+s-\sqrt{s}\sqrt{r+s})}}\right) \\
&\asymp& \begin{cases}
\frac{1}{\sqrt{\log n}}n^{-\frac{(t-(r+s)+\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}}, & t>r+s-\sqrt{s}\sqrt{r+s}, \\
1, & t\leq r+s-\sqrt{s}\sqrt{r+s}.
\end{cases}\end{aligned}$$ The probability $\mathbb{P}\left(-W_1-W_2-2W_3>(t+r+s+\sqrt{s}\sqrt{r+s})\sqrt{2\log n}\right)$ admits the same analysis, and the product of the last two probabilities is of no larger order than the product of the first two. Putting the pieces together, we get $$\begin{aligned}
&& \mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right) \\
&\asymp& \begin{cases}
\frac{1}{\log n}n^{-\frac{(t+r+s-\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}-\frac{(t-(r+s)-\sqrt{s}\sqrt{r+s})^2}{2(r+s+\sqrt{s}\sqrt{r+s})}}, & t>r+s+\sqrt{s}\sqrt{r+s}, \\
\frac{1}{\sqrt{\log n}}n^{-\frac{(t+r+s-\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}}, & -(r+s)+\sqrt{s}\sqrt{r+s}<t\leq r+s+\sqrt{s}\sqrt{r+s}, \\
1, & t\leq -(r+s)+\sqrt{s}\sqrt{r+s}.
\end{cases}\end{aligned}$$
Next, we analyze $\mathbb{P}\left(\sqrt{r}|U|-(\sqrt{r+s}-\sqrt{s})|V|>t\sqrt{2\log n}\right)$. Note that $$\sqrt{r}|U|-(\sqrt{r+s}-\sqrt{s})|V|=\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|\right)\vee\left(|\sqrt{r}U-\sqrt{s}V|-\sqrt{r+s}|V|\right). \label{eq:great-formula}$$ Then, by the fact that $\mathbb{P}(A)\vee \mathbb{P}(B)\leq \mathbb{P}(A\cup B)\leq 2\left(\mathbb{P}(A)\vee \mathbb{P}(B)\right)$, we have $$\begin{aligned}
&& \mathbb{P}\left(\sqrt{r}|U|-(\sqrt{r+s}-\sqrt{s})|V|>t\sqrt{2\log n}\right) \\
&\asymp& \mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right) \\
&& \vee \mathbb{P}\left(|\sqrt{r}U-\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right).\end{aligned}$$ We have derived the asymptotics of $\mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right)$, and thus it is sufficient to analyze $\mathbb{P}\left(|\sqrt{r}U-\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right)$. Since $|\sqrt{r}U-\sqrt{s}V|-\sqrt{r+s}|V|$ has the same distribution as that of $|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|$ under our assumptions, the desired bound is exactly the same as before. Hence, the proof is complete.
Consider independent random variables $W_1\sim N(0,r+s-\sqrt{s}\sqrt{r+s})$, $W_2\sim N(0,r+s-\sqrt{s}\sqrt{r+s})$, and $W_3\sim N(0,\sqrt{s}\sqrt{r+s})$. It is easy to check that $$(\sqrt{r}U+\sqrt{s}V,\sqrt{r+s}V)\stackrel{d}{=}(W_1+W_3+(r+s)\sqrt{2\log n}, W_2+W_3+\sqrt{s}\sqrt{r+s}\sqrt{2\log n}).$$ Therefore, $$\begin{aligned}
&& \mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right) \\
&=& \mathbb{P}\left(|W_1+W_3+(r+s)\sqrt{2\log n}|-|W_2+W_3+\sqrt{s}\sqrt{r+s}\sqrt{2\log n}|>t\sqrt{2\log n}\right) \\
&\asymp& \mathbb{P}\left(W_1+W_3+(r+s)\sqrt{2\log n}>t\sqrt{2\log n}+|W_2+W_3+\sqrt{s}\sqrt{r+s}\sqrt{2\log n}|\right) \\
&& + \mathbb{P}\left(-W_1-W_3-(r+s)\sqrt{2\log n}>t\sqrt{2\log n}+|W_2+W_3+\sqrt{s}\sqrt{r+s}\sqrt{2\log n}|\right) \\
&=& \mathbb{P}\left(W_1+W_3+(r+s)\sqrt{2\log n}>t\sqrt{2\log n}+W_2+W_3+\sqrt{s}\sqrt{r+s}\sqrt{2\log n},\right. \\
&& \left. W_1+W_3+(r+s)\sqrt{2\log n}>t\sqrt{2\log n}-W_2-W_3-\sqrt{s}\sqrt{r+s}\sqrt{2\log n}\right) \\
&& + \mathbb{P}\left(-W_1-W_3-(r+s)\sqrt{2\log n}>t\sqrt{2\log n}+W_2+W_3+\sqrt{s}\sqrt{r+s}\sqrt{2\log n},\right. \\
&& \left.-W_1-W_3-(r+s)\sqrt{2\log n}>t\sqrt{2\log n}-W_2-W_3-\sqrt{s}\sqrt{r+s}\sqrt{2\log n}\right) \\
&=& \mathbb{P}\left(W_1-W_2>(t-(r+s)+\sqrt{s}\sqrt{r+s})\sqrt{2\log n}\right) \\
&& \times \mathbb{P}\left(W_1+W_2+2W_3>(t-(r+s)-\sqrt{s}\sqrt{r+s})\sqrt{2\log n}\right) \\
&& + \mathbb{P}\left(-W_1-W_2-2W_3>(t+r+s+\sqrt{s}\sqrt{r+s})\sqrt{2\log n}\right) \\
&& \times \mathbb{P}\left(-W_1+W_2>(t+r+s-\sqrt{s}\sqrt{r+s})\sqrt{2\log n}\right),\end{aligned}$$ where we have again used the fact that $W_1-W_2$, $W_1+W_2$, and $W_3$ are independent. Each factor is a Gaussian tail probability and can be bounded exactly as before; the second product is of a smaller order, which gives, for $t\geq r+s-\sqrt{s}\sqrt{r+s}$, $$\begin{aligned}
&& \mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right) \\
&\asymp& \begin{cases}
\frac{1}{\log n}n^{-\frac{(t-(r+s)+\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}-\frac{(t-(r+s)-\sqrt{s}\sqrt{r+s})^2}{2(r+s+\sqrt{s}\sqrt{r+s})}}, & t>r+s+\sqrt{s}\sqrt{r+s}, \\
\frac{1}{\sqrt{\log n}}n^{-\frac{(t-(r+s)+\sqrt{s}\sqrt{r+s})^2}{2(r+s-\sqrt{s}\sqrt{r+s})}}, & r+s-\sqrt{s}\sqrt{r+s}\leq t\leq r+s+\sqrt{s}\sqrt{r+s}.
\end{cases}\end{aligned}$$ Therefore, the desired conclusion is obtained.
Finally, we consider $t<r+s-\sqrt{s}\sqrt{r+s}$. Write $U=Z_1+\sqrt{2r\log n}$ and $V=Z_2+\sqrt{2s\log n}$ with independent $Z_1,Z_2\sim N(0,1)$. Then, by using (\[eq:great-formula\]), we have $$\begin{aligned}
&& \mathbb{P}\left(|\sqrt{r}U+\sqrt{s}V|-\sqrt{r+s}|V|>t\sqrt{2\log n}\right) \\
&\geq& \mathbb{P}\left(\sqrt{r}|U|-(\sqrt{r+s}-\sqrt{s})|V|>t\sqrt{2\log n}\right) \\
&\geq& \mathbb{P}\left(\sqrt{r}(Z_1+\sqrt{2r\log n})-(\sqrt{r+s}-\sqrt{s})|Z_2|-(\sqrt{r+s}-\sqrt{s})\sqrt{2s\log n}>t\sqrt{2\log n}\right) \\
&=& \mathbb{P}\left(\frac{\sqrt{r}Z_1-(\sqrt{r+s}-\sqrt{s})|Z_2|}{\sqrt{2\log n}}>t-(r+s-\sqrt{s}\sqrt{r+s})\right) \rightarrow 1,\end{aligned}$$ since $\frac{\sqrt{r}Z_1-(\sqrt{r+s}-\sqrt{s})|Z_2|}{\sqrt{2\log n}}=o_{\mathbb{P}}(1)$. The proof is complete.
[^1]: The non-Bayesian version of the problem has also been studied in [@ingster1997some; @ingster2012nonparametric].
[^2]: Following [@cai2014optimal], we call $\beta_{\rm IDJ}^*(r)$ the Ingster–Donoho–Jin threshold.
[^3]: We only use $\|\theta\|\geq\|\eta\|$ to motivate the testing problem (\[eq:general-comb-null\])-(\[eq:general-comb-alt\]). All the theorems in the paper hold with general $\theta$ and $\eta$ that admit the calibration (\[eq:general-cali\]).
| |
In 1974, a young Hungarian lecturer in interior design was contemplating an exercise he had given his students, namely to draw and study a cube bisected by its three mid-planes into eight smaller cubes. "How can I make the faces move?" he asked himself. Within a few weeks, Ernő Rubik had devised a more complicated version derived from a trisected cube, which soon became one of the greatest puzzle crazes of all time. It also was the most mathematically sophisticated toy ever produced, requiring and popularizing a branch of mathematics rarely seen in earlier mathematical toys, namely the theory of groups.
Since its development in the early 19th century, the notion of a group has become recognized as one of the archetypes of mathematical structure. Although its beginnings were in the field of algebra, a group is most clearly seen as a set of transformations of some object that preserve some aspect of its structure. For example, there are certain rigid motions of an unmarked cube—the 24 rotations about its various symmetry axes (including the null rotation by zero degrees) that leave it unchanged. One can combine these by doing first one motion, then another. If we denote these motions as A, B, C, ..., we denote the result of applying A, then B, as AB. Note that AB and BA may be quite different. For transformations, the "associative law" (AB)C = A(BC) naturally holds. The motion of doing nothing (that is, leaving the cube as it is), which we denote as I, acts as an "identity"—that is, AI = IA = A. Further, each motion A has an "inverse" motion A', such that AA' = A'A = I. These are the defining properties of a group. Groups are ubiquitous in mathematics—all the number systems have various group structures within them, and groups of transformations are the basic tool for the study of spaces and shapes.
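These axioms are easy to exhibit concretely. The short sketch below (illustrative only; the three-element permutations are made up, not actual cube motions) composes permutations and checks the identity, inverse, and associative laws, along with a pair of motions for which AB and BA differ.

```python
def compose(a, b):
    """Apply permutation a first, then b, so compose(a, b) plays the role of AB."""
    return tuple(b[a[i]] for i in range(len(a)))

def inverse(a):
    """Return the permutation that undoes a."""
    inv = [0] * len(a)
    for i, j in enumerate(a):
        inv[j] = i
    return tuple(inv)

I = (0, 1, 2)   # identity: do nothing
A = (1, 2, 0)   # a 3-cycle
B = (1, 0, 2)   # a transposition

assert compose(A, I) == compose(I, A) == A                      # identity law
assert compose(A, inverse(A)) == I                              # inverse law
assert compose(compose(A, B), A) == compose(A, compose(B, A))   # associativity
assert compose(A, B) != compose(B, A)                           # AB and BA differ
```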
Quite early in the study of Rubik's Cube, people realized that the terminology and tools of group theory are needed to understand it. It was the first mathematical toy to exemplify much of the theory of groups in a concrete way. One could actually hold a group in one's hand! Even experienced mathematicians found that they gained fresh insights into group theory as they struggled to solve the Cube and to make sense of what they were doing.
Numerous books about the Cube appeared from 1979 to 1986. Most of these were simple solution manuals, but a few authors tried (as I myself did in Notes on Rubik's Cube) to set forth the rich mathematical structure of the Cube. These works did not exhaust the subject; new ideas and findings have arisen, and many basic questions that were raised in the initial investigations remain unsolved.
David Joyner has been a Rubik's Cube enthusiast for many years. In Adventures in Group Theory: Rubik's Cube, Merlin's Machine, and Other Mathematical Toys, he brings the subject up to date and shows how group theory is used in related problems. Ostensibly, he starts with high school mathematics, introducing logic, set theory and algebra in the first two chapters, but a reader really needs to have met these ideas already to go over them at such speed.
Other chapters touch on such topics as bell ringing (an excuse to discuss permutations); the basics of group theory; "binary button puzzles" (arrays of buttons with associated lights that are toggled between on and off when you press the buttons); graph theory and the graph of Rubik's Cube (the set of all possible patterns, with a connection between two patterns if one is obtained from the other by a single move of the Cube); the symmetry of the Platonic solids; "illegal" moves of the Cube (which involve disassembling it); "conservation laws" followed by the Cube; the Fifteen Puzzle (a popular permutation puzzle with 15 numbered sliding blocks and one blank square that the blocks can slide into); card-shuffling; and a few of the many other Rubik-like puzzles.
Joyner gallops through a vast amount of material, giving at least two, if not three, semesters of undergraduate mathematics a once-over, with many proofs omitted. In group theory he gets up to wreath products; material on linear algebra, finite fields and computational complexity is also covered. Generally he introduces topics by giving examples first, but sometimes he is keen to get on with the theory and whizzes through rather more theory than necessary for the understanding of the puzzles being studied.
The principal unsolved problem of Cube theory is finding the maximum number of moves to restore a Cube to its initial or solved state. This is called the length of "God's Algorithm," or the diameter of the graph of the Cube. (A "move" may be a single quarter-turn of a face, or it may be what is called a face turn—that is, either a quarter-turn or a half-turn of a single face. Turning the entire Cube or other kinds of moves may also be permitted.) So far as we understand, determining this requires examining something like all the positions of the Cube, and there are 43,252,003,274,489,856,000 (≈ 4.3 × 10^19) such patterns. If we could examine one pattern every microsecond, this would take about 1.4 million years. Since there are many millions of computers and computer speed is still increasing significantly, this computation is now approaching feasibility, and I suspect the answer will be known by 2010 or 2020. At present, we know there are solution methods that take at most 29 face turns or at most 42 quarter-turns, and there is a position that takes 21 face turns or 26 quarter-turns. For simpler puzzles that have been completely solved, there are very few starting positions that require the maximum number of moves to solve. For the Cube, it is quite possible that only 24 or 48 positions are "antipodal" to the solved position, and we will have to do a lot of searching to find these. Surprisingly, the diameter of the graph of the Fifteen Puzzle, which is much simpler than Rubik's Cube, also remains unknown.
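The arithmetic behind the "1.4 million years" estimate is simple to reproduce; here is a small check, assuming one pattern examined per microsecond and a 365.25-day year.

```python
positions = 43_252_003_274_489_856_000   # number of cube patterns
rate = 1_000_000                         # patterns per second (one per microsecond)
seconds = positions / rate
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")              # ~1.37e6, i.e. about 1.4 million years
```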
Adventures in Group Theory grew out of Joyner's lecture notes. This fact, in combination with the great range of material covered, has resulted in occasional disjointedness of the narrative and a number of misprints, missing references and the like, which will strain a nonexpert reader.
Nonetheless, Joyner does convey some of the excitement and adventure in picking up knowledge of group theory by trying to understand Rubik's Cube. Enthusiastic students will learn a lot of mathematics from this book but must have the sense to skip bits which get too hard and return to them later or discuss them with others.—David Singmaster, Mathematics, South Bank University, London
| |
Horace Silver Quintet: Silver's Serenade
Recorded in 1963, Silver's Serenade was the last complete recording by a cast of players first assembled in 1959. It's hard to argue against the leader's choice of personnel, selected as much to execute the composer's tight ensemble passages with accessible directness and clarity as to lay down economical, groove-tight solos.
The title tune, a loping, laid-back two-beat enticer that sits on whole notes like a serenade should, is another immediate attention-grabber by the gifted songwriter. Blue Mitchell's trumpet solo sustains the song's silky seductive allure before Junior Cook's turn on tenor leaves just enough space between phrases to set the stage for the leader's somewhat disjointed collection of funky riffs and catchy quotes, along with the trademark left-hand "bombs," or low-register chord clusters, which the pianist employs like a second bass drum.
Much the same formula is applied to the remaining tunes, though Roy Brooks exchanges his brushes for sticks and an insistent shuffle beat to ensure that "Let's Get to the Nitty Gritty" makes good on its claim. Junior Cook has never sounded more in command of the upper register, confirming reports that he had been practicing with Coltrane at this time: his tenor solo is a model of restraint and construction, building to its climactic top tone just before passing the baton to Silver. An alternate ensemble chorus played before the out chorus recalls some of the more ambitious arranging characteristic of the maestro's late-'50s units.
Bassist Gene Taylor returns to a solid two-beat feel complemented by Brooks' spirited backbeat on snare to serve up unadulterated funk framed between the stop-time choruses and mystic allure of the minor-key "Sweet Sweetie Dee." The veil of mystery remains in place on "The Dragon Lady," where the composer harmonizes the horns in 4ths, establishing a mood evocative of a Far East setting. The set closes with an up-tempo original descriptively entitled "Nineteen Bars." Despite fluent solos by the principals (especially the surprising Junior Cook), Roy Brooks' extended percussion break comes as welcome relief from Silver's repetitious stop-and-go accompaniment patterns (stop-time or pedal tones followed by 4/4 swing) on each of the final three numbers.
Even with the appealing title song and innovative departures from eight- and twelve-bar song structure, Silver's Serenade, though not without grey scale, is monochromatic compared to the composer's fresh and sparkling, multi-hued work on compositions such as "The Outlaw" and the enchanting "Moon Rays" from Further Explorations. (An unfortunate title: "further" sounds like an inessential addendum; "explorations" like an experimental workshop.)
Perhaps with a face-lifted cover and a less tentative title ("Silver's Symphony"?), the brass at Blue Note will still see fit to reissue the earlier date.
Track Listing
Silver's Serenade; Let's Get to the Nitty Gritty; Sweet Sweetie Dee; The Dragon Lady; Nineteen Bars.
Personnel
Blue Mitchell: trumpet; Junior Cook: tenor sax; Horace Silver: piano; Gene Taylor: bass; Roy Brooks: drums. | https://www.allaboutjazz.com/silvers-serenade-horace-silver-blue-note-records-review-by-samuel-chell.php |
During our trip to the Big Island of Hawaii earlier this year, we drove to the summit of the world’s tallest mountain. Nope, I’m not talking about Everest. What most folks don’t know is that Mauna Kea, one of the Big Island’s volcanoes, is actually taller from base to summit than Everest. The catch is that the base of Mauna Kea is almost 20,000 feet (6096 meters) below the surface of the Pacific Ocean. The entire mountain is about 33,500 feet (10,210 meters) tall, while Mount Everest is approximately 29,035 feet (8849 meters).
From our condo near the beach, the drive up to the summit of Mauna Kea was about two hours. The last 45 minutes were the most treacherous. The road was dirt, narrow, and very steep, with frightening drop-offs, only to be driven by 4WD vehicles. It was a sunny day, but the temperature dropped quickly as we got closer to the top, where it hovered around freezing.
At the summit, it truly felt like we were on Mars with all the red dirt and desolation. Because of the lack of light pollution, Mauna Kea is a great place to study the stars. A couple dozen countries, universities, and tech companies have built telescopes up there, making it perhaps the most advanced collection of telescopes on the globe.
Although it has apparently been a few thousand years since Mauna Kea last erupted, geologists say it could erupt again. If that happens, hopefully the scientists operating the telescopes will have time to remove their expensive equipment!
When we visited in mid-April, there were only a few patches of snow on the ground. During the winter, however, blizzards and extreme conditions are common. There are times when the road to the summit is closed due to weather hazards.
At the top, the air is very dry, and because it is above the majority of the atmosphere that protects against the sun’s damaging rays, sunglasses and sunscreen are essential. (I will say that Mr. Handsome accidentally took his sunglasses off before I was able to warn him, and he’s just fine.) Another interesting fact about the Mauna Kea summit is that it boasts some of the purest air in the world, so it is a great place for scientists to collect air samples.
While at the summit, I filmed a video (embedded below). | https://nashvillewife.com/on-top-of-the-world/ |
# Poecilia wingei
Poecilia wingei, known to aquarists as Endlers or Endler's livebearer, in the genus Poecilia, is a small fish native to the Paria Peninsula in Venezuela. They are prolific breeders and often hybridize with guppies. These very colorful hybrids are the easiest to find being offered in pet-shops, typically under the name Endler's guppy.
## History
Poecilia wingei is a very colorful guppy species, similar to the fancy guppy often found in pet shops. The species was first collected from Laguna de Patos in Venezuela by Franklyn F. Bond in 1937, and rediscovered by Dr. John Endler in 1975. The latter were the first examples of this fish to make it to the aquarium trade. More have been collected since then, notably by Armando Pou, to expand the captive breeding stock. The original Laguna de Patos population is threatened by runoff from a municipal garbage dump. Though it is rare in pet shops, this species is seen occasionally in the aquaria of enthusiasts.
Although not yet taken up into the IUCN Red List of endangered species, they are in danger of extinction in the wild, as humans enter their natural habitat, polluting and destroying it.
According to Stan Shubel, the author of Aquarium Care for Fancy Guppies, the Endler guppy is, in fact, not a separate species: it has the same genetic makeup as the common guppy, yet is given its own name, Poecilia wingei, for conservation purposes. However, in 2009 S. Schories, M. K. Meyer and M. Schartl published, on the basis of molecular data, that Poecilia wingei is a taxon separate at the species level from P. reticulata and P. obscura. In 2014 H. Alexander et al. published a paper that refutes the assertions and conclusions made by S. Schories et al. concerning the status of Poecilia wingei as a species.
## Campoma Poecilia wingei
The first population of Poecilia to be given the name Poecilia wingei was discovered in 2005 in the Campoma region of Venezuela by Fred Poeser and Michael Kempkes. This population of P. wingei can be found in Laguna Campoma and in the lagoon's connected streams.
Most P. wingei from the Campoma region found in the hobby today are descended from those originally collected by Phil Voisin (Philderodez). The most popular collecting site in the Campoma region for P. wingei has been the Campoma bridge location. P. wingei phenotypes collected from the Campoma bridge location are identified by a numbering system from 1 through 70.
P. wingei from the Campoma region are also known as the Campoma guppy.
## Cumana Poecilia wingei
Poecilia wingei from the Cumana region were originally known as Endler's guppy. Endler's livebearer, originally discovered in 1975 by John Endler and found in Laguna Patos and in the lagoon's connected streams and canals, was actually a Micropoecilia species that is believed to be extinct. In 2009 the Schories et al. publication broadened the definition of P. wingei to include Endler's livebearer. Most P. wingei from the Cumana region found in the hobby today are descended from those collected by Armando Pou and were line-bred and distributed to hobbyists by Adrian Hernandez (AdrianHD).
P. wingei from the Cumana region are also known as the Cumana guppy.
## El Tigre Poecilia wingei
El Tigre are Poecilia wingei collected from the El Tigre stream in the Campoma region of Venezuela. The El Tigre stream is not connected to Laguna Campoma so the El Tigre belong to their own distinct population. All El Tigre found in the hobby today are descended from those collected by Phil Voisin (Philderodez).
## Staeck Endler (hybrid)
The Staeck guppy was collected by Dr. Wolfgang Staeck in a creek around Laguna de los Patos in Cumana in 2004. Karen Koomans obtained a Staeck guppy male from the Hamburg University and identified it as pure Poecilia reticulata. Karen Koomans crossed this Staeck guppy male with a 'Yellow Top Sword' Endler female. She introduced the new line to the hobby as the 'Hamburg hybrid Endler strain'.
## Japan blue Endler (hybrid)
The original Japan blue wild-type guppy was a Poecilia reticulata collected from Lac du Rorota. Lac du Rorota is a reservoir in French Guiana. Karen Koomans received a single male Japan blue guppy and crossed it with Cumana Endler females to preserve the strain. Karen Koomans introduced this strain to the hobby as the 'Japan blue wild type guppy'.
## Hybrids
Endlers (P. wingei) can be crossed with guppy species (P. reticulata, P. obscura), and the hybrid offspring will be fertile. This is considered to dilute the gene pool and therefore is avoided by fish breeders who wish to maintain pure strains. Avid hobbyists maintain registry records to ensure their Endlers are purebred; undocumented fish sold in pet stores as Endler's livebearers are assumed to have some degree of guppy hybridization. In addition, as P. reticulata has been found in the same bodies of water as P. wingei, natural hybridization may also occur in the wild.
Hybridization with fancy guppy strains (selectively bred P. reticulata) often produces bright and colourful offspring. This has led to some hybrids being selectively bred themselves and becoming so common that they may be sold under any number of names such as peacock, snake, tiger, paradise, fancy, or sword Endler and sometimes as flame tail.
## In the aquarium
Though Poecilia wingei are hardy and undemanding as far as survival goes, proper aquascaping, diet, water parameters, and tank mates, along with many other factors such as male-to-female ratios, will determine the strength of a line's overall genetics and appearance.
## Breeding
The colors of Endler's livebearer males are very intense, especially the black, orange, and metallic green colors. Their natural patterns are highly variable, though many display a double sword tail. Breeders have developed numerous lines displaying specific patterns and colors, such as red chest, black bar, peacock, yellow sword, etc.
They are prolific breeders like their guppy relatives. They give birth to live young approximately every 23 days. Fry "drops" can range in size from one to 30 babies (or possibly more, depending on several variables, including the age and size of the mother). Their first few hours of life will primarily be spent on the bottom of the tank, where they consume their yolk sacs. At this time they are most vulnerable to predators, including their own mothers and other Endler females (males seem less interested in cannibalism).
The fry can be fed powdered fry food, baby brine shrimp, and crushed flake food. They will also nibble on the layer of algae and microorganisms that forms on aquatic plants. Even adult brine shrimp are not beyond their capability, as several fry will gang up on a brine shrimp their own size and tear it apart.
The males will start to show color in approximately three to four weeks, but it can be several months before they develop the full depth and richness of color that characterizes Endlers. The colors of a male Endler will gradually intensify over the first six months of their lives. Tail extensions similar to that seen in a swordtail are not uncommon, but are much shorter. Most often, what appears to be a sword extension can be seen as intense coloring along the edge of an otherwise transparent tail. While giving the impression of a sword it turns out to just be good coloring.
Females will spend their entire lives with rather unexciting coloring. Depending on their environments, females will range from a pale silver to a dull, dark gold, but have the ability to change their coloring somewhat if they are moved from a light environment to a dark one (or vice versa). When full-grown, adult females can be as much as twice the size of males.
The birth process can be stressful for the females, and some will not survive long after large births. The ones that do not do well will often turn grey and will start to "wither away" until they eventually die, due to the stress.
## Etymology
The specific name of Poecilia wingei honours the Danish biologist Øjvind Winge (1886–1964) who worked extensively on the genetics of Poecilia including this species. | https://en.wikipedia.org/wiki/Poecilia_wingei |
Objectives: The objective was to quantify the effect of scribes on three measures of emergency physician (EP) productivity in an adult emergency department (ED).
Methods: For this retrospective study, 243 clinical shifts (of either 10 or 12 hours) worked by 13 EPs during an 18-month period were selected for evaluation. Payroll data sheets were examined to determine whether these shifts were covered, uncovered, or partially covered (for less than 4 hours) by a scribe; partially covered shifts were grouped with uncovered shifts for analysis. Covered shifts were compared to uncovered shifts in a clustered design, by physician. Hierarchical linear models were used to study the association between percentage of patients with which a scribe was used during a shift and EP productivity as measured by patients per hour, relative value units (RVUs) per hour, and turnaround time (TAT) to discharge.
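A minimal sketch of the kind of hierarchical linear model described here, fit with a random intercept per physician using statsmodels; the file and column names are hypothetical, and the study's actual model specification (for example, how PA use was included) may differ.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical shift-level data: one row per shift.
# scribe_pct: percent of patients seen with a scribe (0-100)
# rvu_per_hr: relative value units generated per hour
# physician:  identifier used as the clustering/grouping factor
shifts = pd.read_csv("shifts.csv")

model = smf.mixedlm("rvu_per_hr ~ scribe_pct", data=shifts,
                    groups=shifts["physician"])
fit = model.fit()
print(fit.summary())
# Under this coding, 10 * (coefficient on scribe_pct) corresponds to the
# reported change per 10% increment in scribe usage.
```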
Results: RVUs per hour increased by 0.24 units (95% confidence interval [CI] = 0.10 to 0.38, p = 0.0011) for every 10% increment in scribe usage during a shift. The number of patients per hour increased by 0.08 (95% CI = 0.04 to 0.12, p = 0.0024) for every 10% increment of scribe usage during a shift. TAT was not significantly associated with scribe use. These associations did not lose significance after accounting for physician assistant (PA) use.
Conclusions: In this retrospective study, EP use of a scribe was associated with improved overall productivity as measured by patients treated per hour (Pt/hr) and RVUs generated per hour by EPs, but not as measured by TAT to discharge.
During Tesla’s second financial quarter of 2020, the EV company delivered a total of 90,650 electric vehicles, which is quite a bit more than it did prior to COVID-19.
The second quarter runs from the beginning of April to the end of June, covering the height of COVID-19 lockdowns, so you'd think it would be safe to assume that the company sold and delivered fewer vehicles. However, it appears that Tesla's online ordering and lack of dealerships have worked in its favour.
Out of the 90,650 cars delivered, 10,600 of them were the higher-cost Model S and Model X vehicles, while the remaining 80,050 were Model 3s and Model Ys.
During the first quarter, the automaker delivered 88,400 electric cars. Of that total, 12,200 deliveries were Model S and Model X trims, and the remaining 76,200 were Model 3s and Model Ys.
On the production side of things, the numbers aren't as positive. The company built 6,326 Model S/X vehicles and 75,946 Model 3/Y cars for a total of 82,272. In the previous quarter, it made a total of 102,672 vehicles, with 15,390 of them being Model S and Model X variants and the remaining 87,282 being Model 3s and Model Ys.
Musk seemed aware of this issue throughout the quarter as he struggled to keep the Tesla factory in Alameda County, California, open amid the height of COVID-19 shelter-in-place orders. There was even a point where Musk asked employees to come back to work at the factory despite Alameda County health officials not having given the company the go-ahead to re-open.
That said, forcing the factory to re-open did work in Musk's favour, since shortly afterward the county allowed the factory to resume work legally.
This notice writing is on opening a library by a club of your school.
Example Notice 1
Delhi High School
Notice
Opening of a Library
September 10, 2019
All the students are hereby informed that our school will establish a library for all kinds of book lovers. Our school has ample space and an opportunity to open a standard library.
We have already obtained financial aid from the Ministry of Human Resource Development, but it is not enough to stock the library with all kinds of books.
Now the opening ceremony will take place in our club on July 5th. You are all requested to be there and donate as much as possible to the Library Fund, which will be used later on to purchase new books. We earnestly request your active cooperation.
Secretary,
Sunita Roy
Example Notice 2
Lucknow High School
Notice
Opening of a Library
10 July 2019
This is to inform all members of our club that an open library will start working in our club on the fifteenth of this month. All respected individuals are therefore requested to donate a book to enrich the library. For donors, a portion of the membership fee will be waived.
Additionally, it should be noted that poor students will have free access to the library. The opening ceremony will be held at 10 AM on 15 July 2019. Hence, all members are requested to be present on that day and to take an active interest in the program.
Head boy,
Adarsh Matthew
Example Notice 3
Kanpur High School
Notice
Opening of a Library
24 August 2019
Members of the club are hereby informed that the club has decided to open a library on the 20th of next month and to start running it from that day. Since the club has decided not to invest any initial amount in purchasing books, all members are invited to join the program actively by donating books to the library. Bearing in mind that it is a noble act, sincere cooperation from all members is expected to make the decision a success. For more information, one can contact the undersigned.
Secretary,
Blake Nyguen
Example Notice 4
Chandigarh High School
Notice
Opening of a Library
14 January 2019
All members of our club are hereby informed that our club has undertaken a program to open a library at the club's headquarters. Members are requested to join this program by donating books to the library. Both storybooks and textbooks are accepted.
Members may donate new or old good-quality books lying unused in their homes. We expect at least five books from each of our 100+ members; the undersigned hopes for cooperation from every member. Books can be deposited in the club building on any day of the month between 6:00 pm and 9:00 pm.
Librarian,
Gretchen
Now I’d Love to Know Your Thoughts
There you have it: notice writing on opening a library.
Now I would like to ask you some specific questions:
Did you find these examples helpful?
Did you find anything missing, or would you like to see a different type of example?
Do let me know your thoughts by leaving a quick comment below. And if you have already commented, thanks for your active participation in making this content more useful for others.
PROBLEM TO BE SOLVED: To eliminate the advantage of the first mover and the disadvantage of the second mover in a game wherein two players place black stones and white stones alternately on a board.
SOLUTION: Two players are positioned at playing sides 2, 2' so as to face each other across a board, on which a plurality of regular hexagons 3, 3' are arranged in such a manner that adjoining hexagons share one of their sides. One of the players has a plurality of black stones, each having such a size as to cover a side of the hexagons 3, 3'..., while the other player has a plurality of white stones of the same size as the black stones. The player with the black stones moves first and the player with the white stones moves second: the former first places a black stone on a side of any one of the hexagons 3, 3'...; at the second and third moves, the latter places white stones on any two sides of the hexagons 3, 3'...; and thereafter the former places two black stones and the latter places two white stones alternately on sides 4, 4'... of the hexagons 3, 3'... on which no stone is placed, to advance the game.
COPYRIGHT: (C)1999,JPO | |
The following is a comparison of CleanPowerSF and PG&E’s Peak Day Pricing Program Terms and Conditions.
Peak Day Pricing Default Rates: Peak Day Pricing (PDP) rates provide customers the opportunity to manage their electric costs by reducing load during high cost periods or shifting load from high cost periods to lower cost periods. Decision 10-02-032 ordered that beginning May 1, 2010, eligible large Commercial and Industrial (C&I) customers default to PDP rates. A customer is eligible for default when 1) it has at least twelve (12) billing months of hourly usage data available, and 2) it has measured demands equal to or exceeding 200 kW for three (3) consecutive months during the past 12 months. All eligible customers will be placed on PDP rates unless they opt-out to a TOU rate.
Decision 10-02-032, as modified by Decision 11-11-008, ordered that beginning November 1, 2014, eligible small and medium Commercial and Industrial (C&I) customers (those with demands that are not equal to or greater than 200 kW for three consecutive months) default to PDP rates. A customer is eligible for default when it has at least twelve (12) billing months of hourly usage data available and two years of experience on TOU rates. All eligible customers will be placed on PDP rates unless they opt-out to a TOU rate.
Customers that do not meet default eligibility may voluntarily elect to enroll on PDP rates.
Bundled service customers are eligible for PDP. Direct Access (DA) and Community Choice Aggregation (CCA) service customers are not eligible, including those DA customers on transitional bundled service (TBS). Customers on standby service (Schedule S), or on net-energy metering Schedules NEMFC, NEMBIO, NEMCCSF, or NEMA, are not eligible for PDP. In addition, master-metered customers are not eligible, except for commercial buildings with submetering as stated in PG&E Rule 1 and Rule 18. Non-residential SmartAC customers are eligible. SmartAC customers may request PG&E to activate their A/C Cycling switch or Programmable Controllable Thermostat (PCT) when the customer is participating solely in a PDP event.
Decision 18-08-013 temporarily suspends the default of eligible E-19 customers to PDP beginning November 1, 2018.
Participation in CleanPowerSF’s PDP Pilot Program is voluntary. For the pilot year of 2019, CleanPowerSF will offer the Peak Day Pricing (PDP) program option only for service agreements on E-19 and E-20 rate schedules. Pilot program participants may not be net-energy metered (NEM) or standby accounts, and must have been an enrolled CleanPowerSF customer as of March 31, 2019 (including those rejoining CleanPowerSF who opted out during enrollment). CleanPowerSF reserves the right to limit participation to 50 service agreements for the Pilot, and to modify program eligibility terms, at its sole discretion. Customers participating in the program may be on Green or SuperGreen service.
a. Default Provision: The default of eligible customers to PDP will occur once per year with the start of their billing cycle on or after November 1. Eligible customers will have at least 45-days notice prior to their planned default date when they may opt-out of PDP rates to take service on TOU rates. During the 45-day period, customers will continue to take service on their non-PDP rate. Customers may elect any applicable PDP rate. However, if the customers taking service on this schedule have not made that choice or elected to opt-out to a TOU rate at least five (5) days before their proposed default date, their service will be defaulted to the PDP version of this rate schedule on their default date. Existing customers on a PDP rate eligible demand response program will have the option to enroll.
Bundled service Net Energy Metering (NEM) customers taking service on Schedule NEM, NEMV, NEMVMASH, NEM2, NEM2V, or NEM2VMSH are eligible for default and opt-in PDP. NEM customers on NEMBIO, NEMFC, NEMCCSF, and NEMA are not eligible for PDP. The NEM Annual TrueUp billing date and the first-year PDP Bill Stabilization date in 19.c may be independent 12-month periods. After the first year on PDP, NEM credits can offset PDP charges. All PDP billing for NEM customers will be based on net usage during each 15-minute interval. Net positive usage above the CRL, as well as net exports in excess of the CRL, in each 15-minute interval will be subject to PDP credits and charges as applicable.
Customers must enroll in the PDP Pilot Program by May 31, 2019. If CleanPowerSF elects to offer a peak day pricing program in future years, Participants in CleanPowerSF’s 2019 PDP Pilot Program will be automatically enrolled in the new PDP Program unless the customer cancels participation. CleanPowerSF will notify the customer of any adjustments to Program rules prior to the start of a new Program Season (May 1 through October 31). Participants may cancel participation in the Program at any time.
b. Capacity Reservation Level: Customers may elect a capacity reservation level (CRL) and pay for a fixed level of capacity, specified in kW. While the CRL is applicable year round, customers electing a CRL will be billed on a take-or-pay basis up to the specified CRL under the non-PDP rate of this schedule during the summer period (May 1 through October 31). This means that customers will be billed for summer peak generation demand charges up to the level of their CRL, even in summer months when the actual demand might be less than their CRL. Customers will receive PDP credits on summer usage above the CRL on all summer-period days. All usage during a PDP event protected under the CRL will be billed at the non-PDP rate. All usage above the CRL (as measured in 15-minute intervals), and not protected during a PDP event, will be billed at the PDP rate.
If a customer fails to elect an initial CRL, the customer’s initial CRL will be set at 50% of its most recent six (6) summer months’ average peak-period maximum demand and may go back to the previous year to make a full summer season (if available). If the customer has not established any historic summer billing demand, the CRL will be set at zero (0). The CRL for all customers, including NEM customers, must be greater than or equal to zero (0).
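A simplified sketch of the CRL mechanics described above: the default CRL is set to 50% of the average of recent summer peak-period maximum demands, and during a PDP event each 15-minute interval's energy is split into a protected portion (billed at the non-PDP rate) and an excess portion (billed at the PDP rate). The rate values and interval data below are hypothetical, and the real tariff involves many details omitted here.

```python
def default_crl(summer_peak_demands_kw):
    """Default CRL: 50% of the average of recent summer peak-period
    maximum demands (0 if no history is available)."""
    if not summer_peak_demands_kw:
        return 0.0
    return 0.5 * sum(summer_peak_demands_kw) / len(summer_peak_demands_kw)

def split_event_interval(avg_kw, crl_kw, hours=0.25):
    """Split one 15-minute PDP-event interval into protected and
    PDP-exposed energy (kWh), based on average demand in the interval."""
    protected_kw = min(avg_kw, crl_kw)
    excess_kw = max(avg_kw - crl_kw, 0.0)
    return protected_kw * hours, excess_kw * hours

crl = default_crl([180.0, 210.0, 190.0, 200.0, 220.0, 205.0])  # six summer months
protected_kwh, pdp_kwh = split_event_interval(avg_kw=260.0, crl_kw=crl)
print(crl, protected_kwh, pdp_kwh)
```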
A customer may only elect to change their CRL once every 12 months.
CleanPowerSF will allow participants to specify a Capacity Reservation Level (CRL) and will apply the same program rules as PG&E’s for that CRL; the CRL will only apply during the CleanPowerSF PDP Pilot Program Season (May 1 - October 31). Customers that do not specify a CRL at the time of enrollment will not have a CRL applied.
c. Bill Stabilization: PDP customers will be offered bill stabilization for the initial twelve (12) months unless they opt-out during their initial 45-day period. Bill stabilization ensures that during the initial 12 months under PDP, the customer will not pay more than it would have had it opted-out to the applicable TOU rate.
If a customer terminates its participation on the PDP rate prior to the initial 12 month period expiring, the customer will receive bill stabilization up to the date when the customer terminates its participation. Bill stabilization benefits will be computed on a cumulative basis, based on the earlier of 1) when a customer terminates its participation on the PDP rate or 2) at the end of the initial 12-month period. Any applicable credits will be applied to the customer’s account on a subsequent regular bill. Bill stabilization is only available one time per customer. If a customer unenrolls or terminates its participation on a PDP rate, bill stabilization will not be offered again.
Accounts participating in CleanPowerSF's program will benefit from bill protection from their enrollment in the pilot. At the end of the 2019 Program Season, CleanPowerSF will calculate the discounts and PDP event day peak-period electricity charges for each account enrolled in the Program. If the sum of all credits and peak-period charges for an account is in the customer’s favor, CleanPowerSF will issue a credit to the customer. If the sum of all credits and surcharges is not in the customer’s favor, the customer will not receive any bill adjustments or be required to pay any additional charges.
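The season-end settlement rule for CleanPowerSF's pilot reduces to a simple one-sided computation; here is a sketch with hypothetical numbers, assuming credits and event-day peak-period charges are recorded as positive dollar amounts.

```python
def season_end_adjustment(credits, event_charges):
    """Return the bill-protection adjustment: credit the customer only if
    the net of PDP credits and event-day peak-period charges is in the
    customer's favor; otherwise no adjustment and no extra charge."""
    net = sum(credits) - sum(event_charges)
    return max(net, 0.0)

# Hypothetical season: $120 of credits vs $85 of event peak-period charges.
print(season_end_adjustment(credits=[40.0, 50.0, 30.0],
                            event_charges=[25.0, 35.0, 25.0]))  # 35.0
```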
If a customer cancels its participation in the PDP Pilot Program before the end of the Program Season, CleanPowerSF will not calculate credits or peak-period electricity charges for the entire Program Season for the account. Canceling participation in the PDP Pilot will not affect a customer’s option to re-enroll in any future peak day pricing program.
d. Notification Equipment: Customers, at their expense, must have access to the Internet and an e-mail address or a phone number to receive notification of a PDP event. In addition, all customers can have, at their expense, an alphanumeric pager or cellular telephone that is capable of receiving a text message sent via the Internet, and/or a facsimile machine to receive notification messages.
If a PDP event occurs, customers will be notified using one or more of the abovementioned systems. Receipt of such notice is the responsibility of the participating customer. PG&E will make reasonable efforts to notify customers, however it is the customer’s responsibility to maintain accurate notification contact information, receive such notice and to check the PG&E website to see if an event is activated. PG&E does not guarantee the reliability of the phone, text messaging, e-mail system or Internet site by which the customer receives notification.
Receipt of the PDP event alert is the sole responsibility of the participating customer. CleanPowerSF will make reasonable efforts to notify customers, however it is the customer’s responsibility to maintain accurate contact information, receive such notice, and to check the CleanPowerSF or PG&E website to see if a PDP event is activated. CleanPowerSF does not guarantee the reliability of the text messaging system, e-mail system, or Internet site by which the customer receives notification.
e. Demand Response Operations Website: Customers with demands of 200 kW or greater for three consecutive months can use PG&E’s demand response operations website located at https://inter-act.pge.com for load curtailment event notifications and communications.
The customer’s actual energy usage is available at PG&E’s demand response operations website or on “My Account”. This data may not match billing quality data, and the customer understands and agrees that the data posted to PG&E’s demand response operations website or on “My Account” may be different from the actual bill.
f. Program Operations: A maximum of fifteen (15) PDP events and a minimum of nine (9) PDP events may be called in any calendar year. PG&E will notify customers by 2:00 p.m. on a day-ahead basis when a PDP event will occur the next day. The PDP program will operate year-round and PDP events may be called for any day of the week. PDP events will be called from 2:00 p.m. to 6:00 p.m.
CleanPowerSF’s Pilot will operate from May 1, 2019 through October 31, 2019, and PDP events may be called for any day of the week. A maximum of 15 PDP events may be called during the Program Season. CleanPowerSF will notify customers by posting a PDP Alert on the CleanPowerSF website by 3:00 p.m. on a day-ahead basis when a PDP event will occur the next day. Email and text notifications will also be sent at or about 3:00 p.m.
NOTE: CleanPowerSF PDP event hours are from 4:00 p.m. to 8:00 p.m.
g. Event Cancellation: PG&E may initiate the cancellation of a PDP event before 4:00 p.m. the day-ahead of a noticed PDP event. If PG&E cancels an event, it will count the cancelled event toward the PDP limits.
CleanPowerSF's program will operate under these same conditions. If PG&E cancels a PDP event day, CleanPowerSF will notify customers by posting on the CleanPowerSF website, and will send an updated notification to customers canceling the PDP event by 5:00 p.m.
h. Event Trigger: PG&E will trigger a PDP event when the day-ahead temperature forecast trigger is reached. The trigger will be the average of the day-ahead maximum temperature forecasts for San Jose, Concord, Red Bluff, Sacramento and Fresno.
Beginning May 1 of each summer season, the PDP events on non-holiday weekdays will be triggered at 98 degrees Fahrenheit (°F), and will be triggered at 105°F on holidays and weekends. If needed, PG&E will adjust the non-holiday weekday trigger up or down over the course of the summer to achieve the range of 9 to 15 PDP events in any calendar year. Such adjustments would be made no more than twice per month and would be posted to the demand response operations website or on PG&E’s PDP website.
PDP events may also be initiated as warranted on a day-ahead basis by 1) extreme system conditions such as special alerts issued by the California Independent System Operator, 2) under conditions of high forecasted California spot market power prices, 3) to meet annual PDP event limits for a calendar year, or 4) for testing/evaluation purposes.
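Putting the temperature trigger into a few lines makes the logic plain; the city list and the 98°F/105°F thresholds come from the text above, while the function shape and sample forecasts are our own (mid-season threshold adjustments are not modeled):

```python
# Day-ahead temperature trigger, as described above. Thresholds are the
# season-opening values; PG&E may adjust the weekday trigger during the summer.

CITIES = ["San Jose", "Concord", "Red Bluff", "Sacramento", "Fresno"]

def pdp_event_triggered(day_ahead_max_f, is_nonholiday_weekday):
    """day_ahead_max_f: forecast daily maximum (deg F) for each city."""
    avg = sum(day_ahead_max_f[c] for c in CITIES) / len(CITIES)
    return avg >= (98.0 if is_nonholiday_weekday else 105.0)

forecasts = {"San Jose": 97, "Concord": 101, "Red Bluff": 106,
             "Sacramento": 103, "Fresno": 104}
print(pdp_event_triggered(forecasts, True))   # average 102.2 -> True
```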
CleanPowerSF will call the same PDP event days as PG&E's PDP program.
i. Program Terms: A customer may opt-out anytime during their initial 12 months on a PDP rate. After the initial 12 months, customer’s participation will be in accordance with Electric Rule 12.
Customers may opt-out of a PDP rate at anytime to enroll in another demand response program beginning May 1, 2011.
Customers may opt-out of participation in CleanPowerSF's PDP Pilot at any time, however they must remain in the Program until the end of the Program Season to earn a bill credit. Customers who opt-out are not required to enroll in another demand response program.
j. Interaction with Other PG&E Demand Response Programs: Pursuant to D.18-11-029, customers on a PDP rate may no longer participate in another demand response program offered by PG&E or a third-party demand response provider as of October 26, 2018. If dual enrolled in BIP and PDP prior to October 26, 2018 then participation will be capped at the customer’s subscribed megawatt level as of November 29, 2018. New dual enrollment in BIP and PDP as of October 26, 2018 is no longer available. If a NEM customer is on PDP, the customer cannot participate in a third party Demand Response program unless it ceases to be a PDP customer. If a third party signs a NEM customer up under Rule 24 at the CAISO, the customer is automatically removed from PDP.
CleanPowerSF's program will operate under these same conditions.
| https://www.cleanpowersf.org/peak-day-pricing-program-details |
Attained elite rank with one of your units.
Built a Tur, Gulyay-Gorod, Kaiser or Samson.
Played your first match in multiplayer or skirmish.
Won a match by destroying all enemy HQs.
Won a match by earning enough Victory Points.
Acquired 100000 Iron in total.
Acquired 100000 Oil in total.
Defeated 100 units with a Flamethrower squad.
Harvested 500 wrecks.
Picked up 1000 pieces of equipment.
Reinforced 5000 soldiers in total.
Retreated with 1000 units in total.
Revived a hero.
Revived a hero 3 times in the same mission.
Destroyed 50 enemy bunkers in total.
Built 1000 mechs.
Played 500 Matches or Missions in total.
Conquered 1000 resource points.
Constructed 100 structures.
Destroyed 1000 enemy mechs.
Killed 100 enemy heroes.
Collected 1000 resource stockpiles.
Won 100 Skirmish Matches.
Won a match with a Victory Point outcome of 500:0.
Won a match without ever retreating or reinforcing.
Won a match in the Drop Zone gamemode.
Built your first airship.
Attained rank IV with any hero.
Won a challenge map.
Finished all Campaign Missions.
Finished all Polania Missions.
Finished all Rusviet Missions.
Finished all Saxony Missions.
Finished all Rusviet Revolution Missions.
Finished all Usonia Missions. | https://www.exophase.com/game/iron-harvest-complete-edition-xbox/achievements/ |
Even if you cannot attend the 51st annual California International Antiquarian Book Fair in Pasadena this weekend (Feb. 9-11, 2018), you can still virtually peruse the booths of some ABAA members through these catalogs of the rare books and ephemera they will be exhibiting in Pasadena. Contact the dealer directly if something catches your eye! Athena Rare Books Booth 218 -- also presenting 50 book... [more]
Blog Posts tagged "pasadena" | https://www.abaa.org/blog/tag/pasadena |
Planning your Financial Future
Most of the time, our sources of information determine so much about the direction we take. When it comes to finances, what you know about money often dictates how you will behave with your money. It is everyone’s desire to one day be financially independent. What we are doing about it is what differs among different people.
Money moves in two ways: what comes in and what goes out – what we earn and what we spend. Someone in financial ruin tends to spend more than they earn. Someone who is not badly off, but not doing well either, spends about as much as they make. A better situation is to spend less than you earn. It is the faster road to financial independence, the point where you no longer have to worry about slipping back to the previous levels.
When you look at this simplified explanation of financial situations, you find that what we do with the amounts we make matters more than how much we make. Our spending, whatever situation we are in, ultimately determines where we will be in the future. Our attitudes and behavior towards money and the material things in life come down to the discipline we have with money and what we know about it. When your information is limited, it is wise to seek the right advice, and so you turn to financial advisors.
Financial independence is not a status achieved overnight. For most people, it will reflect in full when they approach or at their retirement age. It needs you to have a solid plan that you stick to throughout. You, for example, need to keep an eye on all your expenses. You need to decide what you will do about the non-discretionary expenses, since this is where most people make the most mistakes, yet stand a chance to make the biggest difference.
There is also dealing with the changes that come over time. Expenses do not remain the same, and neither do life situations. At the same time, you may develop new needs, such as taking your family on vacation, which also requires a proper plan. Another important change may be your income status. What would you do if you were out of work? How would you support the family? If you got a promotion, how would you allocate the added income?
A good plan of approach includes a thorough analysis of your present savings and assets and all your expenses. You need to see how well what you have supports what you have to spend. You then need to look at how those necessary expenses balance with the discretionary ones. You also need to factor in how your spending needs will change over time as circumstances and prevailing market forces change.
Another area that needs attention is how you react when you fail to stick to the set plan with the required discipline. You may, for example, think you are saving enough, only to review your progress and find you need to save more. You therefore need to know how to factor in future conditions so that you have enough when you need it most.
To ensure your plan works out well, seek the right advice. That calls for hiring the best financial advisor out there. | https://healthcareclock.com/2019/12/18/my-most-valuable-advice-6/ |
---
abstract: 'In this introductory talk we will establish connections between the statistical analysis of galaxy clustering in cosmology and recent work in mainstream spatial statistics. The lecture will review the methods of spatial statistics used by both sets of scholars, having in mind the cross-fertilizing purpose of the meeting series. Special topics will be: description of the galaxy samples, selection effects and biases, correlation functions, nearest neighbor distances, void probability functions, Fourier analysis, and structure statistics.'
---
Statistics of Galaxy Clustering
===============================
Introduction
------------
One of the most important motivations of these series of conferences is to promote vigorous interaction between statisticians and astronomers. The organizers merit our admiration for bringing together such a stellar cast of colleagues from both fields. In this third edition, one of the central subjects is cosmology, and in particular, statistical analysis of the large-scale structure in the universe. There is a reason for that — the rapid increase of the amount and quality of the available observational data on the galaxy distribution (also on clusters of galaxies and quasars) and on the temperature fluctuations of the microwave background radiation.
These are the two fossils of the early universe on which cosmology, a science driven by observations, relies. Here we will focus on one of them — the galaxy distribution. First we briefly review the redshift surveys, how they are built and how to extract statistically analyzable samples from them, considering selection effects and biases. Most of the statistical analyses of the galaxy distribution are based on second-order methods (correlation functions and power spectra). We comment on them, providing the connection between the statistics and estimators used in cosmology and in spatial statistics. Special attention is devoted to the analysis of clustering in Fourier space, with new techniques for estimating the power spectrum, which are becoming increasingly popular in cosmology. We also show the results of applying these second-order methods to recent galaxy redshift surveys.
Fractal analysis has become very popular as a consequence of the scale-invariance of the galaxy distribution at small scales, reflected in the power-law shape of the two-point correlation function. We discuss here some of these methods and the results of their application to the observations, supporting a gradual transition from a small-scale fractal regime to large-scale homogeneity. The concept of lacunarity is illustrated with some detail.
We end by briefly reviewing some of the alternative measures of point statistics and structure functions applied thus far to the galaxy distribution: void probability functions, counts-in-cells, nearest neighbor distances, genus, and Minkowski functionals.
Cosmological datasets
---------------------
Cosmological datasets differ in several respects from those usually studied in spatial statistics. The point sets in cosmology (galaxy and cluster surveys) bear the imprint of the observational methods used to obtain them.
The main difference is the systematically variable intensity (mean density) of cosmological surveys. These surveys are usually magnitude-limited, meaning that all objects, which are brighter than a pre-determined limit, are observed in a selected region of the sky. This limit is mainly determined by the telescope and other instruments used for the program. Apparent magnitude, used to describe the limit, is a logarithmic measure of the observed radiation flux.
It is usually assumed that galaxies at all distances have the same (universal) luminosity distribution function. This assumption has been tested and found to be in satisfactory accordance with observations. As the observed flux from a galaxy is inversely proportional to the square of its distance, at larger distances we can see only the brightest fraction of all galaxies. This leads directly to a mean density of galaxies that depends on their distance $r$ from us.
This behaviour is quantified by a selection function $\phi(r)$, which is usually found by estimating first the luminosity distribution of galaxies (the luminosity function).
One can also select a distance limit, find the minimum luminosity of a galaxy, which can yet be seen at that distance, and ignore all galaxies that are less luminous. Such samples are called volume-limited. They are used for some special studies (typically for counts-in-cells), but the loss of hard-earned information is enormous. The number of galaxies in volume-limited samples is several times smaller than in the parent magnitude-limited samples. This will also increase the shot (discreteness) noise.
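As a sketch of how such a volume-limited cut is made in practice — under the assumed conventions that distances are in Mpc and the standard distance modulus applies, with array names of our own choosing:

```python
import numpy as np

def volume_limited_mask(d_mpc, m_app, m_lim, d_lim_mpc):
    """Select galaxies closer than d_lim whose absolute magnitude is bright
    enough to stay above the survey limit m_lim all the way out to d_lim."""
    M_abs = m_app - 5.0 * np.log10(d_mpc * 1.0e6 / 10.0)      # distance modulus
    M_cut = m_lim - 5.0 * np.log10(d_lim_mpc * 1.0e6 / 10.0)  # faintest usable M
    return (d_mpc <= d_lim_mpc) & (M_abs <= M_cut)
```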
In addition to the radial selection function $\phi(r)$, galaxy samples also are frequently subject to angular selection. This is due to our position in the Galaxy — we are located in a dusty plane of the Galaxy, and the window in which we see the Universe, also is dusty. This dust absorbs part of galaxies’ light, and makes the real brightness limit of a survey dependent on the amount of dust in a particular line-of-sight. This effect has been described by a $\phi(b)\sim(\sin b)^{-1}$ law ($b$ is the galactic latitude); in reality the dust absorption in the Galaxy is rather inhomogeneous. There are good maps of the amount of Galactic dust in the sky, the latest maps have been obtained using the COBE and IRAS satellite data [@dust].
Edge problems, which usually affect estimators in spatial statistics, also are different for cosmological samples. The decrease of the mean density towards the sample borders alleviates these problems. Of course, if we select a volume-limited sample, we also select all these troubles (and larger shot noise). On the other hand, edge effects are made more prominent by the usual observing strategies, when surveys are conducted in well-defined regions in the sky. Thus, edge problems are only partly alleviated; maybe it will pay to taper our samples at the side borders, too?
Some of the cosmological surveys have naturally soft borders. These are the all-sky surveys; the best known is the IRAS infrared survey, dust is almost transparent in infrared light. The corresponding redshift survey is the PSCz survey, which covers about 85% of the sky [@pscz]. A special follow-up survey is in progress to fill in the remaining Galactic Zone-of-Avoidance region, and meanwhile numerical methods have been developed to interpolate the structures seen in the survey into the gap [@vs; @ballinger].
Another peculiarity of galaxy surveys is that we can measure exactly only the direction to the galaxy (its position in the sky), but not its distance. We measure the radial velocity $v_r$ (or redshift $z=v_r/c$, $c$ is the velocity of light) of a galaxy, which is a sum of the Hubble expansion, proportional to the distance $d$, and the dynamical velocity $v_p$ of the galaxy, $v_r=H_0d+v_p$. Thus we are differentiating between redshift space, if the distances simply are determined as $d=v_r/H_0$, and real space. The real space positions of galaxies could be calculated if we exactly knew the peculiar velocities of galaxies; we do not. The velocity distortions can be severe; well-known features of redshift space are fingers-of-God, elongated structures that are caused by a large radial velocity dispersion in massive clusters of galaxies. The velocity distortions expand a cluster in redshift space in the radial direction five to ten times.
For large-scale structures the situation is different, redshift distortions compress them. This is due to the continuing gravitational growth of structures. These differences can best be seen by comparing the results of numerical simulations, where we know also the real-space situation, in redshift space and in real space.
The last specific feature of the cosmology datasets is their size. Up to recent years most of the datasets have been rather small, of the order of $10^3$ objects; exceptions exist, but these are recent. Such a small number of points gives a very sparse coverage of three-dimensional survey volumes, and shot noise has been a severe problem.
This situation is about to change, swinging to the other extreme; the membership of new redshift surveys already is measured in terms of $10^5$ (160,000 for the 2dF survey, quarter of a million planned) and million-galaxy surveys are on their way (the Sloan Survey). More information about these surveys can be found in their Web pages: *http://www.mso.anu.edu.au/2dFGRS/* for the 2dF survey and *http://www.sdss.org/* for the Sloan survey. This huge amount of data will force us to change the statistical methods we use. Nevertheless, the deepest surveys (e.g., distant galaxy cluster surveys) will always be sparse, so discovering small signals from shot-noise dominated data will remain a necessary art.
Correlation analysis
--------------------
There are several related quantities that are second-order characteristics used to quantify clustering of the galaxy distribution in real or redshift space. The most popular one in cosmology is the two-point correlation function, $\xi({\mathbf r})$. The infinitesimal interpretation of this quantity reads as follows: $$\label{xi} dP_{12} = \bar{n}^2[ 1 + \xi({\mathbf r})]dV_1 dV_2$$ is the joint probability that in each one of the two infinitesimal volumes $dV_1$ and $dV_2$, with separation vector $\mathbf r$, lies a galaxy. Here $\bar{n}$ is the mean number density (intensity). Assuming that the galaxy distribution is a homogeneous (invariant under translations) and isotropic (invariant under rotations) point process, this probability depends only on $r=|{\mathbf r}|$. In spatial statistics, other functions related to $\xi(r)$ are commonly used: $$\lambda_2(r)=\bar{n}^2[\xi(r)+1], \qquad g(r)= 1+\xi(r), \qquad
\Gamma(r)= \bar{n}[\xi(r)+1],
\label{relxi}$$ where $\lambda_2(r)$ is the second-order intensity function, $g(r)$ is the pair correlation function, also called the radial distribution function or structure function, and $\Gamma(r)$ is the conditional density proposed by Pietronero (1987).
Different estimators of $\xi(r)$ have been proposed so far in the literature, both in cosmology and in spatial statistics. The main differences are in the correction for edge effects. Comparison of their performance can be found in several papers [@pons; @kerm; @stoyan]. There is clear evidence that $\xi(r)$ is well described by a power law at scales $0.1 \leq r \leq 10 \, h^{-1}$ Mpc, where $h$ is the Hubble constant in units of 100 km s$^{-1}\,$Mpc$^{-1}$: $$\xi(r)=\left ( \frac{r}{r_0} \right )^{-\gamma},$$ with $\gamma \simeq 1.8$ and $r_0 \simeq 5.4 \, h^{-1}$ Mpc. This scaling behavior is one of the reasons that have led some astronomers to describe the galaxy distribution as fractal. A power-law fit for $g(r) \propto r^{3-D_2}$ makes it possible to define the correlation dimension $D_2$. The extent of the fractal regime is still a matter of debate in cosmology, but it seems clear that the available data on redshift surveys indicate a gradual transition to homogeneity for scales larger than 15–20 $h^{-1}$ Mpc [@mart99]. Moreover, in a fractal point distribution, the correlation length $r_0$ increases with the radius of the sample because the mean density decreases [@pietro87]. This simple prediction of the fractal interpretation is not supported by the data; instead $r_0$ remains constant for volume-limited samples with increasing depth [@mart01].
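For illustration, a deliberately simple pair-count estimator of $\xi(r)$ — the "natural" DD/RR form, ignoring the edge corrections compared in the papers cited above — could look like this:

```python
import numpy as np
from scipy.spatial import cKDTree

def xi_natural(data, randoms, r_edges):
    """Natural estimator xi = (DD/RR)(N_R/N_D)^2 - 1 in radial shells.
    data, randoms: (N, 3) position arrays; r_edges: shell boundaries."""
    dtree, rtree = cKDTree(data), cKDTree(randoms)
    dd = np.diff(dtree.count_neighbors(dtree, r_edges))  # data-data pairs per shell
    rr = np.diff(rtree.count_neighbors(rtree, r_edges))  # random-random pairs per shell
    return (len(randoms) / len(data)) ** 2 * dd / rr - 1.0
```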
Several versions of the volume integral of the correlation function are also frequently used in the analysis of galaxy clustering. The most extended one in spatial statistics is the so-called Ripley $K$-function $$\label{rip}
K(r)= \int_0^r 4 \pi s^2 (1 + \xi(s)) ds$$ although in cosmology it is more common to use an expression which directly provides the average number of neighbors an arbitrarily chosen galaxy has within a distance $r$, $N(<r)=\bar{n}K(r)$, or the average conditional density $$\Gamma^{\ast}(r) = \frac{3}{r^3}\int_0^r \Gamma(s)s^2ds$$ Again a whole collection of estimators are used to properly evaluate these quantities. Pietronero and coworkers recommend using only minus–estimators to avoid any assumption regarding the homogeneity of the process. In these estimators, averages of the number of neighbors within a given distance are taken considering as centers only those galaxies whose distances to the border are larger than $r$. However, caution has to be exercised with this procedure, because at large scales only a small number of centers remain, and thus the variance of the estimator increases.
Integral quantities are less noisy than the corresponding differential expressions, but obviously they contain less information on the clustering process, due to the fact that values of $K(r_1)$ and $K(r_2)$ for two different scales $r_1$ and $r_2$ are more strongly correlated than values of $\xi(r_1)$ and $\xi(r_2)$. Scaling of $N(<r) \propto r^{D_2}$ provides a smoother estimation of the correlation dimension. If scaling is detected for partition sums defined by the moments of order $q$ of the number of neighbors $$Z(q,r)=\frac{1}{N}\sum_{i=1}^N n_i(r)^{q-1} \propto r^{D_q/(q-1)},$$ the exponents $D_q$ are the so-called generalized or multifractal dimensions [@mart90]. Note that for $q=2$, $Z(2,r)$ is an estimator of $N(<r)$ and therefore $D_q$ for $q=2$ is simply the correlation dimension. If different kinds of cosmic objects are identified as peaks of the continuous matter density field at different thresholds, we can study the correlation dimension associated to each kind of object. The multiscaling approach [@jensen] associated to the multifractal formalism provides a unified framework to analyze this variation. It has been shown [@mart95] that the value of $D_2$ corresponding to rich galaxy clusters (high peaks of the density field) is smaller than the value corresponding to galaxies (within the same scale range) as prescribed in the multiscaling approach.
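A direct sketch of the partition sums $Z(q,r)$; the correlation dimension, for instance, follows from the slope of $\log Z(2,r)$ against $\log r$ over the scaling range:

```python
import numpy as np
from scipy.spatial import cKDTree

def partition_sum(points, r, q):
    """Z(q, r) = (1/N) sum_i n_i(r)^(q-1), where n_i(r) is the number of
    neighbors of galaxy i within distance r (the galaxy itself excluded)."""
    tree = cKDTree(points)
    n_i = np.array([len(tree.query_ball_point(p, r)) - 1 for p in points])
    return np.mean(n_i.astype(float) ** (q - 1))

# D_2 estimate: slope of log Z(2, r) versus log r over the scaling range.
# radii = np.logspace(-0.5, 1.0, 12)
# z = [partition_sum(points, r, 2) for r in radii]
# D2 = np.polyfit(np.log(radii), np.log(z), 1)[0]
```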
Finally we want to consider the role of lacunarity in the description of the galaxy clustering [@msbook]. In Fig. \[lacun\], we show the space distribution of galaxies within one slice of the Las Campanas redshift survey, together with a fractal pattern generated by means of a Rayleigh-Lévy flight [@mandel]. Both have the same mass-radius dimension, defined as the exponent of the power-law that fits the variation of mass within concentric spheres centered at the observer position. $$M(R)=FR^{D_M}.
\label{mrr}$$ The best fitted value for both point distributions is $D_M \simeq 1.6$ as shown in the left bottom panel of Fig. \[lacun\]. The different appearance of both point distributions is a consequence of the different degree of lacunarity. Blumenfeld and Mandelbrot (1997) have proposed to quantify this effect by measuring the variability of the prefactor $F$ in Eq. \[mrr\], $$\Phi = \frac{ E \{(F-\bar{F})^2 \}}{\bar{F}^2}$$ The result of applying this lacunarity measure is shown in the right bottom panel of Fig. \[lacun\]. The visual differences between the point distributions are now well reflected in this curve.
![Comparison of a Las Campanas survey slice (upper left panel) with the Rayleigh-Lévy flight model (upper right panel). The fractal dimensions of both distributions coincide, as shown by the $\ln M$–$\ln R$ curves in the lower left panel, but the lacunarity curves (in the lower right panel) differ considerably. The solid lines describe the galaxy distribution, dotted lines – the model results. From Martínez and Saar (2001). []{data-label="lacun"}](martinez1.eps){width="\textwidth"}
Power spectra
-------------
The current statistical model for the main cosmological fields (density, velocity, gravitational potential) is the Gaussian random field. This field is determined either by its correlation function or by its spectral density, and one of the main goals of spatial statistics in cosmology is to estimate those two functions.
In recent years the power spectrum has attracted more attention than the correlation function. There are at least two reasons for that — the power spectrum is more intuitive physically, separating processes on different scales, and the model predictions are made in terms of power spectra. Statistically, the advantage is that the power spectrum amplitudes for different wavenumbers are statistically orthogonal: $$E\left\{\widetilde{\delta}({{\mathbf k}})\widetilde{\delta}^\star({{\mathbf k}}')\right\}=
(2\pi)^3\delta_D({{\mathbf k}}-{{\mathbf k}}')P({{\mathbf k}}).$$ Here $\widetilde{\delta}({{\mathbf k}})$ is the Fourier amplitude of the overdensity field $\delta=(\rho-\bar{\rho})/\bar{\rho}$ at a wavenumber ${{\mathbf k}}$, $\rho$ is the matter density, a star denotes complex conjugation, $E\{\}$ denotes expectation values over realizations of the random field, and $\delta_D({{\mathbf x}})$ is the three-dimensional Dirac delta function. The power spectrum $P({{\mathbf k}})$ is the Fourier transform of the correlation function $\xi(\mathbf{r})$ of the field.
Estimation of power spectra from observations is a rather difficult task. Up to now the problem has been the scarcity of data; in the near future there will be the opposite problem of managing huge data sets. The development of statistical techniques here has been motivated largely by the analysis of CMB power spectra, where better data were obtained first, and work on galaxy spectra has recently proceeded in parallel.
The first methods developed to estimate the power spectra were direct methods — a suitable statistic was chosen and determined from observations. A good reference is Feldman, Kaiser and Peacock (1994).
The observed samples can be modeled by an inhomogeneous point process (a Gaussian Cox process) of number density $n({{\mathbf x}})$: $$n({{\mathbf x}})=\sum_i\delta_D({{\mathbf x}}-{{\mathbf x}}_i),$$ where $\delta_D({{\mathbf x}})$ is the Dirac delta-function. As galaxy samples frequently have systematic density trends caused by selection effects, we have to write the estimator of the density contrast in a sample as $$D({{\mathbf x}})=\sum_i\frac{\delta_D({{\mathbf x}}-{{\mathbf x}}_i)}{\bar{n}({{\mathbf x}}_i)}-1,$$ where $\bar{n}({{\mathbf x}})\sim\bar{\rho}({{\mathbf x}})$ is the selection function expressed in the number density of objects.
The estimator for a Fourier amplitude (for a finite set of frequencies ${{\mathbf k}}_i$) is $$F({{\mathbf k}}_i)=\sum_j\frac{\psi({{\mathbf x}}_j)}{\bar{n}({{\mathbf x}}_j)}e^{i{{\mathbf k}}_i\cdot{{\mathbf x}}_j} -\widetilde{\psi}({{\mathbf k}}_i),$$ where $\psi({{\mathbf x}})$ is a weight function that can be selected at will. The raw estimator for the spectrum is $$P_R({{\mathbf k}}_i)=F({{\mathbf k}}_i)F^\star({{\mathbf k}}_i),$$ and its expectation value $$E\left\{\langle|F({{\mathbf k}}_i)|^2\rangle\right\}
=\int G({{\mathbf k}}_i-{{\mathbf k}}')P({{\mathbf k}}')\,\frac{d^3k'}{(2\pi)^3}
+\int_V\frac{\psi^2({{\mathbf x}})}{\bar{n}({{\mathbf x}})}\,d^3x,$$ where $G({{\mathbf k}})=|\tilde{\psi}({{\mathbf k}})|^2$ is the window function that also depends on the geometry of the sample volume. Symbolically, we can get the estimate of the power spectra $\widehat{P}$ by inverting the integral equation $$G\otimes \widehat{P}=P_R-N,$$ where $\otimes$ denotes convolution, $P_R$ is the raw estimate of power, and $N$ is the (constant) shot noise term.
In general, we have to deconvolve the noise-corrected raw power to get the estimate of the power spectrum. This introduces correlations in the estimated amplitudes, so these are not statistically orthogonal any more. A sample of a characteristic spatial size $L$ creates a window function of width of $\Delta k\approx 1/L$, correlating estimates of spectra at that wavenumber interval.
As the cosmological spectra are usually assumed to be isotropic, the standard method to estimate the spectrum involves an additional step of averaging the estimates $\widehat{P}({{\mathbf k}})$ over a spherical shell $k\in[k_i,k_{i+1}]$ of thickness $k_{i+1}-k_i> \Delta k=1/L$ in wavenumber space. The minimum-variance requirement gives the FKP [@fkp] weight function: $$\psi({{\mathbf x}})\sim\frac{\bar{n}({{\mathbf x}})}{1+\bar{n}({{\mathbf x}})P(k)},$$ and the variance is $$\frac{\sigma^2_P(k)}{P^2_R(k)}\approx \frac{2}{\mathcal N},$$ where $\mathcal N$ is the number of coherence volumes in the shell. The number of independent volumes is twice as small (the density field is real). The coherence volume is $V_c(k)\approx(\Delta k)^3\approx 1/L^3\approx 1/V$.
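The toy sketch below shows the bare bones of such a direct estimate for points in a periodic box — gridding the overdensity, an FFT, shot-noise subtraction and shell averaging — assuming a uniform mean density, with no FKP weighting or window deconvolution (and note that normalization conventions differ between authors):

```python
import numpy as np

def power_spectrum(points, L, ngrid=64, nbins=20):
    """Toy P(k) for (N, 3) positions in a periodic box of side L."""
    delta, _ = np.histogramdd(points, bins=ngrid, range=[(0, L)] * 3)
    nbar = len(points) / L**3
    delta = delta / delta.mean() - 1.0
    dk = np.fft.fftn(delta) * (L / ngrid) ** 3        # Fourier amplitudes, volume weighted
    pk3d = np.abs(dk) ** 2 / L**3 - 1.0 / nbar        # raw power minus shot noise
    k1d = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=L / ngrid)
    kmag = np.sqrt(sum(np.meshgrid(k1d**2, k1d**2, k1d**2, indexing="ij")))
    edges = np.linspace(kmag[kmag > 0].min(), kmag.max() / 2, nbins + 1)
    shell = np.digitize(kmag.ravel(), edges)          # spherical-shell averaging
    pk = np.array([pk3d.ravel()[shell == i].mean() for i in range(1, nbins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), pk
```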
As the data sets get large, straight application of direct methods (especially the error analysis) becomes difficult. There are different recipes that have been developed with the future data sets in mind. A good review of these methods is given by Tegmark et al. (1998).
The deeper the galaxy sample, the smaller the coherence volume, the larger the spectral resolution and the larger the wavenumber interval where the power spectrum can be estimated. The deepest redshift surveys presently available are the PSCz galaxy redshift survey (15411 redshifts up to about $400 h^{-1}\,$Mpc; see Saunders et al. 2000), the Abell/ACO rich galaxy cluster survey (637 redshifts up to about 300$\,h^{-1}\,$Mpc [@miller]), and the ongoing 2dF galaxy redshift survey (141400 redshifts up to $750 h^{-1}\,$Mpc [@2dfnature]). The estimates of power spectra for the two latter samples have been obtained by the direct method [@batwig; @2dfpower]. Fig. \[2dfpower\] shows the power spectrum for the 2dF survey.
![ Power spectrum of the 2dF redshift survey, divided by a smooth model power spectrum. The spectrum is not deconvolved. Error bars are determined from Gaussian realizations; the dotted lines show the wavenumber region that is free of the influence of the window function and of the radial velocity distortions and nonlinear effects. (Courtesy of W. J. Percival and the 2dF galaxy redshift survey team.) []{data-label="2dfpower"}](martinez2.eps){width=".8\textwidth"}
The covariance matrix of the power spectrum estimates in Fig. \[2dfpower\] was found from simulations of a matching Gaussian Cox process in the sample volume. The main new feature in the spectra, obtained for the new deep samples, is the emergence of details (wiggles) in the power spectrum. While sometime ago the main problem was to estimate the mean behaviour of the spectrum and to find its maximum, now the data enables us to see and study the details of the spectrum. These details have been interpreted as traces of acoustic oscillations in the post-recombination power spectrum. Similar oscillations are predicted for the cosmic microwave background radiation fluctuation spectrum. The CMB wiggles match the theory rather well, but the galaxy wiggles do not, yet.
Thus, the measurement of the power spectrum of the galaxy distribution is passing from the determination of its overall behaviour to the discovery and interpretation of spectral details.
Other clustering measures
-------------------------
To end this review we briefly mention other measures used to describe the galaxy distribution.
### Counts-in-cells and void probability function
The probability that a randomly placed sphere of radius $r$ contains exactly $N$ galaxies is denoted by $P(N,r)$. In particular, for $N=0$, $P(0,r)$ is the so-called void probability function, related with the empty space function or contact distribution function $F(r)$, more frequently used in the field of spatial statistics, by $F(r)=1-P(0,r)$. The moments of the counts-in-cells probabilities can be related both with the multifractal analysis [@borgani93] and with the higher order $n$-point correlation functions [@white79; @stoy95; @statstat].
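A Monte Carlo sketch of estimating $P(N,r)$ — and hence the void probability function $P(0,r)$ — by throwing random spheres into the sample cube (centers are kept away from the edges; a careful estimator would treat the boundary properly):

```python
import numpy as np
from scipy.spatial import cKDTree

def counts_in_cells(points, box, r, ncells=10000, seed=0):
    """Return P(N, r) for N = 0, 1, ...; P(0, r) is the VPF. Assumes the
    sample fills a cube of side `box`, with box > 2 r."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(points)
    centers = rng.uniform(r, box - r, size=(ncells, 3))
    counts = np.array([len(tree.query_ball_point(c, r)) for c in centers])
    return np.bincount(counts) / ncells
```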
### Nearest-neighbor distributions
In spatial statistics, different quantities based on distances to nearest neighbors have been introduced to describe the statistical properties of point processes. $G(r)$ is the distribution function of the distance $r$ of a given point to its nearest neighbor. It is interesting to note that $F(r)$ is just the distribution function of the distance $r$ from an arbitrarily chosen point in ${{{{\rm I\kern-0.16em{}R}}}}^3$ — not being an event of the point process — to a point of the point process (a galaxy in the sample in our case). The quotient $$J(r) = \frac{1-G(r)}{1-F(r)}$$ introduced by van Lieshout and Baddeley (1996) is a powerful tool to analyze point patterns and has discriminative power to compare the results of $N$-body models for structure formation with the real distribution of galaxies [@kerscher99].
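A sketch of estimating $G$, $F$ and $J$ from a point sample in a cube (edge corrections, which matter in practice, are ignored, and the ratio is meaningful only where $F(r)<1$):

```python
import numpy as np
from scipy.spatial import cKDTree

def J_function(points, box, r_grid, ntest=10000, seed=0):
    tree = cKDTree(points)
    d_gg = tree.query(points, k=2)[0][:, 1]   # galaxy -> nearest other galaxy
    rng = np.random.default_rng(seed)
    d_rg = tree.query(rng.uniform(0.0, box, size=(ntest, 3)))[0]  # test point -> galaxy
    G = np.array([(d_gg <= r).mean() for r in r_grid])
    F = np.array([(d_rg <= r).mean() for r in r_grid])
    return (1.0 - G) / (1.0 - F)
```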
### Topology
One very popular tool for analysis of the galaxy distribution is the genus of the isodensity surfaces. To define this quantity, the point process is smoothed to obtain a continuous density field, the intensity function, by means of a kernel estimator for a given bandwidth. Then we consider the fraction of the volume $f$ which encompasses those regions having density exceeding a given threshold $\rho_t$. The boundary of these regions specifies an isodensity surface. The genus $G(S)$ of a surface $S$ is basically the number of holes minus the number of isolated regions plus 1. The genus curve shows the variation of $G(S)$ with $f$ or $\rho_t$ for a given window radius of the kernel function. An analytical expression for this curve is known for Gaussian density fields. It seems that the empirical curve calculated from the galaxy catalogs can be reasonably well fitted to a Gaussian genus curve [@canavezes] for window radii varying within a large range of scales.
### Minkowski functionals
A very elegant generalization of the previous analysis to a larger family of morphological characteristics of the point processes is provided by the Minkowski functionals. These scalar quantities are useful to study the shape and connectivity of a union of convex bodies. They are well known in spatial statistics and have been introduced in cosmology by Mecke, Buchert and Wagner (1994). On a clustered point process, Minkowski functionals are calculated by generalizing the Boolean grain model into the so-called germ-grain model. This coverage process consists in considering the sets $A_r = \cup_{i=1}^N B_r({\mathbf x}_i)$ for the diagnostic parameter $r$, where $\{ {\mathbf x}_i \}_{i=1}^N$ represents the galaxy positions and $B_r({\mathbf x}_i)$ is a ball of radius $r$ centered at point ${\mathbf x}_i$. Minkowski functionals are applied to sets $A_r$ when $r$ varies. In ${{{{\rm I\kern-0.16em{}R}}}}^3$ there are four functionals: the volume $V$, the surface area $A$, the integral mean curvature $H$, and the Euler-Poincaré characteristic $\chi$, related with the genus of the boundary of $A_r$ by $\chi=1-G$. Application of Minkowski functionals to the galaxy cluster distribution can be found in Kerscher et al. (1997). These quantities have been used also as efficient shape finders by Sahni, Sathyaprakash and Shandarin (1998).
This work was supported by the Spanish MCyT project AYA2000-2045 and by the Estonian Science Foundation under grant 2882. Enn Saar is grateful for the invited professor position funded by the Vicerrectorado de Investigación de la Universitat de València.
[xx]{}
Blumenfeld R Mandelbrot B 1997 [*Phys. Rev. E*]{} [ **56**]{}, 112–118.
Borgani S 1993 [*Mon. Not. R. Astr. Soc.*]{} [**260**]{}, 537–549.
Canavezes A, [Springel]{} V, [Oliver]{} S J, [Rowan-Robinson]{} M, [Keeble]{} O, [White]{} S D M, [Saunders]{} W, [Efstathiou]{} G, [Frenk]{} C S, [McMahon]{} R G, [Maddox]{} S, [Sutherland]{} W [Tadros]{} H 1998 [*Mon. Not. R. Astr. Soc.*]{} [**297**]{}, 777–793.
Feldman H A, Kaiser N Peacock J A 1994 [*Astrophys. J.*]{} [ **426**]{}, 23–37.
Jensen M H, Paladin G Vulpiani A 1991 [*Phys. Rev. Lett.*]{} [ **67**]{}, 208–211.
Kerscher M, [Pons-Bordería]{} M, [Schmalzing]{} J, [Trasarti-Battistoni]{} R, [Buchert]{} T, [Martínez]{} V J [Valdarnini]{} R 1999 [ *Astrophys. J.*]{} [**513**]{}, 543–548.
Kerscher M, [Schmalzing]{} J, [Retzlaff]{} J, [Borgani]{} S, [Buchert]{} T, [Gottlober]{} S, [Muller]{} V, [Plionis]{} M [Wagner]{} H 1997 [ *Mon. Not. R. Astr. Soc.*]{} [**284**]{}, 73–84.
Kerscher M, Szapudi I Szalay A S 2000 [*Astrophys. J.*]{} [ **535**]{}, L13–L16.
B B 1982 [*The fractal geometry of nature*]{} W.H. Freeman San Francisco.
Martínez V J 1999 [*Science*]{} [**284**]{}, 445–446.
Martínez V J, Jones B J T, Domínguez-Tenreiro R van de Weygaert R 1990 [*Astrophys. J.*]{} [**357**]{}, 50–61.
Martínez V J, L[ó]{}pez-Martí B Pons-Bordería M J 2001 [*Astrophys. J.*]{} [**554**]{}, L5–L8.
Martínez V J, [Paredes]{} S, [Borgani]{} S [Coles]{} P 1995 [ *Science*]{} [**269**]{}, 1245–1247.
Martínez V J Saar E 2001 [*Statistics of the Galaxy Distribution*]{} Chapman and Hall/CRC Press Boca Raton.
Mecke K R, [Buchert]{} T [Wagner]{} H 1994 [*Astron. Astrophys.*]{} [**288**]{}, 697–704.
Miller C J Batuski D J 2001 [*Astrophys. J.*]{} [ **551**]{}, 635–642.
Miller C J, Nichol R C Batuski D J 2001. astro-ph/0103018, submitted to Astrophys. J.
Peacock J A, Cole S, Norberg P, Baugh C M, Bland-Hawthorn J, Bridges T, Cannon R D, Colless M, Collins C, Couch W, Dalton G, Deeley K, Propris R D, Driver S P, Efstathiou G, Ellis R S, Frenk C S, Glazebrook K, Jackson C, Lahav O, Lewis I, Lumsden S, Maddox S, Percival W J, Peterson B A, Price I, Sutherland W Taylor K 2001 [*Nature*]{} [**410**]{}, 169–173.
Percival W J, Baugh C M, Bland-Hawthorn J, Bridges T, Cannon R, Cole S, Colless M, Collins C, Couch W, Dalton G, Propris R D, Driver S P, Efstathiou G, Ellis R S, Frenk C S, Glazebrook K, Jackson C, Lahav O, Lewis I, Lumsden S, Maddox S, Moody S, Norberg P, Peacock J A, Peterson B A, Sutherland W Taylor K 2001. astro-ph/0105252, submitted to Mon. Not. R. Astr. Soc.
Pietronero L 1987 [*Physica A*]{} [**144**]{}, 257.
Pons-Bordería M J, Martínez V J, Stoyan D, Stoyan H Saar E 1999 [*Astrophys. J.*]{} [**523**]{}, 480–491.
Sahni V, [Sathyaprakash]{} B S [Shandarin]{} S F 1998 [ *Astrophys. J.*]{} [**495**]{}, L5–L8.
Saunders W Ballinger B E 2000 [*in*]{} R. C Kraan-Korteweg, P. A Henning H Andernach, eds, ‘The Hidden Universe, ASP Conference Series’ Astronomical Society of the Pacific, San Francisco. astro-ph/0005606, in press.
Saunders W, Sutherland W J, Maddox S J, Keeble O, Oliver S J, Rowan-Robinson M, McMahon R G, Efstathiou G P, Tadros H, White S D M, Frenk C S, Carrami[ñ]{}ana A Hawkins M R S 2000 [*Mon. Not. R. Astr. Soc.*]{} [**317**]{}, 55–64.
Schlegel D J, Finkbeiner D P Davis M 1998 [*Astrophys. J.*]{} [**500**]{}, 525–553.
Schmoldt I M, Saar V, Saha P, Branchini E, Efstathiou G P, Frenk C S, Keeble O, Maddox S, McMahon R, Oliver S, Rowan-Robinson M, Saunders W, Sutherland W J, Tadros H White S D M 1999 [*Astron. J.*]{} [ **118**]{}, 1146–1160.
Stoyan D, Kendall W Mecke J 1995 [*Stochastic Geometry and its Applications*]{} John Wiley & Sons Chichester.
Stoyan D Stoyan H 2000 [*Scand. J. Statist.*]{} [**27**]{}, 641–656.
Szapudi I, Colombi S Bernardeau F 1999 [*Mon. Not. R. Astr. Soc.*]{} [**310**]{}, 428–444.
Tegmark M, Hamilton A J S, Strauss M A, Vogeley M S Szalay A S 1998 [*Astrophys. J.*]{} [**499**]{}, 555–576.
van Lieshout M N M Baddeley A 1996 [*Stat. Neerlandica*]{} [ **50**]{}, 344.
White S D M 1979 [*Mon. Not. R. Astr. Soc.*]{} [**186**]{}, 145–154.
| |
Why should we examine urine sediment microscopically?
Microscopic examination of urine sediment complements the chemical analysis by reagent strips (click here to see the chemical analysis of urine) in the diagnosis of renal and urinary tract diseases, because with microscopy:
- One can detect those cellular and non-cellular elements of urine that do not give distinct chemical reactions.
- Microscopy can also serve as a confirmatory test in some circumstances (e.g., erythrocytes, leukocytes, bacteria).
- Useful for those samples with abnormal dipstick results.
For example, if the reagent strip does not give a positive reaction for WBCs while the sample contains large amounts of WBCs, you will detect them during the microscopic examination.
Likewise, if the reagent strip gives a positive result for WBCs, you can confirm this under the microscope.
What items can we see in urine sediment under microscope?
The chemist or clinician who is training in urine analysis for the first time will ask what to search for in urine sediment under the microscope and how to evaluate each item.
The items you find in urine under microscope are classified as following:
A) Cellular Items:
Are derived from two sources:
(1) Spontaneously exfoliated epithelial lining cells of the kidney and lower urinary tract, and
(2) cells of hematogenous origin (leukocytes and erythrocytes).
B) Non-Cellular Items:
(1) Crystals.
(2) Casts.
(3) Mucus.
C) Organisms:
(1) Bacteria.
(2) Fungi.
(3) Sperms.
(4) Parasites.
D) Artifacts
(1) Starch
(2) Pollen grains
(3) Oil droplets
(4) Air bubbles
(5) Fibers
(6) Fecal contamination
What is the normal value for each item?
Actually, there are no reference or normal values for the items found in urine sediment, because
(1) There is variation in the concentration of random urine samples within the same day, and there is no normal reference value for each item in normal urine, so it is better to use the first morning sample.
(2) There is no specific standard procedure for concentrating the sediment by centrifugation.
Therefore individual laboratories should establish their own reference values …… that is good, but how can I standardize the microscopic examination in my lab? It's simple … there are some factors; if you standardize them, then you have a standard procedure for the microscopic examination of urine sediment ….. according to the National Committee for Clinical Laboratory Standards (NCCLS), they are:-
- Volume of Urine examined: 12 ml is the recommended amount for examination (you should establish your reference values using this volume, with normal samples or commercial standards) …. if the sample volume is less than 12 ml, a corresponding factor must be applied to all numeric sediment counts, and if the sample volume is more than 12 ml, mix it well and then take 12 ml.
- Time of centrifugation: to ensure equal sedimentation of all specimens …. the recommended time for centrifugation is 5 minutes.
- Speed of centrifugation: the recommended speed is 400 g and according to the WHO the speed is 2000 g
- Volume of sediment examined: you should re-suspend the sediment in constant volume by distilled water.
- Reporting format: Every person at a given laboratory who performs a microscopic examination should use the same terminology, reporting format and reference ranges. Decisions about which formed elements should be reported and quantitated should be made by the individual laboratory based on the patient population and professional skill level of the people performing the testing.
Now I should ask you a question …. about the speed of centrifugation: is the force produced by any two centrifuges running at the same rpm equal??? Of course not, because the force depends on the rotor radius (from the center of the rotor to the sample) in centimeters ….. That is good, but how can you calculate the speed to set on your centrifuge? You will use the following equation:-

RCF = 1.118 × 10⁻⁵ × R × S²

Where

- RCF = relative centrifugal force (expressed in units of times gravity (× g))
- R = rotor radius (from the center of the rotor to the sample) in cm
- S = the speed of the centrifuge in rotations per minute (rpm)
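As a quick sanity check on this formula, a few lines of code (the helper name is our own) can solve it for the rpm setting S:

```python
import math

# RCF = 1.118e-5 x R x S^2, solved for S (rpm).
def rpm_for_rcf(rcf_g, radius_cm):
    return math.sqrt(rcf_g / (1.118e-5 * radius_cm))

print(round(rpm_for_rcf(2000, 7)))  # -> 5055, i.e. roughly 5000 rpm
```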
For example, in the WHO manual, the speed for centrifugation to prepare the urine sediment is 2000 g (as we will see), and suppose the rotor radius of your centrifuge is 7 cm; to know the rpm you should adjust your centrifuge to, you can calculate it from the previous equation (2000 = 1.118 × 10⁻⁵ × 7 × S²), which gives S ≈ 5000 rpm. | http://medicine-science-and-more.com/from-a-to-z-in-complete-urine-analysis-urinalysis-part-two/ |
The future of neurosurgery: a white paper on the recruitment and retention of women in neurosurgery.
The leadership of Women in Neurosurgery (WINS) has been asked by the Board of Directors of the American Association of Neurological Surgeons (AANS) to compose a white paper on the recruitment and retention of female neurosurgical residents and practitioners. Neurosurgery must attract the best and the brightest. Women now constitute a larger percentage of medical school classes than men, representing approximately 60% of each graduating medical school class. Neurosurgery is facing a potential crisis in the US workforce pipeline, with the number of neurosurgeons in the US (per capita) decreasing. The number of women entering neurosurgery training programs and the number of board-certified female neurosurgeons is not increasing. Personal anecdotes demonstrating gender inequity abound among female neurosurgeons at every level of training and career development. Gender inequity exists in neurosurgery training programs, in the neurosurgery workplace, and within organized neurosurgery. The consistently low numbers of women in neurosurgery training programs and in the workplace results in a dearth of female role models for the mentoring of residents and junior faculty/practitioners. This lack of guidance contributes to perpetuation of barriers to women considering careers in neurosurgery, and to the lack of professional advancement experienced by women already in the field. There is ample evidence that mentors and role models play a critical role in the training and retention of women faculty within academic medicine. The absence of a critical mass of female neurosurgeons in academic medicine may serve as a deterrent to female medical students deciding whether or not to pursue careers in neurosurgery. There is limited exposure to neurosurgery during medical school. Medical students have concerns regarding gender inequities (acceptance into residency, salaries, promotion, and achieving leadership positions). Gender inequity in academic medicine is not unique to neurosurgery; nonetheless, promotion to full professor, to neurosurgery department chair, or to a national leadership position is exceedingly rare within neurosurgery. Bright, competent, committed female neurosurgeons exist in the workforce, yet they are not being promoted in numbers comparable to their male counterparts. No female neurosurgeon has ever been president of the AANS, Congress of Neurological Surgeons, or Society of Neurological Surgeons (SNS), or chair of the American Board of Neurological Surgery (ABNS). No female neurosurgeon has even been on the ABNS or the Neurological Surgery Residency Review Committee and, until this year, no more than 2 women have simultaneously been members of the SNS. Gender inequity serves as a barrier to the advancement of women within both academic and community-based neurosurgery. To overcome the issues identified above, the authors recommend that the AANS join WINS in implementing a strategic plan, as follows: 1) Characterize the barriers. 2) Identify and eliminate discriminatory practices in the recruitment of medical students, in the training of residents, and in the hiring and advancement of neurosurgeons. 3) Promote women into leadership positions within organized neurosurgery. 4) Foster the development of female neurosurgeon role models by the training and promotion of competent, enthusiastic, female trainees and surgeons.
| |
London to Sydney Flight Time, Distance, Route Map
Flight time from London, United Kingdom to Sydney, Australia is 21 hours 8 minutes under average conditions. Our flight time calculator assumes an average flight speed for a commercial airliner of 500 mph, which is equivalent to 805 km/hr or 434 knots. Actual flight times may vary depending on aircraft type, cruise speed, routing, weather conditions, passenger load, and other factors.
What is the Flight Distance Between London and Sydney?
The flight distance from London (United Kingdom) to Sydney (Australia) is 10559 miles. This is equivalent to 16993 kilometers or 9170 nautical miles. The calculated distance (air line) is the straight-line distance or direct flight distance between the cities, computed from their latitudes and longitudes. This distance may differ considerably from the actual travel distance. The nearest airport to London is London City Airport (LCY), and the nearest airport to Sydney is Kingsford Smith Airport (SYD).
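For readers curious how such an "air line" figure is computed, here is a haversine sketch using the GPS coordinates listed further down the page (Earth's mean radius taken as 6371 km; small differences from the quoted figures come from rounding and the choice of radius):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points given in degrees."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2.0 * 6371.0 * asin(sqrt(a))

# London (51.5074 N, 0.1278 W) to Sydney (33.8675 S, 151.2070 E):
print(haversine_km(51.5074, -0.1278, -33.8675, 151.2070))  # ~ 17000 km
```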
London - Sydney Timezones & Time Difference
Current local time in London is 2021-09-20, 15:40:46 BST
Current local time in Sydney is 2021-09-21, 00:40:46 AEST.
Time difference between London (United Kingdom) and Sydney (Australia) is 9 Hours.
Sydney time is 9 Hours ahead of London.
London to Sydney Flight Route Map
[Route map: flight path from London, United Kingdom to Sydney, Australia]
London GPS Coordinates: Latitude: N 51° 30' 26.5'' Longitude: W 0° 7' 39.9''
Sydney GPS Coordinates: Latitude: S 33° 52' 2.9'' Longitude: E 151° 12' 25.1''
| https://airportdistancecalculator.com/london-united-kingdom-to-sydney-australia-flight-time.html |
Q:
Sign of the totally anti-symmetric Levi-Civita tensor $\varepsilon^{\mu_1 \ldots}$ when raising indices
I am confused with the sign we get when we want to raise or lower all indices of the totally anti-symmetric tensor of any rank. Take the metric to be mostly plus ($-+\ldots+$). Then is it
$$\varepsilon^{ijk}=\varepsilon_{ijk}$$
or
$$\varepsilon^{ijk}=-\varepsilon_{ijk}?$$
so I am confused as to which one is true.
And if we consider higher rank does something change? For example
$$\varepsilon^{ijkl}=\varepsilon_{ijkl}$$
or
$$\varepsilon^{ijkl}=-\varepsilon_{ijkl}?$$
A:
In flat spacetime, the isomorphism between contra- and covariant components is furnished by the Minkowski metric $\eta_{\mu\nu}$. The Minkowski metric in $n$ spacetime dimensions is just $\operatorname{diag}(-1,1,\dots,1)$. Thus for all dimensions, $$\operatorname{det}\eta=-1$$
For any $n\times n$ matrix $A_{\mu\nu}$, there is a well-known theorem
$$\epsilon_{\mu_1\cdots\mu_n}\operatorname{det}A=\sum_{i}\sum_{\nu_i}\epsilon_{\nu_1\cdots\nu_n}A_{\nu_1\mu_1}\cdots A_{\nu_n\mu_n}$$
where we have made no distinction between covariance and contravariance. But suppose $A=\eta$. Then all the multiplications by $\eta$ will raise the indices on $\epsilon$. And since $\operatorname{det}\eta=-1$ in any spacetime dimension, we get
$$\epsilon^{\cdots}=-\epsilon_{\cdots}$$
where the dots represent any number of indices.
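To make the sign flip concrete, take $n=4$ with the mostly-plus signature: raising the indices of the single independent component only picks up diagonal factors of $\eta$,
$$\epsilon^{0123}=\eta^{00}\eta^{11}\eta^{22}\eta^{33}\,\epsilon_{0123}=(-1)(+1)(+1)(+1)\,\epsilon_{0123}=-\epsilon_{0123},$$
and every other nonvanishing component is related to this one by a permutation, so the same flip holds for all of them.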
A:
Sean Carroll's Spacetime and Geometry has a thorough discussion of this, and, even better, this discussion is in the lecture notes that turned into the book (see Chapter 2: Manifolds).
In full generality (at least for any right-handed coordinate system), we start off with the symbol $\tilde{\epsilon}_{\mu_1\cdots\mu_n} \equiv [\mu_1\,\cdots\,\mu_n]$, which is $0$ or $\pm1$ depending on the sign of $\mu_1\,\cdots\,\mu_n$ as a permutation of $0\,\cdots\,(n-1)$. Then the tensor with lower indices obeys
$$ \epsilon_{\mu_1\cdots\mu_n} = \sqrt{\lvert g \rvert} \tilde{\epsilon}_{\mu_1\cdots\mu_n} = \sqrt{\lvert g \rvert}\ [\mu_1\,\cdots\,\mu_n], $$
where $g$ is the determinant of the metric.
Some people define a symbol with upper indices that is the same as that with lower indices, but on the other hand some people include an extra factor of $\operatorname{sgn}(g)$. In fact, this is ambiguous enough that Carroll states one version in the linked notes, and the other version in the final published book.
In any event, the tensor with upper indices is unambiguously
$$ \epsilon^{\mu_1\cdots\mu_n} = \frac{1}{g} \epsilon_{\mu_1\cdots\mu_n} = \frac{\operatorname{sgn}(g)}{\sqrt{\lvert g \rvert}} [\mu_1\,\cdots\,\mu_n]. $$
In the end, you should be using the full tensors in calculations, at least if everything else in the formula is a true tensor. In flat spacetime, the metric is $\operatorname{diag}(-1,+1,+1,+1)$, so $g = -1$ and we have
\begin{align}
\epsilon_{\mu_1\cdots\mu_n} & = \phantom{-}[\mu_1\,\cdots\,\mu_n], \\
\epsilon^{\mu_1\cdots\mu_n} & = -[\mu_1\,\cdots\,\mu_n].
\end{align}
A:
To not confusing by these things you must know exactly what are the Levi-Civita symbols $\varepsilon$ and what are the Levi-Civita tensor $\epsilon$. Then you also need to know that the Hodge dual has no any confusion when they had defined with Levi-Civita tensor.
$$
\star H^{\mu \nu} = \frac{1}{2}\epsilon^{\mu \nu}{}_{ \rho \sigma} H^{\rho \sigma}\Bigg( = \frac{1}{2}\epsilon^{\mu \nu \rho \sigma} H_{\rho \sigma}=-\frac{1} {2\sqrt{-g}}\varepsilon^{\mu \nu \rho \sigma} H_{\rho \sigma}\Bigg)
$$
and so
$$
\star H_{\mu \nu} = \frac{1}{2}\epsilon_{\mu \nu}{}^{ \rho \sigma} H_{\rho \sigma}\Bigg( = \frac{1}{2}\epsilon_{\mu \nu \rho \sigma} H^{\rho \sigma}=-\sqrt{-g}\varepsilon_{\mu \nu \rho \sigma} H^{\rho \sigma}\Bigg)
$$
| |
The Board of Supervisors voted Tuesday to evaluate a social host ordinance that would hold parents responsible for underage drinking parties in unincorporated areas of the Palos Verdes Peninsula.
Supervisor Janice Hahn proposed the ordinance at the request of residents and officials from the four cities that make up that area — Rancho Palos Verdes, Palos Verdes Estates, Rolling Hills and Rolling Hills Estates.
Rancho Palos Verdes and Palos Verdes Estates already have an ordinance in place, with thousands of dollars in possible fines for adults who give alcohol to underage drinkers. Rolling Hills is still in the process of developing its social host law, and the Rolling Hills Estates City Council was expected to vote Tuesday evening on whether to adopt a version of the law at its meeting.
RELATED: Rancho Palos Verdes targets parents who give alcohol to underage drinkers
Underage drinking has been described as a rampant problem on the Palos Verdes Peninsula. At a Rancho Palos Verdes City Council meeting this year, Peninsula High School Principal Mitzi Cress said parents often provide alcohol at parties in an effort to help their child’s social standing.
“They think that makes their kids very popular by providing the alcohol,” Cress said at the meeting. “I’m aware of parents that have actually done the bartending and mixed the margaritas for the kids.”
According to the 2016 California Healthy Kids Survey, 26 percent of juniors in the Palos Verdes Peninsula Unified School District drank alcohol in the 30 days leading up to the survey, a 3 percent drop from the same survey in 2015.
“I think this is a good idea,” Hahn said. “I think the days are gone when the parents use the excuse, ‘Well, if the kids are going to drink, they may as well drink in my house.’ … It’s dangerous.”
RELATED: Hermosa Beach targets teen drinking with social host ordinance
Former Manhattan Beach police chief Rod Uyeda, who started his law enforcement career in the Pasadena Police Department, said he would never forget three high school senior girls gunned down in Pasadena following an underage drinking party and “countless others (who died) in car wrecks and overdoses.”
Uyeda placed the blame on parents.
“Too many parents and adults want to be cool rather than be a responsible adult,” Uyeda told the board.
Several parents and PTA officials agreed.
“A social host ordinance would help parents say no,” said Heather Matson, president of the Palos Verdes High School Parent Teacher Student Association.
Matson and others said teenage parties can end up hosting hundreds of kids as the word spreads via social media.
Hahn said in her motion that more than 40 California cities and counties have adopted some form of social host laws and that those ordinances have been “highly successful at reducing the number of parties serving alcohol to minors.”
Supervisor Sheila Kuehl said she’d like to see more data on the impact on underage drinking overall.
“As a teenage drunk … it wasn’t hard to find a place to drink,” Kuehl said, even though her parents didn’t allow any alcohol in her home growing up.
Ventura County has social host ordinances that apply countywide, though fines and enforcement vary from city to city. About one-quarter of law enforcement officers surveyed for a 2009 study by Ventura County reported fewer calls for underage drinking parties after the regulations were passed, and some data showed the size of parties had dropped.
A 2014 study published by the Journal of Studies on Alcohol and Drugs that surveyed 50 California cities found that strong social host policies were correlated with less drinking at parties but did not affect overall teen alcohol use.
Manhattan Beach, which has had an ordinance in place for 10 years, fines first-time violators $1,000. Nearby Hermosa Beach, which passed its law in 2016, levies a first-time fine of $2,500, the same amount charged by Rancho Palos Verdes and Palos Verdes Estates.
The board directed county attorneys to work with representatives of the Sheriff’s Department and District Attorney’s Office to research the issue and report back in 30 days.
Cynthia Washicko contributed to this report. | https://www.dailybreeze.com/2017/05/09/la-county-considering-social-host-law-to-combat-underage-drinking-on-palos-verdes-peninsula/ |
History of archaeology

No doubt there have always been people who were interested in the material remains of the past, but archaeology as a discipline has its earliest origins in 15th- and 16th-century Europe, when the Renaissance Humanists looked back upon the glories of Greece and Rome. Popes, cardinals, and noblemen in Italy in the 16th century began to collect antiquities and to sponsor excavations to find more works of ancient art. These collectors were imitated by others in northern Europe who were similarly interested in antique culture. All this activity, however, was still not archaeology in the strict sense. It was more like what would be called art collecting today.

The Mediterranean and the Middle East

Archaeology proper began with an interest in the Greeks and Romans and first developed in 18th-century Italy with the excavations of the Roman cities of Pompeii and Herculaneum. Classical archaeology was established on a more scientific basis by the work of Heinrich Schliemann, who investigated the origins of Greek civilization at Troy and Mycenae in the 1870s, and of Alexander Conze, who was the first person to include photographs in the publication of an excavation report. Schliemann had intended to dig in Crete but did not do so, and it was left to Arthur Evans to begin work at Knossos in 1900 and to discover the Minoan civilization, ancestor of classical Greece. Napoleon's expedition to Egypt brought with it scholars who set to work recording the archaeological remains of the country.
Dating in Archaeology
During this stage, man colonized the New World and Australia. The main Palaeolithic cultures of Europe followed one another in a well-established chronological order. The term was introduced in 1865 by John Lubbock in "Prehistoric Times". The Palaeolithic was originally defined by the use of chipped stone tools, but later an economic criterion was added, and the practice of hunting and gathering is now regarded as a defining characteristic.
It means "not a complete ring".
Dating, in geology, is the process of determining a chronology or calendar of events in the history of Earth, using to a large degree the evidence of organic evolution in the sedimentary rocks accumulated through geologic time in marine and continental environments. To date past events, processes, formations, and fossil organisms, geologists employ a variety of techniques. These include some that establish a relative chronology, in which occurrences can be placed in the correct sequence relative to one another or to some known succession of events.
Radiometric dating and certain other approaches are used to provide absolute chronologies in terms of years before the present. The two approaches are often complementary, as when a sequence of occurrences in one context can be correlated with an absolute chronology elsewhere.

General considerations: distinctions between relative-age and absolute-age measurements

Local relationships on a single outcrop or archaeological site can often be interpreted to deduce the sequence in which the materials were assembled.
This then can be used to deduce the sequence of events and processes that took place or the history of that brief period of time as recorded in the rocks or soil. For example, the presence of recycled bricks at an archaeological site indicates the sequence in which the structures were built. Similarly, in geology, if distinctive granitic pebbles can be found in the sediment beside a similar granitic body, it can be inferred that the granite, after cooling, had been uplifted and eroded and therefore was not injected into the adjacent rock sequence.
Although with clever detective work many complex time sequences or relative ages can be deduced, the ability to show that objects at two separated sites were formed at the same time requires additional information. A coin, vessel, or other common artifact could link two archaeological sites, but the possibility of recycling would have to be considered.
It should be emphasized that linking sites together is essential if the nature of an ancient society is to be understood, as the information at a single location may be relatively insignificant by itself. Similarly, in geologic studies, vast quantities of information from widely spaced outcrops have to be integrated. Some method of correlating rock units must be found.
Dating in Egyptian archaeology

The dating of remains is essential in archaeology, in order to place finds in correct relation to one another, and to understand what was present in the experience of any human being at a given time and place. Inscribed objects sometimes bear an explicit date, or preserve the name of a dated individual. In such cases, dating might seem easy.
However, only a small number of objects are datable by inscriptions, and there are many specific problems with Egyptian chronology, so that even inscribed objects are rarely datable in absolute terms. In the archaeology of part-literate societies, dating may be said to operate on two levels: The contrast might also be drawn between two ‘dimensions’, the historical, and the archaeological, corresponding roughly to the short-term and long-term history envisaged by Fernand Braudel.
Archaeology: Introduction. Archaeology is the study of past cultures through the material (physical) remains people left behind. These can range from …
From the day of the rock's creation this radioactivity begins to deplete. The annual rings vary in size, depending on the weather conditions in each year, but they are similar for all trees of the same area. For earlier periods there are several problems. The Egyptians dated by the year of reign of the king on the throne (for example, 'year 3 of king X'). If we knew the precise length of reign for every Egyptian king, chronology would be no problem.
What is Archaeology
For those researchers working in the field of human history, the chronology of events remains a major element of reflection. Archaeologists have access to various techniques for dating archaeological sites or the objects found on those sites. Crossdating is an important principle in dendrochronology. It consists in comparing and matching two or more series of ring widths measured on different trees. The partial overlap of sets of trees that died at different times allows the construction of average chronological sequences (illustration courtesy Groupe de recherche en dendrochronologie historique).
Dec 01 · Archaeological dating methods set to the tune of 99 Red Balloons by Nena. I animated the video, played the ukulele, sang, and wrote the lyrics (with …).
See also: List of biblical figures identified in extra-biblical sources

Objects with unknown or disproved biblical origins

Biblical archaeology has also been the target of several celebrated forgeries, which have been perpetrated for a variety of reasons. One of the most celebrated is that of the James Ossuary, when information came to light in 2002 regarding the discovery of an ossuary with an inscription that said "Jacob, son of Joseph and brother of Jesus".
In reality the artifact had been discovered twenty years before, after which it had exchanged hands a number of times and the inscription had been added. This was discovered because it did not correspond to the pattern of the epoch from which it dated. Their authenticity is highly controversial and in some cases they have been proved to be fakes. The Ark of the Covenant: Local tradition claims that it was brought to Ethiopia by Menelik I with divine assistance, while a forgery was left in the Temple in Jerusalem.
Objects originating from the “antiques” dealer Oded Golan. As described above, the Israeli police accused Golan and his accomplices of falsifying the James Ossuary in , they were also accused of falsifying a number of other objects: The Jehoash Inscription , which describes repairs to the temple in Jerusalem. It is suspected that the inscription has been falsified onto authentic ancient stones.
Various ostraca mentioning the temple or biblical names. A stone candelabrum with seven arms, decorated with a menorah from the temple.
Chronological dating
From strict academic and institutional discourse to a personal approach, this seminar will introduce students from Latin America to the opportunities for personal and scientific enrichment through international experience. Students from developing countries often encounter challenges while trying to travel abroad, and this seminar will discuss funding options and ways to overcome these challenges.
International institutions, forums, conferences, and universities where Latin American students can engage in fieldwork will also be highlighted. Objectives The goals of this one-hour course are to:
Chronological dating, or simply dating, is the process of attributing to an object or event a date in the past, allowing such object or event to be located in a previously established chronology. This usually requires what is commonly known as a "dating method". Several dating methods exist, depending on different criteria and techniques, and there are some very well known examples of disciplines using them.
Reservations and tribal communities comprise over a quarter of Arizona's lands. Each tribe, each people, has a history, some of which goes back more than 12,000 years in Arizona. This section of T-RAT.COM, despite its title, is only an introduction, and is far from complete; much work in Arizona archaeology will take place in the future, and therefore nothing written today will even come close to being "complete.
So much is yet to be discovered, and much to learn. Many of these sites, across Arizona, are being destroyed by development , government infrastructure construction, and natural erosion. Far more sites have been compromised by legal endeavors than by all illegal activities combined.
Archaeology Wordsmith
In the prehistoric periods, it provides the backbone for any narrative and in historical periods allows us to relate individual events to the larger political context. In addition to the human dimension, chronology allows us to link environmental and archaeological records on a global scale. Oxford has helped pioneer many of the scientific dating methods used today and still has significant active research into Radiocarbon, Luminescence and Tephrochronology.
This research is focussed both on methodological advances and on a whole range of applications of chronological research to archaeological questions, from the replacement of Neanderthals by modern humans to the precise placement of the Egyptian historical chronology. Much of the research is collaborative between different members of the School of Archaeology, UK academics and those from the international research community.
Archaeology is the study of the human past using material remains. These remains can be any objects that people created, modified, or used. Portable remains are usually called artifacts. Artifacts include tools, clothing, and decorations.
Introduction

Archaeology is the study of the ancient and recent human past through material remains. It is a subfield of anthropology, the study of all human culture. From million-year-old fossilized remains of our earliest human ancestors in Africa, to 20th-century buildings in present-day New York City, archaeology analyzes the physical remains of the past in pursuit of a broad and comprehensive understanding of human culture.

How does archaeology help us understand history and culture?
Archaeology offers a unique perspective on human history and culture that has contributed greatly to our understanding of both the ancient and the recent past. Archaeology helps us understand not only where and when people lived on the earth, but also why and how they have lived, examining the changes and causes of changes that have occurred in human cultures over time, seeking patterns and explanations of patterns to explain everything from how and when people first came to inhabit the Americas, to the origins of agriculture and complex societies.
Unlike history, which relies primarily upon written records and documents to interpret great lives and events, archaeology allows us to delve far back into the time before written languages existed and to glimpse the lives of everyday people through analysis of things they made and left behind. Archaeology is the only field of study that covers all time periods and all geographic regions inhabited by humans. It has helped us to understand big topics like ancient Egyptian religion, the origins of agriculture in the Near East, colonial life in Jamestown, Virginia, the lives of enslaved Africans in North America, and early Mediterranean trade routes.
In addition, archaeology today can inform us about the lives of individuals, families and communities that might otherwise remain invisible.

Types of Archaeology

Prehistoric archaeology focuses on past cultures that did not have written language and therefore relies primarily on excavation or data recovery to reveal cultural evidence. Historical archaeology is the study of cultures that existed (and may still exist) during the period of recorded history: several thousands of years in parts of the Old World, but only several hundred years in the Americas.
Within historical archaeology there are related fields of study that include classical archaeology, which generally focuses on ancient Greece and Rome and is often more closely related to the field of art history than to anthropology, and biblical archaeology, which seeks evidence and explanation for events described in the Bible and therefore is focused primarily on the Middle East. Underwater archaeology studies physical remains of human activity that lie beneath the surface of oceans, lakes, rivers, and wetlands.
The Importance of Dating
Welcome back to our dig blog. Well, the dig at East Lomond Hill has now come to a close. This post is to update you about the final week of discoveries: visitors had a tour of the site and had a go at digging for the afternoon.
The International History Project

Archaeology studies past human behavior through the examination of material remains of previous human societies. These remains include the fossils (preserved bones) of humans, food remains, the ruins of buildings, and human artifacts—items such as tools, pottery, and jewelry. From their studies, archaeologists attempt to reconstruct past ways of life. Archaeology is an important field of anthropology, which is the broad study of human culture and biology.
Archaeologists concentrate their studies on past societies and changes in those societies over extremely long periods of time. | http://thaiplace.info/archaeological-methods/ |
In Pozzallo, a town overlooking the shores of the Mediterranean Sea, we offer for sale a spacious apartment, currently used as a B & B.
The apartment is located in the central area and is about 230 square meters; it consists of a spacious living area with kitchen, which allows access to the covered veranda where there is a second kitchen with balcony; 5 double bedrooms each with en suite bathroom, utility room and laundry.
In the basement there is a private garage of about 80 square meters, for the exclusive use of the apartment.
Thanks to the characteristics and size of the property, you could easily divide it into two apartments: the first to be used as a main residence and the second to be rented out for income.
The apartment is in a central area with the main services, such as bars, tobacconists, supermarkets and other shops needed for daily life, all within comfortable walking distance.
The apartment is also located just 600 meters from the beautiful beach of the Pietrenere promenade and about 800 meters from Piazza delle Rimembranze where the nightlife that characterizes Pozzallo 365 days a year is concentrated. | https://www.millevaniimmobiliare.it/en/apartment-used-as-a-bb-for-sale-in-pozzallo.html |
Do you have a question?
Huzzle Cast Square
Disassemble and assemble puzzle:
This puzzle consists of 4 pieces that are combined to form a square. Will you be able to take it apart? What’s more, these pieces have a secret: each time their arrangement is changed when recreating the original shape, the solution changes. Are you up to the challenge of completely solving the riddle of this square? The theme of this puzzle is “Orientation.” Created by Finland’s Timonen.
© 2010, 2018 Hanayama Co., Ltd. All rights reserved.
Specifications
| EAN | 5407005150924 |
| Level | 5 |
| Material | Metal |
| Mission | Disassemble and assemble puzzle |
| Players | 1 |
| Box size | 7.5 x 11.7 x 4.5 cm |
Having trouble solving one of our puzzles? Need a hint?
There are several ways to get help. Note that to access your product solution,
you may need to enter the complete barcode of the original product box. | https://www.eureka-puzzle.eu/product/huzzle-cast-square/
Another day, another major car company announcing a recall: this time it’s Fiat Chrysler, which is calling back around 900,000 SUVs around the world to address problems with anti-lock brakes and how the airbags deploy.
In this case, the company said it’s recalling 284,089 model-year 2003 Jeep Liberty and 2004 Jeep Grand Cherokee SUVs in the United States to replace some components linked to the deployment of airbags. In addition to the U.S. vehicles, the carmaker is recalling about 13,411 vehicles in Canada, 6,277 in Mexico and 48,212 elsewhere in the world to fix the same problem.
Thus far, there have been seven injuries related to the problem that Fiat Chrysler is aware of, but the airbags haven’t caused any crashes or accidents.
We know — the word “airbags” comes up and you automatically think of Takata airbags shooting shrapnel at drivers. But Fiat Chrysler really wants to make sure you know that is not the case with this recall, emphasizing that the airbags involved are not produced by Takata (although the carmaker is involved with the Takata recalls elsewhere in its lineup).
Another set of SUVs is being recalled because water could get into the vehicles’ anti-lock braking and electronic stability control system, Fiat Chrysler says: 275,614 model year 2012-2015 Dodge Journey cross-utility vehicles (CUVs) in the U.S. are being recalled to replace certain parts of their anti-lock brake systems, as well as about 78,148 vehicles in Canada, 36,471 in Mexico and 151,476 in other parts of the world.
Customers with additional questions can call the FCA US Customer Information Center at 1-800-853-1403. | https://consumerist.com/2015/10/30/fiat-chrysler-recalling-900k-suvs-to-fix-issues-with-airbag-deployment-anti-lock-brakes/ |
Q:
Running Battlefield 3 with 2 x 9600gt in SLI
I am currently running:
Quad-core Q6600 overclocked to 3.2GHz
4GB DDR3 RAM
2 x 9600gt 1GB running in an SLI configuration
MSI Platinum motherboard
SSD for storage
Will this rig be expected to be capable of running the upcoming Battlefield 3 with reasonable performance or will I need an upgrade?
In an ideal world I'd upgrade the entire rig, but this is not currently possible. If expected performance is quite poor, what suitable graphics card upgrade would this rig accept?
A:
Battlefield 3 is expected to have similar requirements as Bad Company 2. Here is the official information.
Below I have included the minimum and recommended system requirements. Your 2x9600GT is not quite as powerful as a GTX 460, but it is not that slow either. So if you reduce the settings a little, I think you should do fine. The GTX 460 scores 23332 points in 3DMark 2006, while 2x9600GT scores 19857 points.
Note that 9600GT does not support DirectX 11, so if BF3 makes good use of that, the effects will be a lot nicer. See the Crysis 2 Direct X 11 review for more information on the benefits of DirectX 11.
Minimum system
OS: Windows Vista or Windows 7
Processor: Core 2 Duo @ 2.0GHz
RAM: 2GB
Graphic card: DirectX 10 or 11 compatible Nvidia or AMD ATI card.
Graphics card memory: 512 MB
Hard drive: 15 GB for disc version or 10 GB for digital version
Recommended system
OS: Windows 7 64-bit
Processor: Quad-core Intel or AMD CPU
RAM: 4GB
Graphics card: DirectX 11 Nvidia or AMD ATI card, GeForce GTX 460 or Radeon HD 6850
Graphics card memory: 1 GB
Hard drive: 15 GB for disc version or 10 GB for digital version
| |
When Twilight Breaks is my first read by Sarah Sundin. I love historical fiction, and in that arena the story did not disappoint. I appreciate that history is not muted for the sake of the reader. Too often, “so as not to offend the sensibilities” of “Christian readers,” authors will remove important details or even key parts of the story. Sundin is willing to face the atrocities of the time period head on. A story set in war-time Germany warrants an accurate telling of the history and also of what the characters would really be like, what they’d be thinking and participating in, and how that causes them to interact with the world. Sundin has a nice balance of story, character development, and history. Evelyn is a female American foreign correspondent, and her counterpart in the story, Peter Lang, is an American graduate student working on his PhD in German. Their paths cross, and their views and experiences lend themselves to a tumultuous and interesting relationship amid a complex and horrific time in history.
I appreciate that both Peter and Evelyn seem to have true-to-history perspectives, albeit they are very different. This also made the story a bit challenging to read as it so mirrors how people are viewing and experiencing our current climate, societal issues, and the vast chasm between differing views. I appreciate that Sundin makes an attempt at telling this story in a way that almost parallels today. If we can’t learn from history we are certainly doomed to repeat it.
The characters themselves develop a tumultuous relationship, and I found myself a bit frustrated at first with their lack of open-mindedness to how the other felt and their lack of desire to understand each other. But it was accurate in many senses. Peter and Evelyn’s lived experiences brought two very different conclusions about what was happening in the world around them and about what they viewed as the solution, one believing freedom was the answer, the other order. This dichotomy made for an interesting plot evolution in terms of the pair’s relationship.
I will admit I found it hard to get into the story at first, despite being very intrigued initially by the characters. I am grateful to have completed it despite wondering if I would. There were some unexpected moments in the story, but what I felt was most thrilling was seeing the characters’ evolution as they learned and finally opened up to each other. It is one of the most powerful things we can do, to listen to each other in order to better understand. While Evelyn and Peter didn’t agree, they could understand each other, and ultimately that understanding led to a beautiful relationship. I think we can learn from this underlying message in the story.
There are several other minor characters that provide depth to the story and to the main characters themselves, and I always appreciate when I enjoy the minor characters as much as the protagonist and antagonist. The faith component was present and entwined in a realistic way. I think many can relate to the mindset that the characters have of “we can do it [on our own]” and find that in seasons of our life we only turn to God in crisis. Having Evelyn and Peter learn, in a simple way, how to incorporate God and faith into their daily lives through time and opportunity, not just some LARGE life event, makes the faith and spiritual experience both realistic and encouraging. It felt genuine and true to what I have experienced in my own life: God working in steadfast, faithful ways to grow my heart and produce fruit in my life.
ABOUT THE BOOK
Munich, 1938. Evelyn Brand is an American foreign correspondent as determined to prove her worth in a male-dominated profession as she is to expose the growing tyranny in Nazi Germany. To do so, she must walk a thin line. If she offends the government, she could be expelled from the country–or worse. If she fails to truthfully report on major stories, she’ll never be able to give a voice to the oppressed–and wake up the folks back home.
In another part of the city, American graduate student Peter Lang is working on his PhD in German. Disillusioned with the chaos in the world due to the Great Depression, he is impressed with the prosperity and order of German society. But when the brutality of the regime hits close, he discovers a far better way to use his contacts within the Nazi party–to feed information to the shrewd reporter he can’t get off his mind.
This electric standalone novel from fan-favorite Sarah Sundin puts you right at the intersection of pulse-pounding suspense and heart-stopping romance.
Thanks to Revell for a copy of the story. I shared my own, honest opinions here simply because I love to read and love sharing new stories and authors with all of you. Thanks for following along, let me know if you have any questions or if you snag the book what you liked about it! | http://www.inspirationclothesline.com/whentwilightbreaksreview/ |
Mathematical models are used to simulate, and sometimes control, the behavior of physical and artificial processes such as the weather and very large-scale integration (VLSI) circuits. The increasing need for accuracy has led to the development of highly complex models. However, in the presence of limited computational, accuracy, and storage capabilities, model reduction (system approximation) is often necessary. Approximation of Large-Scale Dynamical Systems provides a comprehensive picture of model reduction, combining system theory with numerical linear algebra and computational considerations. It addresses the issue of model reduction and the resulting trade-offs between accuracy and complexity. Special attention is given to numerical aspects, simulation questions, and practical applications.
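To give a flavor of the SVD-based methods treated in Part III, here is a minimal sketch of balanced truncation for a stable linear time-invariant system (my addition, not from the book; the function name and the reduced order r are illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce dx/dt = Ax + Bu, y = Cx to order r (assumes A stable, system minimal)."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    S = cholesky(P, lower=True)                   # P = S S^T
    R = cholesky(Q, lower=True)                   # Q = R R^T
    U, hsv, Vt = svd(R.T @ S)                     # singular values = Hankel singular values
    Ur, hr, Vr = U[:, :r], hsv[:r], Vt[:r, :].T
    Tl = np.diag(hr ** -0.5) @ Ur.T @ R.T         # left projection
    Tr = S @ Vr @ np.diag(hr ** -0.5)             # right projection (Tl @ Tr = I)
    return Tl @ A @ Tr, Tl @ B, C @ Tr, hsv

# Tiny usage example: a random stable system of order 6 reduced to order 3.
rng = np.random.default_rng(0)
A = -np.eye(6) + 0.1 * rng.standard_normal((6, 6))
B = rng.standard_normal((6, 2))
C = rng.standard_normal((1, 6))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=3)
print(hsv)  # decaying Hankel singular values justify the truncation
```

The discarded Hankel singular values give a computable bound on the approximation error, which is precisely the accuracy-versus-complexity trade-off described above.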
Audience
This book is for anyone interested in model reduction. Graduate students and researchers in the fields of system and control theory, numerical analysis, and the theory of partial differential equations/computational fluid dynamics will find it an excellent reference.
Contents
List of Figures;
Foreword;
Preface;
How to Use this Book;
Part I: Introduction;
Chapter 1: Introduction;
Chapter 2: Motivating Examples;
Part II: Preliminaries;
Chapter 3: Tools from Matrix Theory;
Chapter 4: Linear Dynamical Systems: Part 1;
Chapter 5: Linear Dynamical Systems: Part 2;
Chapter 6: Sylvester and Lyapunov equations;
Part III: SVD-based Approximation Methods;
Chapter 7: Balancing and balanced approximations;
Chapter 8: Hankel-norm Approximation;
Chapter 9: Special topics in SVD-based approximation methods;
Part IV: Krylov-based Approximation Methods;
Chapter 10: Eigenvalue Computations;
Chapter 11: Model Reduction Using Krylov Methods;
Part V: SVD–Krylov Methods and Case Studies;
Chapter 12: SVD–Krylov Methods;
Chapter 13: Case Studies;
Chapter 14: Epilogue;
Chapter 15: Problems;
Bibliography;
Index.
ISBN: 9780898716580
Phoenix lander to touch down on Mars
NASA's latest mission to the Red Planet will focus on its frozen arctic, where conditions suitable for life may have once existed.
Published: Tuesday, May 20, 2008
NASA's Phoenix Mars Lander monitors the atmosphere overhead and reaches out to the soil below in this artist's depiction of the fully deployed spacecraft on the surface of Mars.
NASA/JPL/UA/Lockheed Martin
May 20, 2008: After a nearly 10-month journey, the Phoenix lander will touch down on Mars May 25. Confirmation of the landing could come as early as 7:53 P.M. EST. The descent and landing will be a nail-biter for mission controllers. The lander must slow from a speed of 13,000 miles per hour (20,900 kilometers per hour) to 5 miles per hour (8 kilometers per hour) in 7 minutes. The 904-pound (410 kilograms) craft will use rocket engines for the final 25 seconds of its descent after braking at higher altitude with parachutes. (View an animation of the landing here.) In more recent missions, like the Spirit and Opportunity rovers in 2003, airbags provided cushioning for the final landing.
Launched August 4, 2007, Phoenix is the first in NASA's Scout Program, which aims to conduct smaller, low-cost science missions. The craft's onboard laboratory will study the history of water and habitability potential in the martian arctic's ice-rich soils.
"Our landing area has the largest concentration of ice on Mars outside of the polar caps," said the mission's principal investigator, Peter Smith, of the University of Arizona, Tucson. "If you want to search for a habitable zone in the arctic permafrost, then this is the place to go."
Phoenix carries hardware from two unsuccessful missions: the Mars Polar Lander and the Mars Surveyor 2001 Lander. The Polar Lander crashed on Mars. Mars Surveyor was built but never launched.
Cold, arid Mars lacks standing water on its surface. But in the arctic regions, copious ice lies right below the surface. In 2002, the Mars Odyssey Orbiter located large amounts of subsurface water ice in the northern arctic plain.
Phoenix will visit this ice-rich area. Its robotic arm will dig through the protective topsoil layer to the water ice below. Then it will carry a sample of soil and ice to the lander's various onboard scientific instruments for study.
Phoenix's planned mission period is 92 Earth days. After that, the martian winter will settle over the craft. The waning light will deprive the solar-powered laboratory of electricity to run its instruments. Mission planners expect the lander to end its life covered in frozen carbon dioxide gas (dry ice). Chances are remote that Phoenix will hum back to life in spring.
The mission takes its name from the mythical phoenix bird that symbolizes rebirth. When the bird dies, it bursts into flames. A new bird then rises from the ashes of the pyre.
Determine whether life ever arose on Mars

Continuing the Viking missions' quest, but in an environment known to be water-rich, Phoenix will search for signs of life at the soil-ice interface just below the martian surface. Phoenix will land in the arctic plains, where its robotic arm will dig through the dry soil to reach the ice layer, bring soil and ice samples to the lander platform, and analyze these samples using advanced scientific instruments. These samples may hold the key to understanding whether the martian arctic is a habitable zone where microbes could grow and reproduce during moist conditions.
Characterize the climate of Mars

Phoenix will land during the retreat of the martian polar cap, when cold soil is first exposed to sunlight after a long winter. The interaction between the ground surface and the martian atmosphere that occurs at this time is critical to understanding the present and past climate of Mars. To gather data about this interaction and other surface meteorological conditions, Phoenix will provide the first weather station in the martian polar region — no others are currently planned. Data from this station will have a significant impact on improving global climate models of Mars.
Characterize the geology of Mars

As on Earth, the past history of water on Mars is written below the surface. Phoenix will use a suite of experiments to thoroughly analyze soil chemistry and mineralogy. Some scientists speculate the landing site for Phoenix may have been a deep ocean in the planet's distant past, leaving evidence of sedimentation. If fine sediments of mud and silt are found at the site, this may support the hypothesis of an ancient ocean. Alternatively, coarse sediments of sand might indicate past flowing water, especially if these grains are rounded and well-sorted. Using the first true microscope on Mars, Phoenix will examine the structure of these grains to better answer questions about water's influence on the planet's geology.
Prepare for human exploration

Phoenix will provide evidence of water ice and assess soil chemistry in the martian arctic. Water will be a critical resource to future human explorers, and Phoenix may provide appreciable information on how water may be acquired on Mars. Scientists study the soil chemistry to build an understanding of the potential resources available for human explorers to the northern plains.
| |
Q:
It rains in Gadyukino
It rains in Gadyukino village every other day on average. Assume the probability of rain on any given day is 50% and independent. A local weather forecaster has the sole duty every day of predicting whether it will rain the following day. His predictions have a 75% success rate. What quantity of information (in bits) is carried by a single forecast of his?
A:
"How much information" sounds vague. But it has a rock-solid definition, in the grand tradition of Shannon et al.
Assume an independent 1/2 chance of rain every day. Then the weather transmits one bit of information to Gadyukino every day - namely, whether or not it is raining. Say the forecaster can receive $x$ bits of information per day about the future. Then what is the minimum value of $x$ so that in the limit, the forecaster predicts with 75% accuracy?
Consider a time period of $4n$ days for large $n$. There are $2^{4n}$ possible weathers during this period. For any particular forecaster prediction sequence, the number of weather sequences that let the forecaster claim a 75% accuracy rate is $\binom{4n}{\leq n}$, where the notation means the number of ways to choose at most $n$ wrong days out of $4n$ total days.
Stirling's approximation says that $\binom{4n}{\leq n} = 2^{(4\log 4 - 3 \log 3)n}\times poly(n)$. Call this quantity $X$ for ease of typing.
Each forecaster prediction sequence allows $X$ possible weather sequences. There are $2^{4n}$ total weather sequences. Therefore, the forecaster must have at least $2^{4n}/X = 2^{(4 - 4 \log 4 + 3 \log 3)n}$ possible prediction sequences that he might say.
In order to be able to choose from that many prediction sequences, the forecaster must receive at least $\log(\text{that many})=(4 - 4 \log 4 + 3 \log 3)n$ bits for those $4n$ days. (EDIT: This doesn't account for the fact that the forecaster can change his mind based on whether he's been right so far. My intuition says an information-theoretic bound should still work, I'll think about it.)
This works out to $\boxed{1-\log 4 + \frac 34 \log 3 \approx 0.18872\ldots}$ bits per day (all logs base 2; this is $1 - H(1/4)$ for the binary entropy function $H$). This is a lower bound on the amount of information the forecaster needs. Does this much information suffice? I don't know. Shannon probably solved this problem when he created information theory, but I'm having trouble finding a reference.
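As a quick numeric check (my addition, not part of the original answer) that the exponent of $\binom{4n}{\leq n}$ behaves as claimed, so the per-day rate tends to $1 - H(1/4) \approx 0.18872$:

```python
from math import comb, log2

for n in (10, 100, 1000):
    X = sum(comb(4 * n, k) for k in range(n + 1))  # C(4n, <= n)
    print(n, round(1 - log2(X) / (4 * n), 5))      # bits per day

print(round(1 - (2 - 0.75 * log2(3)), 5))          # 1 - H(1/4) = 0.18872
```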
| |
Weights of Other Materials in Pounds Per Cubic Foot

Earth, Common Loam: 75.00-90.00
Earth, Dry, Loose: 76.00
Earth, Dry, Packed: 95.00
Earth, Mud, Packed: 115.00
Elm, White: 45.00
Fats: 58.00
Fir, Douglas: 30.00
Fir, Eastern: 25.00
Flour, Loose: 28.00
Flour, Pressed: 47.00
Gasoline: 42.00
Glass, Common Window: 156.00
Granite: 170.00
Graphite: 131.00
Gravel, Dry: …
Bushel weights are shown in pounds. To obtain pounds per cubic foot, multiply bushels x 0.8036. To obtain cubic feet per bushel, multiply x 1.25.

Alfalfa: 60; Barley: 48; Bermuda Grass: 35; Blue Grass: 22; Bluegrass Seed: 44; Bran: 20; Brome Grass: 14; Buckwheat: 52; Cane Seed: 50; Carrot Seed: 50; Castor Bean: 46; Chickpea (Garbanzo): 58
Dec 23, 2015 Note on measures: Specific gravity is a measure of an object's density. A cubic centimeter of water at 4 C weighs 1 gram and has a specific gravity of 1. The specific gravity numbers below can be read as grams per cubic centimeter or kg/liter. A solid object with a specific gravity greater than 1 will sink in water.
The density of pure water is also 62.4 pounds per cubic foot (lbs/cu ft), and if we know that ammonium nitrate has a specific gravity of 0.73, then we can calculate that its density is 0.73 x 62.4 = 45.552 lbs/cu ft.
Fertilizer Weight and Measures: Pounds of Active Nutrient per Gallon

| Liquid | N | P | K |
| 1 gallon 28% (28-0-0), 10.66 lbs. | 2.98 | 0 | 0 |
| 1 gallon 10-34-0, 11.65 lbs. | 1.16 | 3.96 | 0 |
| 1 gallon 7-21-7, 11.00 lbs. | 0.77 | 2.31 | 0.77 |
| 1 gallon 9-18-9, 11.11 lbs. | 0.99 | 1.99 | 0.99 |
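These nutrient columns are just the grade percentages times the gallon weight; here is a minimal sketch of that arithmetic (my addition; the function name is illustrative):

```python
# Pounds of N-P-K per gallon = gallon weight x grade percentages.
def nutrient_lbs(gallon_weight_lbs, grade):
    """grade is an N-P-K tuple like (28, 0, 0) for 28-0-0."""
    return tuple(round(gallon_weight_lbs * pct / 100.0, 2) for pct in grade)

print(nutrient_lbs(10.66, (28, 0, 0)))   # (2.98, 0.0, 0.0)
print(nutrient_lbs(11.65, (10, 34, 0)))  # (1.16, 3.96, 0.0)
```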
Calculate Lava Rock: type in the inches and feet of your project and calculate the estimated amount of decorative stones in cubic yards, cubic feet and tons that you need. The density of lava rock is 2,410 lb/yd³, or 1.21 t/yd³ (0.8 yd³/t).
Nov 17, 2020 A yard of other agricultural products weighs differently than mulch. For example, compost weighs approximately 1,000-1,600 lbs per cubic yard. On the other hand, soil blends/mixtures weigh approximately 2,200-2,700 lbs per cubic yard. As you can see, a yard of mulch weighs much less than a yard of compost or soil blend.
Description: This section is from the book "Bepler's Handy Manual of Knowledge and Useful Information," by David Bepler.

Weight of a Cubic Foot of Earth, Stone, Metal, Wood, Etc.
May 17, 2020 Answer: One cubic yard of topsoil generally weighs about one ton (2,000 pounds). Topsoil's weight can vary greatly due to moisture content. How much does a yard of sand weigh? The approximate weight of 1 cubic yard of sand is 2,600 to 3,000 pounds. This amount is also roughly equal to 1 1/2 tons.
3 Answers. Water is 62.4 pounds per cubic foot. There are 27 (3 x 3 x 3) cubic feet in one cubic yard. So one cubic yard of water weighs 1,684.8 pounds.
The exception is the Metals table: since there is seldom much variation in weight between different batches of the same metal, I have given the weight per cubic metre reasonably accurately. Weights in lbs/ft³ have been calculated at 1/16 of the kg/m³ value and rounded out to the nearest pound, which should be accurate enough for roleplaying purposes.
Sheet metal weight and gauge chart. Stainless steel, copper, zinc, aluminum, steel and galvanized steel.
Nov 18, 2020 A cubic yard of topsoil weighs around 2,200 pounds. How much does a yard of dirt weigh? That depends on the dirt type.
O'Neal Steel offers a simple and easy-to-use metal weight calculator, including weight and conversions. Just supply your specs and you're ready to go.
A bucket of feathers will weigh significantly less than a bucket of lead. You can learn more about densities in our article about the density formula. If you would like to convert a volume of water (gallon, liter, cup or tablespoon) to pounds, ounces, grams or kilograms, then please give our water weight calculator a try.
Riprap is paid for by volume in cubic yards or by weight in tons (1.0 ton = 2,000 lbs) in the United States. To convert a riprap layer from volume to weight, a riprap in-place density value is required. The riprap in-place density is measured at its placement status, with the volume including the voids (porosity).
4.1887 cubic inches (the volume of a two-inch-diameter ball) times the density of lead, which is 0.409 pounds per cubic inch, gives a weight of 1.713 pounds. What would a three-inch-diameter lead ball weigh? The radius of 1.5 inches cubed equals 3.375, which times 4π equals 42.412; divided by 3, this equals 14.137 cubic inches, which times 0.409 (the density of lead) gives a weight of about 5.78 pounds.
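The same arithmetic, weight = (4/3)πr³ x density, as a small runnable sketch (my addition, with lead at 0.409 lb/in³ as above):

```python
from math import pi

LEAD_LB_PER_CUBIC_INCH = 0.409

def lead_ball_weight(diameter_in):
    radius = diameter_in / 2.0
    volume = (4.0 / 3.0) * pi * radius ** 3   # cubic inches
    return volume * LEAD_LB_PER_CUBIC_INCH    # pounds

print(round(lead_ball_weight(2.0), 3))  # 1.713, as in the text
print(round(lead_ball_weight(3.0), 2))  # 5.78
```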
Divide 25 by 27 to reach 0.93 cubic yards. River rock weighs about 2,600 pounds, or 1.3 tons, per cubic yard. In this example, your project would need 1.2 tons of rock.

What if your area is a circle? Circular areas require a bit more calculation. A circle with height is a cylinder. The volume of a cylinder = radius x radius x 3.14 x height (in feet).
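A short sketch of that circular-area tonnage calculation (my addition, assuming the 2,600 lb/yd³ river-rock figure above):

```python
from math import pi

def river_rock_tons(diameter_ft, depth_in, lbs_per_cu_yd=2600):
    radius = diameter_ft / 2.0
    cubic_feet = radius * radius * pi * (depth_in / 12.0)  # cylinder volume
    cubic_yards = cubic_feet / 27.0
    return cubic_yards * lbs_per_cu_yd / 2000.0            # short tons

print(round(river_rock_tons(10, 3), 2))  # a 10 ft circle, 3 in deep: ~0.95 tons
```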
Cubic measurements of pine straw weigh much less than pine bark, for example. Because of the variation in types of mulch, we can give a range of weights. A cubic yard of mulch, which visually is 3 feet long by 3 feet wide by 3 feet tall, weighs between 400 and 800 pounds. The more moisture content in the mulch, the more it will weigh.
What is the average weight of a cubic metre of crushed concrete? (Sep 11, 2006) 1 cubic metre is about 1.4 tonnes.
| Material | Volume | Weight (lbs) | Source |
| Copper pipe, whole | 1 cubic yard | 1,047.62 | Tellus |
| Copper, cast | 1 cubic yard | 210.94 | Tellus |
| Copper, ore | 1 cubic foot | 542 | FEECO |
| Copper, scrap | 1 cubic foot | 120-150 | FEECO |
| Copper, wire, whole | 1 cubic yard | 1,093.52 | Tellus |
| Chrome ore (chromite) | 1 cubic yard | 337.5 | Tellus |
| Lead, commercial | 1 cubic foot | 125-140 | FEECO |
| Books, hardback, loose | 1 cubic yard | 529.29 | Tellus |
| Books, paperback, loose | 1 cubic yard | 427.5 | Tellus |
Material Weight - Pounds per Cubic Yard (all posted weights were gathered from the EPA & NTEA):

| Material | 1 yard | 2 yards | 3 yards | 4 yards | 5 yards |
| Asphalt | 2,700 | 5,400 | 8,100 | 10,800 | 13,500 |
| Concrete (gravel or stone mix) | 4,050 | 8,100 | 12,150 | 16,200 | 20,250 |
| Concrete (average wet mix) | 3,730 | 7,460 | … | … | … |
1 cubic foot = 7.48 gallons
1 cubic inch = 16.39 cubic centimeters
1 cubic inch = 0.554 ounces (fluid)
1 cubic yard = 27 cubic feet
1 cubic yard = 46,656 cubic inches
1 cubic yard = 202 gallons
1 cubic yard = 764.5 liters

Capacities: cylinder = diameter² x depth x 0.785 (cubic feet); rectangle = breadth x depth x length (cubic feet).
Particle Briefings from READE: Weight Per Cubic Foot and Specific Gravity (Typical) of metals, minerals, ceramics, and organics.
Liquid zinc weighs 6.62 grams per cubic centimeter or 6,620 kilograms per cubic meter, i.e. the density of liquid zinc is equal to 6,620 kg/m³ at 419.5 °C (787.1 °F or 692.65 K) at standard atmospheric pressure. In the Imperial or US customary measurement system, the density is equal to 413.273 pounds per cubic foot (lb/ft³), or 3.827 ounces per cubic inch (oz/in³).
Metal Converter: use this metal conversion tool to convert between different units of weight and volume. Please note that this type of conversion requires a substance density figure. A list of some common metal density approximations is provided below.
Mar 17, 2021 How much does 1 cubic yard of #57 gravel weigh? One cubic yard of gravel can weigh between 2,400 and 2,900 lbs., or up to about one and a half tons. Generally, a cubic yard of gravel provides enough material to cover a 100-square-foot area with 3 inches of gravel.
Two cubic yards is about body level full. When picking up soils, sands and gravels, one cubic yard is all that is recommended on a pick-up truck. How much does crusher run weigh per yard? Approximately 2,500 lb. What does a yard of crushed gravel weigh? One cubic yard of gravel can weigh between 2,400 and 2,900 lbs., or up to one and a half tons approximately.
Cubic Weight Definition: Freight transportation companies charge one of two rates for shipping. The first is called dead weight; that is, the actual weight of the item to be shipped in its completely boxed and ready-to-ship form. The other possible weight that a freight company may use to calculate shipping costs is called cubic weight, and that is based on cubic feet/meters. | https://han-tech.pl/news/2007-05_27687.html
There are a total of 4 letters in List, starting with L and ending with T.
Is List a Scrabble word? Yes (4 points). List is worth 4 Scrabble points.
Here is a list of words, ordered by popularity, created by adding extra letters to List. These may help in word games like Scrabble and word puzzles.
Definition of the word List, Meaning of List word :
n. - A line inclosing or forming the extremity of a piece of ground, or field of combat, hence, in the plural (lists), the ground or field inclosed for a race or combat.
An anagram is a word or phrase made by rearranging the letters of another word or phrase. All anagram words must be valid, actual words.
Browse more words to see how anagrams are made out of a given word.
In List, L is the 12th, I is the 9th, S is the 19th, and T is the 20th letter in the alphabet series.
From the west coast to the east coast of the United States, it is approximately 3,000 miles across.
How many miles is it across United States?
Depending on your route, the coast-to-coast drive across America ranges in distance from approximately 2,500 to 3,500 miles. If you’re prepared to clock eight-plus hours behind the wheel per day, the shortest route should take four days and the longest six.
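As a rough sketch of that day-count arithmetic (my addition, assuming about 65 mph and nine hours of driving per day):

```python
def driving_days(distance_miles, mph=65.0, hours_per_day=9.0):
    return distance_miles / (mph * hours_per_day)

print(round(driving_days(2500), 1))  # ~4.3 days for the shortest route
print(round(driving_days(3500), 1))  # ~6.0 days for the longest
```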
How many miles is it across the United States coast to coast?
On average, it’s anywhere from 2,400 to 3,500 miles coast to coast across the U.S. One popular route known as the Southern Route is 2,650 miles from Miami, Florida, to San Diego, California. Other routes have longer mileages but expose you to different parts of the country.
How much does it cost to road trip across America?
Our road trip across America cost us $2,382, or an average of $149/day between both of us for a 16-day road trip across the US. It’s more than the $125/day that we planned on for our USA road trip budget, but we’re not kicking ourselves for it.
What is the longest distance across the United States?
Greatest distance between any two mainland points in the contiguous 48 states (linear distance): 2,892 miles (4,654 km), from Point Arena, California, to West Quoddy Head, Maine.
What is the distance across called?
Any distance between two things is called a span. It came to refer to various other measurements, such as the distance across an arch.
How long would it take to drive across the Atlantic Ocean?
The shortest distance would be 3,424.87 miles. Because it’s a straight line, a speed of 80 mph could be considered reasonable. That’s 42.81 hours of non-stop driving.
How long would it take to road trip all 50 states?
Assuming no traffic, this road trip will take about 224 hours (9.33 days) of driving in total, so it’s truly an epic undertaking that will take at least 2-3 months to complete. The best part is that this road trip is designed so that you can start anywhere on the route as long as you follow it from then on.
How long would it take to walk across the USA?
A couple dozen people have completed a cross-country trek on foot. Based upon their journeys, you can expect it to take about six months to complete such a trip if you’re well-prepared. Of course, it can take much, much longer or, if you’re an exceptional walker and planner, you might make it in less than six months!
What is the best route to take cross-country?
The 5 Best Cross-Country Road Trip Routes:
- America’s Mother Road: Historic Route 66
- The Oregon Trail: US-20 Route
- The Loneliest Road: US-50 Route
- The Pacific Coast: US-101 Route to California State Route 1
- The Atlantic Coast: I-95 Route
How far is it from the East coast to the West coast of the United States?
The shortest route covers approximately 2,671 miles and runs along Route 80. The longest route covers approximately 3,527 miles and is referred to as Route 50 (the Backbone of America).
How far can you walk in a day?
While your body is made for walking, the distance you can achieve at an average walking pace of 3.1 miles per hour depends on whether you have trained for it or not. A trained walker can walk a 26.2-mile marathon in eight hours or less, or walk 20 to 30 miles in a day.
How fast can you drive across the United States?
It takes about 45 hours, or six 8-hour days, to drive coast-to-coast. You will need to decide if you want to take one of four coast-to-coast interstates or traverse the country as the old-timers did on U.S. highways. If you have about three months to travel, you can even see all 48 continental states.
How long will it take to walk from California to New York?
So to walk the New York to L.A. distance of 2,448 miles, divide 2,448 by 18 to get 136 days, about 4.5 months. This straight path distance from New York to L.A. is unrealistic due to the terrain and private property. So a more realistic estimate, if you walked about 6 hours a day, would be about 6 or more months.
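The same arithmetic as a tiny sketch (my addition, using the 3 mph and 6 hours/day figures from the paragraph above):

```python
def walking_days(distance_miles, mph=3.0, hours_per_day=6.0):
    return distance_miles / (mph * hours_per_day)

print(round(walking_days(2448)))  # 136 days, about 4.5 months
```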
What is the time difference between east and west coast?
Pacific Standard time stretches across the West Ccoast of North America and lags three hours behind Eastern Standard time, at “GMT – 8.” In Daylight Savings Time, Pacific time is “GMT – 7,” still three hours behind the eastern part of the country.
Has anyone walked America?
Curious about how to walk across America or whether it is even possible? A few years back, Nate Damm set out from the Atlantic Ocean in Delaware and began walking west. More than 3,200 miles and seven months later, Nate walked across America and dipped his toes in the Pacific Ocean in California.
What is the longest state to drive through?
Alaska is officially the longest state to drive at just over 1000 miles (1,073 to be exact).
How long does it take to drive from the top of California to the bottom?
On average, you can expect between 12 and 14 hours to drive across California from south to north or from the top of California to the bottom and around 8 hours from the east coast to the west coast.
Is it safe to drive across America?
Driving across the country can be time-consuming and tiring, but it is also quite dangerous. Staying behind the wheel for so long gets both your body and your mind too tired. At some point the exhaustion is so heavy you may stop seeing any of the beauty around.
What highway takes you across the US?
Route 66 may be the most iconic path for an east-to-west road trip. But the I-80 takes the crown as the best interstate travel route through the middle of the USA, passing 11 states and 2,902 miles.
What’s the longest drivable distance on earth?
What is the longest drivable distance? The longest drivable distance in the world is from Khasan, Russia, to Cape Town, South Africa. The two cities are approximately 22,000 km (13,600 miles) apart, and the drive takes 322 hours to complete. The rules for longest drivable distance are simple.
| https://cyclinghikes.com/how-many-miles-across-america/
It is a palace owned by the Borsellino family, Barons Scunda, which stands in the main square of Cattolica Eraclea, only 13 kilometers from the seaside and the archaeological area of Eraclea Minoa. The original 19th-century building was rebuilt 'ab imis', from the ground up, while safeguarding the original 19th-century stone portal, which was braced and kept intact despite the reconstruction of the building, carried out in 1978. The works were overseen by the well-known Florentine architect Ettore Chelazzi, who maintained the original architectural and historical style but added unique details, such as the famous 'serena stone' (pietra serena) of the Florentine facade, Florentine tile, and the main staircase, that make this imposing building, built to the most stringent architectural and earthquake standards, unique.

Once you cross the threshold, a wide staircase leads you to the main floor. The cellar floor, with access only from inside, covers an area of about 230 square meters. On the ground floor there are storerooms that open both onto Piazza Roma and onto Via Marchese Borsellino, a well-kept courtyard garden, and a small Museum of Country Life of the 1800s with over a hundred original Sicilian artworks from the 1800s and 1900s, in addition to fine antique wooden barrels; the entire floor has an area of approximately 440 square meters of indoor and outdoor space (inner courtyard).

On the first floor, the main floor, there is a large triple living room that faces entirely onto the main facade on Piazza Roma, with three balconies. All the ceilings of the hall are coffered, made of fine wood by a skilled local cabinetmaker. Also on the first floor there are a large dining room, a study, a living room with a wall bookcase and fireplace, a guest bedroom with master bathroom, a closet room, another bathroom, and a kitchen with an adjacent laundry room and adjoining everyday dining room. The entire floor has an area of 360 square meters.

On the second floor there is the sleeping area, with three bedrooms, each with its own bathroom, for a total of 200 square meters, and a large loft overlooking the main facade (Piazza Grande) of about 150 square meters. | https://www.ciancianamyhouse.it/gb/home/428-nobile-house-in-cattolica-property-in-sicily-.html
PORTLAND, Ore.—What's the sound of one molecule clapping? Researchers have demonstrated a device that can pick up single quanta of mechanical vibration similar to those that shake molecules during chemical reactions, and have shown that the device itself, which is the width of a hair, acts as if it exists in two places at once—a "quantum weirdness" feat that so far had only been observed at the scale of molecules.
"This is a milestone," says Wojciech Zurek, a theorist at the Los Alamos National Laboratory in New Mexico. "It confirms what many of us believe, but some continue to resist—that our universe is 'quantum to the core'."
Physicists have long known that, following the laws of quantum mechanics, objects at the scale of atoms or smaller can exist in multiple simultaneous states. For example, a single electron can move along multiple different paths or an atom can be placed in two different places, simultaneously. This so-called superposition of states should in principle apply to larger objects, as well, as in the proverbial thought experiment in which a cat is simultaneously dead and alive. And in recent years various teams have shown that the weird phenomenon does occur among objects as big as molecules, and also in truly macroscopic systems such as electrical currents in superconductors.
In the new experiment Aaron O'Connell, a graduate student at the University of California, Santa Barbara, and his co-workers have shown for the first time that larger objects can also be in two places at once. "It tells us that quantum mechanics works for macroscopic objects in space," says O'Connell, who presented the results here at a meeting of the American Physical Society. The results were also published online Wednesday in Nature. (Scientific American is part of Nature Publishing Group.)
The team used computer-chip manufacturing techniques to create a mechanical resonator—akin to a small tuning fork. The device is a piece of piezoelectric material (a material that expands or contracts in the presence of an electric field as well as generates an electrical field when put under stress) sandwiched between two layers of aluminum, which act as electrodes. It is one micron thick and 40 microns long, just enough to be visible "with your naked eye," O'Connell says.
The resonator's electrodes are attached to an electronic readout based on superconducting circuits, and the whole contraption is kept in a vacuum and cooled to within 20 thousandths of a degree above absolute zero. But the electronic circuitry can also be used to apply a voltage to the electrodes, so that the team can get the resonator to expand and contract at will. This motion takes place at a characteristic, or resonant, frequency of six gigahertz, or six billion cycles per second. (Tuning forks also have a resonant frequency—in the order of kilohertz—but the mode of resonant vibration in that case is to oscillate sideways rather than to expand and contract.)
The team's first result was to show that at such chilly temperatures the width, or amplitude, of the resonator's vibration becomes quantized—in other words, there is a small amount of vibrational energy, called a phonon, below which the resonator is essentially still. The existence of discrete packets of energy is a hallmark of quantum behavior, and phonons are the mechanical equivalent of light's photons—they are the ultimate, indivisible quanta of vibration, whether thermal or acoustic.
Next, the team put the superconducting circuit into a superposition of two states, one with a current and the other one without. Correspondingly, the resonator was in a superposition of vibrating and not vibrating. These quantum states continued for about six nanoseconds—about as long as the team expected—before fading away.
In a vibrating state each atom in the resonator only moves by an extremely small distance—less than the size of the atom itself. Thus, in the superposition of states the resonator is never really in two totally distinct places. But still, the experiment showed that a large object (the resonator is made of about 10 trillion atoms) can display just as much quantum weirdness as single atoms do. "Yup, quantum mechanics still works," says U.C.S.B.'s Andrew Cleland, O'Connell's co-author and adviser. As to how the day-to-day reality of objects that we observe, such as furniture and fruit, emerges from such a different and exotic quantum world, that remains a mystery.
In addition to its theoretical implications, the device could also find applications in the study of phonons that occur in nature, because a phonon that perturbs the resonator can be detected through the electronic circuit—it is essentially a quantum microphone. "This is a fantastically sensitive detector of acoustic vibration," Cleland says. In principle, one could even place molecules on the resonator and "hear them" interact, chemically or otherwise.
How many phenotypes are in a Trihybrid cross?
27:9:9:9:3:3:3:1 ratio: As can be seen in the forked line diagram above, a trihybrid cross yields a phenotypic ratio of 27:9:9:9:3:3:3:1. This reflects the phenotypes generated by the 64 genotypic combinations resulting from 8 different male gametes fertilizing 8 different female gametes.
Is it possible to have a Trihybrid cross?
A trihybrid cross involves the same steps as a dihybrid cross, but instead of looking at the inheritance pattern of two specific traits, it is possible to look at three different traits and the probability of their combination showing up in the genotype.
What is Trihybrid cross with example?
trihybrid, tetrahybrid, etc. are all crosses in which three, four, etc. hybrid traits are monitored in a cross between two organisms that are heterozygous for each trait in question, e.g.: AaBbCc x AaBbCc (trihybrid); AaBbCcDd x AaBbCcDd (tetrahybrid), and so on.
What would be the sum of phenotypes and genotypes obtained from a Trihybrid test cross?
The sum of phenotypes and genotypes obtained from a trihybrid test cross: by analogy with a dihybrid cross, where the phenotypic ratio is 9:3:3:1, one counts the phenotypes and genotypes separately and adds them. For a dihybrid cross there are 4 phenotypes and 9 genotypes, so the total number of phenotypes and genotypes produced is 4 + 9 = 13.
How do you find the phenotypic ratio?
Write the amount of homozygous dominant (AA) and heterozygous (Aa) squares as one phenotypic group. Count the amount of homozygous recessive (aa) squares as another group. Write the result as a ratio of the two groups. A count of 3 from one group and 1 from the other would give a ratio of 3:1.
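For readers who prefer to check such ratios computationally, here is a minimal sketch in Python (not from the source; the helper name `phenotype_counts` is illustrative) that enumerates all 8 x 8 gamete pairings of a trihybrid cross and recovers the 27:9:9:9:3:3:3:1 ratio:

```python
from itertools import product
from collections import Counter

def phenotype_counts(n_genes=3):
    # each AaBbCc parent produces 2^n gamete types, one allele per gene
    gametes = list(product(*[("A", "a")] * n_genes))
    counts = Counter()
    for g1, g2 in product(gametes, repeat=2):
        # a gene shows the dominant phenotype if either allele is dominant
        phenotype = tuple("dom" if "A" in pair else "rec" for pair in zip(g1, g2))
        counts[phenotype] += 1
    return counts

ratio = sorted(phenotype_counts().values(), reverse=True)
print(ratio)  # [27, 9, 9, 9, 3, 3, 3, 1]
```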
Here are some things to consider doing. Some items may not be applicable to you.
Before buying a gift:
- Determine if the recipient has indicated preferences, e.g., through a registry at stores, with family or friends
- Avoid buying a gift if it might:
- Make the recipient feel too much in debt to you
- Make others giving gifts feel their gifts are inadequate
- Make others not receiving gifts (e.g., a sibling) feel left out
When giving:
- Give the recipient the option of exchanging the gift, e.g., for a different size
Receiving gifts:
Before receiving gifts:
- Tell those who are planning to give you gifts not to spend too much
When opening:
- Write down who the gift is from and a description of the gift
After receiving:
- Send or give a thank you note
- Try to use the gift when the giver is around
1 Kilogram (kg) is equal to 1000 grams (g). To convert kilograms to grams, multiply the kilogram value by 1000.
For example, to find out how many grams there are in 2 kg, multiply 2 by 1000; that makes 2000 g in 2 kg.
kilograms to grams formula
gram = kilogram * 1000
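The formula is trivial to express in code; a minimal sketch (the function name is illustrative):

```python
def kilograms_to_grams(kilograms: float) -> float:
    # gram = kilogram * 1000
    return kilograms * 1000

print(kilograms_to_grams(2))  # 2000
```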
What is a Kilogram?
Kilogram (kilo) is a metric system mass unit. 1 kg = 1000 grams. The symbol is "kg".
---
abstract: 'We consider discriminating between bipartite boxes with 2 binary inputs and 2 binary outputs ($2\times 2$) using the class of completely locality preserving operations, i.e. those which transform boxes with a local hidden variable model (LHVM) into boxes with an LHVM, and retain this property even when tensored with the identity operation. Following an approach developed in entanglement theory, we derive a linear program which gives an upper bound on the probability of success of discrimination between different isotropic boxes. In particular, we provide an upper bound on the probability of success of discrimination between isotropic boxes with the same mixing parameter. As a counterpart of an entanglement monotone we use the non-locality cost. Discrimination is restricted by the fact that the non-locality cost does not increase under the considered class of operations. We also show that, with the help of the allowed class of operations, one can distinguish perfectly any two extremal boxes in the $2\times 2$ case, and any local extremal box from any other extremal box in the case of two inputs and two outputs of arbitrary cardinalities.'
author:
- 'K. Horodecki$^{1,2}$'
title: 'On distinguishing of non-signaling boxes via completely locality preserving operations'
---
Introduction
============
Asking two distant parties that do not communicate for a certain set of answers is the well-known scenario of a non-local game [@Bell; @Brassard-pseudo]. Depending on the resource the parties share, they can obtain a higher or lower probability of success in winning the game. Sharing a quantum state can allow for a probability higher than that achievable with classical resources, and sharing an arbitrary non-local but non-signaling system can sometimes make it even equal to 1 [@PR].
For this reason, among others, the non-locality represented by the so-called box (a conditional probability distribution) has been treated as a resource in recent years [@Barret-Roberts]. The world of non-signaling, non-local boxes bears an analogy with the world of entangled states [@Masanes-NStheor; @Pawlowski-Brukner; @Ekert91; @BHK_Bell_key; @Bell-security; @Hanggi-phd; @BrunnerSkrzypczyk; @Allcocketal2009; @Forster; @Brunneretal2011]. It is therefore clear that investigations of non-locality and entanglement can inform each other. We develop this analogy in the scenario of distinguishing between systems (see in this context [@Bae-dist]). Namely, we consider a scenario in which two distant parties know that they share a box drawn from some ensemble. Their task is to tell which box they share with as high a probability as possible, using an allowed class of operations. In our case, these operations will be those that transform local boxes into local ones, and have this property even when tensored with the identity operation.
An analogous scenario in entanglement theory was considered in recent years (see e.g. [@Bandyopadhyay-dist] and references therein), where one asks about discriminating orthogonal quantum states by means of Local Operations and Classical Communication (LOCC). In our method, we build on one of the first results on this subject [@BellPRL], where it was shown that one cannot distinguish between the 4 Bell states $\{|\psi_i\rangle\}_{i=1}^4$ by LOCC. The common method in entanglement theory is that something is not possible, or else some monotone would increase under an LOCC operation, which is a contradiction. Here this approach was not directly applicable, since the states (and their entanglement) can be completely destroyed in the process of distinguishing. The Ghosh et al. (GKRSS) method of [@BellPRL] gets around this problem by considering entanglement of the Bell states classically correlated with themselves: $$\rho_{ABCD}= \sum_i {1\over 4} |\psi_i\rangle\langle\psi_i|_{AB}\otimes|\psi_i\rangle\langle\psi_i|_{CD}. \qquad \text{\[eq:4Bells\]}$$ Indeed, if Alice and Bob could distinguish the Bell states on system AB, they could transform the states on CD into the singlet state by local control operations that transform each of $|\psi_i\rangle$ into $|\psi_0\rangle$, which would mean that the distillable entanglement of $\rho_{ABCD}$ is at least 1 e-bit. This contradicts the fact that the state $\rho_{ABCD}$ is separable, as it can be written as $\rho_{ABCD}= \sum_i {1\over 4} |\psi_i\rangle\langle\psi_i|_{AC}\otimes |\psi_i\rangle\langle\psi_i|_{BD}$. Hence the states $\{|\psi_i\rangle\}$ cannot be perfectly distinguished.
In what follows, we consider an analogue of the above state \[eq:4Bells\], based on the so-called *isotropic* boxes $B^{\alpha_i}$ ($B^{\beta_i}$), i.e. boxes that are mixtures of a Popescu-Rohrlich (PR) box and the 'anti' PR box with probability $\alpha_i$ ($\beta_i$): $$B_{in} = \sum_{i=0}^{n-1} p_i B_{AB}^{\alpha_i}\otimes B_{CD}^{\beta_i}.$$
The rest of the paper is organized as follows: section \[sec:scenario\] provides the scenario and definition of the class of operations. Section \[sec:iso-twir-cost\] provides useful definitions and some properties of nonlocal cost. In section \[sec:bound\] we give the main reasoning behind the bound on probability of success in discrimination of isotropic boxes, as well as some corollaries. The proof goes thanks to main inequality: C(B\_[in]{}) C(B\_[out]{}) with $B_{out}$ being $B_{in}$ after discrimination on system $AB$. Finally in section \[sec:distinguish\] we consider perfect distinguishability of two extremal boxes in bipartite case $2\times 2$ as well as in bipartite case of 2 inputs of arbitrary cardinality and 2 outputs of arbitrary cardinality.
Scenario of distinguishing {#sec:scenario}
==========================
By a bipartite box $X$ we mean a family of probability distributions with support on the Cartesian product of spaces $\Omega_{A}\times \Omega_{B}$. Each of the spaces may contain (the same number of) $n$ systems. In the special case of bipartite boxes with $n=1$, we denote them as $P_X(a,b|x,y)$, where $x,y$ denote the inputs to the box and $a,b$ its outputs. We say that two boxes are *compatible* if they are defined for the same number of parties, with the same cardinalities of each corresponding input and each corresponding output. The definition of a multipartite box is analogous. We will consider only boxes that satisfy certain *non-signaling* conditions. To specify this we need to define a general non-signaling condition between partitions of systems [@Barrett-GPT; @Barret-Roberts].
[Consider a box of some number of systems $n+m$ and its partition into two sets: $A_1,...,A_n$ and $B_1,...,B_{m}$. A box on these systems given by a probability table $P(\bar{a},\bar{b}|\bar{x},\bar{y})$ is non-signaling in the cut $A_1,...,A_n$ versus $B_1,...,B_{m}$ if the following two conditions are satisfied: $$\forall_{\bar{a},\bar{x},\bar{y},\bar{y}'}\quad \sum_{\bar{b}}P(\bar{a},\bar{b}|\bar{x},\bar{y}) = \sum_{\bar{b}}P(\bar{a},\bar{b}|\bar{x},\bar{y}'),$$
$$\forall_{\bar{b},\bar{x},\bar{x}',\bar{y}}\quad \sum_{\bar{a}}P(\bar{a},\bar{b}|\bar{x},\bar{y}) = \sum_{\bar{a}}P(\bar{a},\bar{b}|\bar{x}',\bar{y}).$$ If the first condition is satisfied, we denote it as $A_1,...,A_n \not\leftarrow B_1,...,B_{m}$, if the second we write $B_1,...,B_{m} \not\leftarrow A_1,...,A_n$, and if both: $A_1,...,A_n \not\leftrightarrow B_1,...,B_{m}$. We say that a box of systems $A_1,...,A_n,B_1,...,B_{m}$ is fully non-signaling if for any subset of systems $A^IB^J \equiv A_{i_1},...,A_{i_k}B_{j_1},...,B_{j_l}$ with $I\equiv\{i_1,...,i_k\}\subseteq N \equiv\{1,...,n\}$ and $J\equiv\{j_1,....,j_l\}\subseteq M \equiv\{1,...,m\}$, such that not both $I$ and $J$ are empty, there is $A^IB^J \not\leftrightarrow A^{N-I}B^{M-J}$. ]{}
In what follows we will consider only boxes that are fully non-signaling according to the above definition. The set of all boxes, compatible with each other, that satisfy the above definition we denote as $NS$.
By a locally realistic box we mean the following: [A locally realistic box of $2n$ systems $A_1,...,A_n,B_1,...,B_n$ is defined as $$\sum_\lambda p(\lambda)P(\bar{a}|\bar{x})_{A_1,...,A_n}^{(\lambda)}\otimes P(\bar{b}|\bar{y})_{B_1,...,B_n}^{(\lambda)} \qquad \text{\[eq:local\]}$$ for some probability distribution $p(\lambda)$, where we assume that the boxes $P(\bar{a}|\bar{x})_{A_1,...,A_n}^{(\lambda)}$ and $P(\bar{b}|\bar{y})_{B_1,...,B_n}^{(\lambda)}$ are fully non-signaling. The set of all such boxes we denote as $LR_{ns}$. All boxes that are fully non-signaling but do not satisfy condition \[eq:local\] are called non-$LR_{ns}$. ]{}
We consider a family $\cal L$ of operations $\Lambda$ on a box shared between Alice and Bob which preserve locality, as defined below (see in this context [@Joshi-broadcasting]).

An operation $\Lambda$ is called locality preserving (LP) if it satisfies the following conditions:

(i) *validity*, i.e. it transforms boxes into boxes;

(ii) *linearity*, i.e. for each mixture $X = p P + (1-p) Q$, there is $\Lambda(X) = p \Lambda(P) + (1-p) \Lambda(Q)$;

(iii) *locality preservation*, i.e. it transforms boxes from $LR_{ns}$ into boxes from $LR_{ns}$;

(iv) *non-signaling*, i.e. it transforms fully non-signaling boxes into fully non-signaling ones. \[def:LP\]
In what follows we will focus on special locality preserving operations, namely those which are completely locality preserving.
[An operation $\Lambda$ acting on system $AB$ is called completely locality preserving (CLP) if $\Lambda$ is locality preserving and $\Lambda \otimes I_{CD}$ is locality preserving, where $I_{CD}$ is the identity operation on an arbitrary but finite-dimensional bipartite system of subsystems $C$ and $D$. ]{}

[Note that $CLP \subsetneq LP$, since the swap operation $V$ is in LP but not in CLP: $V_{AB}\otimes I_{A'B'}$ acting on a product of two nonlocal boxes on $AA'$ and $BB'$ respectively creates non-locality across the $AA'$ versus $BB'$ cut. This is similar to how the swap operation on quantum states transforms separable states into separable ones, yet is not a completely separable-to-separable operation, as $V_{AB}\otimes I_{A'B'}$ creates entanglement in the $AA'$ vs $BB'$ cut when applied to the tensor product of two singlet states: $|\psi_0\rangle_{AA'}\otimes|\psi_0\rangle_{BB'}$. ]{}
Finally, we will be interested in those CLP maps which discriminate between boxes from a given *ensemble*, where by an ensemble we mean a family of pairs $\{p_i,X_i\}_{i=0}^{n-1}$, where $X_i$ is a bipartite box and $p_i$ is the probability with which Alice and Bob share this box, such that $\sum_{i=0}^{n-1} p_i = 1$. We will also need the notion of a *flag-box*, which is a box denoted $F(j)$, defined as a deterministic box with a single input $s$ of cardinality 1 and as a (single) output a probability distribution on $\{1,...,n\}$ which is the Kronecker delta $\delta_{j,e}$. To indicate its input and output, we will denote it also as $P^{(j)}(e|s)$. It can be viewed as a counterpart of the quantum state $|j\rangle\langle j|$, and it is equivalent to a probability distribution [@Short-Wehner]. We say that $F(j)$ is a flag-box with flag $j$. In what follows, an operation returning flag-boxes with flag $j$ means that Alice and Bob claim that they were given box number $j$ from the ensemble.
[$\Lambda$ discriminates the ensemble $\{p_i,X_i\}_{i=0}^{n-1}$ if for every $i$ there is $$\Lambda(X_i) = \sum_{j=0}^{n-1}q^{X_i}_{j}F^A(j)\otimes F^B(j),$$ where $\{q^{X_i}_{j}\}_{j=1}^n$ is a probability distribution that may depend on $X_i$. The box $F^A(j)$ is a flag-box with flag $j$ on Alice's system and $F^B(j)$ is that on Bob's. \[def:discrim\] ]{}
Note that the above definition could be given without reference to an ensemble: a discriminating operation would then have to provide flag-boxes on any box $X$. However, we find the latter requirement, in principle, more restrictive.
We can now describe the scenario of discrimination of an ensemble. The Referee creates a box on systems $RAB$ of the form $$\sum_{i=0}^{n-1} p_i F^R(i)\otimes X^{AB}_i$$ and then sends system $A$ to Alice and $B$ to Bob, distributing thereby between them the box $X_i$ with probability $p_i$. The Referee holds the flag-box $F(i)$ and waits for their answer. Alice and Bob are allowed to apply some operation $\Lambda$ which is (i) CLP and (ii) discriminates the ensemble $\{p_i,X_i\}_{i=0}^{n-1}$. Due to Definition \[def:discrim\], by linearity of CLP operations, $\Lambda$ results in the following box shared between the Referee, Alice and Bob (see Fig. \[fig:scenario\]):

$$\sum_{i=0}^{n-1}\sum_{j=0}^{n-1}p_i q^{(i)}_{j}F^R(i)\otimes F^A(j)\otimes F^B(j).$$
We define now the *probability of success $p_{\Lambda}^s$ with which $\Lambda$ discriminates the ensemble*. It is computed from the joint probability distribution of the Referee's 'flags' and Alice's and Bob's 'flags', $p_{\Lambda}(i,j)\equiv p_i q^{(i)}_{j}$, as $$p_{\Lambda}^s \equiv \sum_{i=0}^{n-1} p_{\Lambda}(i,i).$$
We can finally define the problem of distinguishing between boxes as follows:

*Given an ensemble of bipartite non-signaling boxes $\{p_i, X_i\}$, find the maximal value of the probability of success $p_s$ in discriminating between the given boxes using CLP operations that discriminate the ensemble.*
One may wonder whether the set of $CLP$ operations that discriminate an ensemble is non-empty. It is easy to observe that any composition of local operations on both sides is a valid CLP operation, provided the local operations satisfy the non-signaling condition. It is however not easy to see whether such operations could produce the same flag-boxes for Alice and for Bob, correlated with the given ensemble [^1]. However, as we show in Appendix \[subsec:comparing-ops\], the following operation is a CLP operation which discriminates the ensemble: a composition of (i) local measurements, (ii) exchanging the results, (iii) grouping them into disjoint sets and (iv) creating the same flag-boxes for each group, followed by tracing out the results of the measurements (see the example below). We will call this type of operation a *comparing operation*, as the parties decide on the guess after comparing their outputs. We note here that an output in the form of two identical flag-boxes is crucial for further considerations: thanks to having the same flag-boxes, both Alice and Bob can transform their box conditionally on the output of the distinguishing procedure.
Consider a pair of boxes: the PR box, defined as
$$P_1(a,b|x,y) = \begin{cases} {1\over 2} & a\oplus b = xy \\ 0 & \text{otherwise}, \end{cases}$$
and the anti-PR box, defined as
$$P_2(a,b|x,y) = \begin{cases} {1\over 2} & a\oplus b = xy\oplus 1 \\ 0 & \text{otherwise}. \end{cases}$$
Then, by (i) choosing $x=1$ (Alice) and $y=1$ (Bob) and comparing the results, (ii) deciding to output flags $F(1)^A\otimes F(1)^B$ if the results are not equal (and hence $a\oplus b = 1$), while outputting $F^A(2)\otimes F^B(2)$ if the results are equal (and hence $a\oplus b = 0$), and (iii) tracing out the results of the measurements, they distinguish perfectly the PR box from the anti-PR box via a CLP operation.
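This comparing operation is easy to simulate. Below is a minimal sketch in Python (not from the paper; the sampling helper and all names are illustrative) which measures $x=y=1$ on each box and applies the XOR rule above:

```python
import itertools, random

def pr_box(a, b, x, y):          # P(a,b|x,y) of the PR box
    return 0.5 if (a ^ b) == (x & y) else 0.0

def anti_pr_box(a, b, x, y):     # P(a,b|x,y) of the anti-PR box
    return 0.5 if (a ^ b) == (x & y) ^ 1 else 0.0

def sample(box, x, y):
    """Sample outputs (a, b) from a box for inputs (x, y)."""
    outcomes = list(itertools.product([0, 1], repeat=2))
    weights = [box(a, b, x, y) for a, b in outcomes]
    return random.choices(outcomes, weights=weights)[0]

def guess(box):
    a, b = sample(box, x=1, y=1)
    # for x = y = 1 the PR box forces a XOR b = 1, the anti-PR box a XOR b = 0
    return "PR" if a ^ b == 1 else "anti-PR"

assert all(guess(pr_box) == "PR" for _ in range(100))
assert all(guess(anti_pr_box) == "anti-PR" for _ in range(100))
```

The same skeleton applies to any comparing operation: only the measured input pair and the grouping of outcome pairs change.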
![Depiction of the considered scenario: Alice and Bob are given by the Referee R one of the boxes $B^{\alpha_i}_i$ with probability $p_i$. They apply a CLP operation to distinguish between them, and send the guess $i$ to the Referee.[]{data-label="fig:scenario"}](scenario1v1.jpg){width="5cm"}
Isotropic boxes, twirling and non-local cost {#sec:iso-twir-cost}
============================================
In what follows, we will make extensive use of the boxes locally equivalent to the PR box:

$$B_{rst}(a,b|x,y) = \begin{cases} 1/2 & a\oplus b = xy \oplus rx \oplus sy \oplus t \\ 0 & \text{otherwise} \end{cases}$$
(where $a,b,x,y,r,s,t$ are binary), which we call here maximally nonlocal boxes.
More specifically, we will focus on distinguishing between *isotropic* boxes [@Short] $$B_i^{\alpha_i} = \alpha_i B_i + (1-\alpha_i) \bar{B}_i,$$ where $B_i\in\{B_{rst}\}_{rst=000}^{111}$, $\alpha_i \in (3/4, 1]$, and $\bar{B}_i$ denotes $B_{rs\bar{t}}$ with $\bar t$ being the negation of bit $t$. We define here a function $f$ which maps indices $i$ into strings $rst$: $f(i):=rst$ iff $B_i^{\alpha_i} = \alpha_i B_{rst} + (1-\alpha_i)B_{rs\bar{t}}$. In other words, this function groups isotropic boxes according to which maximally nonlocal box each is built of. By $B_{rst}^{\alpha_i}$ we will denote the $B_i^{\alpha_i}$ such that $f(i) = rst$, i.e. $\alpha_i B_{rst} + (1-\alpha_i)B_{rs\bar{t}}$. We exemplify this notation in Fig. \[fig:fig1\].
![Exemplary ensemble $\{{1\over 5},B^{\alpha_i}_{i}\}_{i=1}^5$. The members of ensemble are depicted as green circles. The square depicts the set of local boxes. The function $f$ is defined as: $f(1)=000, f(2)=001, f(3)=100, f(4)=000, f(5)=100$ and there is $\alpha_1=\alpha_2=\alpha_3=1$, $\alpha_4 = {1\over 6}$ and $\alpha_5 = {1\over 4}$. []{data-label="fig:fig1"}](ensembles.jpg){width="7cm"}
The boxes $B_{000}$ and $B_{001}$ are invariant under the following transformation [@Short; @Nonsig-theories] called twirling: [A twirling operation $\tau$ is defined by flipping randomly 3 bits $\delta,\gamma,\theta$ and applying the following transformation to a $2\times 2$ box: $$x \rightarrow x\oplus\delta,$$
$$y \rightarrow y\oplus\gamma,$$
$$a \rightarrow a\oplus\gamma x\oplus\theta,$$
$$b \rightarrow b\oplus\delta y\oplus\delta\gamma\oplus\theta.$$
\[def:twirling\] ]{}
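As a numerical sanity check of this definition, the following minimal sketch (not from the paper; the dict-based box representation and names are illustrative, and it assumes the reconstruction of the transformation given above) averages over the 8 values of $(\delta,\gamma,\theta)$ and verifies that $B_{000}$ is twirling-invariant:

```python
import itertools

def box_rst(r, s, t):
    """The maximally nonlocal box B_rst as a dict P[(a, b, x, y)]."""
    return {(a, b, x, y): 0.5 if a ^ b == (x & y) ^ (r & x) ^ (s & y) ^ t else 0.0
            for a, b, x, y in itertools.product([0, 1], repeat=4)}

def twirl(P):
    """Average over the 8 local relabelings (delta, gamma, theta)."""
    out = {k: 0.0 for k in P}
    for d, g, t in itertools.product([0, 1], repeat=3):
        for a, b, x, y in P:
            # inputs flipped by (d, g); outputs corrected so that the
            # relation a XOR b = xy is preserved up to the global flip t
            a2 = a ^ (g & x) ^ t
            b2 = b ^ (d & y) ^ (d & g) ^ t
            out[(a2, b2, x ^ d, y ^ g)] += P[(a, b, x, y)] / 8.0
    return out

assert twirl(box_rst(0, 0, 0)) == box_rst(0, 0, 0)  # B_000 is invariant
```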
In what follows, as a measure of non-locality we take the non-locality cost $C(P)$ [@PR; @Brunneretal2011], defined as follows: $$C(P) = \inf\{p:\ P= pX + (1-p)L,\ X\in NS,\ L\in LR_{ns}\}.$$ We make the following easy observation, that the non-locality cost is monotonous under CLP operations. [Nonlocality cost does not increase under LP and CLP operations. \[obs:monot\] ]{}
*Proof*.- Let us consider any decomposition $P = p X + (1-p)L$. By linearity of $\Lambda$, $$\Lambda(P) = p\Lambda(X) + (1-p)\Lambda(L).$$ Now, whether $\Lambda$ is LP or CLP, it is locality preserving, which implies that $\Lambda(L)$ is some $L'$ from $LR_{ns}$. Moreover it is valid, hence it transforms $X$ into some non-signaling box $\Lambda(X)$: $$\Lambda(P)= p\Lambda(X) + (1-p)L'.$$ Hence there is $p \geq C(\Lambda(P))$, since this valid decomposition into the local part $L'$ and the nonlocal part $\Lambda(X)$ may be suboptimal for $C(\Lambda(P))$. Since this happens for any decomposition, and $C(P)$ is the infimum of $p$ over decompositions, we have that $C(\Lambda(P))\leq C(P)$, by definition of infimum. Indeed, for any $\delta>0$ there exists $p_{\delta}$ such that $C(P) + \delta > p_{\delta}$; thus, by contradiction, if $C(\Lambda(P))> C(P)$ then taking $\delta = C(\Lambda(P))-C(P)$ we would get $C(\Lambda(P)) > p_{\delta}$, which contradicts the above considerations.
We will need also an observation (see in this context [@Brunneretal2011; @Joshi-broadcasting])
[An isotropic box $B^{\alpha}_{000}= \alpha B_{000} + (1-\alpha) B_{001}$ with $\alpha \in ({3\over 4},1]$ satisfies $$C(B^{\alpha}_{000})=4\alpha-3.$$ \[obs:iso\] ]{}
*Proof*.- Since the box $B^{\alpha}_{000}$ is invariant under twirling, its optimal decomposition in the definition of $C(P)$ can be taken to have both the local part $L$ and the nonlocal part $X$ invariant under twirling, i.e. lying on the line between $B_{000}$ and $B_{001}$. Let us consider some decomposition $B^{\alpha}_{000} = p X + (1-p) L$, where $L$ is a local box. Note that $p$ in this decomposition can be written in terms of the CHSH value $$\gamma(X) = \langle 00\rangle + \langle 01\rangle + \langle 10\rangle - \langle 11\rangle,$$ with $\langle ij\rangle=P(a=b|x=i,y=j)-P(a\neq b|x=i,y=j)$. Namely: $$p = {\gamma(B^{\alpha}_{000}) -\gamma(L) \over \gamma(X) - \gamma(L)}.$$ It is now easy to see that for fixed $L$, minimal $p$ is reached for $X = B_{000}$, as we can always lower $p$ by setting $\gamma(X) = 4$. Hence we end up with the optimization of the function $p(\gamma(L)) = {8\alpha - 4 - \gamma(L)\over 4 - \gamma(L)}$, where $-2 \leq \gamma(L)\leq 2$. Using Mathematica 7.0, we find this function attains its minimum $4\alpha -3$ at $\gamma(L)=2$, which we aimed to prove.
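The cost $C(P)$ of any $2\times 2$ box can itself be computed by a small linear program: maximize the weight of the local part over the 16 deterministic local boxes, subject to the remainder being entrywise nonnegative (the remainder is then automatically non-signaling and normalized, since those constraints are linear). Below is a minimal sketch (not the paper's Mathematica code; all names are illustrative) using scipy, which reproduces $C(B^{\alpha}_{000})=4\alpha-3$:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

KEYS = list(itertools.product([0, 1], repeat=4))      # (a, b, x, y)

def det_boxes():
    """The 16 deterministic local boxes a = f(x), b = g(y)."""
    funcs = list(itertools.product([0, 1], repeat=2))  # f encoded as (f(0), f(1))
    return [np.array([1.0 if (a == f[x] and b == g[y]) else 0.0
                      for a, b, x, y in KEYS])
            for f, g in itertools.product(funcs, repeat=2)]

def nonlocal_cost(P):
    D = np.column_stack(det_boxes())                  # 16 x 16 matrix
    p = np.array([P[k] for k in KEYS])
    # maximize sum(lambda)  <=>  minimize -sum(lambda), s.t. D @ lam <= p
    res = linprog(c=-np.ones(16), A_ub=D, b_ub=p, bounds=[(0, None)] * 16)
    return 1.0 - (-res.fun)

def isotropic(alpha):   # alpha * B_000 + (1 - alpha) * B_001
    return {(a, b, x, y): (alpha if a ^ b == x & y else 1 - alpha) / 2
            for a, b, x, y in KEYS}

print(nonlocal_cost(isotropic(1.0)))   # ~1.0 for the PR box
print(nonlocal_cost(isotropic(0.85)))  # ~0.4 = 4 * 0.85 - 3
```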
Upper bound on distinguishing of isotropic boxes {#sec:bound}
================================================
We focus now on distinguishing the following ensemble: $$\{p_i, B_i^{\alpha_i}\}_{i=0}^{n-1},$$ with $\alpha_i \in [{1\over 2},1]$. Following the GKRSS method [@BellPRL], we will consider a box obtained by classically correlating the boxes $B_i^{\alpha_i}$ with other isotropic boxes, parametrised by some $\beta_i \in [{1\over 2},1]$: $$B_{in}=\sum_{i=0}^{n-1}p_i B_i^{\alpha_i}\otimes B_i^{\beta_i},$$ and compare its non-locality with that of the box after application of some optimal CLP discriminating operation $\Lambda$ (see Fig. \[fig:fig-GKRSS\]).
![Illustration of the analogue of the GKRSS method: Alice and Bob could apply the CLP distinguishing operation to the box $B_{in}=\sum_{i=0}^{n-1}p_i B_i^{\alpha_i}\otimes B_i^{\beta_i}$ and, via distinguishing on AB, distill boxes $B^{\beta_i}_i$ on CD. If the initial non-locality of $B_{in}$ is small, the success in distinguishing is limited, as distillation cannot exceed the initial cost of non-locality of $B_{in}$.[]{data-label="fig:fig-GKRSS"}](GKRSSadaptation.jpg){width="7cm"}
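Numerically, such a classically correlated input box is just a conditional distribution with four outputs and four inputs. The sketch below (illustrative helper names, not from the paper) builds $B_{in}$ as an 8-index array from any list of $(p_i, B_i^{\alpha_i}, B_i^{\beta_i})$ triples:

```python
import itertools
import numpy as np

def iso_rst(alpha, r, s, t):
    """alpha * B_rst + (1 - alpha) * B_{r s (t xor 1)} as a dict P[(a,b,x,y)]."""
    return {(a, b, x, y):
            (alpha if a ^ b == (x & y) ^ (r & x) ^ (s & y) ^ t else 1 - alpha) / 2
            for a, b, x, y in itertools.product([0, 1], repeat=4)}

def box_array(box):
    """dict P[(a, b, x, y)] -> ndarray indexed [a, b, x, y]."""
    arr = np.zeros((2, 2, 2, 2))
    for (a, b, x, y), v in box.items():
        arr[a, b, x, y] = v
    return arr

def b_in(ensemble):
    """ensemble: list of (p_i, box_AB_i, box_CD_i) triples; returns the
    correlated box sum_i p_i * (AB box) (x) (CD box), indexed [a,b,c,d,x,y,u,v]."""
    total = np.zeros((2,) * 8)
    for p, ab, cd in ensemble:
        total += p * np.einsum('abxy,cduv->abcdxyuv',
                               box_array(ab), box_array(cd))
    return total

# three isotropic boxes correlated with maximally nonlocal flags (beta_i = 1)
ens = [(1/3, iso_rst(0.9, 0, 0, 0), iso_rst(1, 0, 0, 0)),
       (1/3, iso_rst(0.9, 0, 1, 0), iso_rst(1, 0, 1, 0)),
       (1/3, iso_rst(0.9, 1, 0, 0), iso_rst(1, 1, 0, 0))]
B = b_in(ens)
assert np.allclose(B.sum(axis=(0, 1, 2, 3)), 1.0)  # normalized for every input
```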
We obtain the following result:
[For an ensemble $\{p_i,B_i^{\alpha_i}\}_{i=0}^{n-1}$ with $\alpha_i \in [{1\over 2},1]$, any choice of $\beta_i \in [{1\over 2},1]$, and any CLP operation $\Lambda$ which discriminates the ensemble, there is $$\sum_{i=0}^{n-1} p_{\Lambda}(i,i)(\beta_i +\max_k \beta_k -1) \leq {C(B_{in})+3\over 4} + \max_k\beta_k - 1,$$ where $B_{in} = \sum_{i=0}^{n-1} p_i B_i^{\alpha_i}\otimes B_i^{\beta_i}$. \[thm:main\] ]{}
Following the above theorem we have an immediate corollary concerning the distinguishing of isotropic boxes with the same parameter $\alpha_i=\alpha$, obtained by setting $\beta_i = \alpha$.

[For the ensemble of isotropic boxes with the same parameter $\alpha_i =\alpha\in [{3\over 4},1]$: $\{p_i,B_i^{\alpha}\}$ with $n\leq 8$, the optimal probability of distinguishing by CLP operations that discriminate the ensemble satisfies: $$p_s \leq {{C(B_{in})+3\over 4} + \alpha - 1 \over 2\alpha -1},$$ where $B_{in} = \sum_{i=0}^{n-1} p_i B_i^{\alpha}\otimes B_i^{\alpha}$. \[cor:alpha\] ]{}
The main corollary concerns discriminating between the boxes while setting $\beta_i =1$ for all $i$, i.e. setting $B^{\beta_i}_i$ to be maximally non-local:

[For the ensemble of isotropic boxes $\{p_i,B^{\alpha}_i\}$ with $n\leq 8$, the optimal probability of distinguishing by CLP operations that discriminate the ensemble satisfies: $$p_s \leq {C(B_{in})+3\over 4},$$ where $B_{in} = \sum_{i=0}^{n-1} p_i B^{\alpha}_i\otimes B^{1}_i$. \[cor:nonlock-flags\] ]{}
To exemplify the consequences of the above corollary we first also set $\alpha =1$, that is, we consider distinguishing between maximally non-local boxes. We consider the ensembles with a fixed number $k$ of maximally non-local boxes provided with equal weights $p_i={1\over k}$, and for each of them find $C(B_{in})$. We then take the minimal of these values, obtaining a universal bound $p_s(k)$ on the probability of success of distinguishing $k$ maximally non-local boxes from each other.

To this end, we have used Mathematica 7.0 and the approach of [@Brunneretal2011], but with a much smaller class of deterministic boxes, since we demand a stronger non-signaling condition. For $k=2$ the bound is trivial, as the cost is 1 for any pair. For $k=3$ there are only 6 ensembles for which the cost is less than one (equal ${2\over 3}$ for all 6), which implies the bound ${11\over 12}$ for all of them. For example $$A_3=\{{1\over 3}B_{000},{1\over 3}B_{010},{1\over 3}B_{100}\}$$ has $p_s(A_3)\leq {11\over 12}$. For $k=4$ an exemplary ensemble with the smallest non-locality cost ${5\over 8}$ is $$A_4 = \{{1\over 4}B_{000},{1\over 4}B_{001},{1\over 4}B_{010},{1\over 4}B_{100}\}$$ and there is $p_s(A_4)\leq {29\over 32}$. For $k\geq 5$ the non-locality cost is non-zero for every ensemble that we consider, and hence we have obtained the following general bounds:
$$p_s(5) \leq \ldots,$$
$$p_s(6) \leq \ldots,$$
$$p_s(7) \leq \ldots,$$
$$p_s(8) \leq \ldots.$$
One may also ask whether some bound can be obtained for a box which can be realised physically, i.e. via measurement on a quantum state. We choose the boxes with $\alpha_i = {2 + \sqrt{2} \over 4}=\alpha_q$, which corresponds to a CHSH quantity equal to $2\sqrt{2}$ [@CHSH; @Tsirelson]. Via corollary \[cor:nonlock-flags\], denoting by $p_s^{\alpha_q}(k)$ the upper bound on the probability of success of discriminating between any $k$ boxes from the set $\{B^{\alpha_q}_{rst}\}$, we obtain:
$$p_s^{\alpha_q}(3) \leq 0.975593,$$
$$p_s^{\alpha_q}(4) \leq 0.926778,$$
$$p_s^{\alpha_q}(5) \leq 0.874817,$$
$$p_s^{\alpha_q}(6) \leq 0.833334,$$
$$p_s^{\alpha_q}(7) \leq 0.785715,$$
$$p_s^{\alpha_q}(8) \leq 0.750001,$$
where we rounded the numerical results at the $6$th decimal place. Interestingly, although corollaries \[cor:alpha\] and \[cor:nonlock-flags\] are not directly comparable, as they have different $B_{in}$, in this case corollary \[cor:alpha\] leads to a worse result than the above; in particular $p_s^{\alpha_q}(3)$ in that case is bounded by more than 1.
Proof of theorem \[thm:main\]
------------------------------
Before we prove theorem \[thm:main\], we need to make some necessary observations. We will compare the initial value of the non-local cost of a box with its value after applying the distinguishing operation and a special post-processing. The box $B_{in}$ after distinguishing equals $$B_{out}=\sum_{i,j=0}^{n-1} p_{\Lambda}(i,j) F^A(j)\otimes F^B(j)\otimes B_i^{\beta_i}. \qquad \text{\[eq:out-box\]}$$ To this box we apply a post-processing transformation which is a composition of (i) a local reversible control-$O_j$ operation, that is, an operation applying a certain rotation $O_j$ to $B_i^{\beta_i}$ controlled by the index $j$ of $F(j)^A$, followed by (ii) application of the twirling $\tau$ to the target system and (iii) tracing out of the control system. The role of operation (i) is to use the fact that if Alice and Bob discriminate the boxes $B_i^{\alpha_i}$ well, then by the control operation they obtain on the other system a box with high non-locality cost (the $O_j$ rotations are such that the resulting box has a large fraction of the PR box, just as in the GKRSS method a singlet was obtained). The operations (ii) and (iii) have only technical meaning: they map the resulting box $B_{out}$ into a $2\times 2$ isotropic box $B'_{out}$, so that we are able to calculate the non-locality cost of this box via observation \[obs:iso\], and hence lower bound the non-locality cost of $B_{out}$.
After applying operations (i)-(iii), the output box is of the form $$B'_{out}=\tau\Big(\sum_{i,j=0}^{n-1} p_{\Lambda}(i,j)O_j(B_i^{\beta_i})\Big),$$ where $O_j(B_i^{\beta_i})$ is an operation defined such that for $f(j)= rst$ and $f(i)=r's't'$ $$O_j(B^{\beta_i}_i) \equiv B^{\beta_i}_{(r\oplus r'),(s\oplus s'),(r's\oplus s'r\oplus t\oplus t')},$$ and it is a local operation: some combination of flipping (or not) the inputs $x,y$ and the output $b$ [^2].
We make now some necessary observations.
[For a valid, linear operation $\Lambda$ which maps a non-signaling box $B$ into a non-signaling box $\Lambda(B)$, and transforms $LR_{ns}$ boxes into $LR_{ns}$ boxes, there is: $$C(B) \geq C(\Lambda(B)). \qquad \text{\[obs:particular\]}$$ ]{}
*Proof*. The proof of this fact goes in full analogy to the proof of observation \[obs:monot\]. The only difference is that the operation $\Lambda$ may not possess all properties of a CLP operation for boxes other than $B$ and $\Lambda(B)$.
[The composition of the control-$O_j$ operation, twirling on the target system, and tracing out the control system, applied to a box of the form \[eq:out-box\], does not increase the non-locality cost. \[cor:loc-pres\] ]{}
*Proof*. It is easy to check (see Appendix \[app:proof\] for the full argument) that a box of the form \[eq:out-box\] and the control-$O_j$ operation satisfy the assumptions of observation \[obs:particular\], while twirling and tracing out of a subsystem are just CLP operations; hence the composition of those three cannot increase the cost of non-locality.
From definition of $O_j$ operation, there follows directly an observation:
[The operations $O_j$ satisfy the following relations: $$O_j(B_i^{\beta_i}) = B_{000}^{\beta_i} \quad \text{for } i=j.$$ Moreover for all $0 \leq i,j\leq n-1$ there is $O_j(B_i^{\beta_i})=B_{rst}^{\beta_i}$ for some $rst\in\{000,...,111\}$. \[obs:rotations\] ]{}
We will need now the following observation concerning twirling operation:
[For any $0\leq i \leq n-1$ there is $$\tau(B_i^{\beta_i}) = B_i^{\beta_i} \quad \text{for } f(i)\in\{000,001\}$$ and $$\tau(B_i^{\beta_i})= {1\over 2}(B_{000} + B_{001}) \quad \text{for } f(i)\in\{010,...,111\},$$ where $\tau$ is the twirling operation given in Definition \[def:twirling\]. \[obs:twirling\] ]{}
[*Proof*]{}. Follows directly from definition of the twirling operation and the boxes $B_{rst}$.
We can now pass to the proof of theorem \[thm:main\].

*Proof*. By monotonicity under locality preserving operations (observation \[obs:monot\]), the fact that $B_{out}$ is a result of a CLP map, and corollary \[cor:loc-pres\], we have $$C(B_{in}) \geq C(B_{out}) \geq C(B'_{out}),$$ hence, to prove the thesis it suffices to show that $C(B'_{out}) \geq 4[\sum_i p_{\Lambda}(i,i)(\beta_i +\max_k \beta_k -1) + (1-\max_k \beta_k)] - 3$. Thus by observation \[obs:iso\], it suffices to show that if we decompose $B'_{out}$ as $q B_{000} + (1-q)B_{001}$, the mixing parameter $q$ will satisfy $$q \geq \sum_{i=0}^{n-1} p_{\Lambda}(i,i)(\beta_i +\max_k \beta_k -1) + (1-\max_k \beta_k). \qquad \text{\[eq:aim\]}$$ Recall that $$B'_{out}=\tau\Big(\sum_{i,j=0}^{n-1} p_{\Lambda}(i,j)O_j(B_i^{\beta_i})\Big).$$ This, by linearity of twirling and observation \[obs:rotations\], equals $$B'_{out}=\sum_{i=0}^{n-1}p_{\Lambda}(i,i)B_{000}^{\beta_i} + \sum_{i\neq j}^{n-1} p_{\Lambda}(i,j)\,\tau(B_{j|i}^{\beta_i}),$$ with $j|i \in \{000,...,111\}$ denoting the string $rst$ such that $O_j(B_i^{\beta_i}) = B^{\beta_i}_{rst}$. Now by observation \[obs:twirling\] there is $$B'_{out}=\sum_{i=0}^{n-1}p_{\Lambda}(i,i)B_{000}^{\beta_i} +
\sum_{i\neq j}^{n-1} p_{\Lambda}(i,j)[u_{ij}B_{000} + (1- u_{ij})B_{001}],$$ where $u_{ij} = 1/2$ for $i,j$ such that $j|i\neq 000$ and $j|i\neq 001$, while $u_{ij}= \beta_i$ for all $i$ and $j$ such that $j|i = 000$, and $u_{ij}=(1-\beta_i)$ for all $i$ and $j$ such that $j|i = 001$. Hence the coefficient multiplying $B_{000}$ reads $$q =\sum_{i=0}^{n-1} p_{\Lambda}(i,i)\beta_i + \sum_{i\neq j,\ j|i\neq 000,\ j|i\neq 001} p_{\Lambda}(i,j){1\over 2} +
\sum_{i\neq j,\ j|i = 000}p_{\Lambda}(i,j)\beta_i + \sum_{i\neq j,\ j|i = 001}p_{\Lambda}(i,j)(1-\beta_i).$$ Since we have $\beta_i \in [{1\over 2}, 1]$, there is $(1-\beta_i) \leq \beta_i$ and $(1-\beta_i)\leq {1\over 2}$. Thus there is $$q \geq \sum_{i=0}^{n-1} p_{\Lambda}(i,i)\beta_i + \sum_{i\neq j} p_{\Lambda}(i,j)(1-\beta_i),$$ and further $$q \geq \sum_{i=0}^{n-1} p_{\Lambda}(i,i)\beta_i + \sum_{i\neq j} p_{\Lambda}(i,j)(1-\max_k \beta_k),$$ which is nothing but $$q\geq \sum_{i=0}^{n-1} p_{\Lambda}(i,i)\beta_i + (1-\sum_i p_{\Lambda}(i,i))(1-\max_k \beta_k),$$ which is equivalent to \[eq:aim\], and the assertion follows.
Discriminating between extremal boxes {#sec:distinguish}
=====================================
In this section, we apply the *comparing operations* to distinguish some boxes perfectly, i.e. with $p_s = 1$. More precisely, we show that in the $2\times 2$ case any two extremal boxes are distinguishable by comparing operations. We also prove that in the case of 2 inputs and 2 outputs, whatever the cardinality of the inputs and outputs, any local extremal box is distinguishable from any extremal box by these operations.
[For any two bipartite boxes $X_1 \neq X_2$ compatible with each other, there is a lower bound on the probability of success in distinguishing them, when provided with equal probabilities, via a CLP operation: $$p_s \geq {1\over 2} + {1\over 4}\max_{x,y}\Big[\sum_{a,b} |P_{X_1}(a,b|x,y) - P_{X_2}(a,b|x,y)|\Big].$$ ]{}
*Proof*. The proof is due to the fact that the comparing operations given in eq. \[eq:exCLP\] of the Appendix are CLP. This means that the parties can choose the best measurement $(x,y)$ and then group the results according to the Helstrom optimal measurement [@Hayashi-book], which attains the variational distance between the conditional probability distributions $P_{X_1}(a,b|x,y)$ and $P_{X_2}(a,b|x,y)$.
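The bound is straightforward to evaluate; here is a minimal sketch (illustrative names, not from the paper; boxes stored as dicts $P[(a,b,x,y)]$ as in the earlier snippets):

```python
import itertools

def comparing_bound(P1, P2):
    """Lower bound p_s >= 1/2 + 1/4 * max_{x,y} sum_{a,b} |P1 - P2|."""
    best = 0.0
    for x, y in itertools.product([0, 1], repeat=2):
        dist = sum(abs(P1[(a, b, x, y)] - P2[(a, b, x, y)])
                   for a, b in itertools.product([0, 1], repeat=2))
        best = max(best, dist)
    return 0.5 + 0.25 * best

pr = {(a, b, x, y): 0.5 if a ^ b == x & y else 0.0
      for a, b, x, y in itertools.product([0, 1], repeat=4)}
apr = {(a, b, x, y): 0.5 if a ^ b == (x & y) ^ 1 else 0.0
       for a, b, x, y in itertools.product([0, 1], repeat=4)}
print(comparing_bound(pr, apr))  # 1.0: PR and anti-PR are perfectly distinguishable
```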
We now turn to the special case where we discriminate only between extremal boxes. The intuition is that they should be to some extent distinguishable, and this is indeed the case, as we show below. We first focus on the $2\times 2$ case (extremal boxes are the analogue of pure quantum states). In what follows, by the support of a box $E$ we will mean the following set: $$\mathrm{supp}\, E \equiv \{(a,b,x,y): P_E(a,b|x,y) > 0\}.$$
[Any two $2\times 2$ extremal boxes are perfectly distinguishable by some CLP operation. ]{}
*Proof*.

It is easy to see that local extremal boxes are distinguishable among themselves, since by locality they must have disjoint supports of some of their probability distributions, and measuring this probability distribution determines which local box we have. For the other cases the proof boils down to checking that there always exists a pair of inputs $x$ and $y$ such that the resulting probability distributions for the two extremal boxes have disjoint supports. Hence, upon a *comparing operation* which starts by measuring this pair of inputs, the boxes are perfectly distinguishable.
In order to partially generalize this result to the case of larger dimensions, we now observe a general property of extremal boxes: the support of one cannot be contained in the support of the other, or else the latter would not be extremal.
[For any two extremal $n$-partite boxes $E_1 \neq E_2$ of the same dimensionality, there is $\mbox{supp} E_1 \nsubseteq \mbox{supp} E_2$. \[lem:ext-dis\] ]{}
*Proof*. For clarity, we state the proof for bipartite boxes; that for $n$-partite boxes follows similar lines. Suppose by contradiction that $\mbox{supp} E_1 \subseteq \mbox{supp} E_2$. Then if all probabilities of $E_1$ are less than or equal to the corresponding probabilities of $E_2$ (for every measurement), then $E_1 = E_2$. Indeed, if there were some $(a_0,b_0,x_0,y_0)$ such that $$P_{E_1}(a_0,b_0|x_0,y_0) < P_{E_2}(a_0,b_0|x_0,y_0),$$ then $$\sum_{a,b} P_{E_1}(a,b|x_0,y_0) < \sum_{a,b} P_{E_2}(a,b|x_0,y_0) = 1,$$ which is a contradiction since $\{P_{E_1}(a,b|x_0,y_0)\}$ is a probability distribution. Thus we may safely assume that there exists $(a_0,b_0,x_0,y_0)$ such that $$P_{E_1}(a_0,b_0|x_0,y_0) > P_{E_2}(a_0,b_0|x_0,y_0).$$ Let us denote $T =\{P_{E_2}(a,b|x,y): P_{E_2}(a,b|x,y) \leq P_{E_1}(a,b|x,y)\}$ and $S=\{P_{E_1}(a,b|x,y): P_{E_2}(a,b|x,y) \leq P_{E_1}(a,b|x,y)\}$. By the above considerations we have that $$r_1\equiv \min_{(a,b,x,y)\in \mathrm{supp} E_2} T$$ is well defined, and by definition satisfies $r_1 >0$. Moreover, for $$r_2\equiv \max_{(a,b,x,y)\in \mathrm{supp} E_1} S$$ there is $r_2 > r_1$, as follows from: $r_2 \geq P_{E_1}(a_0,b_0|x_0,y_0) > P_{E_2}(a_0,b_0|x_0,y_0)\geq r_1$. By positivity of $r_1$ and from the above inequality we have that $p\equiv {r_1\over r_2}$ satisfies $0< p < 1$, i.e. it can be interpreted as a non-trivial probability. This however gives that $$\tilde{E}\equiv {1\over 1-p}\big(E_2 - p E_1\big)$$ is a valid box. Indeed, for all $(a,b,x,y)$ there is $$P_{E_1}(a,b|x,y)\, {r_1\over r_2} \leq P_{E_2}(a,b|x,y), \qquad \text{\[eq:order\]}$$ since either $P_{E_1}(a,b|x,y) \leq P_{E_2}(a,b|x,y)$, and then ${r_1 \over r_2}<1$ gives the above inequality, or $P_{E_1}(a,b|x,y) > P_{E_2}(a,b|x,y)$, and then $P_{E_1}(a,b|x,y) \in S$ and $P_{E_2}(a,b|x,y) \in T$. In the latter case, by definition of $S$ there is $P_{E_1}(a,b|x,y){1\over r_2} \leq 1$, while $P_{E_2}(a,b|x,y) \geq r_1$ by definition of $T$, which proves \[eq:order\].

The box $\tilde{E}$ is also non-signaling, as a difference of two (unnormalized) non-signaling boxes. In turn, there is: $$E_2 = p E_1 + (1-p)\tilde{E},$$ hence $E_2$ is a non-trivial mixture of two non-signaling boxes. This is the desired contradiction, since $E_2$ is by assumption an extremal box; hence the assertion follows.
To state the result that follows from the above lemma, we need a definition of conclusive distinguishing:
[We say that a multipartite box $X$ can be conclusively distinguished from a compatible multipartite box $Y$ with nonzero probability if there exists a measurement $x^{0}_1,...,x^{0}_n$ such that there exist(s) outcome(s) $(a^{i}_1,...,a^{i}_n)$ for which $p=\sum_i P_X(a_1^{i},...,a_n^{i}|x^{0}_1,...,x^{0}_n) > 0 $ but $P_Y(a_1^{i},...,a_n^{i}|x^{0}_1,...,x^{0}_n) = 0$ for all $i$. We then say that $X$ is conclusively distinguishable from $Y$ with probability at least $p$. ]{}
From lemma \[lem:ext-dis\] it directly follows that
[For any two extremal multipartite boxes $E_1 \neq E_2$ of the same dimensions, $E_1$ can be conclusively distinguished from $E_2$ with nonzero probability. ]{}
Note that the above statement is symmetric, in the sense that $E_2$ can also be conclusively distinguished from $E_1$ with nonzero probability, but there may be no common measurement that allows for simultaneous conclusive distinguishing of $E_1$ from $E_2$ and of $E_2$ from $E_1$ with nonzero probability.
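A corresponding numerical check, for the $2\times 2$ case, scans all measurements for outcomes supported by $E_1$ but not by $E_2$; a minimal sketch (illustrative names; dict representation as before):

```python
import itertools

def conclusive_probability(E1, E2, tol=1e-12):
    """Best probability with which E1 is conclusively distinguished from E2:
    total E1-probability of outcomes that E2 assigns (numerically) zero."""
    best = 0.0
    for x, y in itertools.product([0, 1], repeat=2):
        p = sum(E1[(a, b, x, y)]
                for a, b in itertools.product([0, 1], repeat=2)
                if E1[(a, b, x, y)] > tol and E2[(a, b, x, y)] <= tol)
        best = max(best, p)
    return best  # > 0 for any two distinct extremal boxes, by the lemma

pr = {(a, b, x, y): 0.5 if a ^ b == x & y else 0.0
      for a, b, x, y in itertools.product([0, 1], repeat=4)}
apr = {(a, b, x, y): 0.5 if a ^ b == (x & y) ^ 1 else 0.0
       for a, b, x, y in itertools.product([0, 1], repeat=4)}
print(conclusive_probability(pr, apr))  # 1.0
```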
In the special case when at least one of the extremal boxes is local, in the case of 2 inputs and 2 outputs, again using lemma \[lem:ext-dis\] we obtain the following fact:

[Any extremal bipartite box with two inputs of arbitrary cardinalities $d_A$ and $d_B$ and two outputs of arbitrary cardinalities $d'_A$ and $d'_B$ is perfectly distinguishable from any extremal local bipartite box of the same dimensions by a CLP operation.]{}

*Proof*. Fix arbitrarily a pair: an extremal box $E$ and a local extremal box $L$. Note that in the bipartite case of 2 inputs and 2 outputs any extremal local box is deterministic, i.e. it is a family of $d_A\times d_B$ distributions, each with a single entry equal to 1 and all others zero. By lemma \[lem:ext-dis\], for some measurement $x_0,y_0$ the support of the distribution $P_L(a,b|x_0,y_0)$ is not contained within the support of $P_E(a,b|x_0,y_0)$, which means in this case that these supports are disjoint. This implies that $L$ is conclusively distinguishable from $E$, and vice versa, for the same measurement with probability 1; hence the probability of success of discrimination between them equals 1.
Conclusions
===========
We have extended the paradigm of distinguishing entangled states to the world of boxes. We have considered distinguishing of isotropic boxes, and provided an easy linear program that gives a bound on the probability of success of discrimination among them by means of completely locality preserving operations which discriminate the ensemble. As a corollary we obtained bounds on the probability of success of discrimination of maximally nonlocal boxes, as well as of isotropic boxes with the same parameter. The bound is obtained in terms of the non-local cost of a special input box: the mixture of classically correlated copies of the boxes that are to be discriminated. The key argument in this result was the monotonicity of the non-local cost under CLP operations. We have also shown an example of a useful CLP operation, the *comparing operation*: local measurement followed by communication of the results, grouping them according to some partition, and tracing out the results. We proved that it can help in discriminating between pairs of extremal boxes in the bipartite case: between any pair in the $2\times 2$ case, and between any local extremal box and any other extremal box in the bipartite case of boxes with 2 inputs and 2 outputs of arbitrary cardinalities. It would be interesting to see whether applying a monotone other than the non-locality cost would give better upper bounds. Note that the comparing operation is not the only one possible for boxes, as e.g. one could apply wirings between the parties [@Allcocketal2009].
Finally, we stress that the presented upper bounds on the probability of success should be considered rather as a demonstration of the analogy between entanglement and non-locality, two resource theories. This is because the bounds seem to be very rough: most probably, when one is given a mixture of more than two maximally non-local boxes, the best strategy is to discriminate perfectly between two of them by means of CLP, e.g. between the PR box and the anti-PR box. This strategy yields a probability of success equal to ${2\over n}$ for $8\geq n\geq 2$, which is far from the obtained bounds. It would then be interesting to find tighter ones, perhaps using a more direct approach that considers the general form of LP operations [@Barrett-GPT], rather than the monotones presented here.
We thank M. Horodecki, R. Horodecki and D. Cavalcanti for discussion and M.T. Quintino and P. Joshi for helpful comments. This research is partially funded by QESSENCE grant and grants BMN nr 538-5300-0637-1 and 538-5300-0975-12.
Proof of corollary \[cor:loc-pres\] {#app:proof}
===================================
We first need to check that the control-$O_j$ operation preserves locality on the special class of local boxes that appear in our considerations. Namely, consider a local box $$\sum_i p_i P_i(a,c|x,u)\otimes P_i(b,d|y,v),$$ where the inputs $x$ and $y$ are unary. It is transformed by the control-$O_j$ operation into $$\sum_i p_i P_i(a,h_a(c)|x, \tilde{h}_a(u))\otimes P_i(b,g_b(d)|y,\tilde{g}_b(v)),$$ where the functions $h,\tilde{h},g,\tilde{g}$ are either identities or bit-flips. Hence the output box is a mixture of local boxes, and we only need to check that $P_i(a,h_a(c)|x, \tilde{h}_a(u))$ and $P_i(b,g_b(d)|y, \tilde{g}_b(v))$ are fully non-signaling. This holds, indeed, as a unary input cannot signal, while $$\sum_c P_i(a,h_a(c)|x,\tilde{h}_a(u_0)) = \sum_c P_i(a,h_a(c)|x,\tilde{h}_a(u_1))$$ for any $a$ and values $u_0$ and $u_1$, as for fixed $a$, $\tilde{h}_a$ just permutes the inputs, while $h_a$ changes the order of summation, keeping the range of $c$; hence the thesis follows from the non-signaling of the box $P_i(a,c|x,u)$.
The next step is to show that the control-$O_j$ operation transforms non-signaling boxes into non-signaling ones. There are 5 inequivalent ways to distinguish a subsystem out of a box of the form $$\sum_{ij} p(i,j) F^{(j)}(e|s)\otimes F^{(j)}(f|t)\otimes P_i(c,d |u,v),$$ where the $P_i(c,d|u,v)$ are non-signaling boxes. After applying the control-$O_j$ operation, there is: $$\sum_{ij} p(i,j) F^{(j)}(e|s)\otimes F^{(j)}(f|t)\otimes P_i(h_j(c),g_j(d) |\tilde{h}_j(u),\tilde{g}_j(v)).$$ We show just one example of the full non-signaling condition, as the others follow similar lines. Namely, we show now that the inputs $s$ and $u$ do not signal to systems $v$ and $t$. Indeed, this condition reads
$$\forall_{s_0,u_0,u_1}\ \forall_{v_0,t_0,d_0,f_0}\quad \sum_e\sum_c\sum_{ij}p(i,j)\,\delta_{j,e}\,\delta_{j,f_0}\, P_i(h_j(c),g_j(d_0)|\tilde{h}_j(u_0),\tilde{g}_j(v_0)) = \text{LHS}(u_1),$$

where $\text{LHS}(u_1)$ denotes the expression on the left-hand side of the equality with $u_1$ in place of $u_0$. This happens iff

$$\forall_{u_0,u_1}\ \forall_{v_0,d_0,f_0}\quad \sum_c\sum_{i}p(i,f_0)P_i(h_{f_0}(c),g_{f_0}(d_0)|\tilde{h}_{f_0}(u_0),\tilde{g}_{f_0}(v_0)) = \text{LHS}(u_1).$$
But we observe that for all $i$ there is $$\sum_c P_i(h_{f_0}(c),g_{f_0}(d_0)|\tilde{h}_{f_0}(u_0),\tilde{g}_{f_0}(v_0)) = \text{LHS}(u_1),$$ which follows from the non-signaling of the boxes $P_i(c,d|u,v)$ for each $i$, and from the fact that the functions $h,\tilde{h},g,\tilde{g}$ are only bit-flips.
Finally, we observe that the control-$O_j$ operation is linear. It is also easy to see that the partial trace of a subsystem and the twirling operation are CLP operations. This ends the proof of corollary \[cor:loc-pres\].
Comparing operations are CLP {#subsec:comparing-ops}
============================
In this section we show that the *comparing operations* are valid CLP operations. Such an operation transforms a box $P(a,b|x,y)$ into $\Lambda(P)$ given below, defined on systems $CDEF$, where to fix the considerations we assume that the measurement $x=i,y=j$ has been performed on the initial box: $$\Lambda(P) := \sum_k \sum_{(a,b)\in I_k} P(a,b|x=i,y=j)\; P^{(k)}_E(e|s)\otimes P^{(k)}_F(f|t). \qquad \text{\[eq:exCLP\]}$$ The family $\{I_k\}_k$ is a partition of the set of all pairs of outputs $(a,b)$ into disjoint sets of pairs, specific to the given comparing operation, and $P^{(k)}_E(e|s)$ and $P^{(k)}_F(f|t)$ are the boxes with unary inputs $s=0,t=0$ and output probability distributions $\delta_{k,e}$ and $\delta_{k,f}$ respectively. In what follows we will write $x_i$, $y_j$, $s_0$, $t_0$ instead of $x=i$, $y=j$, $s=0$, $t=0$ respectively. Note that one can obtain $\Lambda(P)$ via an exchanging-results control operation followed by tracing out the results.
Verifying CLP conditions
------------------------
We argue now that operation \[eq:exCLP\] is CLP. Note that it is enough to show that $\Lambda \otimes I$ is LP, as from this we immediately have that $\Lambda$ itself is LP. Indeed, suppose it is not the case, that is, $\Lambda_A$ is not LP on some box $P(a|x)$. We have then $\Lambda_A(P(a|x)) = \Lambda_A\otimes I_B(P(a|x)\otimes P(b|y))$, where $P(b|y)$ is a trivial box on system B, with 1 input and 1 output occurring with probability 1, because $P(a|x)\otimes P(b|y) = P(a|x)$ in this case. This however implies that $\Lambda_A$ is not CLP, which is the desired contradiction.
Consider then a box $$M = P(a,b,\bar{c},\bar{d}|x,y,\bar{u},\bar{v})$$ on systems $ABCD$ with $C= C_1,...,C_n$ and $D=D_1,...,D_n$. We now apply $\Lambda_{AB}\otimes I_{CD}$. The resulting box is on systems $CDEF$: $$\begin{gathered}
\Lambda\otimes I (M) = \nonumber \\
\sum_k \sum_{(a,b)\in I_k} P(a,b,\bar{c},\bar{d}|x_i,y_j,\bar{u},\bar{v})\otimes P^{(k)}_E(e|s)\otimes P^{(k)}_F(f|t)\end{gathered}$$
We now have to prove the list of features (i)-(iv) given in definition \[def:LP\]. To prove validity of $\Lambda \otimes I$ it is enough to notice that fixing $\bar{u}_0,\bar{v}_0$, $s_0$ and $t_0$ and summing over outputs we get $$\sum_{\bar{c},\bar{d},e,f} \sum_k \sum_{(a,b)\in I_k} P(a,b,\bar{c},\bar{d}|x_i,y_j,\bar{u}_0,\bar{v}_0)\, P^{(k)}_E(e|s_0)P^{(k)}_F(f|t_0),$$ which equals $$\sum_{\bar{c},\bar{d}} \sum_k \sum_{(a,b)\in I_k} P(a,b,\bar{c},\bar{d}|x_i,y_j,\bar{u}_0,\bar{v}_0),$$ which is the desired 1, since the initial box was valid for input $x_i,y_j,\bar{u}_0,\bar{v}_0,s_0,t_0$.
To prove linearity, we observe that if we take a mixture of boxes $M =\sum_{l} \alpha_l P_l(a,b,\bar{c},\bar{d}|x,y,\bar{u},\bar{v}) \equiv \sum_l \alpha_l M_l$, then the result $\Lambda\otimes I(M)$ would be $$\sum_k \sum_{(a,b)\in I_k} \Big[\sum_{l}\alpha_l P_l(a,b,\bar{c},\bar{d}|x_i,y_j,\bar{u},\bar{v})\Big]\otimes P^{(k)}_E(e|s)\otimes P^{(k)}_F(f|t),$$ which is the same as $$\begin{gathered}
\sum_{l}\alpha_l \Lambda\otimes I(M_l) = \nonumber \\
\sum_{l}\alpha_l \Big[\sum_k \sum_{(a,b)\in I_k} P_l(a,b,\bar{c},\bar{d}|x_i,y_j,\bar{u},\bar{v})\otimes P^{(k)}_E(e|s)\otimes P^{(k)}_F(f|t)\Big],\end{gathered}$$ since we can change the order of summation.
The argument that the operation $\Lambda \otimes I$ preserves non-signaling is more demanding. To show full non-signaling we need to prove two conditions: $$C^ID^J\not\rightarrow EC^{N-I}FD^{N-J}, \qquad \text{\[eq:first-ns\]}$$
$$EC^ID^J\not\rightarrow C^{N-I}FD^{N-J}, \qquad \text{\[eq:sec-ns\]}$$ where $I,J\subseteq \{1,...,n\}\equiv N$ and we do not consider the case when $I$ and $J$ are empty at the same time. (Note that the cases $$C^IFD^J\not\rightarrow EC^{N-I}D^{N-J},$$
$$EC^IFD^J \not\rightarrow C^{N-I}D^{N-J}$$ are covered by the first two above). We will show \[eq:first-ns\] only, as \[eq:sec-ns\] follows from analogous considerations. In what follows, for any multivariable named $\bar{w}\equiv (w^1,...,w^n)$, by $w^I$ we mean the variables with indices indicated by a set of indices $I\subseteq N=\{1,...,n\}$. By $\bar{w}_0^I$ we mean the variables $w^i$ fixed to some values $w^i_0$ for all $i\in I$, and by $\bar{w}_0{\bs}w^I$ we mean that for all $i \notin I$ the variables $w^i$ are fixed to some values $w^i_0$, but for $i\in I$ they are not fixed. Note that in what follows we never put $s$ and $t$ under a universal quantifier, since they take a single value; we only fix them to $s_0$ and $t_0$ properly. To satisfy the non-signaling condition which we now focus on, there should be:
$$\begin{gathered}
\forall_{e_0,f_0,\bar{c}_0{\bs}c^I,\bar{d}_0{\bs}d^I,\bar{u}_0{\bs}u^I,\bar{v}_0{\bs}v^J}\quad \forall_{\bar{u}_0^{I},\bar{u}_1^{I},\bar{v}_1^{J},\bar{v}_1^{J}} \\
\sum_{c^I,d^J} \sum_k \sum_{(a,b)\in I_k} P(a,b,\bar{c}_0{\bs}c^I,c^I,\bar{d}_0{\bs}d^J,d^J|x_i,y_j,\bar{u}_0{\bs}u^I,\bar{u}_0^I,\bar{v}_0{\bs}v^J,\bar{v}_0^J) P^{(k)}_E(e_0|s_0)P^{(k)}_F(f_0|t_0) = \text{LHS}(\bar{u}_1^{I},\bar{v}_1^{J})\end{gathered}$$
where $\text{LHS}(\bar{u}_1^{I},\bar{v}_1^{J})$ denotes the left-hand side of the equation with $\bar{u}_1^{I}$ in place of $\bar{u}_0^{I}$ and $\bar{v}_1^{J}$ in place of $\bar{v}_0^{J}$. Due to the definition of $P^{(k)}_F(f|t_0)$ and $P^{(k)}_E(e|s_0)$ we have that the LHS of the above equation equals 0 if $e_0\neq f_0$, and so equals the RHS then, while for $e_0=f_0$ the above set of equations reduces to:
\forall_{e_0,\bar{c}_0{\bs}c^I,\bar{d}_0{\bs}d^I,\bar{u}_0{\bs}u^I,\bar{v}_0{\bs}v^J}\quad \forall_{\bar{u}_0^{I},\bar{u}_1^{I},\bar{v}_1^{J},\bar{v}_1^{J}} \\
\sum_{c^I,d^J} \sum_{(a,b)\in I_{e_0}} P(a,b,\bar{c}_0{\bs}c^I,c^I,\bar{d}_0{\bs}d^J,d^J|x_i,y_j,\bar{u}_0{\bs}u^I,\bar{u}_0^I,\bar{v}_0{\bs}v^J,\bar{v}_0^J)=LHS(\bar{u}_1^{I},\bar{v}_1^{J})\end{gathered}$$ which happens for all choice of variables that we can vary over, since for any fixed $(a_0,b_0) \in I_{e_0}$ there is \_[c\^I,d\^J]{} P(a\_0,b\_0,|[c]{}\_0c\^I,c\^I,|[d]{}\_0d\^J,d\^J|x\_i,y\_j,|[u]{}\_0u\^I,|[u]{}\_0\^I,|[v]{}\_0v\^J,|[v]{}\_0\^J) P\^[(k)]{}\_E(e\_0|s\_0)P\^[(k)]{}\_F(f\_0|t\_0) = (|[u]{}\_1\^[I]{},|[v]{}\_1\^[J]{}) due to non-signaling $C^ID^J\not\hspace{-1.3mm}{\rightarrow}AC^{N-I}BD^{N-J}$ of the original box $M$. To prove the converse non-signaling condition we need to show the following equalities: $$\begin{gathered}
\forall_{\bar{c}_0^{I},\bar{d}_0^{J},\bar{u}_0^{I},\bar{v}_0^{J}} \quad
\forall_{\bar{u}_0{\bs}u^{I},\bar{u}_1{\bs}u^{I},\bar{v}_0{\bs}v^{J},\bar{v}_1{\bs}v^{J}} \\
\sum_{e,f,\bar{c}{\bs}c^I,\bar{d}{\bs}d^I} \sum_k \sum_{(a,b)\in I_k} P(a,b,\bar{c}{\bs}c^I,c_0^I,\bar{d}{\bs}d^J,d_0^J|x_i,y_j,\bar{u}_0{\bs}u^I,u_0^I,\bar{v}_0{\bs}v^J,\bar{v}_0^J) P^{(k)}_E(e|s_0)P^{(k)}_F(f|t_0) = \text{LHS}(\bar{u}_1^{I}{\bs}u^{I},\bar{v}_1^{J}{\bs}v^J)\end{gathered}$$ again, we notice, that we need to prove $$\begin{gathered}
\forall_{\bar{c}_0^{I},\bar{d}_0^{J},\bar{u}_0^{I},\bar{v}_0^{J}} \quad
\forall_{\bar{u}_0{\bs}u^{I},\bar{u}_1{\bs}u^{I},\bar{v}_0{\bs}v^{J},\bar{v}_1{\bs}v^{J}} \\
\sum_{\bar{c}{\bs}c^I,\bar{d}{\bs}d^J} \sum_k \sum_{(a,b)\in I_k} P(a,b,\bar{c}{\bs}c^I,c_0^I,\bar{d}{\bs}d^J,d_0^J|x_i,y_j,\bar{u}_0{\bs}u^I,\bar{u}_0^I,\bar{v}_0{\bs}v^J,\bar{v}_0^J) = \text{LHS}(\bar{u}_1{\bs}u^{I},\bar{v}_1{\bs}v^J)\end{gathered}$$ which is true, as it follows from the non-signaling condition $AC^{N-I}BD^{N-J}\not\hspace{-1.3mm}{\rightarrow}C^ID^J$ of the original box $M$. Thus we have proved (\ref{eq:first-ns}).
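For readers who want a concrete feel for what such non-signaling constraints say, here is a small toy check in Python (an illustration of the simplest two-party case only, using the standard PR box; it is not the multipartite construction of the proof):

```python
# Toy illustration: verify the two bipartite non-signaling conditions
# for the PR box P(a,b|x,y) = 1/2 if a XOR b = x AND y, else 0.

def pr_box(a, b, x, y):
    return 0.5 if (a ^ b) == (x & y) else 0.0

# Alice's marginal sum_b P(a,b|x,y) must not depend on y ...
for x in (0, 1):
    for a in (0, 1):
        m = [sum(pr_box(a, b, x, y) for b in (0, 1)) for y in (0, 1)]
        assert m[0] == m[1]

# ... and Bob's marginal sum_a P(a,b|x,y) must not depend on x.
for y in (0, 1):
    for b in (0, 1):
        m = [sum(pr_box(a, b, x, y) for a in (0, 1)) for x in (0, 1)]
        assert m[0] == m[1]

print("PR box satisfies both bipartite non-signaling conditions")
```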
Finally we need to prove that $\Lambda\ot I$ preserves locality. To this end consider a local box $$\sum_{\lambda} p(\lambda) P^{\lambda}(a\bar{c}|x\bar{u})P^{\lambda}(b\bar{d}|y\bar{v}).$$ It is transformed into $$\sum_{\lambda} \sum_k \sum_{(a,b)\in I_k} p(\lambda) [P^{(k)}_E(e|s)\otimes P^{\lambda}(a\bar{c}|x\bar{u})][P^{(k)}_F(f|t)\otimes P^{\lambda}(b\bar{d}|y\bar{v})]. \label{eq:last-form}$$ By the definition of locality, there are well defined normalization factors: $$\begin{gathered}
N_{EC}^{(\lambda,a)} = \sum_{\bar{c}} P^{\lambda}(a,\bar{c}|x=i,\bar{u}_0)\\
N_{FD}^{(\lambda,b)} = \sum_{\bar{d}} P^{\lambda}(b,\bar{d}|y=j,\bar{v}_0)\end{gathered}$$ so that our box in (\ref{eq:last-form}) looks like $$\sum_{\lambda,k,(a,b)\in I_k} p(\lambda)N_{EC}^{(\lambda,a)}N_{FD}^{(\lambda,b)} \left[P^{(k)}_E(e|s)\otimes {1\over N_{EC}^{(\lambda,a)}}P^{\lambda}(a\bar{c}|x\bar{u})\right]\left[P^{(k)}_F(f|t)\otimes {1\over N_{FD}^{(\lambda,b)}}P^{\lambda}(b\bar{d}|y\bar{v})\right]. \label{eq:box}$$
To see that this is a valid $LR_{ns}$ box, consider a random variable $\lambda'$ with a distribution defined for all $(\lambda,k,a,b)$ as $$\lambda'(\lambda,k,a,b) = \begin{cases} p(\lambda)N_{EC}^{(\lambda,a)}N_{FD}^{(\lambda,b)} & \text{if } (a,b)\in I_k\\ 0 & \text{otherwise.}\end{cases}$$
Note that this is a well defined distribution of a random variable over the Cartesian product of the ranges of $\lambda$, $k$, $a$ and $b$. Indeed, $$\sum_{\lambda,k,a,b} \lambda'(\lambda,k,a,b)= \sum_{\lambda} \sum_{k}\sum_{(a,b)\in I_k} p(\lambda)N_{EC}^{(\lambda,a)}N_{FD}^{(\lambda,b)}$$ which is nothing but $$\sum_{\lambda,a,b} p(\lambda) \sum_{\bar{c},\bar{d}}P^{\lambda}(a\bar{c}|x_i,\bar{u}_0)P^{\lambda}(b\bar{d}|y_j,\bar{v}_0),$$ and equals 1, since it is the distribution of outcomes of the measurement $x_i,y_j,\bar{u}_0\bar{v}_0$ on the original box $M$. Now we can rewrite the box (\ref{eq:box}) as $$\sum_{\lambda,k,a,b} \lambda'(\lambda,k,a,b) [X_{EC}^{(\lambda,k,a,b)}\otimes Y_{FD}^{(\lambda,k,a,b)}]$$ where $X_{EC}^{(\lambda,k,a,b)}=P^{(k)}_E(e|s)\otimes {1\over N_{EC}^{(\lambda,a)}} P^{\lambda}(a\bar{c}|x\bar{u})$ and $Y_{FD}^{(\lambda,k,a,b)}= P^{(k)}_F(f|t)\otimes {1\over N_{FD}^{(\lambda,b)}}P^{\lambda}(b\bar{d}|y\bar{v})$ are legitimate boxes on Alice's and Bob's systems respectively. It is also easy to see that the boxes $[X_{EC}^{(\lambda,k,a,b)}]$ and $[Y_{FD}^{(\lambda,k,a,b)}]$ are fully non-signaling, as the original box was $LR_{ns}$. Hence we have proved that the output of $\Lambda\ot I$ acting on an $LR_{ns}$ box is an $LR_{ns}$ box.
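The normalization argument for $\lambda'$ can also be sanity-checked numerically. The following sketch is illustrative only; random stochastic vectors stand in for $p(\lambda)$, $N_{EC}^{(\lambda,a)}$ and $N_{FD}^{(\lambda,b)}$, and it confirms that $\lambda'$ sums to 1 whenever the sets $I_k$ partition the outcome pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: binary a and b, hidden variable lambda with 3 values.
p_lam = rng.dirichlet(np.ones(3))         # p(lambda), sums to 1
N_EC = rng.dirichlet(np.ones(2), size=3)  # row lam: (N_EC^(lam,0), N_EC^(lam,1))
N_FD = rng.dirichlet(np.ones(2), size=3)  # rows sum to 1, as marginals do

# I_k: a partition of the four outcome pairs (a,b), here grouped by a XOR b.
I = {0: [(0, 0), (1, 1)], 1: [(0, 1), (1, 0)]}

total = sum(p_lam[lam] * N_EC[lam, a] * N_FD[lam, b]
            for lam in range(3) for k in I for (a, b) in I[k])
print(np.isclose(total, 1.0))  # True: lambda' is a normalized distribution
```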
[^1]: Obtaining flag-boxes uncorrelated with the ensemble is easy if we allow shared randomness, but even with this resource it is not clear whether demanding that the output take the form of the same flag-boxes is too rigorous.
[^2]: More specifically, $O_j(B_{rst}^{\alpha_i})$ for $f(j)=000$ acts as identity, for $f(j)=001$ it is a b-flip (negation of output), for $f(j)=010$ is x-flip (negation of input x), for $f(j)=011$ is x-flip and b-flip, for $f(j)=100$ is y-flip (negation of input y) and for $f(j)=101$ is y-flip with b-flip. Finally for $f(j)=110$ it is both x-flip and y-flip, while for $f(j)=111$ it is both x and y-flip with b-flip.
Q:
1D Hermite Cubic Splines with tangents of zero - how to make it look smoother
I am given 3 values y0, y1, y2. They are supposed to be evenly spaced, say x0 = -0.5, x1 = 0.5, x2 = 1.5. And to be able to draw a spline through all of them, the derivatives at all points are set to dy/dx = 0.
Now the result of rendering two Catmull-Rom splines (which is done via a GLSL fragment shader, including a nonlinear transformation) looks pretty rigid. I.e. where the curve bends, it does so smoothly, but the bending area is very small. Zooming out makes the bends look too sharp.
I wanted to switch to TCB splines (a.k.a. Kochanek-Bartels splines), as those provide a tension parameter, so I hoped I could smooth the look. But I realized that TCB parameters applied to a zero tangent won't do any good, since tension merely scales the tangent and a scaled zero is still zero.
Any ideas how I could get a smoother looking curve?
A:
Obviously nobody had a good answer, but as it's my job, I found a solution: The points are evenly spaced, and the idea is to make transitions smoother. Now it's given that the tangents are zero at all given points, so the strongest curvature y''(x) most likely occurs close to the points. This means we'd like to stretch these "areas around the points".
Currently we use Catmull-Rom splines, sectioned between the points. That makes y(x) => y(t), with t(x) = x - x0.
This t(x) needs to be stretched around the 0- and the 1-areas. So the cosine function jumped into my mind:
Replacing t(x) = x - x0 with t(x) = 0.5 * (1.0 - cos(PI * (x - x0))) did the job for me.
Short explanation:
cosine in the range [0,PI] runs smoothly from 1 to -1.
we want to run from 0 to 1, though
so flip it: 1-cos() -> now it runs from 0 to 2
halve that: 0.5*xxx -> now it runs from 0 to 1
Another problem was to find the correct tangents. Normally, calculating such a spline using matrix-vector math, you simply differentiate your t-vector to get the tangents, so differentiating [t³ t² t 1] yields [3t² 2t 1 0]. But here t is not a simple linear function of x, so the chain rule applies. Using this I found the right derivative vector:
| 0.375*PI*sin(PI*t)(1-cos(PI*t))² |
| 0.500*PI*sin(PI*t)(1-cos(PI*t)) |
| 0.500*PI*sin(PI*t) |
| 0 |
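In case a runnable reference helps, here is a minimal Python/NumPy sketch of the same idea (the function names are mine, not from the original shader; it assumes unit knot spacing and zero tangents, as in the question):

```python
import numpy as np

def warped_hermite(y0, y1, x):
    """Hermite segment from y0 to y1 for x in [0, 1] with zero end tangents,
    evaluated with the warped parameter t = 0.5 * (1 - cos(pi * x))."""
    t = 0.5 * (1.0 - np.cos(np.pi * x))
    h00 = 2.0 * t**3 - 3.0 * t**2 + 1.0   # basis weight of y0
    h01 = -2.0 * t**3 + 3.0 * t**2        # basis weight of y1
    return h00 * y0 + h01 * y1

def warped_hermite_slope(y0, y1, x):
    """dy/dx via the chain rule (dH/dt) * (dt/dx); this is what the derived
    vector above encodes, with dt/dx = 0.5 * pi * sin(pi * x)."""
    t = 0.5 * (1.0 - np.cos(np.pi * x))
    dt_dx = 0.5 * np.pi * np.sin(np.pi * x)
    dh_dt = (6.0 * t**2 - 6.0 * t) * (y0 - y1)
    return dh_dt * dt_dx

# Example: one segment of the question's three evenly spaced points.
xs = np.linspace(0.0, 1.0, 5)
print(warped_hermite(0.2, 0.8, xs))        # eases out of 0.2 and into 0.8
print(warped_hermite_slope(0.2, 0.8, xs))  # exactly 0 at both knots
```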
Birthday Dress I made for my little girl.
So. My daughter told me she wanted a princess dress for her 4th birthday. This is what I came up with.
I'm very new to sewing, so I had no idea how to do anything and I kind of just figured everything out along the way.
The first thing I did was take her measurements and draft a corset pattern for the top part of the dress; I did 7 panels, 2 layers thick, with the taffeta.
(if you don't know how to make a pattern, I can try to explain it the best I can over a message?)
I cut the pieces out and got going; the middle part was folded so both sides were equal, and the panels were cut together to ensure they were all equal.
After sewing them all together, I stitched the tops together and cut off the excess thread and fabric, leaving about 1/2 inch. I put eyelets in the back like a corset, adding a modesty panel hooked onto one of the inside seams with extra fabric (of course folding the fabric under so it doesn't fray and stitching along there), and then sewed bias tape along the top and bottom of the corset. After doing that I set my iron on low, followed the glamor glitz iron-on crystal hot-fix instructions, and added the crystals to the top part of the dress. With the extra bias tape I had, I made a bow by looping both ends, bringing the excess through the middle, and hot gluing it together and to the top of the corset part. I also put crystals on there to give it a little more dazzle.
The skirt was a pain. I took the tulle, single stitched? the entire 10 yards of both the pink and fuchsia colors, and bunched them together by taking the top thread and pulling it very carefully, then closed it with a small stitch. I then made a slip? that goes under the tulle with the taffeta, layered the taffeta, elastic band and tulle in that order, and sewed along while pulling the elastic to ensure it all stayed together and bunched up. The taffeta made it so you couldn't see the band or the excess tulle, and it tucked into the dress... if that makes sense.
So if you're still with me, the dress is actually two parts. The top part is a corset with no boning, and the bottom is an elastic/taffeta/tulle combo. To make it look like the two pieces were one, I cut a little tulle off, saved it, and made a belt-like piece that tied into a bow in the back.
Next time I will take pictures along the way. I work full time at a nursing home and overtime is killer, so I had to do this project over a few weeks in my spare time. I did manage to make it in time for her birthday! She was very excited and so was I.
With me having absolutely no sewing experience except what I've learned over the years, I was completely pleased with how it turned out.
If you were to poll cat owners, many would say that ginger tabby cats are truly special and unique. I mean, Marmalade is one and he’s pretty darn cool, right?
One ginger tabby cat in Turkey named Tombi took it upon himself to take up residence where he felt he was needed. And it was a welcome surprise to a classroom of happy little third graders. They didn't realize that this cute ball of ginger fur was a blessing in disguise, one that could transform their lives for the better.
We know that there is obviously a difference between feral cats and stray cats, but even most strays are somewhat suspicious and can be jumpy around human strangers. But Tombi? He waltzed right into a classroom at a public elementary school in Izmir, Turkey and declared it his new home.
This is a ginger tabby who knew how to transform the lives of many, and in return, he’d get the loving furever home he deserved. <3
The classroom teacher, Özlem Pınar Ivaşcu, said that when he came into the classroom, the students were immediately interested in him and pleasantly surprised: “The children liked him very much.”
And just as cats do, little curious Tombi took a liking to the students, and their classroom. Little by little, he transformed this once strange place into his happy home. And, of course, the students didn’t seem to mind one bit.
Tombi and his classroom of delighted tiny cat servants were thrilled to be together. But it seems that the school administration wasn’t as pleased with the cat’s presence. They claimed that the cat could be a possible health risk, and insisted that poor little Tombi should be relocated to a more suitable home.
“We found a house for Tombi and he stayed there for three days, but he was not happy. He stopped eating,” says Özlem. “So I took him home, but here too he was not happy.”
And not only was Tombi unhappy, but the students were heartbroken and lonely without him. You see, far from being a distraction in the classroom, Tombi actually boosted morale. Two students who once had trouble focusing were at ease with Tombi's calm presence among them. And the other students all loved having him just the same.
“When he first joined our class, I noticed a change in the children’s behavior. They were more careful and stopped running around in the classroom. They couldn’t wait to come back to school the next morning (which is remarkable for some 9-year-old children).”
Sad over the loss of their beloved feline friend, Tombi's classmates sent letters and drawings when they learned that he was not doing well in their absence.
But Özlem is a dedicated teacher with a big heart, so she took it upon herself to ensure that Tombi was reunited with the kids who loved him and missed him dearly. Quickly, her story spread like wildfire on social media, and shortly after it was picked up by a local news station.
After all the buzz and commotion, the school administration decided to back down and allow the ginger tabby to come back “home” to the school where he knew he belonged:
“The children were very happy to have Tombi back… And he is happy again to be with the kids.”
Did this story bring a smile to your face? Share it with someone else who loves cats, just like you! The world needs more happy stories, and this is certainly one of them. 🙂
All Images Courtesy of Özlem Pınar Ivaşcu
REMEMBER: ADOPT, DON’T SHOP; FOSTERING SAVES LIVES & SPAY AND NEUTER!
Related Story: Beloved Cat Attends School For Almost 2 Decades; Fondly Remembered As He Graduates To The Rainbow Bridge
Related Story: Fluffy University Cat Earns His Very Own School ID
Q:
How to show that x, y and z are equal?
I would really appreciate help with this system of equations:
$$
\left\{
\begin{array}{c}
x^2 +3y=-2 \\
y^2 +3z=-2 \\
z^2+3x=-2
\end{array}
\right.
$$
It seems quite obvious that $x, y$ and $z$ are all equal and then the equation can be solved easily, but I don't know how to show that they are equal mathematically.
A:
It's clear that we must have $x,y,z<0$, and that by the symmetry of the equations we can take either $x \le y \le z$ or $x \le z \le y$.
Suppose $x \le y \le z$ and $x<z$. Then
$$x^2+3y=z^2+3x\ \Rightarrow\ x^2-z^2=3(x-y)$$
But $x^2-z^2 > 0$ while $3(x-y) \le 0$, so the equation asserts that a positive quantity equals a non-positive one... clearly this can't happen, so we must have $x=z$, and hence $x=y=z$.
A similar argument works for $x \le z \le y$ by considering $x^2+3y=y^2+3z$.
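For completeness (this last step is the "solved easily" part of the question): once $x=y=z$ is established, each equation becomes $x^2+3x+2=0$, i.e. $(x+1)(x+2)=0$, so the solutions are $x=y=z=-1$ and $x=y=z=-2$.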
Before winter even arrives, snowfall hits the Midwest and northeastern United States
Although the official beginning of winter in America is still more than a month away, the snow season has already opened in some regions of the country. According to weather forecasters, a winter storm is moving across the Midwest and the northeastern part of the United States that will bring “moderate snowfall”.
Photo: Depositphotos
The National Weather Service's Weather Prediction Center said the storm system will bring the first “moderate snowfall” to parts of the eastern Great Lakes, the northern Mid-Atlantic and New England through Friday, Fox News reports.
According to senior Fox News meteorologist Janice Dean, a series of weak storms in the Northern Plains brought snow and rain to these regions this week, setting the stage for more winter weather.
“A little cold air is moving in this direction,” Dean said on “Fox & Friends” on Tuesday. “Light snow is in the forecast. Along the coast snow is not expected, but we will see moderate snowfall in the Great Lakes and in the interior Northeast.”
The WPC said that two “strong” cold fronts will pass through the central and eastern United States between Thursday and early next week, leaving temperatures everywhere significantly below average.
On Thursday, November 7, when the cold front reaches the Northeast, rain and snow are expected in the region.
“On Friday, as the front exits the New England coast, a low-pressure system can develop and strengthen, which will lead to a small amount of snow, light to moderate,” says the WPC. “The best chance of snow accumulation greater than 4 inches (10 centimeters) currently extends from northern Pennsylvania to Maine, but there remains uncertainty about the exact amounts and locations.”
According to AccuWeather, snow cover in parts of Pennsylvania and upstate New York can range from 1 to 3 inches (2.5 to 7.6 cm) or 3 to 6 inches (7.6 to 15.2 cm), and the totals in New England could be higher. Cold rain will probably move from Philadelphia to New York and then later into Boston as the storm heads north.
Wet snow is also possible as the first cold front passes Friday night, with a second front crossing the Great Lakes on Sunday and Monday, according to forecasters. Once the storm leaves, cold temperatures are expected to remain.
Below-freezing temperatures on Saturday will cover the territory from the southern to the central parts of the Mississippi Valley, and single-digit lows are possible in the Northern Plains on Monday.
“Next week there are forecasts of more cold air with a greater chance of snow,” said Dean.
Although winter officially begins in America on December 21, government weather forecasters reported that there may be some “large fluctuations” due to variable weather conditions. In its forecast for the winter, the National Oceanic and Atmospheric Administration said that most of the US can expect higher-than-average temperatures, with a wetter-than-normal period from December to February in the northern parts of the country.
But forecasters acknowledged that the global climate patterns that affect winter weather conditions are weak this year, making forecasts difficult.
The Farmers' Almanac, whose long-term forecast is “based on a mathematical and astronomical formula developed in 1818,” says that “extremely cold winter conditions” will take place in areas east of the Rocky Mountains up to the Appalachian Mountains; the coldest outbreak of the season will come in the last week of January and continue into early February.
The almanac reported that the upcoming winter will be “filled with lots of ups and downs on the thermometer.”