Intellectual disability is a term used when a person has certain limitations in mental functioning and in skills such as communicating, taking care of himself or herself, and interacting socially. These limitations will cause a child to learn and develop more slowly than a typical child.
Children with intellectual disabilities (sometimes called cognitive disabilities or mental retardation) may take longer to learn to speak, walk, and take care of their personal needs such as dressing or eating. They are likely to have trouble learning in school. They will learn, but it will take them longer. There may be some things they cannot learn.
(Editor’s Note, February 2011: “Intellectual Disability” is a new term in IDEA. Until October 2010, the law used the term “mental retardation.” In October 2010, Rosa’s Law was signed into law by President Obama. Rosa’s Law changed the term to be used in future to “intellectual disability.” The definition of the term itself did not change and is what has just been shown above.)
This information is from the National Dissemination Center for Children with Disabilities (NICHCY).
It’s very helpful to read more about intellectual disabilities. Following are links to additional information:
Center for Parent Information and Resources
The Center for Parent Information and Resources shares family-friendly information and research-based materials on a variety of topics including intellectual disabilities.
Utah Developmental Disabilities Council (UDDC)
The UDDC’s mission is to be the state’s leading source of critical, innovative and progressive information, advocacy, leadership and collaboration to enhance the lives of individuals with developmental disabilities.
Division of Services for People with Disabilities (DSPD)
Utah’s DSPD promotes opportunities and provides support for persons with disabilities to lead self-determined lives. They oversee home and community-based services, supported employment services and support for people with disabilities and their families.
(801) 272-1051 or Toll-free (800) 468-1160
The Family-to-Family Network is a network of local volunteer leaders and groups that provides education and support to families who have a member with a disability. The Network’s particular area of experience is in providing supports to families who are on the waiting list or in services from the Utah Division of Services for People with Disabilities.
The Utah Down Syndrome Foundation (UDSF) continues today to link families together, to share common challenges, and to educate parents and the public in understanding and appreciating the needs of individuals with Down syndrome.
American Association on Intellectual and Developmental Disabilities (AAIDD)
AAIDD promotes progressive policies, sound research, effective practices, and universal human rights for people with intellectual and developmental disabilities.
Formerly the Mental Retardation Association of Utah, we now represent Mutual Respect, Advocacy and Understanding for individuals with intellectual disabilities in the State of Utah. | https://utahparentcenter.org/disabilities/id/ |
When Can Words Lead to Arrest in Illinois?
The 1st Amendment protects every American’s right to speak freely, but Illinois has enacted legislation that precludes some forms of speech. Typically, a prior restraint on speech is a per se constitutional violation, but not all speech is protected.
Intimidation
A person commits intimidation when, with intent to cause another to perform or to omit the performance of any act, he or she communicates to another, directly or indirectly by any means, a threat to perform an unlawful act. Threats that fall under this section ordinarily involve physical harm (assault), confinement or restraint (kidnapping), or exposing any person to hatred, contempt, or ridicule. Intimidation is a Class 3 felony for which an offender may be sentenced to a term of imprisonment of not less than 2 years and not more than 10 years.
A penalty for intimidation may be compounded depending on the context and/or the place where the conduct happens. For instance, if the act of intimidation is a byproduct of street gang activities, the charges will be increased. A person commits criminal street gang recruitment on school grounds or public property adjacent to school grounds when on school grounds or public property adjacent to school grounds, he or she knowingly threatens the use of physical force to coerce, solicit, recruit, or induce another person to join or remain a member of a criminal street gang, or conspires to do so. Additionally, a person who knowingly, expressly or impliedly, threatens to do bodily harm or does bodily harm to an individual or to that individual’s family or uses any other criminally unlawful means to solicit or cause any person to join, or deter any person from leaving, any organization or association regardless of the nature of such organization or association, is guilty of a Class 2 felony.
Intimidation Cases in Illinois
The leading case involving street gang intimidation in Illinois is Chicago v. Morales. The case involved an ordinance that attempted to stop gang-related activity in neighborhoods. The city council that enacted the ordinance found that “a continuing increase in criminal street gang activity” was largely responsible for the city’s rising murder rate, as well as an escalation of violent and drug-related crimes. It noted that in many neighborhoods throughout the city, “the burgeoning presence of street gang members in public places has intimidated many law abiding citizens.” Furthermore, it found that gang members “establish control over identifiable areas … by loitering in those areas and intimidating others from entering those areas.” The ordinance creates a criminal offense punishable by a fine of up to $500, imprisonment for not more than six months, and a requirement to perform up to 120 hours of community service. Commission of the offense involves four predicates. First, the police officer must reasonably believe that at least one of the two or more persons present in a “public place” is a “criminal street gang membe[r].” Second, the persons must be “loitering,” which the ordinance defines as “remain[ing] in any one place with no apparent purpose.” Third, the officer must then order “all” of the persons to disperse and remove themselves “from the area.” Fourth, a person must disobey the officer’s order. If any person, whether a gang member or not, disobeys the officer’s order, that person is guilty of violating the ordinance.
However, due to strong lawyering and a comprehensive understanding of the Constitution, attorneys were able to have the ordinance declared unconstitutionally vague. Whether or not a person charged with such a crime is protected by the 1st Amendment is an issue best advocated by a qualified Illinois criminal defense attorney. | https://www.criminallawyer-chicago.com/blog/when-can-words-lead-to-arrest-in-illinois/ |
Philippines: Duterte Epic Failure on Human Rights
The Universal Periodic Review (UPR) of the Philippines was adopted at the 36th session of the United Nations Human Rights Council (UNHRC) in September.
Some 257 recommendations of the UNHRC were aimed at improving the Philippines’ human rights situation.
Of those recommendations, the Philippines rejected 154.
Everything related to the following was rejected.
- Investigating extrajudicial killings (EJKs).
- Blocking the imposition of the death penalty.
- Cancelling the lowering of the age of criminal liability to nine years of age.
In late September, twenty-nine nation-states including the USA called on the Philippines “to cooperate with the international community to pursue appropriate investigations into these incidents, in keeping with the universal principles of democratic accountability and the rule of law.”
UNHRC also expressed concern over threats against human rights defenders and urged the government to ensure they are accorded full protection.
They also called for a safe environment for journalists and indigenous communities.
Since that time Rodrigo Duterte has publicly announced a threat of violence against a United Nations investigator and has threatened human rights advocates with death. Duterte has previously said that he will kill 100,000 but later added that he wishes to kill at least 3 million people to put himself in the same league as Adolf Hitler.
Agnes Callamard says Duterte has created a climate of fear and insecurity, feeding impunity and undermining the constitutional fabric of the country. Duterte says he will slap her if she tries to investigate human rights violations in the Philippines.
Duterte’s threats are not hollow and his efforts have led to the cold-blooded murder of many thousands of Filipinos.
A large group of the economic elite tied to the Duterte government support these murders of the ‘great-unwashed poverty classes’ of the Philippines. This elite class expects a profitable 6-year-long ride on Duterte’s coattails and will do his bidding, including the commission of major crimes.
According to statistical data, there may be as many as 1.1 million Filipinos out of 105 million who have tried drugs in the past. A large percentage of persons who say they have tried drugs in the past are Overseas Filipino Workers (OFWs) who have smoked marijuana in countries where that item is legally available. Marijuana is not an addictive substance.
The culture of murder in the Philippines and the shroud of fear are growing; in terms of criminality, murder is a violent crime running second only to RAPE, the number one unreported criminal offence in the Philippines.
The Philippines needs urgent action to reverse spiralling rights violations.
When your babies leave Philippine shores, they will never come back and you will never see them again. Watch your children like a hawk. The RINJ Foundation
Since mid-2017, a panel of United Nations experts has said that the Government of the Philippines must urgently address growing reports of human rights violations, including murder, threats against indigenous peoples and the summary execution of children. The panel consists of Ms. Agnes Callamard, Special Rapporteur on extrajudicial, summary or arbitrary executions; Mr. Michel Forst, Special Rapporteur on the situation of human rights defenders; and Ms. Maud de Boer-Buquicchio, Special Rapporteur on the sale and sexual exploitation of children.
The child sex trade in the Philippines has reached awful proportions after being infiltrated by the Islamic State’s infrastructure of procurement and distribution of human slaves.
The UN Human Rights Panel for the Philippines has said it has witnessed severe, multiple human rights violations, especially against indigenous peoples and human rights defenders. Children are not being spared and continue to be at high risk in a climate of prevailing violence.
“We are shocked by the increasing levels of violence, killings, intimidation and harassment being suffered by human rights defenders – including those protecting indigenous peoples – trade union organizers, farmers and their family members.”
The panel says: “Allegations of summary executions, including of children, are also on the rise. All these cases must be investigated thoroughly and perpetrators should be brought to justice.”
Photo credit: Reuters. The death of an “Angel”
They also highlighted that some of those being attacked were defending the rights of Lumad indigenous peoples, who are reported to have suffered particularly severe threats on the island of Mindanao, often with the acquiescence or direct support of the security forces, while defending their ancestral land against businesses.
Numerous killings and extra-judicial executions of villagers, farmers and human rights defenders working with them have been reliably reported, the experts noted.
(Philippines President Rodrigo Duterte, speaking in a televised news conference on 24 July, threatened to bomb Lumad schools on Mindanao.)
The Government must also prevent incitement to violence or killings against indigenous communities, human rights defenders and farmers, the panel has concluded.
(The panel of UN human rights experts has been in contact with the Government of the Philippines regarding these concerns. Rodrigo Duterte has threatened at least one with physical violence.)
The Special Rapporteurs are part of what is known as the Special Procedures of the Human Rights Council. Special Procedures, the largest body of independent experts in the UN Human Rights system, is the general name of the Council’s independent fact-finding and monitoring mechanisms that address either specific country situations or thematic issues in all parts of the world. Special Procedures’ experts work on a voluntary basis; they are not UN staff and do not receive a salary for their work. They are independent from any government or organization and serve in their individual capacity. | https://rinj.press/news-rinj-press/november-2017/philippines-duterte-epic-failure-human-rights/ |
By Michelle Forman, Senior Media Specialist, APHL
Anyone who was in grade school in the mid-1980s and 1990s likely remembers The Oregon Trail, a computer game where you had to navigate the treacherous conditions faced by American pioneers who used this lone passageway to travel from Independence, Missouri to Oregon City, Oregon. If you ask anyone who played the game what they learned, however, the usual answer is that the Apple II had terrible graphics. They also likely remember fording the river, hunting buffalo, and losing a family member to dysentery. What many children may not have realized as they played this game is that the experiences were real. The Oregon Trail was real and so were the many diseases faced by those who traveled it.
You have died of dysentery. Everyone has cholera. Susie has measles. As a young player, these messages would pop up on our screen and result in an “Aw, man!” They meant that your trek was delayed or even over. You had to find more food or water or just wait out whatever disease had infected someone in your wagon. But what are these diseases? And do they still exist?
Three deadly diseases featured in The Oregon Trail – typhoid fever, cholera and dysentery– were caused by poor sanitation. Luckily, those of us living in industrialized nations (United States, Canada, Japan, Western Europe, etc.) have access to sophisticated modern sanitation and water treatment systems, which make these diseases so rare that they are nearly non-existent. But the “deadly three” are not rare in developing nations. Americans traveling to these destinations are often encouraged to get vaccinated and to be extra careful to thoroughly wash their hands, avoid tap water and consume only cooked foods.
If typhoid fever, cholera and dysentery are left untreated, they can become deadly, causing severe dehydration. In industrialized nations, anyone who contracts these diseases is protected by the medical and public health systems, and the spread of disease is halted. If an outbreak is suspected, public health laboratories test samples to positively identify the agent that is causing the disease, while epidemiologists and public health nurses work in coordination with sanitarians to identify the source of the infection. This quick action is what stops further transmission.
- Typhoid fever is caused by Salmonella Typhi, a bacterium that is contracted by consuming contaminated food or drink. Once a person has typhoid fever, they can shed the bacteria in their stool or urine for days to weeks and potentially make others ill. Typhoid fever is rare in the United States – there are approximately 400 cases each year and 75% of those are acquired while traveling internationally.
- Cholera is a diarrheal illness caused by a toxigenic form of a bacterium called Vibrio cholerae. The bacteria are generally transmitted in water or food that has been contaminated with infected feces. You may remember the cholera outbreak in Haiti following the deadly earthquake there. The quake caused serious damage to the nation’s infrastructure, which led to deteriorated sanitation and public health systems. Cholera spreads rapidly in areas where drinking water is contaminated. That was the problem for those on the Oregon Trail just as it was in Haiti.
- Dysentery is also a diarrheal illness and is often caused by Shigella species (bacillary dysentery) or Entamoeba histolytica (amoebic dysentery). Like cholera and typhoid fever, dysentery is contracted when people consume food or water that is contaminated with infected feces. In developed nations, dysentery is usually identified and treated quickly, and its spread is controlled.
The other diseases that plagued those traveling on – or playing – The Oregon Trail were highly contagious infectious diseases. When an entire family was living in their Conestoga wagon for months at a time, disease spread very quickly. Fortunately, widespread use of vaccines has all but wiped out these diseases.
- Diphtheria is caused by the bacterium Corynebacterium diphtheriae. It is spread person-to-person generally by respiratory droplets (that is, a cough or a sneeze) or cutaneous lesions (do NOT do a Google image search for that – trust me). Prior to the introduction of the vaccine, between 100,000 and 200,000 people in the U.S. contracted diphtheria each year. The vaccine (DTaP: diphtheria, tetanus, and pertussis) is routinely given to children in the U.S. and other industrialized nations. Thanks to this, there has not been a single CDC-confirmed case of diphtheria in the United States since 2003. It is still endemic in certain developing nations so travelers are encouraged to check that their DTaP booster is up to date.
- Measles is a highly contagious respiratory disease caused by a virus called rubeola that is spread person-to-person. It is so contagious that a child who is exposed to it and is not immune will almost certainly contract the disease. Now there is a vaccine available to all children in the United States (MMR: measles, mumps, and rubella), although there has recently been a decline in the number of people seeking vaccination. Prior to the vaccine being available, hundreds of Americans died annually and thousands experienced severe complications. Once the vaccine was made available, there was a dramatic decrease in cases. In fact, in 2000 CDC declared measles “eliminated” as almost all cases originated outside of the country. Now we’re seeing hundreds of cases in the U.S. – 118 cases during the first 19 weeks of 2011 (not even the entire year!). And last year there were over 26,000 cases in the World Health Organization (WHO) European Region (EUR). So while I wish I could say that measles is like the other diseases of The Oregon Trail, and is virtually nonexistent in the industrialized world, that is no longer true.
A common practice when playing The Oregon Trail was to name all of your family members for your friends. Then as the game progressed, we would all laugh when Sally got dysentery – Haha! You have dysentery! You have dysentery! It isn’t so funny now, huh? I certainly wouldn’t wish these terrible diseases on anyone, and can better understand how amazing it is that anyone made it along the Oregon Trail. It is a relief to be able to say that most of these diseases are gone, thanks to our strong public health system.
And, Sally, I’m sorry I laughed when you got dysentery. | https://www.aphlblog.org/the-diseases-of-the-oregon-trail/ |
---
abstract: 'We classify the finite-dimensional irreducible linear representations of the *Baumslag-Solitar groups* $\operatorname{BS}(p, q) = \langle a, b \; | \; ab^p = b^q a \rangle$ for relatively prime $p$ and $q$. The general strategy of the argument is to consider the matrix group given by the image of a representation and study its Zariski closure in $\operatorname{\bf GL}_n$.'
address: 'Department of Mathematics, The University of Oklahoma, Norman, OK, USA'
author:
- Daniel McLaury
bibliography:
- 'references.bib'
title: 'Irreducible Representations of Baumslag-Solitar Groups'
---
Baumslag-Solitar Groups
=======================
A group $G$ is said to be *residually finite* if, for each $g \in G$ with $g \neq 1$, there is a finite group $H$ and a homomorphism $h : G \rightarrow H$ such that $h(g) \neq 1$. It is said to be *Hopfian* if each epimorphism $\phi : G \twoheadrightarrow G$ is an automorphism; equivalently, a group is non-Hopfian if it is isomorphic to one of its proper quotients. Finitely generated residually finite groups are Hopfian, but not conversely. For nonzero integers $p$ and $q$, the family of *Baumslag-Solitar groups* $$\operatorname{BS}(p, q) = \langle a, b \; | \; ab^p = b^q a \rangle$$ was defined in [@BaumslagSolitar] to provide examples of finitely presented non-Hopfian groups. Note that $\operatorname{BS}(p, q) = \operatorname{BS}(-p, -q)$, and further that $\operatorname{BS}(p, q) \cong \operatorname{BS}(q, p)$ by $a \mapsto a^{-1}$. The following definition helps classify the Baumslag-Solitar groups:
Positive integers $p$ and $q$ are said to be *meshed* if either
1. $p|q$ or $q|p$, or
2. $p$ and $q$ have precisely the same prime divisors.
([@BaumslagSolitar], Theorem 1; [@MeskinNonresiduallyFinite], Theorem C) Let $p$ and $q$ be nonzero integers. Then $\operatorname{BS}(p, q)$ is
1. residually finite, and thus Hopfian, if $|p| = |q|$ or $|p| = 1$ or $|q| = 1$;
2. Hopfian if $p$ and $q$ are meshed;
3. non-Hopfian if $p$ and $q$ are not meshed.
As finitely-generated matrix groups are known to be Hopfian (see [@MalcevFaithfulRepInfGrp]), the groups $\operatorname{BS}(p, q)$ with $p$ and $q$ unmeshed do not have any faithful representations.
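For concreteness, consider $\operatorname{BS}(2, 3)$, where $2$ and $3$ are not meshed. (The following is the standard witness of non-Hopfian-ness from the literature, included here only as an illustration.) The assignment $a \mapsto a$, $b \mapsto b^2$ extends to an endomorphism $\phi$ of $\operatorname{BS}(2, 3)$, since $\phi(a)\phi(b)^2 = ab^4 = b^3 a b^2 = b^6 a = \phi(b)^3\phi(a)$. It is surjective, as $b^3 = ab^2a^{-1}$ and hence $b = b^3 b^{-2}$ lie in its image; but it is not injective, since the commutator $[aba^{-1}, b]$ is nontrivial in $\operatorname{BS}(2, 3)$ by Britton's lemma, yet $\phi([aba^{-1}, b]) = [ab^2a^{-1}, b^2] = [b^3, b^2] = 1$.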
The following fact about Baumslag-Solitar groups will be useful:
\[propbpkbqk\] For any integer $k$, $b^{p^k} = a^{-k} b^{q^k} a^k$ in $\operatorname{BS}(p, q)$.
Rewrite the Baumslag-Solitar relation as $b^p = a^{-1} b^q a$ and take the $j$-th power of each side, giving $$b^{pj} = (a^{-1} b^q a)^j = a^{-1} b^{qj} a$$ Now apply this to $b^{p^k}$ repeatedly. In the case $k > 0$, $$\begin{aligned}
b^{p^k} &= a^{-1} b^{q p^{k-1}} a \\
&= a^{-2} b^{q^2 p^{k-2}} a^2 \\
&\phantom{=}\vdots \notag \\
&= a^{-k} b^{q^k} a^k;\end{aligned}$$ the $k < 0$ case is analogous, and the $k = 0$ case is immediate.
As a reference for representations of non-Hopfian Baumslag-Solitar groups, see [@GoodmanPhD]. There, Goodman shows the existence of some particular representations of $\operatorname{BS}(p, q)$ when $p$ and $q$ are relatively prime, and characterizes the geometry of the variety of $n$-dimensional representations of $\operatorname{BS}(p, q)$ at these points.
Notation
========
Fix nonzero, relatively prime integers $p$ and $q$, not both $\pm 1$, and let $$\Gamma = \operatorname{BS}(p, q) = \langle a, b \; | \; a b^p = b^q a \rangle.$$ Fix also an $(n+1)$-dimensional complex $\Gamma$-module $V$, and write $\rho : \Gamma \rightarrow \operatorname{\bf GL}_{n+1}(\mathbb{C})$ for the corresponding representation. Let $G$ denote the image of $\rho$, and write $A$ and $B$ for $\rho(a)$ and $\rho(b)$, respectively. Finally, let $H$ denote the subgroup of $G$ generated by $B$.
The Structure of $G$
====================
Write ${\bf G} = \overline{G}$ and ${\bf H} = \overline{H}$ for the Zariski closures of $G$ and $H$ as subsets of $\operatorname{\bf GL}_{n+1}(\mathbb{C})$. We assume the standard results about linear algebraic groups, as found in chapters 1–3 of [@SpringerLinAlgGrp]. In particular, recall that:
\[lemmaFI\] Let ${\bf G}$ be an algebraic group. If $K \leq H \leq \bf G$ with $[H:K]$ finite,
1. $\left[\overline{H} : \overline{K}\right]$ is finite as well,
2. $\left[\overline{H} : \overline{K}\right]$ divides $[H:K]$,
3. $\overline{K}^0 = \overline{H}^0$.
Key to the classification is that, while $\langle b \rangle$ is not a normal subgroup of $\Gamma$, its image $H$ will be normal in $G$. This fact is a consequence of the following two theorems:
\[thmhnormg\] ${\bf H} \trianglelefteq {\bf G}$.
It suffices to check that $\bf H$ is normalized by $A$, as $B \in \bf H$ and $A$ and $B$ together generate $\bf G$ as an algebraic group. By the Baumslag-Solitar relation, $$A B^p A^{-1} = B^q.$$ In turn, we see that $$\begin{aligned}
A \langle B^p \rangle A^{-1} &= \langle B^q \rangle, \\
A \overline{\langle B^p \rangle} A^{-1} &= \overline{\langle B^q \rangle},\\
A \overline{\langle B^p \rangle}^0 A^{-1} &= \overline{\langle B^q \rangle}^0.\end{aligned}$$ Applying Fact \[lemmaFI\](iii) to $\langle B^k \rangle \leq \langle B \rangle \leq {\bf G}$, we have $\overline{\langle B^k \rangle}^0={\bf H}^0$ for all $k$, so $A {\bf H}^0 A^{-1} = {\bf H}^0$. Conjugation by $A$ therefore induces an automorphism of the finite group ${\bf H}/{\bf H}^0$ which takes $\overline{\langle B^p \rangle} / {\bf H}^0$ to $\overline{\langle B^q \rangle} / {\bf H}^0$; in particular, these two subgroups have the same order, and consequently the same index. By Fact \[lemmaFI\](ii), the former has index dividing $p$, whereas the latter has index dividing $q$. As $p$ and $q$ are relatively prime, both $\overline{\langle B^p \rangle} / {\bf H}^0$ and $\overline{\langle B^q \rangle} / {\bf H}^0$ must have index one in ${\bf H}/{\bf H}^0$, so $\overline{\langle B^q \rangle} = \overline{\langle B^p \rangle} = {\bf H}$. Now $$A {\bf H} A^{-1} = A \overline{\langle B^p \rangle} A^{-1} = \overline{\langle B^q \rangle} = \bf H,$$ so $A$ normalizes $\bf H$ as desired.
\[thmHFinite\] If $\rho$ is irreducible, then
1. $\bf H$ is diagonalizable,
2. $\bf H$ is a finite cyclic group generated by $B$, and
3. the order $\ell$ of $B$ divides $p^{\varphi(\ell)} - q^{\varphi(\ell)}$; in particular, $(\ell, p) = (\ell, q) = 1$.
\(i) Since $\langle B \rangle$ is commutative, so is its closure $\bf H$. Recall that a commutative linear algebraic group is the direct sum of its unipotent and semisimple parts, each of which is moreover a characteristic subgroup. Therefore ${\bf H}_u \trianglelefteq \bf G$, so we may consider the $\bf G$-submodule $V^{{\bf H}_u}$ of points fixed by ${\bf H}_u$. Since ${\bf H}_u$ is unipotent, $V^{{\bf H}_u}$ is nonzero. But $V$ was simple by assumption, so $V^{{\bf H}_u} = V$, which can only be the case if ${\bf H}_u = 1$. We must then have ${\bf H} = {\bf H}_s$, so $\bf H$ is a commutative group of diagonalizable matrices, and therefore diagonalizable. (ii) By the rigidity of diagonalizable subgroups, $N_{\bf G}({\bf H}) / Z_{\bf G}({\bf H})$ is finite, so conjugation by $A$ gives a finite-order automorphism of $\bf H$. Let $r$ denote this order. By Proposition \[propbpkbqk\] we have $B^{p^r} = A^{-r} B^{q^r} A^r$, so $B^{p^r} = B^{q^r}$, i.e. $B^{p^r - q^r} = 1$. Therefore $\langle B \rangle$ is finite. Consequently, $\langle B \rangle$ is already closed, so ${\bf H} = \langle B \rangle$. (iii) If $\ell = 1$ this is immediate, so suppose $\ell > 1$. As $\bf H$ is cyclic of order $\ell$, $\lvert\operatorname{Aut}{\bf H}\rvert = \varphi(\ell)$, so $r | \varphi(\ell)$. By the argument from (ii), $B^{p^{\varphi(\ell)} - q^{\varphi(\ell)}} = 1$, so $\left.\ell \;\middle|\; p^{\varphi(\ell)} - q^{\varphi(\ell)}\right.$. The rest is elementary number theory.
To summarize, we’ve now shown that every irreducible representation of $\Gamma$ factors through a metacyclic group, which is moreover cyclic-by-*finite*-cyclic. The following calculation will be useful in the next section:
\[propBtothes\] In $G$, we have the identities
1. $A^{-1} B A = B^s$, and
2. $A^{-i} B^j A^i = B^{js^i}$ in general,
where $\ell = |B|$ and $p \equiv qs \pmod{\ell}$.
\(i) Since $H \trianglelefteq G$, $A^{-1}BA \in H$; as $H = \langle B \rangle$, this means $A^{-1} BA = B^s$ for some $s$. Now $$B^p = A^{-1} B^q A = (A^{-1} B A)^q = (B^s)^q = B^{qs};$$ as $\ell = |B|$ by definition, we have $p \equiv qs \pmod{\ell}$. (ii) is immediate from (i).
A Basis for $V$
===============
From now on, assume $\rho$ is irreducible, so that $V$ is a simple $\Gamma$-module and the results of Theorem \[thmHFinite\] will apply. (We will not need to make use of algebraic groups any further.) As in the previous section, let $\ell$ denote the order of $B$ and $s$ be the unique solution to $p \equiv q s \pmod{\ell}$. The next step is to get a canonical basis for $V$ of eigenvectors of $B$ which behaves nicely under multiplication by $A$.
\[propEigenB\] Let $v$ be a nonzero $\lambda$-eigenvector of $B$ for some eigenvalue $\lambda$.
1. $A^mv$ is a $\lambda^{s^m}$-eigenvector of $B$.
2. $\{v, Av, A^2v, \ldots, A^nv\}$ is a basis of $B$-eigenvectors.
3. The eigenvalues of $B$ are distinct primitive $\ell$-th roots of unity.
\(i) By Proposition \[propBtothes\](ii), $B(A^m v) = A^m (B^{s^m} v) = \lambda^{s^m} (A^m v)$. (ii) Let $N$ be the smallest non-negative integer such that $A^{N+1}v \in \operatorname{span}\{v, Av, \ldots, A^N v\}$. Then $W = \operatorname{span}\{v, Av, \ldots, A^N v\}$ is a nonzero subspace of $V$ invariant under $a$ and $b$, and consequently under all of $\Gamma$. Since $V$ is a simple $\Gamma$-module, we must have $V = W$. This set of $N+1$ vectors is linearly independent by construction, so it’s a basis for $V$. Equating dimensions, $N + 1 = \dim V = n + 1$. (iii) We’ve now shown each eigenvalue of $B$ is a power of $\lambda$. If $\lambda$ had order $\ell' < \ell$, then each power of $\lambda$ would have order dividing $\ell'$. As $B$ is diagonalizable, this would mean $B^{\ell'} = I$. So $\lambda$ is a primitive $\ell$-th root of unity. But $\lambda$ was an arbitrary eigenvalue of $B$, so each eigenvalue of $B$ is a primitive $\ell$-th root of unity. Finally, suppose $\lambda^{s^m} = \lambda^{s^{m'}}$ for some $0 \leq m < m' \leq n$, and set $d = m' - m \leq n$, so that $s^d \equiv 1 \pmod{\ell}$ (using that $s$ is invertible modulo $\ell$). Then $A^d$ maps each eigenspace of $B$ into itself, so we may pick an eigenvector $w$ of the restriction of $A^d$ to the $\lambda$-eigenspace. The subspace $\operatorname{span}\{w, Aw, \ldots, A^{d-1}w\}$ is then invariant under both $A$ and $B$ and has dimension at most $d \leq n < n + 1$, contradicting the simplicity of $V$.
$A^{n+1}$ stabilizes each eigenspace of $B$.
Let $v$ be a $\lambda$-eigenvector of $B$. By the proposition $A^{n+1} v$ is a $\mu$-eigenvector of $B$ for some $\mu$. Consider the action of $A$ on $V$. Each of $Av, A(Av), \ldots, A(A^{n-1}v)$ is an eigenvector of $B$ corresponding to some non-$\lambda$ eigenvalue. If $\mu \neq \lambda$, then we would have $\operatorname{im}A \subseteq \operatorname{span}\{Av, A^2v, \ldots, A^n v\}$, which is a proper subspace of $V$. Since $A$ is invertible, this can’t be the case, so $A^{n+1} v$ must be a $\lambda$-eigenvector of $B$.
Interpreting these as statements about the matrices of $A$ and $B$ in a particular basis, we’ve essentially proven the following result:
\[propCanonicalFormAB\] There is a basis for $V$ in which $$A = c \left[ \begin{array}{cccc:c}
0 & & & & \,1 \\
1 & \ddots & & & \,0 \\
& \ddots & \ddots & & \vdots \\
& & \ddots & 0 & \vdots \\
& & & 1 & \,0
\end{array} \right], \quad B = \left[ \begin{array}{ccccc}
\lambda \\
& \lambda^s \\
& & \ddots \\
& & & \ddots \\
& & & & \lambda^{s^n} \end{array}\right]
\label{eqCanonicalFormAB}$$ for some nonzero complex number $c$ and some root $\lambda$ of unity.
Rescale the basis from Proposition \[propEigenB\](ii).
We can cast these results in more representation-theoretic terms. Suppose we have a representation of the form (\[eqCanonicalFormAB\]) with $c = 1$. Let $\eta = \langle a^{n+1}, b \rangle \leq \Gamma$. Then we have $\rho=\operatorname{ind}^\Gamma_\eta \chi$, where $\chi : \eta \rightarrow \mathbb{C}^\times$ is the character of $\eta$ sending $b$ to $\lambda$ and $a^{n+1}$ to $1$. In principle, we could now complete the classification of simple $\Gamma$-modules by applying the Mackey irreducibility criterion to see which of these characters induce irreducible representations of $\Gamma$. As it turns out, though, the argument from first principles in the next section is simpler.
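Concretely, a pair of matrices of the form (\[eqCanonicalFormAB\]) can be checked numerically. Here is a minimal sketch (in Python, anticipating the $\operatorname{BS}(2, 5)$, $\ell = 9$ example of the next section; the variable names are ours):

```python
import numpy as np

# Build the canonical pair (A, B) for BS(2, 5) with ell = 9, s = 4, c = 1,
# and verify the defining relation A B^p = B^q A numerically.
p, q, ell, s, dim = 2, 5, 9, 4, 3          # dim = n + 1
lam = np.exp(2j * np.pi / ell)             # a primitive ell-th root of unity

A = np.roll(np.eye(dim), 1, axis=0)        # cyclic shift: e_i -> e_{i+1}, e_n -> e_0
B = np.diag([lam ** (s ** i) for i in range(dim)])

lhs = A @ np.linalg.matrix_power(B, p)
rhs = np.linalg.matrix_power(B, q) @ A
print(np.allclose(lhs, rhs))               # True: the relation holds
```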
Classification of simple $\Gamma$-modules
=========================================
In Proposition \[propCanonicalFormAB\], we showed that each $(n+1)$-dimensional simple representation of $\Gamma$ is conjugate to one of the form (\[eqCanonicalFormAB\]). In this section, we completely characterize such representations, thereby giving a complete description of the irreducible representations of $\Gamma$.
\[thmRepExistConditions\] Let $c$ be a nonzero complex number, $\ell$ a positive integer, $\lambda$ a primitive $\ell$-th root of unity, and let $s$ solve $p \equiv q s\pmod{\ell}$. There is an $(n+1)$-dimensional representation of $\Gamma$ sending $a$ to $A$ and $b$ to $B$, where $A$ and $B$ are as in (\[eqCanonicalFormAB\]), if and only if $\ell$ divides $q^{n+1} - p^{n+1}$.
Note that this condition implies that $\ell$ is relatively prime to both $p$ and $q$. We just need to check when $A B^p = B^q A$. Let $\{e_0, e_1, \ldots, e_n\}$ denote the standard basis of $\mathbb{C}^{n+1}$. For $0 \leq i \leq n$, $$AB^p e_i = A \left( \lambda^{ps^i} e_i \right) = \begin{cases}
c \lambda^{ps^i} e_{i+1} & \text{ if } i < n, \\
c \lambda^{ps^n} e_0 & \text{ if } i = n;
\end{cases}$$ $$\begin{aligned}
B^q A e_i &= \begin{cases}
B^q c e_{i+1} & \text{ if } i < n, \\
B^q c e_0 & \text{ if } i = n,
\end{cases} \\
&= \begin{cases}
c \lambda^{q s^{i+1} } e_{i+1} & \text{ if } i < n, \\
c \lambda^q e_0 & \text{ if } i = n.
\end{cases}\end{aligned}$$ For $i < n$, the condition $c \lambda^{ps^i} = c \lambda^{q s^{i+1} }$ is always satisfied, as it is equivalent to $p \equiv qs \pmod{\ell}$. For $i = n$, the condition $c \lambda^{p s^n} = c \lambda^q$ is equivalent to $p s^n \equiv q \pmod{\ell}$. Multiplying through by $q^n$, we have $p q^n s^n \equiv q^{n+1} \pmod{\ell}$. As $p \equiv qs \pmod{\ell}$, this just says that $p^{n+1} \equiv q^{n+1} \pmod{\ell}$, or in other words that $\ell$ divides $q^{n+1} - p^{n+1}$.
As an example, consider three-dimensional representations of $\operatorname{BS}(2, 5)$ of this form. We want to pick some value for $\ell$ which divides $$5^3 - 2^3 = 125 - 8 = 117 = 3^2\cdot 13.$$ When $\ell = 3$, the solution to $2 \equiv 5s \pmod{3}$ is $s = 1$. The primitive third roots of unity are $-\frac{1}{2} \pm \frac{\sqrt{3}}{2}i$, so we get the representations $$a \mapsto \begin{pmatrix}
0 & 0 & c \\
c & 0 & 0 \\
0 & c & 0
\end{pmatrix}, \qquad b \mapsto \begin{pmatrix}
-\frac{1}{2} \pm \frac{\sqrt{3}}{2}i & 0 & 0 \\
0 & -\frac{1}{2} \pm \frac{\sqrt{3}}{2}i & 0 \\
0 & 0 & -\frac{1}{2} \pm \frac{\sqrt{3}}{2}i
\end{pmatrix}.$$ When $\ell = 9$, the solution to $2 \equiv 5s \pmod{9}$ is $s = 4$. If $\zeta$ is a primitive ninth root of unity, then we get the representations with $a$ as above and $$b \mapsto \begin{pmatrix}
\zeta & 0 & 0 \\
0 & \zeta^4 & 0 \\
0 & 0 & \zeta^7
\end{pmatrix}.$$ Notice that the former are reducible while the latter are not. All that remains is to distinguish these two cases. From Proposition \[propEigenB\](iii), we know that it’s a necessary condition that the eigenvalues $\lambda, \lambda^s, \ldots, \lambda^{s^n}$ be distinct. Clearly this condition is sufficient as well – if it holds, then $b$ fixes each $\operatorname{span}\{e_i\}$, while $a$ permutes them in a single orbit, so any nonzero invariant subspace, being spanned by the $e_i$ it contains, must contain every $e_i$ and hence be all of $V$.
\[corRepSimpleConditions\] Such a representation is irreducible if and only if $\ell$ does not divide $q^k - p^k$ for any $1 \leq k \leq n$.
We’ve seen already that irreducibility is equivalent to $B$ having distinct eigenvalues. If $B$ does not have distinct eigenvalues, then $\lambda^{s^i} = \lambda^{s^j}$ for some $0 \leq j < i \leq n$, or equivalently $s^i \equiv s^j \pmod{\ell}$. Since $s$ is invertible modulo $\ell$, this may be rewritten as $s^{i-j} \equiv 1 \pmod{\ell}$; multiplying through by $q^{i-j}$, we have $p^{i - j} \equiv q^{i - j} \pmod{\ell}$, i.e. $\ell$ divides $q^{i-j} - p^{i-j}$. Finally, notice that the difference $i - j$ can take on any value between 1 and $n$.
Applying this to the example above, we see that $$5^2 - 2^2 = 25 - 4 = 21 = 3 \cdot 7, \qquad 5^1 - 2^1 = 3,$$ which is in accord with our observation that taking $\ell = 9$ gave an irreducible representation, while taking $\ell = 3$ did not.
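The two divisibility criteria are easy to check mechanically. Below is a minimal computational sketch (in Python; the function name and output format are ours, and the script encodes nothing beyond Theorem \[thmRepExistConditions\] and Corollary \[corRepSimpleConditions\]):

```python
def canonical_parameters(p, q, n):
    """Yield (ell, s, irreducible) for the (n+1)-dimensional canonical-form
    representations of BS(p, q): ell must divide q^(n+1) - p^(n+1), s solves
    p = q*s (mod ell), and irreducibility fails exactly when ell divides
    some q^k - p^k with 1 <= k <= n."""
    target = abs(q ** (n + 1) - p ** (n + 1))
    for ell in range(2, target + 1):
        if target % ell:
            continue
        # p = q*s (mod ell) has a unique solution since gcd(q, ell) = 1
        s = next(s for s in range(ell) if (p - q * s) % ell == 0)
        reducible = any((q ** k - p ** k) % ell == 0 for k in range(1, n + 1))
        yield ell, s, not reducible

# The example above: three-dimensional representations of BS(2, 5), so n = 2.
for ell, s, irred in canonical_parameters(2, 5, 2):
    print(f"ell = {ell:3d}, s = {s:3d}, irreducible: {irred}")
# ell = 3 gives s = 1 and a reducible family; ell = 9 gives s = 4, irreducible.
```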
Conclusion
==========
It is hoped that these results can be generalized to the case of arbitrary Baumslag-Solitar groups, which contain the groups examined here as subgroups: if $\Gamma = \operatorname{BS}(m,n)$, where $(m, n) = d$, then $\Delta = \langle a, b^d \rangle \leq \Gamma$ is isomorphic to $\operatorname{BS}(p, q)$, where $p = m/d$ and $q = n/d$. Note that $\Delta$ is *not*, in general, of finite index in $\Gamma$.
This paper was prepared under the direction of Andy Magid, whose supervision was invaluable in finding and solving the problem and submitting it for publication. The author is also grateful to Nikolay Buskin for helpful discussions.
Seed Space gallery debuts Demetrius Oliver: Aeriform, an installation which poetically ruptures the dimensionality of sculpture through deft aesthetic gestures. The New York-based artist is best known for his use of prosaic materials in improvisatory and site-specific installations about atmospheric phenomena—in what he refers to as a process of suggestion. Aeriform is a continuation of Oliver’s metaphoric consideration of everyday materials to envision abstract experiences.
In this site-specific installation, Oliver reinterprets material and sculptural elements occurring in his previous work within the constraints and limitations of the gallery space. The motif of a storm shutter reoccurs in this work as an obstructing barrier; Oliver has nailed a clear industrial storm shutter to the exterior of the doorframe, effectively barring entry into the exhibition space. Its thickly corrugated surface disperses light from a single fixture above. Immediately on the other side of the ridged shutter, an iridescent cast-resin air turbine sits just below eye level on a thin steel pedestal—its lowest shelf is occupied by a chrome turbine of the same size.
In a poetic and frustrating conflation of screened visibility and limited physical access, Oliver, through disabling the only function of the doorway—a mechanism for transitioning into a new space—has transparently rendered the entrance symbolic and obsolete. Viewers spatially orient themselves several feet from the entry of the gallery, viewing Oliver’s sculptural intervention in the context of its extended environment, an action which effectively shifts the physical domain of the gallery beyond the space it normally occupies.
An astute example of Oliver’s penchant for materiality, the use of the storm shutter curiously situates Aeriform as both photographic and sculpturally resonant. Sealing the work along a two-dimensional plane, the shutter visually flattens the physical space the work occupies, but in its translucency recalls the foreseeable depth of the room beyond. Likewise, the close proximity of the figurative sculpture to the other side of the shutter denotes a sense of unresolved confrontation—compelled to approach the sculptural piece that dominates the frame, we are unable to occupy the same space, and our vision is impaired by a screen.
Oliver’s poeticism in Aeriform derives from a series of concise denials: The space is un-enterable; the doorframe is closed; the storm shutter mediates our pure vision of the sculptural work behind it, and the air turbines do not move. His allusion to the conduction and preservation of air through the turbines and the sealed storm shutter denotes an expectancy, or resolve, of possible functionality, yet it is motionless and unburdened by real-life application. The final resonance of Aeriform is like much of Oliver’s work—singular, lyrical, and with imperceptible depth.
Demetrius Oliver: Aeriform is on exhibit at Seed Space through September 23. For more information, visit seedspace.org. | https://nashvillearts.com/2017/08/demetrius-oliver-aeriform/ |
How do you know if an argument is inductive or deductive?
If the arguer believes that the truth of the premises definitely establishes the truth of the conclusion, then the argument is deductive. If the arguer believes that the truth of the premises provides only good reasons to believe the conclusion is probably true, then the argument is inductive.
What is an example of deductive and inductive arguments?
Inductive Reasoning: Most of our snowstorms come from the north. It’s starting to snow. This snowstorm must be coming from the north. Deductive Reasoning: All of our snowstorms come from the north.
What are some examples of inductive arguments?
For example: In the past, ducks have always come to our pond. Therefore, the ducks will come to our pond this summer. These types of inductive reasoning work in arguments and in making a hypothesis in mathematics or science.
What is the smoke and fire argument?
The converse is not true: not every inductively correct argument is also deductively correct; the smoke-fire argument is an example of an inductively correct argument that is not deductively correct. For whereas the existence of smoke makes likely the existence of fire, it does not guarantee the existence of fire.
What are some examples of deductive arguments?
With this type of reasoning, if the premises are true, then the conclusion must be true. Logically Sound Deductive Reasoning Examples: All dogs have ears; golden retrievers are dogs, therefore they have ears. All racing cars must go over 80 MPH; the Dodge Charger is a racing car, therefore it can go over 80 MPH.
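The defining property above—if the premises are true, the conclusion must be true—can be checked mechanically for simple propositional argument forms by enumerating truth assignments. A small illustrative sketch (not part of the original answer; the helper names are ours):

```python
from itertools import product

def valid(premises, conclusion):
    """A propositional form is deductively valid when no truth assignment
    makes every premise true while the conclusion is false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b

# Modus ponens: "if p then q; p; therefore q" -- valid.
print(valid([implies, lambda p, q: p], lambda p, q: q))   # True

# Affirming the consequent: "if p then q; q; therefore p" -- invalid.
print(valid([implies, lambda p, q: q], lambda p, q: p))   # False
```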
What kind of reasoning often uses the IF THEN statement?
That’s logic.” If–then arguments , also known as conditional arguments or hypothetical syllogisms, are the workhorses of deductive logic. They make up a loosely defined family of deductive arguments that have an if–then statement —that is, a conditional—as a premise.
What makes an argument deductive?
A deductive argument is the presentation of statements that are assumed or known to be true as premises for a conclusion that necessarily follows from those statements. Deductive reasoning relies on what is assumed to be known to infer truths about similarly related conclusions.
What are the 6 types of inductive arguments?
6 Types of Inductive Reasoning
- Generalized.
- Statistical.
- Bayesian.
- Analogical.
- Predictive.
- Causal inference.
Which is a form of inductive argument?
In the case of inductive reasoning, a statement may seem to be true until an exception is found. A person might inductively reason, for example, that all people have 10 toes until they see an exception. Often inductive reasoning is based on circumstantial evidence from a more-or-less limited sample size.
What is an example of deductive?
Examples of deductive logic:
If the first two statements are true, then the conclusion must be true. Bachelors are unmarried men. Bill is an unmarried man. Therefore, Bill is a bachelor.
Which is an example of inductive reasoning quizlet?
Making assumptions. When you estimate a population in the future, you don’t know what the population will actually be; you are looking for a trend. You are generalizing, and therefore using inductive reasoning.
Does inductive reasoning use if-then?
If-then statements are also called conditional statements. Deductive reasoning uses facts, definitions, accepted properties, and the laws of logic to make a logical argument. This form of reasoning differs from inductive reasoning, in which previous examples and patterns are used to form a conjecture.
What is an if-then statement called?
Hypotheses followed by a conclusion is called an If-then statement or a conditional statement.
What term is used to describe the then clause in an if-then statement?
Complete conditional sentences contain a conditional clause (often referred to as the if-clause) and the consequence. Consider the following sentences: If a certain condition is true, then a particular result happens. I would travel around the world if I won the lottery.
Which clause from an if-then statement is the conclusion?
The “then” part of an if-then statement is called the conclusion, consequent, or apodosis. The conclusion of a conditional statement is the result of the hypothesis.
How do you write a conditional statement in if/then form?
When a conditional statement is written in if-then form, the “if” part contains the hypothesis and the “then” part contains the conclusion. Use red to identify the hypothesis and blue to identify the conclusion. Then rewrite the conditional statement in if-then form.
What are the 4 types of conditional sentences examples?
Here are a few examples:
- General truth – If I eat breakfast, I feel good all day.
- Future event – If I have a test tomorrow, I will study tonight.
- Hypothetical situation – If I had a million dollars, I would buy a boat!
- Hypothetical outcome – If I had prepared for the interview, I would have gotten the job.
What are if conditionals?
Conditional Sentences are also known as Conditional Clauses or If Clauses. They are used to express that the action in the main clause (without if) can only take place if a certain condition (in the clause with if) is fulfilled.
How many types of if clauses are there?
An if-clause sentence consists of a main clause and an if-clause. There are three types: if clauses Type 1, Type 2, and Type 3. | https://goodmancoaching.nl/are-if-smoke-then-fire-arguments-deductive-or-inductive/ |
How To Install Asphalt Shingles
Asphalt shingles last a long time, and if installed properly, will effectively protect your home against water leaks. Installing asphalt shingles may appear complicated at first glance, but if you are good at following directions, you can do the job on your own and get good results. To install asphalt shingles, follow the steps outlined below.
Step 1 – Get Ready To Work
Gather all the materials you need to install your asphalt shingles, and put them inside a utility box. Once you have all your tools and materials ready, take out your ladder and set it on solid ground before you use it to climb up your roof.
Take out your rope, then tie one end of it to the handle of the utility box. Take the other end of the rope with you as you climb up the ladder to your roof. Once you are on the roof, hoist the utility box up by pulling on the rope.
Step 2 – Install the Drip Edges and Waterproofing Underlay
Line the eaves with the drip edges, then lay down the self-adhesive and waterproof underlay over the whole roof area. This will serve as the shield that keeps water from seeping into your home. Then, lay the felt underlay. Allow a 3-inch overlap between sections to ensure complete coverage even with displacement. Install one on top of the other, with the higher section always on top. Install the rake drip edge at the sides. Staple the felt underlay under the rake drip edges and eave drip edges.
Step 3 – Locate Starting Point Then Install the Asphalt Shingles
Snap a chalk line from the top center of the roof towards the eave. This will serve as a guide to start installing the shingles. Use the shingle guide provided with the shingle packaging, working from the center line towards the rakes of the roof.
First, install the starter strip of the shingles, using self-sealing adhesive along the eave. Let 3/8 inch of it overhang beyond the edge, for dripping. This should prevent water from seeping in. Nail the shingles on top of the first line, flush with the starter course. If you are in a windy area, use 6 nails per shingle. Otherwise, you can use 4 nails per shingle. As soon as the course is laid, snap chalk lines again, this time aligned horizontally across the roof, for the succeeding lines of shingles down to the eaves. Allow 5 inches of each bottom edge to overlay the course below it.
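As a rough planning aid for this step, the sketch below estimates how many courses a roof face will need. The 12-inch shingle height is a common three-tab dimension but is an assumption here, not a specification from this guide; the 5-inch overlap comes from the step above.

```python
# Rough course estimate (assumptions, not from this guide): with 12 in. tall
# shingles and a 5 in. overlap, each course exposes 12 - 5 = 7 in. of shingle.
def estimate_courses(eave_to_ridge_ft, shingle_height_in=12.0, overlap_in=5.0):
    exposure_in = shingle_height_in - overlap_in   # weather-exposed height per course
    run_in = eave_to_ridge_ft * 12.0
    courses = int(-(-run_in // exposure_in))       # ceiling division
    return courses, exposure_in

courses, exposure = estimate_courses(16)           # e.g., a 16 ft eave-to-ridge run
print(f"About {courses} courses at {exposure:.0f} in. exposure each")
```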
Step 4 – Finishing Up
Clean up the area, and remove all loose materials to prevent accidents. Lower the utility box back to the ground, then climb down the ladder. Fold the ladder and store it in a safe place, away from the reach of young children. Put away any leftover materials for future use. | https://assets.doityourself.com/stry/how-to-install-asphalt-shingles |
---
abstract: 'We study valley-dependent spin transport theoretically in monolayer transition-metal dichalcogenides in which a variety of spin and valley physics are expected because of spin-valley coupling. The results show that the spins are valley-selectively excited with appropriate carrier doping and valley polarized spin current (VPSC) is generated. The VPSC leads to the [*spin-current Hall effect*]{}, transverse spin accumulation originating from the Berry curvature in momentum space. The results indicate that spin excitations with spin-valley coupling lead to both valley and spin transport, which is promising for future low-consumption nanodevice applications.'
author:
- 'Yuya Ominato${}^1$, Junji Fujimoto${}^1$, and Mamoru Matsuo${}^{1,2}$'
bibliography:
- '../reference.bib'
title: 'Valley-dependent spin transport in monolayer transition-metal dichalcogenides'
---
[*Introduction.—*]{}Monolayer transition-metal dichalcogenides (TMDCs) have attracted significant attention because of their unique band structure labeled by spin and valley degrees of freedom. Monolayer TMDCs are direct-bandgap semiconductors, and the band extrema are located at the $K_+$ and $K_-$ points of the Brillouin zone [@makAtomicallyThinMoS2010; @splendianiEmergingPhotoluminescenceMonolayer2010]. Strong spin-orbit coupling of transition metals and the inversion-asymmetric crystal structure lead to spin-valley coupling (SVC) [@xiaoCoupledSpinValley2012]. The broken inversion symmetry also leads to the valley-contrasting Berry curvature [@xiaoValleyContrastingPhysicsGraphene2007; @yaoValleydependentOptoelectronicsInversion2008; @xiaoBerryPhaseEffects2010; @koshinoAnomalousOrbitalMagnetism2010a; @koshinoChiralOrbitalCurrent2011], which is vitally important to assign an intrinsic magnetic moment to each valley and access the valley degrees of freedom.
Recent rapid progress in TMDC device fabrication techniques has enriched our knowledge of the valley physics, such as valley-dependent circular dichroism [@caoValleyselectiveCircularDichroism2012; @makControlValleyPolarization2012; @zengValleyPolarizationMoS2012; @sallenRobustOpticalEmission2012; @wuElectricalTuningValley2013; @zhaoEnhancedValleySplitting2017; @zhongVanWaalsEngineering2017; @seylerValleyManipulationOptically2018; @nordenGiantValleySplitting2019], the valley Hall effect [@makValleyHallEffect2014; @leeElectricalControlValley2016a; @ubrigMicroscopicOriginValley2017; @wuIntrinsicValleyHall2019; @hungDirectObservationValleycoupled2019a], and valley-dependent spin injection by spin-polarized charge injection [@yeElectricalGenerationControl2016]. All these experiments used charge excitations by an electric field or optical irradiation. Conversely, SVC provides a possible way to access the valley degrees of freedom via a spin excitation. However, neither an experimental signature nor a theoretical proposal of spin-valley-coupled phenomena driven by a spin excitation has been reported so far.
In this work, we study valley-dependent spin transport theoretically by a spin excitation in a TMDC monolayer. Figure \[fig\_system\] shows a schematic diagram of a system, in which a ferromagnetic insulator (FI) is fixed to a TMDC monolayer. We then consider microwave irradiation of the system, which induces precession of the localized spins in the FI (i.e., ferromagnetic resonance). The ferromagnetic resonance excites the electron spins in the TMDC monolayer via spin-transfer processes originating from the proximity-exchange coupling at the interface. We find that SVC with proximity-exchange coupling leads to valley-dependent spin excitation, producing valley-polarized spin current (VPSC). Because of the valley-contrasting Berry curvature, VPSC leads to transverse spin accumulation, which we call the [*spin-current Hall effect*]{}. Solving the spin diffusion equation for the valley-polarized spins, we show the spatial distribution of the transverse spin accumulation.
![ [A ferromagnetic insulator is fixed to the TMDC monolayer and a diffusive spin current $\bm{j}_s$ is generated by an external microwave irradiation.]{} []{data-label="fig_system"}](fig_system.eps){width="1\hsize"}
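As a rough numerical companion to the results derived below (Eqs. (\[eq\_spin\_current\])–(\[eq\_susceptibility\])): at zero temperature the thermal factor collapses to a delta function at the chemical potential, so the valley-resolved response can be read off from the densities of states of the two spin-split bands of each valley. The following is a minimal sketch only; the parameter values are illustrative placeholders, not fitted TMDC material parameters.

```python
import numpy as np

# Zero-temperature limit of Im chi_{tau,loc}(Omega): -df/de -> delta(e - mu),
# so the spin current of valley tau is nonzero only when both spin-split bands
# of that valley cross the Fermi level. Units: eV, hbar*v = 1 (illustrative).
Delta, lam, JS = 1.66, 0.15, 0.01   # gap, spin splitting, proximity Zeeman shift

def E(tau, s):                      # band-center shift of the (tau, s) band pair
    return s * (tau * lam / 2 + JS)

def Z(tau, s):                      # half-gap of the (tau, s) band pair
    return Delta / 2 - tau * s * lam / 2

def dos(mu, tau, s):                # gapped-Dirac density of states per unit area
    x = abs(mu - E(tau, s))
    return x / (2 * np.pi) if x > Z(tau, s) else 0.0

def im_chi(mu, tau, omega=1.0):
    dp, dm = dos(mu, tau, +1), dos(mu, tau, -1)
    if dp == 0.0 or dm == 0.0:
        return 0.0
    vertex = 1 + Z(tau, +1) * Z(tau, -1) / (abs(mu - E(tau, +1)) * abs(mu - E(tau, -1)))
    return -2 * np.pi * omega * dp * dm * vertex

for mu in (-0.95, -0.98, -1.05):    # sweep hole doping through the split bands
    print(f"mu = {mu:+.2f} eV:",
          {f"K{'+' if tau > 0 else '-'}": round(im_chi(mu, tau), 3) for tau in (+1, -1)})
# In a narrow doping window only one valley responds: valley-selective excitation.
```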
[*Model Hamiltonian.—*]{}The total Hamiltonian is $$\begin{aligned}
H=H_{\rm{TM}}+\hfi+\hex.
\label{eq_Hamiltonian}\end{aligned}$$ The first term $H_{\rm{TM}}=\sum_{\alpha,\bk}\e_{\alpha\bk} c^\dagger_{\alpha\bk} c_{\alpha\bk}$ describes the electronic states of the TMDC monolayer, where $c^\dagger_{\alpha\bk}$ ($c_{\alpha\bk}$) is the electron creation (annihilation) operator with eigenenergy $\e_{\alpha\bk}$ and quantum number $\alpha=(n,\tau,s)$, where $n=\pm$, $\tau=\pm$, and $s=\pm$ are the band, valley, and spin indices, respectively. The eigenenergy and eigenstates are derived by diagonalizing the effective Hamiltonian around the $K_+$ and $K_-$ points [@xiaoCoupledSpinValley2012]: $$\begin{aligned}
H_{\rm{eff}}=
&\hbar v
\left(
\tau k_x\s^x+
k_y\s^y
\right)
+\frac{\Delta}{2}\sigma^z
-\tau s\lambda\frac{\sigma^z-1}{2},
\label{eq_effective}\end{aligned}$$ where $v$ is the velocity, $\Delta$ is the energy gap, $\lambda$ is the spin splitting at the valence-band top caused by spin-orbit coupling, and $\bm{\sigma}$ contains the Pauli matrices acting on the orbital degrees of freedom. These parameters are fit from first-principles calculations [@zhuGiantSpinorbitinducedSpin2011; @kangBandOffsetsHeterostructures2013; @liuThreebandTightbindingModel2013a; @kormanyosTheoryTwodimensionalTransition2015; @echeverrySplittingBrightDark2016]. The second term $\hfi$ in Eq. (\[eq\_Hamiltonian\]) describes a bulk FI exposed to microwave irradiation: $$\begin{aligned}
\hfi(t)=
&\sum_\bk\hbar\omega_\bk b^\dagger_{\bk}b_{\bk}
-\hac^+(t)b^\dagger_{\bk=\bm{0}}-\hac^-(t)b_{\bk=\bm{0}},
\end{aligned}$$ where $b^\dagger_{\bk}$ ($b_{\bk}$) is the creation (annihilation) operator for magnons with momentum $\bk$, $\hbar\omega_\bk=Dk^2-\hbar\gamma B$ is the magnon dispersion, and $\hac^\pm(t)=\sqrt{SN}{\hbar\gamma\hac}e^{\mp i\Omega t}/\sqrt{2}$, where $N$ is the number of spins in the FI, $S$ is the magnitude of the localized spin, $B$ is a static external magnetic field, $\hac$ and $\Omega$ are the amplitude and frequency of the microwave, respectively, and $\gamma(<0)$ is the gyromagnetic ratio. We have introduced in this Hamiltonian the spin-wave approximation $S^z_\bk=S-b^\dagger_\bk b_\bk$, $S_{\bk}^+=\sqrt{2S}b_{\bk}$, and $S_{-\bk}^{-}=\sqrt{2S}b_{\bk}^\dagger$, where $S^z_\bk$ and $S^\pm_\bk$ give the Fourier components of the $z$ component and of the spin-flip operators of the localized spin in the FI, respectively.
The third term $\hex$ in Eq. (\[eq\_Hamiltonian\]) describes the proximity-exchange coupling at the interface between the TMDC and the FI. The proximity-exchange coupling Hamiltonian contains Zeeman-like exchange coupling [@qiGiantTunableValley2015a; @zhangLargeSpinValleyPolarization2016; @liangMagneticProximityEffect2017; @xuLargeValleySplitting2018; @habeAnomalousHallEffect2017; @cortesTunableSpinPolarizedEdge2019] and a tunneling Hamiltonian [@ohnumaEnhancedDcSpin2014; @ohnumaTheorySpinPeltier2017; @matsuoSpinCurrentNoise2018; @katoMicroscopicTheorySpin2019], $\hex=H_{\rm{Z}}+H_{\rm{T}}$, where $H_{\rm{Z}}=-JSs^z_{\rm{tot}}$, and $$\begin{aligned}
&H_{\rm{T}}
=-\sum_{\bq,\bk}
\left(
J_{\bq,\bk}s_\bq^+S_\bk^-
+{\rm{H.c.}}
\right),
\label{eq_spin_transfer}\end{aligned}$$ where $J$ and $J_{\bq,\bk}$ are the exchange-coupling constant and the matrix element for spin-transfer processes, respectively. $s^z_{\rm{tot}}=\sum_{\alpha,\bk}sc^\dagger_{\alpha\bk}c_{\alpha\bk}$ is the $z$ component of the total electron spin in the TMDC, and $s^\pm_\bq$ is the Fourier transform of the spin-flip operators of electron spin density on the TMDC. The Hamiltonians $H_{\rm{Z}}$ and $H_{\rm{T}}$ correspond to the out-of-plane component and the in-plane component of the proximity-exchange coupling: $H_{\rm{Z}}$ modulates the spin splitting and $H_{\rm{T}}$ describes spin transfer at the interface. [*Spin current at the interface.—*]{}The microwave excites magnons and increases magnon population, which excites spins in the TMDC monolayer because of the spin transfer term $H_T$. This mechanism is called spin pumping, which gives successful spin injection in bilayer systems composed of TMDCs and ferromagnets [@mendesEfficientSpinCharge2018; @husainSpinPumpingHeusler2018; @bansalExtrinsicSpinorbitCoupling2019]. A spin excitation is described by the spin current at the interface. The spin current operator is $$\begin{aligned}
\IS:=-\frac{\hbar}{2}\dot{s}^z_{\rm{tot}}
=-i\sum_{\bq,\bk}
\left(
J_{\bq,\bk}
s_\bq^+
S_\bk^-
-
\rm{H.c.}
\right),\end{aligned}$$ where we define positive current to flow from the TMDC to the FI. We calculate the statistical average of the spin current at the interface and treat $H_{\rm{T}}$ as a perturbation and $H_{\rm{TM}}+\hfi+H_{\rm{Z}}$ as an unperturbed Hamiltonian. The second-order perturbation calculation with respect to $H_{\rm{T}}$ gives the statistical average of the spin current at the interface [@ohnumaEnhancedDcSpin2014; @ohnumaTheorySpinPeltier2017; @matsuoSpinCurrentNoise2018; @katoMicroscopicTheorySpin2019]: $$\begin{aligned}
\la\IS\ra
&=
2\hbar
\sum_{\bq,\bk}
|J_{\bq,\bk}|^2
\int_{-\infty}^\infty
\frac{d\omega}{2\pi}
{\rm{Im}}\chi^R_{\bq}(\omega)
{\rm{Im}}\left[\d G_{-\bk}^<(\omega)\right].\end{aligned}$$ The dynamical spin susceptibility of the TMDC monolayer is $$\begin{aligned}
&\chi^R_\bq(\omega):=
\int^\infty_{-\infty}dte^{i\omega t}
\frac{1}{i\hbar}\theta(t)
\la[s_{\bq}^+(t),s_{-\bq}^-(0)]\ra.\end{aligned}$$ The second-order perturbation calculation of the magnon propagator $G^<_\bk(\omega):=\int^\infty_{-\infty}dte^{i\omega t}({2S}/{i\hbar})\la b_{\bk}^\dagger(0) b_{\bk}(t) \ra$ with respect to $h_{\rm{ac}}$ leads to $$\begin{aligned}
{\rm{Im}}\left[\d G_{-\bk}^<(\omega)\right]
&=-\frac{1}{\hbar}
g(\omega)
\d(\omega-\Omega)
\d_{\bk,{\bm 0}},
\end{aligned}$$ where $g(\omega)={2\pi N(S\gamma\hac)^2}/\left[{(\omega-\omega_{\bm{0}})^2+\ag^2\omega^2}\right]$ is a dimensionless function with the phenomenological dimensionless damping parameter $\ag$ [@kasuyaRelaxationMechanismsFerromagnetic1961; @cherepanovSagaYIGSpectra1993; @jinTemperatureDependenceSpinwave2019]. In the current setup, only the uniform magnon mode is excited, as indicated by the Kronecker delta $\d_{\bk,\bm{0}}$. We consider an interface characterized by a roughness length scale $r$ satisfying $k_{\rm{F}} \ll r^{-1} \ll a^{-1}$, where $k_{\rm{F}}$ is the Fermi wave number and $a$ is the lattice constant of the TMDC. This condition describes an atomically flat heterostructure in which (i) the matrix element is constant, $J_{\bq,\bk}=J_0$, because of the long-wavelength approximation, and (ii) the intervalley spin-transfer processes are negligible; in other words, the roughness condition excludes changes in wave vector comparable to $a^{-1}$. Given these conditions, we obtain the following analytical expression for the spin current: $$\begin{aligned}
\la\IS\ra=I_S^{K_+}+I_S^{K_-},
\end{aligned}$$ where we introduce the valley-resolved spin current $$\begin{aligned}
I_S^{K_\tau}=2|J_0|^2 g(\Omega){\rm{Im}}\chi^R_{\tau,{\rm{loc}}}(\Omega),
\label{eq_spin_current}\end{aligned}$$ with the local spin susceptibility for each valley $\chi^R_{\tau,{\rm{loc}}}(\omega):=\sum_\bq\chi^R_{\tau,{\bq}}(\omega)$. The imaginary part of the local spin susceptibility is given by $$\begin{aligned}
&{\rm{Im}}\chi_{\tau,{\rm{loc}}}^R(\omega)
=-{2\pi\ho}
\int d\e
\left(
-\frac{\partial f(\e)}{\partial \e}
\right)
D_{\tau,+}(\e)D_{\tau,-}(\e) \notag \\
&\hspace{35mm}\times
\left(
1+\frac{Z_{\tau,+}Z_{\tau,-}}{|\e-E_{\tau,+}||\e-E_{\tau,-}|}
\right),
\label{eq_susceptibility}\end{aligned}$$ where $f(\e)=1/\left(e^{(\e-\mu)/\kbt}+1\right)$ is the Fermi distribution function with chemical potential $\mu$ and temperature $T$, and $D_{\tau,s}(\e)$ is the density of states per unit area: $$\begin{aligned}
&D_{\tau,s}(\e)=
\frac{1}{2\pi(\hbar v)^2}
|\e-E_{\tau,s}|
{~}\theta(|\e-E_{\tau,s}|-Z_{\tau,s}),
\end{aligned}$$ with $E_{\tau,s}=s(\tau\lambda/2-JS)$ and $Z_{\tau,s}=\Delta/2-\tau s \lambda/2$. At zero temperature, the spin current is finite when the product of the spin-up and spin-down densities of states in each valley is finite at the Fermi level, as shown in Eqs. (\[eq\_spin\_current\]) and (\[eq\_susceptibility\]) [@ominatoQuantumOscillationsGilbert2019]. One of the essential results of the above expressions is that the spin current can be valley polarized. This is because the valley degeneracy is lifted in the current system, so that the spin current in each valley can differ. The valley polarization of the spin current is characterized by the valley-polarized spin current (VPSC), which is defined as $$\begin{aligned}
I_S^{\rm{VP}}:=I_S^{K_+}-I_S^{K_-}.\end{aligned}$$ We show that, with appropriate carrier doping, the spin current is completely valley polarized, which means that the spins are valley-selectively excited. The first and second panels from the left in Fig. \[fig\_vpsc\](a) show the valence bands in the $K_+$ and $K_-$ valleys, respectively. The parameters are set to $\lambda/\Delta=0.10$ and $JS/\Delta=0.05$, which are comparable to the results of first-principles calculations [@qiGiantTunableValley2015a; @zhangLargeSpinValleyPolarization2016; @liangMagneticProximityEffect2017; @xuLargeValleySplitting2018]. The third and fourth panels from the left in Fig. \[fig\_vpsc\](a) show the spin current in each valley and the VPSC, respectively. In the energy region (i), the spin current is finite only in the $K_+$ valley, so that the spin current is completely valley polarized. This means that the spins are valley-selectively excited, which remains feasible at finite temperatures provided that the spin splitting due to the proximity-exchange coupling is much greater than the thermal broadening, $\kbt/\lambda\ll 1$. In the energy region (ii), however, the spin current is finite in both the $K_+$ and $K_-$ valleys, the VPSC is almost zero, and the valley selectivity is suppressed. Note also that a small spin splitting exists in the conduction band [@liuThreebandTightbindingModel2013a], which is omitted in the current model Hamiltonian, so that valley-selective spin excitation is also possible in the conduction band.
![ (a) The first and second panels from the left show the valence bands in the $K_+$ and $K_-$ valleys, respectively. The third panel from the left shows the spin current at the interface. The dotted and solid curves represent the valley-resolved spin current in the $K_+$ and $K_-$ valleys, respectively. The fourth panel from the left shows the valley-polarized spin current at several temperatures. The units of the spin current are given by $I_{0}=\frac{g(\Omega)}{\pi}\frac{|J_0|^2\Delta^2}{(\hbar v)^4}\hbar\Omega$. (b) (i) The up-spin (blue arrows) and down-spin (red arrows) electrons flowing in opposite directions lead to the transverse spin accumulation. (ii) When the spin current is finite in the $K_+$ and $K_-$ valleys, the transverse spin accumulation cancels. []{data-label="fig_vpsc"}](fig_spin_current.eps){width="1\hsize"}
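To make the structure of the energy regions explicit, the following Python sketch evaluates the valley-resolved spin current of Eqs. (\[eq\_spin\_current\]) and (\[eq\_susceptibility\]) at $T=0$. It is a minimal illustration rather than the code behind Fig. \[fig\_vpsc\]: energies are measured in units of $\Delta$, $\hbar v=1$, the overall prefactor $2|J_0|^2g(\Omega)\hbar\Omega$ (the unit $I_0$ in the figure caption) is set to one, and the two chemical potentials in the printout are illustrative choices inside regions (i) and (ii).

```python
lam, JS = 0.10, 0.05                  # lambda/Delta and JS/Delta, as in the text

def E(tau, s):                        # band center E_{tau,s} = s(tau*lam/2 - JS)
    return s * (tau * lam / 2 - JS)

def Z(tau, s):                        # half gap Z_{tau,s} = 1/2 - tau*s*lam/2
    return 0.5 - tau * s * lam / 2

def dos(mu, tau, s):
    # density of states D_{tau,s}(mu), up to the common factor 1/(2 pi (hbar v)^2)
    x = abs(mu - E(tau, s))
    return x if x > Z(tau, s) else 0.0

def I_S(mu, tau):
    # valley-resolved spin current ~ -Im chi^R_{tau,loc}(Omega) at T = 0
    dp, dm = dos(mu, tau, +1), dos(mu, tau, -1)
    if dp == 0.0 or dm == 0.0:
        return 0.0
    vertex = 1.0 + Z(tau, +1) * Z(tau, -1) / (abs(mu - E(tau, +1)) * abs(mu - E(tau, -1)))
    return dp * dm * vertex

for mu in (-0.60, -0.70):             # region (i) and region (ii), respectively
    Ip, Im_ = I_S(mu, +1), I_S(mu, -1)
    print(f"mu/Delta = {mu:+.2f}: I_S(K+) = {Ip:.3f}, I_S(K-) = {Im_:.3f}, VPSC = {Ip - Im_:+.3f}")
```

For $\mu/\Delta=-0.60$ the sketch gives a finite $I_S^{K_+}$ with $I_S^{K_-}=0$ (complete valley polarization), whereas for $\mu/\Delta=-0.70$ both valleys contribute and the VPSC nearly vanishes.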
The spin current at the interface generates a diffusive spin current $\bm{j}_s$ on the TMDC monolayer (see Fig. \[fig\_system\]). Figure \[fig\_vpsc\](b) shows the generated diffusive spin current schematically. In the energy region (i), the diffusive spin current consists of electrons in the $K_+$ valley. For each spin species, the Berry curvature deflects the flow in the transverse direction [@xiaoValleyContrastingPhysicsGraphene2007; @yaoValleydependentOptoelectronicsInversion2008; @xiaoBerryPhaseEffects2010]. The sign of the Berry curvature is the same for the up-spin and down-spin electrons because they belong to the same valley. Consequently, the up-spin and down-spin electrons flowing in opposite directions lead to transverse spin accumulation, which we call the [*spin-current Hall effect*]{}. This is one of the main results of this paper. In the energy region (ii), however, the diffusive spin current consists of electrons in both the $K_+$ and $K_-$ valleys. The Berry curvatures around the $K_+$ and $K_-$ valleys have opposite signs because the two valleys are time-reversed partners. Consequently, the transverse spin accumulations originating from the $K_+$ and $K_-$ valleys cancel each other. [*Diffusion of injected spins.—*]{}A prominent feature of the VPSC is the generation of transverse spin accumulation, as discussed above. Here, we solve the spin diffusion equation for the valley-polarized spins to clarify the transverse spin accumulation. The spin diffusion equation is $$\begin{aligned}
\left(\partial_x^2+\partial_y^2-1/\lambda_s^2\right)\mu_s(x,y)=0,
\label{eq_spin_diffusion}\end{aligned}$$ where $\mu_s(x,y)$ is the spin accumulation and $\lambda_s$ is the spin diffusion length. The diffusive spin current is given by $$\begin{aligned}
\bm{j}_s(x,y)
&=-\frac{\sigma_{xx}}{e}
\left[
\nabla\mu_s(x,y)+\theta\nabla\mu_s(x,y)\times\bm{e}_z
\right],
\label{eq_diffusive_spin_current}\end{aligned}$$ where $\theta=\sigma_{xy}/\sigma_{xx}$ is the Hall angle, $\sigma_{xx}$ is the longitudinal conductivity, and $\sigma_{xy}$ is the Hall conductivity originating from the Berry curvature. The second term describes the [*spin-current Hall effect*]{}. We consider a TMDC monolayer with system size $L_x\times 2L_y$ and boundary conditions $j_{s}^x(0,y)=j_{0},{~}j_{s}^x(L_x,y)=0$ and $j_{s}^y(x,\pm L_y)=0$. The boundary conditions mean that the diffusive spin current is injected at $x=0$ and vanishes at the other boundaries. We numerically solve Eq. (\[eq\_spin\_diffusion\]) and set $L_x/\lambda_s=L_y/\lambda_s=5$ with the parameter $\theta=0.3$ [@wuIntrinsicValleyHall2019; @hungDirectObservationValleycoupled2019a]. Figure \[fig\_spin\_diffusion\] shows the spin accumulation decaying exponentially with the spin diffusion length in the $x$ direction, which is the usual spin diffusion. Note the transverse spin accumulation near the boundaries $y=\pm L_y$ with opposite signs, which is the consequence of the anomalous $\theta$ term in Eq. (\[eq\_diffusive\_spin\_current\]). In addition, the transverse spin accumulation decays with the spin diffusion length.
![ Numerical solution of the spin diffusion equation with system size $L_x/\lambda_s=L_y/\lambda_s=5$ and Hall angle $\theta=0.3$. (a) Spin accumulation $\mu_s$ plotted as a function of $x$ and $y$ and (b) as a function of $y$ for several values of $x$. The units of the spin accumulation are given by $\mu_{0}=ej_{0}\lambda_s/\sigma_{xx}$. []{data-label="fig_spin_diffusion"}](fig_spin_diffusion.eps){width="1\hsize"}
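For concreteness, the following Python sketch relaxes Eq. (\[eq\_spin\_diffusion\]) with the boundary conditions stated above on a finite-difference grid. It is a minimal Jacobi iteration, not the solver used for Fig. \[fig\_spin\_diffusion\]; the grid spacing, iteration count, and tolerance are illustrative choices, lengths are measured in units of $\lambda_s$, and $\mu_s$ is in units of $\mu_0=ej_0\lambda_s/\sigma_{xx}$.

```python
import numpy as np

Lx, Ly, theta, h = 5.0, 5.0, 0.3, 0.1
x = np.arange(0.0, Lx + h / 2, h)
y = np.arange(-Ly, Ly + h / 2, h)
mu = np.zeros((len(x), len(y)))

for _ in range(50000):
    old = mu.copy()
    # interior: 5-point stencil for (d_x^2 + d_y^2 - 1) mu = 0
    mu[1:-1, 1:-1] = (old[2:, 1:-1] + old[:-2, 1:-1]
                      + old[1:-1, 2:] + old[1:-1, :-2]) / (4.0 + h**2)
    # x = 0:   j_s^x = j_0  ->  d_x mu + theta d_y mu = -1
    mu[0, :] = mu[1, :] + h * (1.0 + theta * np.gradient(old[0, :], h))
    # x = L_x: j_s^x = 0    ->  d_x mu + theta d_y mu = 0
    mu[-1, :] = mu[-2, :] - h * theta * np.gradient(old[-1, :], h)
    # y = -L_y and y = +L_y: j_s^y = 0  ->  d_y mu - theta d_x mu = 0
    mu[:, 0] = mu[:, 1] - h * theta * np.gradient(old[:, 0], h)
    mu[:, -1] = mu[:, -2] + h * theta * np.gradient(old[:, -1], h)
    if np.abs(mu - old).max() < 1e-10:
        break
```

The relaxed solution reproduces the features discussed above: an $e^{-x/\lambda_s}$ decay along $x$ and transverse accumulation of opposite signs near $y=\pm L_y$ produced by the $\theta$ term. A production calculation would use a direct or multigrid solver, since plain Jacobi iteration converges slowly.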
[*Discussion.—*]{}We now discuss the experimental detection of the transverse spin accumulation. One feasible experimental technique to detect such spin accumulation is to measure the magneto-optical Kerr effect [@katoObservationSpinHall2004a; @sihSpatialImagingSpin2005; @leeElectricalControlValley2016a]. A spatially resolved image of the Kerr angle provides information on the spatial distribution of the spin accumulation. Although spin-orbit coupling is strong in the TMDC monolayer, the spin diffusion length for the out-of-plane spin component is expected to be quite long because of the symmetry of the crystal structure. Therefore, the spin diffusion length could be comparable to the limit of the spatial resolution of the Kerr measurement (about one micron). We also discuss the effects of intervalley spin-transfer processes at the interface, which were neglected in our main analysis. In the presence of atomic-scale interface roughness, the intervalley spin-transfer processes are not negligible and give a correction term for the spin current estimated as $\delta\la\IS\ra\propto\sum_{\tau}|J_1|^2D_{\tau,+}(\e)D_{-\tau,-}(\e)$, with the intervalley matrix element $J_1$. The correction term is proportional to the product of the densities of states for spin-up and spin-down electrons in different valleys. A vital consequence of the correction term is that non-equilibrium valley accumulation is induced by spin excitation, as shown in Fig. \[fig\_ISHE\](a), in the presence of spin-valley locking, i.e., a one-to-one correspondence between spin and valley indices at the Fermi level. There are two possible ways to detect the valley accumulation induced by the spin excitation. First, the valley accumulation may be detected through the inverse spin Hall effect (ISHE). Valley accumulation leads to a diffusive spin current consisting of spin-up $K_+$ valley electrons and spin-down $K_-$ valley electrons. The valley-contrasting Berry curvature gives rise to the ISHE, which may be detected electrically. Figure \[fig\_ISHE\](b) shows a schematic illustration of the ISHE. Second, the valley accumulation may be detected through the modulation of the anomalous Hall conductivity, as demonstrated by the optical pumping of valley-polarized carriers [@makValleyHallEffect2014; @ubrigMicroscopicOriginValley2017].
![(a) Valley accumulation is induced by spin excitation with the intervalley spin transfer processes. The dotted line represents the chemical potential $\mu$ in equilibrium. (b) Valley accumulation with spin-valley locking leads to the inverse spin Hall effect.[]{data-label="fig_ISHE"}](fig_ISHE.eps){width="1\hsize"}
[*Conclusion.—*]{}We have presented a study of valley-dependent spin transport at the interface between a TMDC monolayer and a FI. Given appropriate carrier doping, the spins are valley-selectively excited, which generates a valley-polarized spin current (VPSC). We have also studied spin diffusion in the TMDC. A prominent feature of the VPSC is the generation of transverse spin accumulation, which we call the [*spin-current Hall effect*]{}. This valley-dependent spin excitation and transport phenomenon expands the possibilities of future nanotechnology and could be useful for all-spin-valley logic devices [@behin-aeinProposalAllspinLogic2010a]. We thank R. Oshima and M. Shiraishi for helpful discussions. This work is partially supported by the Priority Program of the Chinese Academy of Sciences, Grant No. XDB28000000.
Appendix
========
Effective Hamiltonian
---------------------
The electronic states of TMDC around the ${\rm{K}}_+$ and ${\rm{K}}_-$ points are described by the $8\times8$ effective Hamiltonian [@xiaoCoupledSpinValley2012; @rostamiEffectiveLatticeHamiltonian2013; @kormanyosMonolayerMoSTrigonal2013; @kormanyosSpinOrbitCouplingQuantum2014; @kormanyosTheoryTwodimensionalTransition2015] $$\begin{aligned}
H_{\rm{eff}}=
&\hbar v
\Big(
k_x\s^x\otimes \tau^z \otimes 1_s+
k_y\s^y\otimes 1_\tau \otimes 1_s
\Big) \notag \\
&+\frac{\Delta}{2}\s^z\otimes1_\tau\otimes1_s
-\lambda_v\frac{\s^z-1_\s}{2} \otimes \tau^z \otimes s^z \notag \\
&-\lambda_c\frac{\s^z+1_\s}{2} \otimes \tau^z \otimes s^z,\end{aligned}$$ where $\bm{\sigma}$, $\bm{\tau}$, and $\bm{s}$ ($1_\sigma$, $1_\tau$, and $1_s$) are Pauli (identity) matrices acting on the orbital, valley, and real-spin degrees of freedom, respectively, $v$ is the band velocity, $\Delta$ is the energy gap, and $\lambda_v$ and $\lambda_c$ are the spin splittings at the valence and conduction band edges caused by spin-orbit coupling. The Zeeman-like exchange coupling originating from the out-of-plane component is written as $$\begin{aligned}
H_{\rm Z}=-JS1_\sigma\otimes1_\tau\otimes s^z.\end{aligned}$$ The effective Hamiltonian with the Zeeman coupling is block diagonal, and the valley index and the $z$ component of spin are good quantum numbers, so that it suffices to solve the following $2\times2$ Hamiltonian $$\begin{aligned}
H_{\tau s\bk}
&=
X_{\tau\bk}\sigma^x
+Y_{\bk}\sigma^y
+Z_{\tau s}\sigma^z
+E_{\tau s} 1_\sigma, \\
&=\begin{pmatrix}
Z_{\tau s}+E_{\tau s} & X_{\tau\bk}-iY_{\bk} \\
X_{\tau\bk}+iY_{\bk} & -Z_{\tau s}+E_{\tau s}
\end{pmatrix},\end{aligned}$$ where we define $X_{\tau\bk}$, $Y_{\bk}$, $Z_{\tau s}$, and $E_{\tau s}$ as $$\begin{aligned}
&X_{\tau\bk}=\hbar v\tau k_x, \\
&Y_{\bk}=\hbar v k_y, \\
&Z_{\tau s}=\frac{\Delta}{2}-\tau s\frac{\lambda_+}{2}, \\
&E_{\tau s}=s\left(\tau\frac{\lambda_-}{2}-JS\right),\end{aligned}$$ where $\lambda_\pm=\lambda_v\pm\lambda_c$, band index $n=\pm$, valley $\tau=\pm$, and spin $s=\pm$. The Schrödinger equation is written as $$\begin{aligned}
H_{\tau s\bk}\phi_{n\tau s\bk}=\e_{n\tau s\bk}\phi_{n \tau s\bk}.\end{aligned}$$ The energy band is given by $$\begin{aligned}
&\e_{n\tau s\bk}
=nR_{\tau s\bk}+E_{\tau s}, \\
&R_{\tau s\bk}
=\sqrt{X_{\tau\bk}^2+Y_{\bk}^2+Z_{\tau s}^2},\end{aligned}$$ and the explicit expression is $$\begin{aligned}
\e_{n\tau s\bk}
&=n\sqrt{
(\hbar vk)^2
+\left(
\frac{\Delta}{2}-\tau s\frac{\lambda_+}{2}
\right)^2
}
+s\left(
\tau\frac{\lambda_-}{2}-JS
\right),\end{aligned}$$ where $k=\sqrt{k_x^2+k_y^2}$. The eigenspinor is given by $$\begin{aligned}
\phi_{n\tau s\bk}=
\begin{cases}
\frac
{1}
{\sqrt{
2R_{\tau s\bk}
(R_{\tau s\bk}+Z_{\tau s})
}
}
\begin{pmatrix}
R_{\tau s\bk}+Z_{\tau s} \\
X_{\tau\bk} + iY_{\bk}
\end{pmatrix} \hspace{6.2mm}(n=+) \\
\frac
{1}
{\sqrt{
2R_{\tau s\bk}
(R_{\tau s\bk}+Z_{\tau s})
}
}
\begin{pmatrix}
-(X_{\tau\bk} - iY_{\bk}) \\
R_{\tau s\bk}+Z_{\tau s}
\end{pmatrix} {~}(n=-)
\end{cases}\end{aligned}$$
\end{cases}\end{aligned}$$ The density of states $D(\e)$ is written as $$\begin{aligned}
&D(\e)
=\sum_{\tau,s}D_{\tau s}(\e), \\
&D_{\tau s}(\e)
=\sum_{n,\bk}\d(\e_{n\tau s\bk}-\e),\end{aligned}$$ where $D_{\tau s}(\e)$ is the density of states for each valley and spin. Using the following formula $$\begin{aligned}
\delta\left(g(x)\right)=
\sum_i\frac{1}{|g^\p(x_i)|}\delta(x-x_i),\end{aligned}$$ where $x_i$ are the roots of $g(x)=0$ and $$\begin{aligned}
\left(
\frac{\partial\e_{n\tau s\bk}}{\partial k}
\right)^{-1}
&=\frac
{
n\sqrt{
(\hbar vk)^2 + \left(
\Delta/2-\tau s\lambda_+/2
\right)^2
}
}
{(\hbar v)^2k} \notag \\
&=\frac{nR_{\tau s \bk}}{(\hbar v)^2k},\end{aligned}$$ the delta function $\delta(\e_{n\tau s\bk}-\e)$ is written as $$\begin{aligned}
&\delta(\e_{n\tau s\bk}-\e)
=
\Bigg|\frac{nR_{\tau s \bk_0}}{(\hbar v)^2k_0}\Bigg|
\delta\left(k-k_0\right),\end{aligned}$$ where $\e_{n\tau s\bk_0}=nR_{\tau s\bk_0}+E_{\tau s}=\e$ and $n={\rm sgn}(\e)$, so that $D_{\tau s}(\e)$ is calculated as $$\begin{aligned}
D_{\tau s}(\e)
&=
\frac{A}{(2\pi)^2}
\int_0^\infty kdk
\int_0^{2\pi}d\theta_\bk
\Bigg|\frac{nR_{\tau s \bk_0}}{(\hbar v)^2k_0}\Bigg|
\d(k_0-k) \notag \\
&\hspace{5mm}\times
\theta(|\e-E_{\tau s}|-Z_{\tau s}) \notag \\
&=\frac{A}{2\pi(\hbar v)^2}
|\e-E_{\tau s}|{~}
\theta(|\e-E_{\tau s}|-Z_{\tau s}) \notag \\
&=\frac{A}{2\pi(\hbar v)^2}
\Bigg|
\e-s\left(
\tau\frac{\lambda_-}{2}-JS
\right)
\Bigg| \notag \\
&\hspace{5mm}\times
\theta
\left(
\Bigg|
\e-s\left(
\tau\frac{\lambda_-}{2}-JS
\right)
\Bigg|
-\left(
\frac{\Delta}{2}-\tau s\frac{\lambda_+}{2}
\right)
\right),\end{aligned}$$ where $A$ is the area of the system.
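As a quick numerical sanity check of this expression, one can broaden the delta function $\delta(\e_{n\tau s\bk}-\e)$ into a narrow Lorentzian and perform the $k$ integration directly. The following Python sketch does this for illustrative parameter values ($\hbar v=1$, energies in units of $\Delta$, $\lambda_+=\lambda_-=0.1\Delta$, $JS=0.05\Delta$, $(\tau,s)=(+1,+1)$), with the area factor $A$ dropped, i.e., the density of states per unit area.

```python
import numpy as np

lam_p = lam_m = 0.10                  # lambda_+/Delta and lambda_-/Delta
JS, tau, s = 0.05, +1, +1
E_ts = s * (tau * lam_m / 2 - JS)     # E_{tau s}
Z_ts = 0.5 - tau * s * lam_p / 2      # Z_{tau s}
eta = 5e-4                            # Lorentzian broadening of the delta function

k = np.linspace(0.0, 3.0, 600001)
for eps in (-0.7, 0.2):               # one energy inside the band, one in the gap
    D_num = 0.0
    for n in (+1, -1):                # sum over the band index
        e_k = n * np.sqrt(k**2 + Z_ts**2) + E_ts
        lor = (eta / np.pi) / ((e_k - eps)**2 + eta**2)
        D_num += np.trapz(k * lor, k) / (2 * np.pi)
    D_ana = abs(eps - E_ts) / (2 * np.pi) if abs(eps - E_ts) > Z_ts else 0.0
    print(f"eps/Delta = {eps:+.2f}: numerical {D_num:.5f}, analytical {D_ana:.5f}")
```

The two values agree up to corrections of order $\eta$, confirming the step-function onset at $|\e-E_{\tau s}|=Z_{\tau s}$.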
Field operators
---------------
We define the field operators $$\begin{aligned}
&\hat{\psi}^\dagger(\br)
=\sum_{\alpha,\bk}
c^\dagger_{\alpha\bk}
\psi_{\alpha\bk}^\dagger(\br), \\
&\hat{\psi}(\br)
=\sum_{\alpha,\bk}
c_{\alpha\bk}
\psi_{\alpha\bk}(\br),\end{aligned}$$ where $c_{\alpha\bk}^\dagger$ and $c_{\alpha\bk}$ are the creation and annihilation operators satisfying the following anticommutation rules $$\begin{aligned}
&\{
c_{\alpha\bk},
c_{\beta\bkp}
\}
=
\{
c_{\alpha\bk}^\dagger,
c_{\beta\bkp}^\dagger
\}=0, \\
&\{
c_{\alpha\bk},
c_{\beta\bkp}^\dagger
\}=
\delta_{\alpha\beta}\delta_{\bk\bkp},\end{aligned}$$ and $\psi_\alpha(\br)$ are 8-component orthonormal basis functions satisfying the completeness relation $$\begin{aligned}
&\int d\br
\psi_{\alpha\bk}^\dagger(\br)
\psi_{\beta\bkp}(\br)
=
\delta_{\alpha\beta}
\delta_{\bk\bkp}, \\
&\sum_{\alpha,\bk}
\psi_{\a\bk}(\br)
\psi^\dagger_{\a\bk}(\br^\p)
=
\delta(\br-\br^\p).\end{aligned}$$ Note that $\hat{\psi}^\dagger(\br)$ and $\hat{\psi}(\br)$ are the field operators, and $\psi^\dagger_{\alpha\bk}(\br)$ and $\psi_{\alpha\bk}(\br)$ are 8-component wave functions. We use plane-wave basis functions $$\begin{aligned}
&\psi^\dagger_{\alpha\bk}(\br)
=\frac{e^{-i\bk\cdot\br}}{\sqrt{A}}
u^\dagger_{\alpha\bk}, \\
&\psi_{\alpha\bk}(\br)
=\frac{e^{i\bk\cdot\br}}{\sqrt{A}}
u_{\alpha\bk},\end{aligned}$$ where $A$ is the area of the system, $\alpha=(n,\tau,s)$, and $u_{\alpha\bk}$ is the 8-component spinor.
The density of an operator $O$ is defined as $$\begin{aligned}
O(\br):=\hat{\psi}^\dagger(\br)O\hat{\psi}(\br)
=\sum_{\alpha,\bk}\sum_{\beta,\bkp} O_{\alpha\bk,\beta\bkp}(\br)c_{\alpha\bk}^\dagger c_{\beta\bkp},\end{aligned}$$ where $O_{\alpha\bk,\beta\bkp}(\br)$ is given as $$\begin{aligned}
O_{\alpha\bk,\beta\bkp}(\br)
&=\psi^\dagger_{\alpha\bk}(\br)O\psi_{\beta\bkp}(\br) \notag \\
&=
\frac
{e^{-i(\bk-\bkp)\cdot\br}}
{A}
u^\dagger_{\alpha\bk}
O
u_{\beta\bkp} \notag \\
&=:
\frac
{e^{-i(\bk-\bkp)\cdot\br}}
{A}
O_{\alpha\bk,\beta\bkp}.\end{aligned}$$ $O(\br)$ is expanded in Fourier series $$\begin{aligned}
&O(\br)=\sum_\bq e^{i\bq\cdot\br}O_\bq, \\
&O_\bq=\frac{1}{A}\int d\br e^{-i\bq\cdot\br}O(\br),\end{aligned}$$ and the Fourier coefficient is given as $$\begin{aligned}
O_\bq=
\frac{1}{A}
\sum_{\alpha,\bk,\beta}
O_{\alpha\bk,\beta\bk+\bq}
c_{\alpha\bk}^\dagger c_{\beta\bk+\bq},\end{aligned}$$ where we use the orthogonality relation $$\begin{aligned}
\frac{1}{A}\int d\br
e^{-i(\bk-\bkp+\bq)\cdot\br}=
\delta_{\bk+\bq,\bkp}.\end{aligned}$$ The spin density operator is defined as $$\begin{aligned}
&\bm{s}(\br):=
\hat{\psi}^\dagger(\br)
(1_\sigma\otimes1_\tau\otimes\bm{s})
\hat{\psi}(\br),\end{aligned}$$ and the Fourier coefficient is given as $$\begin{aligned}
\bm{s}_\bq=
\frac{1}{A}
\sum_{\alpha,\bk}
\sum_{\beta}
\bm{s}_{\alpha\bk,\beta\bk+\bq}
c_{\alpha\bk}^\dagger c_{\beta\bk+\bq},\end{aligned}$$ and the matrix element is given as $$\begin{aligned}
\bm{s}_{\alpha\bk,\beta\bk+\bq}=
u^\dagger_{\alpha\bk}
(1_\sigma\otimes 1_\tau\otimes\bm{s})
u_{\beta\bk+\bq}.\end{aligned}$$ In the following calculation, the spin operator $1_\sigma\otimes1_\tau\otimes\bm{s}$ is written as $\bm{s}$ for the sake of simple notation. The matrix elements satisfy $$\begin{aligned}
\left(
s_{\alpha\bk,\beta\bk+\bq}^+
\right)^\ast=
s_{\beta\bk+\bq,\alpha\bk}^-,\end{aligned}$$ and the spin raising and lowering operators $s^+_\bq$ and $s^-_\bq$ satisfy $$\begin{aligned}
&\left(
s_{\bq}^+
\right)^\dagger=
s_{-\bq}^-.\end{aligned}$$ The above relation is proved as follows $$\begin{aligned}
\left(
s_{\alpha\bk,\beta\bk+\bq}^+
\right)^\ast
&=
\left(
u^\dagger_{\alpha\bk}
s^+
u_{\beta\bk+\bq}
\right)^\ast \notag \\
&=
u^\dagger_{\beta\bk+\bq}
s^-
u_{\alpha\bk} \notag \\
&=s_{\beta\bk+\bq,\alpha\bk}^-.\end{aligned}$$ and $$\begin{aligned}
\left(
s_{\bq}^+
\right)^\dagger
&=
\left(
\frac{1}{A}
\sum_{\alpha,\bk}
\sum_{\beta}
s^+_{\alpha\bk,\beta\bk+\bq}
c_{\alpha\bk}^\dagger c_{\beta\bk+\bq}
\right)^\dagger \notag \\
&=\frac{1}{A}
\sum_{\alpha,\bk}
\sum_{\beta}
s^-_{\beta\bk+\bq,\alpha\bk}
c^\dagger_{\beta\bk+\bq}c_{\alpha\bk} \notag \\
&=\frac{1}{A}
\sum_{\a,\bk}
\sum_{\beta}
s^-_{\a\bk,\beta\bk-\bq}
c^\dagger_{\a\bk}c_{\beta\bk-\bq} \notag \\
&=s_{-\bq}^-.\end{aligned}$$
Here, we derive the commutation relations between the spin density operators. In the following, we use the shorthand notation $c_1=c_{\alpha\bk}$, $c_2=c_{\beta\bkp}$, and so on. The commutation relation between the spin density operators is calculated as $$\begin{aligned}
&[s^i(\br),s^j(\brp)] \notag \\
&=\sum_{1,2,3,4}
[c^\dagger_1 c_2,c^\dagger_3 c_4]
s^i_{12}(\br)s^j_{34}(\brp) \notag \\
&=\sum_{1,2,3,4}
\left(
c^\dagger_1 c_4\delta_{23}
s^i_{12}(\br)s^j_{34}(\brp)
-c^\dagger_3 c_2\delta_{41}
s^j_{34}(\brp)s^i_{12}(\br)
\right) \notag \\
&=\sum_{1,4}
c^\dagger_1 c_4
\sum_{2,3}
s^i_{12}(\br)\delta_{23}s^j_{34}(\brp) \notag \\
&\hspace{10mm}
-\sum_{3,2}
c^\dagger_3 c_2
\sum_{1,4}
s^j_{34}(\brp)\delta_{41}s^i_{12}(\br) \notag \\
&=\sum_{1,4}
c^\dagger_1 c_4
\psi^\dagger_1(\br)
s^is^j
\psi_4(\br)\delta(\br-\br^\p) \notag \\
&\hspace{10mm}
-\sum_{32}
c^\dagger_3 c_2
\psi^\dagger_3(\br)
s^js^i
\psi_2(\br)\delta(\br-\br^\p) \notag \\
&=\sum_{1,2}
c^\dagger_1 c_2
\psi^\dagger_1(\br)
[s^i,s^j]
\psi_2(\br)\delta(\br-\br^\p) \notag \\
&=\hat{\psi}^\dagger(\br)[s^i,s^j]\hat{\psi}(\br)\delta(\br-\br^\p),\end{aligned}$$ where we use the following relations $$\begin{aligned}
[c^\dagger_1 c_2,c^\dagger_3 c_4]
&=c^\dagger_1 [c_2,c^\dagger_3 c_4]
+[c^\dagger_1,c^\dagger_3 c_4] c_2 \notag \\
&=c^\dagger_1 c_4\delta_{23}
-c^\dagger_3 c_2\delta_{14},\end{aligned}$$ and $$\begin{aligned}
\sum_{1,2}\psi_1(\br)\psi^\dagger_2(\br^\p)\delta_{12}
=\delta(\br-\br^\p).\end{aligned}$$ Using the commutation relations between the spin operators $$\begin{aligned}
&[s^i,s^j]=2i\epsilon_{ijk}s^k, \\
&[s^z,s^\pm]=\pm2s^\pm,\end{aligned}$$ we obtain the commutation relations $$\begin{aligned}
&[s^i(\br),s^j(\brp)]=2i\epsilon_{ijk}s^k(\br)\delta(\br-\brp), \label{eq_sisj} \\
&[s^z(\br),s^\pm(\br^\p)]=\pm2s^\pm(\br)\delta(\br-\br^\p). \label{eq_szspm}\end{aligned}$$ Integrating both sides of Eq. (\[eq\_szspm\]) $$\begin{aligned}
&\int d\br
\frac{1}{A}\int d\brp e^{-i\bq\cdot\brp}
[s^z(\br),s^\pm(\br^\p)]
=[s^z_{\rm tot},s^\pm_\bq], \\
&\int d\br
\frac{1}{A}\int d\brp e^{-i\bq\cdot\brp}
\left(
\pm2s^\pm(\br)\delta(\br-\br^\p)
\right)
=\pm2s^\pm_\bq,
\end{aligned}$$ we obtain $$\begin{aligned}
[s^z_{\rm tot},s^\pm_\bq]=\pm2s^\pm_\bq.
\end{aligned}$$
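These operator identities can be checked at the single-spin level in a few lines of Python; the snippet below is a minimal verification using Pauli matrices (the normalization in which $[s^z,s^\pm]=\pm2s^\pm$), included only as a consistency check.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# [s^i, s^j] = 2i epsilon_{ijk} s^k, e.g., [s^x, s^y] = 2i s^z
assert np.allclose(sx @ sy - sy @ sx, 2j * sz)

# [s^z, s^pm] = +/- 2 s^pm with s^pm = s^x +/- i s^y
for sign in (+1, -1):
    spm = sx + sign * 1j * sy
    assert np.allclose(sz @ spm - spm @ sz, sign * 2 * spm)
print("Pauli commutation relations verified")
```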
Spin susceptibility
-------------------
The retarded component of the spin susceptibility is defined as $$\begin{aligned}
\chi_\bq^R(t_1,t_2)
&=\frac{1}{i\hbar}\theta(t_1-t_2)
\langle[s_{\bq}^+(t_1),s_{-\bq}^-(t_2)]\rangle.\end{aligned}$$ In this section, we derive the formula of the spin susceptibility. The susceptibility is calculated as $$\begin{aligned}
\chi_\bq^R(t_1,t_2)
&=
\frac{1}{i\hbar}
\theta(t_1-t_2)
\la[s_{\bq}^+(t_1),s_{-\bq}^-(t_2)]\ra \notag \\
&=
\frac{1}{i\hbar}
\frac{1}{A^2}
\sum_{\a, \bk, \beta}
\sum_{\gamma,\bkp,\delta}
s^+_{\a\bk ,\beta \bk +\bq}
s^-_{\gamma\bkp,\delta\bkp-\bq}
\theta(t_1-t_2) \notag \\
&\hspace{5mm}\times
\la
[
c^\dagger_{\a\bk}(t_1) c_{\beta \bk +\bq}(t_1),
c^\dagger_{\gamma\bkp}(t_2)c_{\delta\bkp-\bq}(t_2)
]
\ra \notag \\
&=
\frac{1}{i\hbar}
\frac{1}{A^2}
\sum_{\alpha,\bk ,\beta}
\sum_{\gamma,\bkp,\delta}
s^+_{\alpha\bk ,\beta \bk +\bq}
s^-_{\gamma\bkp,\delta\bkp-\bq}
\theta(t_1-t_2) \notag \\
&\hspace{10mm}\times
\exp\left[i(\e_{\alpha\bk}-\e_{\beta\bk+\bq})t_1 /\hbar\right] \notag \\
&\hspace{10mm}\times
\exp\left[i(\e_{\gamma\bkp}-\e_{\delta\bkp-\bq})t_2/\hbar\right] \notag \\
&\hspace{10mm}\times
\la[
c^\dagger_{\a\bk}(0) c_{\beta \bk +\bq}(0),
c^\dagger_{\gamma\bkp}(0)c_{\delta\bkp-\bq}(0)
]\ra \notag \\
&=
\frac{1}{i\hbar}
\frac{1}{A^2}
\sum_{\a,\bk,\beta}
s^+_{\alpha\bk ,\beta \bk+\bq}
s^-_{\beta \bk+\bq,\alpha\bk}
\theta(t_1-t_2) \notag \\
&\hspace{10mm}\times
\exp\left[i(\e_{\a\bk}-\e_{\beta\bk+\bq})(t_1-t_2)/\hbar\right] \notag \\
&\hspace{10mm}\times
(f_{\a\bk}-f_{\beta\bk+\bq}),\end{aligned}$$ where the time evolution of creation and annihilation operators is given by $$\begin{aligned}
&c_{\alpha\bk}(t)=\exp(-i\e_{\a\bk} t/\hbar)c_{\a\bk}(0), \\
&c^\dagger_{\alpha\bk}(t)=\exp(i\e_{\a\bk} t/\hbar)c^\dagger_{\a\bk}(0),\end{aligned}$$ and the statistical average of the commutation relation is given by $$\begin{aligned}
\langle[c^\dagger_1 c_2,c^\dagger_3 c_4]\rangle
=\delta_{14}\delta_{23}(f_1-f_2),\end{aligned}$$ where $f_1=\langle c_1^\dagger c_1\rangle=1/\left(e^{(\e_1-\mu)/k_{\rm B}T}+1\right)$ is the Fermi distribution function. The above relation is proved as follows: $$\begin{aligned}
\la[c^\dagger_1 c_2,c^\dagger_3 c_4]\ra
&=
\la c^\dagger_1 c_2 c^\dagger_3 c_4\ra
-\la c^\dagger_3 c_4 c^\dagger_1 c_2\ra \notag \\
&=
\la c^\dagger_1 c_2\ra\la c^\dagger_3 c_4\ra
+\la c^\dagger_1 c_4\ra\la c_2 c^\dagger_3\ra \notag \\
&\hspace{5mm}
-\la c^\dagger_3 c_4\ra\la c^\dagger_1 c_2\ra
-\la c^\dagger_3 c_2\ra\la c_4 c^\dagger_1\ra \notag \\
&=
\la c^\dagger_1 c_4\ra\la c_2 c^\dagger_3\ra
-\la c^\dagger_3 c_2\ra\la c_4 c^\dagger_1\ra \notag \\
&=\delta_{14}\delta_{23} f_1 (1 - f_2)
-\delta_{23}\delta_{14} f_2 (1 - f_1) \notag \\
&=\delta_{14}\delta_{23}(f_1-f_2).\end{aligned}$$ Because of time-translation symmetry, the spin susceptibility becomes a function of the difference of the time $$\begin{aligned}
\chi_\bq^R(t_1-t_2)
&=
\frac{1}{i\hbar}
\frac{1}{A^2}
\sum_{\a,\bk,\beta}
|s^+_{\alpha\bk,\beta\bk+\bq}|^2
\theta(t_1-t_2) \notag \\
&\hspace{10mm}\times
\exp\left[
i(\e_{\a\bk}-\e_{\beta\bk+\bq})
(t_1-t_2)/\hbar
\right] \notag \\
&\hspace{10mm}\times
(f_{\a\bk}-f_{\beta\bk+\bq}).\end{aligned}$$ The Fourier transformation of the spin susceptibility is derived as $$\begin{aligned}
\chi_\bq^R(\omega)
&=
\int^\infty_{-\infty}dt
e^{i\omega t}
\chi_\bq^R(t) \notag \\
%&=
%\sum_{\a,\bk,\beta}
%s^+_{\alpha\bk,\beta\bk-\bq}
%s^-_{\beta\bk-\bq,\alpha\bk}
%\frac{f_{\alpha\bk}-f_{\beta\bk-\bq}}
%{\e_{\a\bk}-\e_{\beta\bk-\bq}+\hbar\omega+i0} \notag \\
&=
\frac{1}{A^2}
\sum_{\a,\bk,\beta}
|s^+_{\a\bk,\beta\bk+\bq}|^2
\frac{f_{\a\bk}-f_{\beta\bk+\bq}}{\e_{\a\bk}-\e_{\beta\bk+\bq}+\ho+i0},\end{aligned}$$ where we use $$\begin{aligned}
&\int^\infty_{-\infty}dt \theta(t)\exp[i(\omega+i0)t]
\exp\left[i(\e_{\a\bk}-\e_{\beta\bk+\bq})t/\hbar\right] \notag \\
&=\frac{\hbar}{i}
\frac{1}{\e_{\a\bk}-\e_{\beta\bk+\bq}+\ho+i0} \notag \\
&\hspace{10mm}\times
\Big[
\exp
\left[
i\left(
\e_{\a\bk}-\e_{\beta\bk+\bq}+\ho+i0
\right)t/\hbar
\right]
\Big]^\infty_0 \notag \\
&=
-\frac{\hbar}{i}\frac{1}{\e_{\a\bk}-\e_{\beta\bk+\bq}+\ho+i0},\end{aligned}$$ and an infinitesimal imaginary part ensures convergence. The imaginary part of the spin susceptibility is important to describe spin transport at the interface and given as $$\begin{aligned}
{\rm Im}\chi_\bq^R(\omega)
&=
-\frac{\pi}{A^2}
\sum_{\alpha,\bk,\beta}
|s^+_{\alpha\bk,\beta\bk+\bq}|^2
(f_{\a\bk}-f_{\beta\bk+\bq}) \notag \\
&\hspace{10mm}\times
\delta(\e_{\a\bk}-\e_{\beta\bk+\bq}+\ho),\end{aligned}$$ where we use $$\begin{aligned}
\frac{1}{x+i0}&=\frac{\mathcal{P}}{x}-i\pi\delta(x).\end{aligned}$$ The above formula is used to calculate the spin current at the interface and the modulation of the Gilbert damping. The energy scale of the FMR frequency, $\hbar \Omega\sim10^{-4}{\rm eV}$, is much smaller than the thermal energy $k_{\rm B}T$ in a typical experimental situation, so that ${\rm Im}\chi^R_{\bq}(\omega)$ is approximated as $$\begin{aligned}
{\rm Im}\chi_\bq^R(\omega)
&=
-\frac{\pi}{A^2}
\sum_{\a,\bk,\beta}
|s^+_{\a\bk,\beta\bk+\bq}|^2
\left[
f(\e_{\a\bk})-f(\e_{\a\bk}+\ho)
\right] \notag \\
&\hspace{30mm}\times
\delta(\e_{\a\bk}-\e_{\beta\bk+\bq}+\ho) \notag \\
&\approx
-\frac{\pi}{A^2}
\sum_{\a,\bk,\beta}
|s^+_{\a\bk,\beta\bk+\bq}|^2
\ho
\left(
-\frac{\partial f_{\a\bk}}{\partial \e_{\a\bk}}
\right) \notag \\
&\hspace{30mm}{\times}
\delta(\e_{\a\bk}-\e_{\beta\bk+\bq}) \notag \\
&=
\int d\e
\left(
-\frac{\partial f}{\partial \e}
\right)
\Big[
-\frac{\pi\ho}{A^2}
\sum_{\a,\bk,\beta}
|s^+_{\a\bk,\beta\bk+\bq}|^2
\notag \\
&\hspace{25mm}\times
\delta(\e_{\a\bk}-\e)
\delta(\e_{\a\bk}-\e_{\beta\bk+\bq})
\Big] \notag \\
&=
\int d\e
\left(
-\frac{\partial f}{\partial \e}
\right)
{\rm Im}\chi_\bq^R(\omega,T=0),\end{aligned}$$ where $$\begin{aligned}
&{\rm Im}\chi_\bq^R(\omega,T=0) \notag \\
&:=
-\frac{\pi\ho}{A^2}
\sum_{\a,\bk,\beta}
|s^+_{\a\bk,\beta\bk+\bq}|^2
\delta(\e_{\a\bk}-\e)\delta(\e_{\beta\bk+\bq}-\e).\end{aligned}$$ We define the local spin susceptibility as $$\begin{aligned}
{\rm Im}\chi_{\rm loc}^R(\omega)
:=
\sum_\bq{\rm Im}\chi_{\bq}^R(\omega).\end{aligned}$$ The local spin susceptibility is written as $$\begin{aligned}
&{\rm Im}\chi_{\rm loc}^R(\omega,T=0) \notag \\
&=
-\frac{\pi\ho}{A^2}
\sum_{\bq}
\sum_{\a,\bk,\beta}
|s^+_{\a\bk,\beta\bk+\bq}|^2
\delta(\e_{\a\bk}-\e)\delta(\e_{\beta\bk+\bq}-\e) \notag \\
&=
-\frac{\pi\ho}{A^2}
\sum_{\a,\bk}
\sum_{\beta,\bq}
|s^+_{\a\bk,\beta\bq}|^2
\delta(\e_{\a\bk}-\e)\delta(\e_{\beta\bq}-\e).\end{aligned}$$ In the present model, the matrix element is calculated as $$\begin{aligned}
s^+_{\a\bk,\beta\bq}
&=
u^\dagger_{\a\bk}
s^+
u_{\beta\bq}, \notag \\
&=
\phi_{\a\bk}^\dagger
\phi_{\beta\bq}
\delta_{\tau\tau^\p}
{~}2\delta_{s+}
\delta_{s^\p-},\end{aligned}$$ so that the matrix element $s^+_{\a\bk,\beta\bq}$ is non-zero for $\alpha=(n,\tau,+)$ and $\beta=(n^\p,\tau,-)$. The product of the delta functions imposes $n=n^\p$, so that the local spin susceptibility is written as $$\begin{aligned}
&{\rm Im}\chi_{\rm loc}^R(\omega,T=0) \notag \\
&=
-\frac{\pi\ho}{A^2}
\sum_{n,\tau,\bk,\bq}
4|\phi^\dagger_{n\tau+\bk}\phi_{n\tau-\bq}|^2
\delta(\e_{n\tau+\bk}-\e)\delta(\e_{n\tau-\bq}-\e) \notag \\
&=\sum_{\tau}{\rm Im}\chi_{\tau,{\rm loc}}^R(\omega,T=0),\end{aligned}$$ where the local spin susceptibility for each valley is defined as $$\begin{aligned}
&{\rm Im}\chi_{\tau,\rm loc}^R(\omega,T=0) \notag \\
&:=
-\frac{\pi\ho}{A^2}
\sum_{n,\bk,\bq}
4|\phi^\dagger_{n\tau+\bk}\phi_{n\tau-\bq}|^2
\delta(\e_{n\tau+\bk}-\e)\delta(\e_{n\tau-\bq}-\e).\end{aligned}$$ The eigenspinor is given by $$\begin{aligned}
\phi_{n\tau s\bk}=
\begin{cases}
\frac
{1}
{\sqrt{
2R_{\tau s\bk}
(R_{\tau s\bk}+Z_{\tau s})
}
}
\begin{pmatrix}
R_{\tau s\bk}+Z_{\tau s} \\
\tau ke^{i\tau\theta_\bk}
\end{pmatrix} {~}(n=+) \\
\\
\frac
{1}
{\sqrt{
2R_{\tau s\bk}
(R_{\tau s\bk}+Z_{\tau s})
}
}
\begin{pmatrix}
-\tau ke^{-i\tau\theta_\bk} \\
R_{\tau s\bk}+Z_{\tau s}
\end{pmatrix} {~}(n=-)
\end{cases}\end{aligned}$$ so that $4|\phi^\dagger_{n\tau+\bk}\phi_{n\tau-\bq}|^2$ is calculated as
$$\begin{aligned}
4|\phi^\dagger_{n\tau+\bk}\phi_{n\tau-\bq}|^2
&=\frac
{\big|
(R_{\tau+\bk} + Z_{\tau+})(R_{\tau-\bq} + Z_{\tau-})
+kqe^{-i\tau(\theta_\bk-\theta_\bq)}
\big|^2}
{
R_{\tau+\bk} R_{\tau-\bq}
(R_{\tau+\bk}+Z_{\tau+})
(R_{\tau-\bq} +Z_{\tau-} )
} \notag \\
&=\frac
{
(R_{\tau+\bk}+Z_{\tau+})^2
(R_{\tau-\bq} +Z_{\tau-} )^2+
k^2q^2+
2kq(R_{\tau+\bk}+Z_{\tau+})
(R_{\tau-\bq} +Z_{\tau-} )
\cos\left(\theta_\bk-\theta_\bq\right)
}
{
R_{\tau+\bk} R_{\tau-\bq}
(R_{\tau+\bk}+Z_{\tau+})
(R_{\tau-\bq} +Z_{\tau-} )
} \notag \\
&=\frac
{
(R_{\tau+\bk} + Z_{\tau+})(R_{\tau-\bq} + Z_{\tau-})
+
(R_{\tau+\bk} - Z_{\tau+})(R_{\tau-\bq} - Z_{\tau-})
+
2kq\cos\left(\theta_\bk-\theta_\bq\right)
}
{
R_{\tau+\bk} R_{\tau-\bq}
} \notag \\
&=\frac
{
2(R_{\tau+\bk} R_{\tau-\bq} + Z_{\tau+} Z_{\tau-})
+
2kq\cos\left(\theta_\bk-\theta_\bq\right)
}
{
R_{\tau+\bk} R_{\tau-\bq}
},
\end{aligned}$$
where we use $$\begin{aligned}
&R_{\tau+\bk}=\sqrt{k^2+Z_{\tau+}^2},
\hspace{2mm}
R_{\tau-\bq}=\sqrt{q^2+Z_{\tau-}^2}, \notag \\
&k^2q^2
=
(R_{\tau+\bk}^2-Z_{\tau+}^2)
(R_{\tau-\bq}^2-Z_{\tau-}^2)
=
(R_{\tau+\bk}+Z_{\tau+})
(R_{\tau-\bq}+Z_{\tau-})
(R_{\tau+\bk}-Z_{\tau+})
(R_{\tau-\bq}-Z_{\tau-}).
\end{aligned}$$ The local spin susceptibility at $T=0$ is calculated as $$\begin{aligned}
{\rm Im}\chi_{\tau,{\rm loc}}^R(\omega,T=0)
&=
-\frac{\pi\ho}{A^2}
\sum_{n}
\frac{A}{(2\pi)^2}
\int^\infty_0 kdk
\int_0^{2\pi}d\theta_\bk
\frac{A}{(2\pi)^2}
\int^\infty_0 qdq
\int_0^{2\pi}d\theta_\bq \notag \\
&\hspace{30mm}\times
4|\phi_{n\tau+\bk}^\dagger\phi_{n\tau-\bq}|^2
\delta(\e_{n\tau+\bk}-\e)
\delta(\e_{n\tau-\bq}-\e) \notag \\
&=
-\pi\ho
\frac{1}{(\hbar v)^4}
\frac{1}{(2\pi)^2}
2(R_{\tau+\bk_0} R_{\tau-\bq_0} + Z_{\tau+} Z_{\tau-})
\theta(|\e-E_{\tau +}|-Z_{\tau +})
\theta(|\e-E_{\tau -}|-Z_{\tau -}) \notag \\
&=
-\frac{\ho}{2\pi(\hbar v)^4}
\left(
|\e-E_{\tau+}||\e-E_{\tau-}| + Z_{\tau+} Z_{\tau-}
\right)
\theta(|\e-E_{\tau +}|-Z_{\tau +})
\theta(|\e-E_{\tau -}|-Z_{\tau -}) \notag \\
&=
-\frac{2\pi\ho}{A^2}
D_{\tau+}(\e)D_{\tau-}(\e)
\left(
1
+
\frac
{Z_{\tau +}Z_{\tau -}}
{|\e-E_{\tau+}||\e-E_{\tau-}|}
\right)
\end{aligned}$$
where we used $$\begin{aligned}
&\delta(\e_{n\tau s\bk}-\e)
=
\Bigg|\frac{nR_{\tau s \bk_0}}{(\hbar v)^2k_0}\Bigg|
\delta\left(k-k_0\right),\end{aligned}$$ and $$\begin{aligned}
&\e_{n\tau+\bk_0}
=nR_{ \tau+\bk_0}
+E_{\tau+}
=\e, \\
&\e_{n\tau-\bq_0}
=nR_{ \tau-\bq_0}
+E_{\tau-}
=\e,\end{aligned}$$ $$\begin{aligned}
&E_{\tau+}
=\tau\frac{\lambda_-}{2}-JS, \\
&E_{\tau-}
=-\left(\tau\frac{\lambda_-}{2}-JS\right)=-E_{\tau+}, \\
&Z_{\tau+}
=\frac{\Delta}{2}-\tau\frac{\lambda_+}{2}, \\
&Z_{\tau -}
=\frac{\Delta}{2}+\tau\frac{\lambda_+}{2}.\end{aligned}$$
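The overlap algebra above is easy to get wrong, so a numerical spot check is useful. The following Python sketch verifies the identity $4|\phi^\dagger_{n\tau+\bk}\phi_{n\tau-\bq}|^2=2\left(R_{\tau+\bk}R_{\tau-\bq}+Z_{\tau+}Z_{\tau-}+kq\cos(\theta_\bk-\theta_\bq)\right)/\left(R_{\tau+\bk}R_{\tau-\bq}\right)$ at random parameter values (the values themselves are arbitrary illustrative inputs).

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    k, q, Zp, Zm, dt = rng.uniform(0.1, 2.0, size=5)
    Rp, Rm = np.hypot(k, Zp), np.hypot(q, Zm)      # R = sqrt(k^2 + Z^2)
    num = abs((Rp + Zp) * (Rm + Zm) + k * q * np.exp(-1j * dt)) ** 2
    lhs = num / (Rp * Rm * (Rp + Zp) * (Rm + Zm))
    rhs = 2 * (Rp * Rm + Zp * Zm + k * q * np.cos(dt)) / (Rp * Rm)
    assert np.isclose(lhs, rhs)
print("overlap identity verified")
```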
Exchange coupling
-----------------
The exchange coupling at the interface is modeled as $$\begin{aligned}
\hex&=
-\int d\br
\sum_{j}
J(\br-\br_j)
\bm{s}(\br)\cdot\bm{S}_j \notag \\
&=
-\int d\br
\sum_{j}
J(\br-\br_j)
\frac{1}{\sqrt{N}}
\sum_{\bq,\bk}
e^{i(\bq\cdot\br+\bk\cdot\br_j)} \notag \\
&\hspace{30mm}\times
\frac{1}{2}
\left(
s_\bq^+S_\bk^-+s_\bq^-S_\bk^+
\right) \notag \\
&\hspace{5mm}
-\int d\br
\sum_{j}
J(\br-\br_j)
s^z(\br)S^z_j,\end{aligned}$$ where the Fourier coefficients are given as $$\begin{aligned}
&\bm{s}(\br)=
\sum_\bq
e^{i\bq\cdot\br}
\bm{s}_\bq, \\
&\bm{S}_j=
\frac{1}{\sqrt{N}}
\sum_\bk
e^{i\bk\cdot\br_j}
\bm{S}_\bk.\end{aligned}$$ We define a tunneling amplitude $$\begin{aligned}
J_{\bq,\bk}:=
\frac{1}{2}
\int d\br
\frac{1}{\sqrt{N}}
\sum_{j}
J(\br-\br_j)
e^{i(\bq\cdot\br+\bk\cdot\br_j)}.\end{aligned}$$ The second term is approximated as $$\begin{aligned}
-\int d\br
\sum_{j}
J(\br-\br_j)
s^z(\br)S^z_j
&\approx -JS\int d\br s^z(\br) \notag \\
&=-JSs^z_{\rm tot}.\end{aligned}$$ The exchange coupling is written as $$\begin{aligned}
\hex=
-JSs^z_{\rm tot}
-\sum_{\bq,\bk}
\left(
J_{\bq,\bk}s_\bq^+S_\bk^-
+{\rm{H.c.}}
\right),\end{aligned}$$ where we used the relation $J_{-\bq,-\bk}=J_{\bq,\bk}^\ast$. When we consider the uniform magnon mode $|\bk|=0$, the tunneling amplitude is rewritten as $$\begin{aligned}
J_{\bq,\bm{0}}
&=\frac{1}{2}\int d\br\frac{1}{\sqrt{N}}\sum_j
J(\br-\br_j)e^{i\bq\cdot\br} \notag \\
&=\frac{1}{2}\frac{1}{\sqrt{N}}
\sum_{z_j}
\sum_{x_j y_j} e^{i\bq\cdot\br_j^{\rm 2D}}
\int d\brp e^{i\bq\cdot\brp}J(\brp,z_j) \notag \\
&=\frac{A}{2}\sum_{z_j}F_{-\bq} J_{-\bq}(z_j),\end{aligned}$$ where $F_{-\bq}$ is a form factor defined as $$\begin{aligned}
F_{-\bq}:=\frac{1}{\sqrt{N}}\sum_{x_j,y_j} e^{i\bq\cdot\br_j^{\rm 2D}}.\end{aligned}$$ Note that $\br=(x,y)$, $\brp=(x^\prime,y^\prime)$, $\br_j^{\rm 2D}=(x_j,y_j)$, and $\br_j=(x_j,y_j,z_j)$.
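To illustrate why a flat interface suppresses large-$|\bq|$ spin-transfer processes, the following Python sketch evaluates the form factor $F_{-\bq}$ for a finite square interface lattice ($a=1$, $\bq$ along $x$; the system size and wave numbers are illustrative choices). $F_{-\bq}$ is sharply peaked at $\bq=\bm{0}$, where it reaches $\sqrt{N}$.

```python
import numpy as np

L = 40                                 # interface sites per direction, N = L^2
xj = np.arange(L)                      # site coordinates along x (a = 1)
N = L * L
for q in (0.0, 0.1, 1.0):              # |q| in units of 1/a
    # F_{-q} = (1/sqrt(N)) sum_j exp(i q x_j), summed over the L x L lattice
    F = L * np.sum(np.exp(1j * q * xj)) / np.sqrt(N)
    print(f"q = {q:.1f}: |F| = {abs(F):7.3f}   (sqrt(N) = {np.sqrt(N):.1f})")
```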
Spin current at the interface
-----------------------------
In this section, we derive the general expression of spin current at the interface. We treat the tunneling Hamiltonian as a perturbation and the other terms as the unperturbed Hamiltonian $$\begin{aligned}
&H(t)=H_0(t)+H_{\rm T}, \\
&H_0(t)=H_{\rm TM}+\hfi(t)+H_{\rm Z}.\end{aligned}$$ The operator of spin current flowing from the TMDCs to the FI at the interface is defined by $$\begin{aligned}
\IS:=-\frac{\hbar}{2}\dot{s}^z_{\rm tot}=
-\frac{\hbar}{2}
\frac{1}{i\hbar}[s^z_{\rm tot},H_{\rm T}]
=\frac{i}{2}[s^z_{\rm tot},H_{\rm T}].\end{aligned}$$ Calculating the commutation relation, we obtain the following expression $$\begin{aligned}
\IS=
-i\sum_{\bq,\bk}
\left(
J_{\bq,\bk}
s_\bq^+
S_\bk^-
-
{\rm{H.c.}}
\right),\end{aligned}$$ where we use $$\begin{aligned}
\left[
s^z_{\rm tot},s^\pm_\bq
\right]
=\pm2s^\pm_\bq.\end{aligned}$$ The time-dependent quantum average of $\IS$ is written as $$\begin{aligned}
\la\IS(t)\ra
&={\rm Re}
\left[
-2i
\sum_{\bq,\bk}
J_{\bq,\bk}
\la
s_\bq^+(t)
S_\bk^-(t)
\ra
\right],\end{aligned}$$ where $\la\cdots\ra={\rm Tr}[\rho_0\cdots]$ denotes the statistical average with an initial density matrix $\rho_0$. In order to develop the perturbation expansion, we introduce the interaction picture $$\begin{aligned}
\la
\IS(\tau_1,\tau_2)
\ra
&=
{\rm Re}
\left[
-2i
\sum_{\bq,\bk}
J_{\bq,\bk}
\la
T_C
\S_C
\ts_\bq^+(\tau_1)
\tS_\bk^-(\tau_2)
\ra
\right],\end{aligned}$$ where $\S_C$ and $\tilde{O}(t)$ are defined as $$\begin{aligned}
\S_C:=
T_C\exp
\left(
\int_C
d\tau
\frac
{\tilde{H}_{\rm T}(\tau)}
{i\hbar}
\right),
\end{aligned}$$ and $$\begin{aligned}
\tilde{O}(t):=
\U0^\dagger(t,t_0)O\U0(t,t_0),
\end{aligned}$$ where $$\begin{aligned}
\U0(t,t_0)=
T\exp
\left(
\int_{t_0}^t
dt^\p
\frac{H_0(t^\p)}{i\hbar}
\right).
\end{aligned}$$ $\S_C$ is expanded as $$\begin{aligned}
\S_C\approx
1+
\int_Cd\tau
T_C
\frac
{\tilde{H}_{\rm T}(\tau)}
{i\hbar}.
\end{aligned}$$ The Green’s function is approximated as $$\begin{aligned}
&\la
T_C
\S_C
\ts_\bq^+(\tau_1)
\tS_\bk^-(\tau_2)
\ra \notag \\
&\approx
-\frac{1}{i\hbar}
\int_Cd\tau
\sum_{\bqp,\bkp}
\Big\la T_C
\big[
J_{\bqp,\bkp}
\ts^+_\bqp(\tau)
\tS^-_\bkp(\tau) \notag \\
&\hspace{20mm}
+J_{\bqp,\bkp}^\ast
\ts^-_{-\bqp}(\tau)
\tS^+_{-\bkp}(\tau)
\big]
\ts^+_\bq(\tau_1)
\tS^-_\bk(\tau_2)
\Big\ra \notag \\
&=
-\frac{1}{i\hbar}
\int_Cd\tau J_{\bq,\bk}^\ast
\la
T_C
\ts^+_\bq(\tau_1)
\ts^-_{-\bq}(\tau)
\ra
\la
T_C
\tS^+_{-\bk}(\tau)
\tS^-_\bk(\tau_2)
\ra,
\end{aligned}$$ where we use the following relations $$\begin{aligned}
&\la
T_C
\ts^+_\bq(\tau_1)
\ts^+_\bqp(\tau)
\ra=0, \\
&\la
T_C
\ts^+_\bq(\tau_1)
\ts^-_{-\bqp}(\tau)
\ra=
\d_{\bq,\bqp}
\la
T_C
\ts^+_\bq(\tau_1)
\ts^-_{-\bq}(\tau)
\ra, \\
&\la
T_C
\tS_{-\bkp}^+(\tau)
\tS_{\bk}^-(\tau_2)
\ra=
\d_{\bk,\bkp}
\la
T_C
\tS_{-\bk}^+(\tau)
\tS_{\bk}^-(\tau_2)
\ra.
\end{aligned}$$ We note that $$\begin{aligned}
\la
T_C
\tS^-_{\bkp}(\tau)
\tS^-_{\bk}(\tau_2)
\ra\neq0,
\end{aligned}$$ under the microwave irradiation. The spin current is $$\begin{aligned}
&\la\IS(\tau_1,\tau_2)\ra=
\sum_{\bq,\bk}
|J_{\bq,\bk}|^2 \notag \\
&\times{\rm Re}
\Bigg[
\frac{2}{\hbar}
\int_Cd\tau
\la
T_C
\ts_\bq^+(\tau_1)
\ts_{-\bq}^-(\tau)
\ra
\la
T_C
\tS_{-\bk}^+(\tau)
\tS_{\bk}^-(\tau_2)
\ra
\Bigg].
\end{aligned}$$ Using the contour ordered Green’s functions $$\begin{aligned}
&\chi_{\bq}(\tau_1,\tau)=
\frac{1}{i\hbar}
\la T_C
\ts_{\bq}^+(\tau_1)
\ts_{-\bq}^-(\tau)
\ra, \\
&G_{\bk}(\tau,\tau_2)=
\frac{1}{i\hbar}
\la T_C
\tS_{ \bk}^+(\tau)
\tS_{-\bk}^-(\tau_2)
\ra,
\end{aligned}$$ the above equation is rewritten as $$\begin{aligned}
&\la\IS(\tau_1,\tau_2)\ra \notag \\
&=\sum_{\bq,\bk}|J_{\bq,\bk}|^2
{\rm Re}
\Bigg[
-2\hbar
\int_Cd\tau
\chi_{\bq}(\tau_1,\tau)
G_{-\bk}(\tau,\tau_2)
\Bigg].
\end{aligned}$$ We put $\tau_2$ on the forward contour and $\tau_1$ on the backward contour to describe spin transfer at the interface in appropriate time order.
![ Keldysh contour []{data-label="fig_contour"}](fig_contour.eps){width="0.8\hsize"}
We use the Langreth rule $$\begin{aligned}
&\la\IS(t)\ra=
-2\hbar
\sum_{\bq,\bk}
|J_{\bq,\bk}|^2 \notag \\
&\hspace{1mm}\times{\rm Re}
\Bigg[
\int_{-\infty}^\infty dt^\p
\Big(
\c_{\bq}^R(t,t^\p)
G_{-\bk}^>(t^\p,t)
+
\c_{\bq}^>(t,t^\p)
G_{-\bk}^A(t^\p,t)
\Big)
\Bigg].
\end{aligned}$$ We assume a steady state, i.e. $\la\IS(t)\ra$ is independent of $t$ $$\begin{aligned}
&\chi(t,t^\p)\to
\chi(t-t^\p)\to
\chi(-t^\p), \\
&G(t^\p,t)\to
G(t^\p-t)\to
G(t^\p),
\end{aligned}$$ so that the steady spin current is written as $$\begin{aligned}
&\la\IS\ra=
-2\hbar
\sum_{\bq,\bk}
|J_{\bq,\bk}|^2 \notag \\
&\hspace{1mm}\times{\rm Re}
\Bigg[
\int_{-\infty}^\infty dt^\p
\Big(
\c_{ \bq}^R(-t^\p)
G_{-\bk}^>( t^\p)
+\c_{ \bq}^>(-t^\p)
G_{-\bk}^A( t^\p)
\Big)
\Bigg].
\end{aligned}$$ Here we have $$\begin{aligned}
&\c_{\bq}^>(t)
=\frac{1}{i\hbar}
\la
\ts^+_{\bq}(t)\ts^-_{-\bq}(0)
\ra, \\
&\c_{\bq}^R(t)
=\frac{1}{i\hbar}\t(t)
\la
[\ts^+_{\bq}(t),\ts^-_{-\bq}(0)]
\ra, \\
&G_{\bk}^>(t)
=\frac{1}{i\hbar}
\la
\tS^+_{\bk}(t)\tS^-_{-\bk}(0)
\ra, \\
&G_{\bk}^A(t)
=-\frac{1}{i\hbar}\t(-t)
\la
[\tS^+_{\bk}(t),\tS^-_{-\bk}(0)]
\ra.
\end{aligned}$$ The Fourier transformation is defined as $$\begin{aligned}
&G(\omega)=
\int^\infty_{-\infty}dt
e^{i\omega t}G(t), \\
&G(t)=
\int^\infty_{-\infty}
\frac{d\omega}{2\pi}
e^{-i\omega t}G(\omega).
\end{aligned}$$ The convolution is calculated as $$\begin{aligned}
&\int_{-\infty}^\infty dt^\p
\c_{ \bq}^R(-t^\p)
G_{-\bk}^>( t^\p) \notag \\
&=\int_{-\infty}^\infty dt^\p
\int_{-\infty}^\infty
\frac{d\omega }{2\pi}
\int_{-\infty}^\infty
\frac{d\omega^\p}{2\pi}
e^{i(\omega-\omega^\p)t^\p}
\c^R_{ \bq}(\omega)
G^>_{-\bk}(\omega^\p) \notag \\
&=
\int_{-\infty}^\infty
\frac{d\omega }{2\pi}
\int_{-\infty}^\infty
\frac{d\omega^\p}{2\pi}
\c^R_{ \bq}(\omega)
G^>_{-\bk}(\omega^\p)
{~}2\pi\d(\omega-\omega^\p) \notag \\
&=\int_{-\infty}^\infty
\frac{d\omega}{2\pi}
\c^R_{ \bq}(\omega)
G^>_{-\bk}(\omega),
\end{aligned}$$ so that the spin current is written as $$\begin{aligned}
&\la\IS\ra=
-2\hbar
\sum_{\bq,\bk}
|J_{\bq,\bk}|^2 \notag \\
&\hspace{5mm}\times{\rm Re}
\Bigg[
\int_{-\infty}^\infty
\frac{d\omega}{2\pi}
\Big(
\c_{ \bq}^R(\omega)
G_{-\bk}^>(\omega)
+\c_{ \bq}^>(\omega)
G_{-\bk}^A(\omega)
\Big)
\Bigg].
\end{aligned}$$ The Fourier transformation of the Green’s functions satisfy $$\begin{aligned}
&\c_{ \bq}^>(\omega)=\c_{ \bq}^<(\omega)+\c_{ \bq}^R(\omega)-\c_{ \bq}^A(\omega), \\
& G_{ \bk}^>(\omega)= G_{ \bk}^<(\omega)+ G_{ \bk}^R(\omega)- G_{ \bk}^A(\omega),
\end{aligned}$$ so that the Green’s functions are rewritten as $$\begin{aligned}
&\c_{ \bq}^R(\omega)
G_{-\bk}^>(\omega)
+\c_{ \bq}^>(\omega)
G_{-\bk}^A(\omega) \notag \\
&=
\c_{ \bq}^R(\omega)
\Big(
G_{-\bk}^<(\omega)+ G_{-\bk}^R(\omega)- G_{-\bk}^A(\omega)
\Big) \notag \\
&\hspace{5mm}+
\Big(
\c_{ \bq}^<(\omega)+\c_{ \bq}^R(\omega)-\c_{ \bq}^A(\omega)
\Big)
G_{-\bk}^A(\omega) \notag \\
&=
\c_{ \bq}^R(\omega)
G_{-\bk}^<(\omega)
+
\c_{ \bq}^<(\omega)
G_{-\bk}^A(\omega) \notag \\
&\hspace{5mm}+
\c_{ \bq}^R(\omega)G_{-\bk}^R(\omega)
-
\c_{ \bq}^A(\omega)G_{-\bk}^A(\omega).
\end{aligned}$$ The Green’s functions satisfy $$\begin{aligned}
&\c^R_\bq(\omega)=
\left[\c^A_\bq(\omega)\right]^\ast, \\
&G^R_\bk(\omega)=
\left[G^A_\bk(\omega)\right]^\ast.
\end{aligned}$$ Finally, the spin current is given as $$\begin{aligned}
&\la\IS\ra=
-2\hbar
\sum_{\bq,\bk}
|J_{\bq,\bk}|^2 \notag \\
&\hspace{5mm}\times{\rm Re}
\Bigg[
\int_{-\infty}^\infty
\frac{d\omega}{2\pi}
\Big(
\c_{ \bq}^R(\omega)
G_{-\bk}^<(\omega)
+\c_{ \bq}^<(\omega)
G_{-\bk}^A(\omega)
\Big)
\Bigg],
\end{aligned}$$ where the term $\c^R_{\bq}(\omega)G^R_{-\bk}(\omega)-\c^A_{\bq}(\omega)G^A_{-\bk}(\omega)$ drops out of the real part because it is purely imaginary. We introduce the distribution functions as $$\begin{aligned}
&\chi^<_\bq(\omega)=
f^{\rm AL}_\bq(\omega)
\left[
2i{\rm Im}\chi^R_\bq(\omega)
\right], \\
&G^<_\bk(\omega)=
f^{\rm FI}_\bk(\omega)
\left[
2i{\rm Im}G^R_\bk(\omega)
\right].
\end{aligned}$$ The Green’s functions satisfy $$\begin{aligned}
&{\rm Im}\c^R_\bq(\omega)=
-{\rm Im}\c^A_\bq(\omega), \\
&{\rm Im} G^R_\bk(\omega)=
-{\rm Im} G^A_\bk(\omega).
\end{aligned}$$ Using these relations, we derive $$\begin{aligned}
{\rm Re}
\left[
\c^R_{\bq}(\omega)
G_{-\bk}^<(\omega)
\right]
&={\rm Re}
\left[
\c^R_{\bq}(\omega)
f^{\rm FI}_{-\bk}(\omega)
2i{\rm Im}G^R_{-\bk}(\omega)
\right] \notag \\
&=-2{\rm Im}\chi^R_{\bq}(\omega)
{\rm Im}G^R_{-\bk}(\omega)
f^{\rm FI}_{-\bk}(\omega),
\end{aligned}$$ and $$\begin{aligned}
{\rm Re}
\left[
\c^<_{\bq}(\omega)
G_{-\bk}^A(\omega)
\right]
&={\rm Re}
\left[
f^{\rm AL}_{\bq}(\omega)
2i{\rm Im}\c^R_{\bq}(\omega)
G^A_{-\bk}(\omega)
\right] \notag \\
&=-2{\rm Im}\chi^R_\bq(\omega)
{\rm Im}G^A_{-\bk}(\omega)
f^{\rm AL}_{\bq}(\omega) \notag \\
&=2{\rm Im}\chi^R_\bq(\omega)
{\rm Im}G^R_{-\bk}(\omega)
f^{\rm AL}_{\bq}(\omega).\end{aligned}$$ The formula of the spin current at the interface is derived as $$\begin{aligned}
\la\IS\ra
&=
4\hbar
\sum_{\bq,\bk}
|J_{\bq,\bk}|^2 \notag \\
&
\times
\int_{-\infty}^\infty
\frac{d\omega}{2\pi}
{\rm Im}\chi^R_{\bq}(\omega)
{\rm Im}G^R_{-\bk}(\omega)
\left[
f^{\rm FI}_{-\bk}(\omega)
-
f^{\rm AL}_{\bq}(\omega)
\right].\end{aligned}$$ In the case of spin pumping, the formula is written as $$\begin{aligned}
\la\IS\ra^{\rm SP}
&=
2\hbar
\sum_{\bq,\bk}
|J_{\bq,\bk}|^2
\int_{-\infty}^\infty
\frac{d\omega}{2\pi}
{\rm Im}\chi^R_{\bq}(\omega)
{\rm Im}\left[\d G_{-\bk}^<(\omega)\right],\end{aligned}$$ where $\d G_\bk^<(\omega)$ is the deviation of the lesser Green's function from equilibrium. We note that $\la\IS\ra^{\rm SP}>0$ because the number of magnons increases under the microwave irradiation and the spin current flows from the FI to the TMDCs.
Magnon Green’s function
-----------------------
$\hfi$ describes a bulk ferromagnetic insulator under microwave irradiation and is given by $$\begin{aligned}
\hfi(t)&=-J\sumij\bm{S}_i\cdot\bm{S}_j-\hbar\gamma B\sum_iS_i^z \notag \\
&\hspace{5mm}-\frac{\hbar\gamma\hac}{2}\sum_i
\left(
e^{-i\Omega t}S_i^-
+
e^{ i\Omega t}S_i^+
\right),\end{aligned}$$ where $\bm{S}_i$ is the localized spin at site $i$ in the FI, $J$ is the exchange interaction, $B$ is a static magnetic field, $\hac$ and $\Omega$ are the amplitude and frequency of the applied microwave radiation, respectively, and $\gamma$ is the gyromagnetic ratio. Using the Holstein-Primakoff transformation and employing the spin-wave approximation ($S^z_\bk=S-b^\dagger_\bk b_\bk$, $S_{\bk}^+=\sqrt{2S}b_\bk$, $S_{-\bk}^{-}=\sqrt{2S}b_{\bk}^\dagger$), the Hamiltonian of the FI is rewritten as $$\begin{aligned}
\hfi(t)=
&-JS^2Nz
-\hbar\gamma BSN
+\sum_\bk\hbar\omega_\bk \bc\ba \notag \\
&-\frac{\hbar\gamma h_{\rm ac}}{2}
\sqrt{2SN}
(
e^{-i\Omega t}b^\dagger_{\bk=\bm{0}}
+e^{i\Omega t}b_{\bk=\bm{0}}
),\end{aligned}$$ where $\hbar\omega_\bk=Dk^2+\hbar\gamma B$ is the magnon dispersion, $S$ is the magnitude of the localized spin, and $N$ is the number of spins in the FI. We define the contour ordered Green’s function $$\begin{aligned}
G_{\bk}(\tau_1,\tau_2):=
\frac{1}{i\hbar}
\la T_C
\tS_{ \bk}^+(\tau_1)
\tS_{-\bk}^-(\tau_2)
\ra.\end{aligned}$$ We treat the microwave radiation as a perturbation. $$\begin{aligned}
&\hfi(t)=H_0+V(t), \\
&H_0=\sum_\bk\hbar\ok\bkd\ba, \\
&V(t)=-\hac^+(t)b_{\bk=\bm{0}}^\dagger
-\hac^-(t)b_{\bk=\bm{0}}, \\
&h_{\rm ac}^\pm(t)=
\frac{\hbar\gamma\hac}{2}
\sqrt{2SN}
e^{\mp i\Omega t}.\end{aligned}$$ The S operator is given as $$\begin{aligned}
\S(t,t_0)=T\exp\left(\int_{t_0}^tdt^\p\frac{\tilde{V}(t^\p)}{i\hbar}\right),\end{aligned}$$ where $$\begin{aligned}
\tilde{V}(t)
&=-\hac^+(t)\tb_{\bm 0}^\dagger(t)
-\hac^-(t)\tb_{\bm 0}(t).\end{aligned}$$ The Heisenberg equation is written as $$\begin{aligned}
\dot{\tb}_\bk(\tau)
&=\frac{1}{i\hbar}[\tb_\bk(\tau),H_0], \notag \\
&=\frac{1}{i\hbar}\hbar\ok\tb_\bk(\tau),\end{aligned}$$ and $$\begin{aligned}
\dot{\tb}_\bk^\dagger(\tau)
&=\frac{1}{i\hbar}[\tb^\dagger_\bk(\tau),H_0], \notag \\
&=-\frac{1}{i\hbar}\hbar\ok\tb^\dagger_\bk(\tau).\end{aligned}$$ The time evolution of the magnon operators are given as $$\begin{aligned}
&\tb_\bk(\tau)=
\exp\left(-i\ok\tau\right)\tb_\bk(0), \\
&\tb^\dagger_\bk(\tau)
=\exp\left(i\ok\tau\right)\tb^\dagger_\bk(0).\end{aligned}$$ Consequently, the perturbation term is written as $$\begin{aligned}
\tilde{V}(t)
&=-\hac^+(t)
\exp\left(i\omega_{\bm 0}t\right)
\tb_{\bm 0}^\dagger(0) \notag \\
&\hspace{5mm}
-\hac^-(t)
\exp\left(-i\omega_{\bm 0}t\right)
\tb_{\bm 0}(0).\end{aligned}$$ Here, we show several relations that we use in the second-order perturbation calculation. The S operator is expanded as $$\begin{aligned}
\S_C
&=\sum_{n=0}^\infty\frac{1}{n!}
\left(
\frac{1}{i\hbar}
\right)^n
\int_Cd\tau_1
\int_Cd\tau_2
\cdots
\int_Cd\tau_n \notag \\
&\hspace{10mm}\times
T_C
\left[
\tilde{V}(\tau_1)
\tilde{V}(\tau_2)
\cdots
\tilde{V}(\tau_n)
\right],\end{aligned}$$ so that $\S_C\tb_\bkp(0)\tb_\bk(0)$ and $\S_C\tb_\bkp(0)\tb_\bk^\dagger(0)$ give the following terms $$\begin{aligned}
&\S_C\tb_\bkp(0)\tb_\bk(0) \notag \\
&\to
\sum_{n,m}
\left(
\tb^\dagger_{\bm 0}(0)
\right)^{n-m}
\left(
\tb_{\bm 0}(0)
\right)^m
\tb_\bkp(0)\tb_\bk(0),\end{aligned}$$ and $$\begin{aligned}
&\S_C
\tb_\bkp(0)\tb_\bk^\dagger(0) \notag \\
&\to
\sum_{n,m}
\left(
\tb^\dagger_{\bm 0}(0)
\right)^{n-m}
\left(
\tb_{\bm 0}(0)
\right)^m
\tb_\bkp(0)\tb_\bk^\dagger(0).\end{aligned}$$ Consequently, we obtain the following relations $$\begin{aligned}
&\la
T_C
\S_C
\tS_\bkp^+(\tau_1)
\tS_\bk^+(\tau_2)
\ra_\conn \notag \\
&=\d_{\bk,\bkp}\d_{\bk,{\bm 0}}
\la
T_C
\S_C
\tS_\bk^+(\tau_1)
\tS_\bk^+(\tau_2)
\ra_\conn.\end{aligned}$$ and $$\begin{aligned}
&\la
T_C
\S_C
\tS_\bkp^+(\tau_1)
\tS_\bk^-(\tau_2)
\ra_\conn \notag \\
&=\d_{\bk,\bkp}
\la
T_C
\S_C
\tS_\bk^+(\tau_1)
\tS_\bk^-(\tau_2)
\ra_\conn.\end{aligned}$$ Using the above relations, we calculate the Green's function within the second-order perturbation. $\S_C$ is approximated as $$\begin{aligned}
&\S_C\approx
1+
\frac{1}{i\hbar}
\int_Cd\tau_3 T_C\tilde{V}(\tau_3) \notag \\
&\hspace{10mm}
+\frac{1}{2!}
\left(
\frac{1}{i\hbar}
\right)^2
\int_Cd\tau_3\int_Cd\tau_4 T_C
\left[\tilde{V}(\tau_3)\tilde{V}(\tau_4)
\right].\end{aligned}$$ The first-order term vanishes $$\begin{aligned}
\int_Cd\tau_3
\la T_C
\tilde{V}(\tau_3)
\tb_\bk(\tau_1)
\tb_\bk^\dagger(\tau_2)
\ra_{\rm conn}=0.\end{aligned}$$ The second-order term is given as $$\begin{aligned}
\d G_\bk(\tau_1,\tau_2)
&=
\frac{1}{2}
\left(
\frac{1}{i\hbar}
\right)^3
\int_Cd\tau_3\int_Cd\tau_4 \notag \\
&\hspace{5mm}\times 2S
\la T_C
\tilde{V}(\tau_3)
\tilde{V}(\tau_4)
\tilde{b}_\bk(\tau_1)
\tilde{b}_\bk^\dagger(\tau_2)
\ra_\conn.\end{aligned}$$ The perturbation is given as $$\begin{aligned}
&\tilde{V}(\tau_3)\tilde{V}(\tau_4) \notag \\
&=\left(
\hac^+(\tau_3)
\tb_{\bm 0}^\dagger(\tau_3)
+\hac^-(\tau_3)
\tb_{\bm 0}(\tau_3)
\right) \notag \\
&\hspace{5mm}\times
\left(
\hac^+(\tau_4)
\tb_{\bm 0}^\dagger(\tau_4)
+\hac^-(\tau_4)
\tb_{\bm 0}(\tau_4)
\right),\end{aligned}$$ and we have $$\begin{aligned}
&\la T_C
\tilde{V}(\tau_3)
\tilde{V}(\tau_4)
\tb_\bk(\tau_1)
\tb_\bk^\dagger(\tau_2)
\ra_\conn \notag \\
&=\hac^+(\tau_3)\hac^-(\tau_4)
\la T_C
\tb_{\bm 0}^\dagger(\tau_3)
\tb_{\bm 0}(\tau_4)
\tb_\bk(\tau_1)
\tb_\bk^\dagger(\tau_2)
\ra_\conn \notag \\
&\hspace{5mm}+
\hac^-(\tau_3)\hac^+(\tau_4)
\la T_C
\tb_{\bm 0}(\tau_3)
\tb_{\bm 0}^\dagger(\tau_4)
\tb_\bk(\tau_1)
\tb_\bk^\dagger(\tau_2)
\ra_\conn.\end{aligned}$$ Using the Bloch-de Dominicis theorem $$\begin{aligned}
&\la T_CC_1C_2\cdots C_{2n}\ra \notag \\
&=
{\sum_{P}}^\p\s^P
\la T_CC_{p_1}C_{p_2}\ra
\cdots
\la T_CC_{p_{2n-1}}C_{p_{2n}}\ra,\end{aligned}$$ with $$\begin{aligned}
&p_1<p_2,\cdots,p_{2n-1}<p_{2n}, \\
&p_1<p_3<\cdots<p_{2n-1},\end{aligned}$$ the first term is written as $$\begin{aligned}
&=
\la T_C
\tb_{\bm 0}^\dagger(\tau_3)
\tb_\bk(\tau_1)
\ra
\la T_C
\tb_{\bm 0}(\tau_4)
\tb_\bk^\dagger(\tau_2)
\ra,\end{aligned}$$ and the second term is written as $$\begin{aligned}
&=
\la T_C
\tb_{\bm 0}(\tau_3)
\tb_\bk^\dagger(\tau_2)
\ra
\la T_C
\tb_{\bm 0}^\dagger(\tau_4)
\tb_\bk(\tau_1)
\ra.\end{aligned}$$ The second-order deviation of the Green’s function is calculated as $$\begin{aligned}
\d G_\bk(\tau_1,\tau_2)
&=
\left(
\frac{1}{i\hbar}
\right)^3
\int_Cd\tau_3\int_Cd\tau_4
\hac^+(\tau_3)\hac^-(\tau_4) \notag \\
&\hspace{5mm}\times 2S
\la T_C
\tb_\bk(\tau_1)
\tb_{\bm 0}^\dagger(\tau_3)
\ra
\la T_C
\tb_{\bm 0}(\tau_4)
\tb_\bk^\dagger(\tau_2)
\ra, \notag \\
&=\frac{1}{2S}\frac{1}{i\hbar}
\d_{\bk,\bm{0}}
\int_Cd\tau_3
G_\bk(\tau_1,\tau_3)
\hac^+(\tau_3) \notag \\
&\hspace{17mm}\times\int_Cd\tau_4
G_\bk(\tau_4,\tau_2)\hac^-(\tau_4),\end{aligned}$$ and $$\begin{aligned}
\d G_\bk(\tau_1,\tau_2)=
&\frac{1}{2S}\int_C d\tau_3\int_C d\tau_4 \notag \\
&\hspace{5mm}\times
G_\bk(\tau_1,\tau_3)
\Sigma_\bk(\tau_3,\tau_4)
G_\bk(\tau_4,\tau_2),\end{aligned}$$ where the self-energy is defined as $$\begin{aligned}
\Sigma_\bk(\tau_3,\tau_4)=\d_{\bk,\bm{0}}
\frac{1}{i\hbar}
\la
T_C \hac^+(\tau_3)\hac^-(\tau_4)
\ra.\end{aligned}$$ The lesser component $\d G^<_\bk(t_1,t_2)$ is written as $$\begin{aligned}
\d G^<_\bk(t_1,t_2)
&=\frac{1}{2S}\frac{1}{i\hbar}
\d_{\bk,\bm{0}}
\int_{-\infty}^\infty dt_3
G^R_\bk(t_1,t_3)
\hac^+(t_3) \notag \\
&\hspace{17mm}\times
\int_{-\infty}^\infty dt_4
G^A_\bk(t_4,t_2)
\hac^-(t_4),\end{aligned}$$ where we used the following relations $$\begin{aligned}
\int_Cd\tau_3
G_\bk(\tau_1,\tau_3)=
\int_{-\infty}^\infty dt_3
G^R_\bk(t_1,t_3), \notag \\
\int_Cd\tau_4
G_\bk(\tau_4,\tau_2)=
\int_{-\infty}^\infty dt_4
G^A_\bk(t_4,t_2).\end{aligned}$$ The Green’s function is written as $$\begin{aligned}
\d G^<_\bk(t_1,t_2)=
&\frac{1}{2S}
\int_{-\infty}^\infty dt_3
\int_{-\infty}^\infty dt_4 \notag \\
&\hspace{5mm}\times
G^R_\bk(t_1,t_3)
\Sigma_\bk^<(t_3,t_4)
G^A_\bk(t_4,t_2),\end{aligned}$$ where the lesser component of the self-energy is written as $$\begin{aligned}
\Sigma_\bk^<(t_3,t_4)=
\d_{\bk,\bm{0}}
\frac{1}{i\hbar}
\la
\hac^-(t_4)\hac^+(t_3)
\ra.\end{aligned}$$ We assume that a steady state exists and that the Green's function depends only on the time difference. $\d G^<_\bk(t_1,t_2)$ is rewritten as $$\begin{aligned}
&\d G^<_\bk(t)=\frac{1}{2S}
\int_{-\infty}^\infty dt_3
\int_{-\infty}^\infty dt_4 \notag \\
&\hspace{20mm}\times
G^R_\bk(-t_3)
\Sigma^<_\bk(t+t_3-t_4)
G^A_\bk(t_4).\end{aligned}$$ The Fourier transformation is written as $$\begin{aligned}
\d G^<_\bk(\omega)=
\frac{1}{2S}
G^R_\bk(\omega)
\Sigma^<_\bk(\omega)
G^A_\bk(\omega).\end{aligned}$$ The retarded Green’s function is calculated as $$\begin{aligned}
G^R_\bk(t)&=
\frac{1}{i\hbar}\t(t)
\la
[
\tS_{ \bk}^+(t),
\tS_{-\bk}^-(0)
]
\ra \notag \\
&=\frac{2S}{i\hbar}\t(t)
e^{-i\ok t}
\la
[
\tb_\bk(0),
\tb_\bk^\dagger(0)
]
\ra \notag \\
&=\frac{2S}{i\hbar}\t(t)
e^{-i\ok t}.\end{aligned}$$ The Fourier transformation is $$\begin{aligned}
G^R_\bk(\omega)
&=\int^\infty_{-\infty}dt
e^{i\omega t}
G^R_\bk(t) \notag \\
&=\frac{2S}{i\hbar}
\int^\infty_{-\infty}dt
\t(t)e^{i(\omega-\ok+i\ag\omega)t} \notag \\
&=\frac{2S}{i\hbar}
\int^\infty_0dt
e^{i(\omega-\ok+i\ag\omega)t} \notag \\
&=\frac{2S}{i\hbar}
\frac{-1}{i(\omega-\ok+i\ag\omega)} \notag \\
&=\frac{2S/\hbar}{\omega-\ok+i\ag\omega},\end{aligned}$$ where we have introduced the phenomenological dimensionless damping parameter $\ag$, which originates from the Gilbert damping. The retarded and advanced Green’s functions satisfy $$\begin{aligned}
G^A_\bk(\omega)
&=[G^R_\bk(\omega)]^\ast,\end{aligned}$$ so that $$\begin{aligned}
G^R_\bk(\omega)G^A_\bk(\omega)=
\frac{(2S/\hbar)^2}{(\omega-\ok)^2+\ag^2\omega^2}.\end{aligned}$$ The self-energy is calculated as $$\begin{aligned}
\Sigma^<_\bk(\omega)
&=\int dt e^{i\omega t}\Sigma_\bk^<(t) \notag \\
&=\int dt e^{i\omega t}
\d_{\bk,{\bm 0}}\frac{1}{i\hbar}
\la\hac^-(0)\hac^+(t)\ra \notag \\
&=\frac{1}{i\hbar}
\left(
\frac{\hbar\gamma \hac}{2}
\right)^2
2SN2\pi\d_{\bk,{\bm 0}}\d(\omega-\Omega) \notag \\
&=\frac{\hbar}{i}
2SN
\left(
\frac{\gamma \hac}{2}
\right)^2
2\pi
\d_{\bk,{\bm 0}}\d(\omega-\Omega).\end{aligned}$$ The lesser Green’s function is written as $$\begin{aligned}
\d G^<_\bk(\omega)=
\frac{\hbar}{i}
\frac{N(2S/\hbar)^2(\gamma\hac/2)^2}{(\omega-\ok)^2+\ag^2\omega^2}
2\pi\d_{\bk,{\bm 0}}\d(\omega-\Omega).\end{aligned}$$ The lesser Green’s function is written as $$\begin{aligned}
\d G^<_\bk(\omega)=
\d f^{\rm FI}_\bk(\omega)
\left[
2i{~}{\rm Im}G^R_\bk(\omega)
\right],\end{aligned}$$ so that the deviation of the distribution function is calculated as $$\begin{aligned}
\d f^{\rm FI}_\bk(\omega)
&=\frac{\d G^<_\bk(\omega)}
{2i{~}{\rm Im}G^R_\bk(\omega)}, \notag \\
&=\frac{1}{2S}
\frac
{
G^R_\bk(\omega)
\Sigma^<_\bk(\omega)
G^A_\bk(\omega)
}
{
2i{~}{\rm Im}G^R_\bk(\omega)
}, \notag \\
&=\frac
{2\pi SN(\gamma\hac/2)^2}
{\ag\omega}
\d_{\bk,{\bm 0}}\d(\omega-\Omega).\end{aligned}$$
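The Lorentzian factor $G^R_\bk(\omega)G^A_\bk(\omega)$ obtained above sets the ferromagnetic-resonance lineshape entering $\d G^<_\bk(\omega)$. As a small numerical illustration (with $\hbar=2S=1$ and illustrative values of $\ag$ and $\omega_{\bm 0}$), the following Python sketch checks that its full width at half maximum is approximately $2\ag\omega_{\bm 0}$ for $\ag\ll1$.

```python
import numpy as np

alpha, w0 = 0.05, 1.0                       # damping alpha_G and FMR frequency
w = np.linspace(0.8, 1.2, 400001)
lineshape = 1.0 / ((w - w0)**2 + (alpha * w)**2)    # ~ G^R(w) G^A(w)
above_half = w[lineshape > lineshape.max() / 2]
fwhm = above_half[-1] - above_half[0]
print(f"numerical FWHM = {fwhm:.4f}, 2*alpha*w0 = {2 * alpha * w0:.4f}")
```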
Spin diffusion equation
-----------------------
In this section, we solve the spin diffusion equation for the valley-polarized spin current. The spin diffusion equation is given by $$\begin{aligned}
\left(\partial_x^2+\partial_y^2-1/\lambda^2\right)\mu_s(x,y)=0,\end{aligned}$$ and the spin current is given by $$\begin{aligned}
\bm{j}_s(x,y)
&=-\frac{\sigma_{xx}}{e}\left[\nabla\mu_s(x,y)+\theta\nabla\mu_s(x,y)\times\bm{e}_z\right],\end{aligned}$$ where $\theta=\sigma_{xy}/\sigma_{xx}$ is the Hall angle. We solve the problem with the boundary conditions $$\begin{aligned}
j_{s,x}(0,y)&=j_{0} \\
\mu_s(\infty,y)&=0 \\
j_{s,y}(x,0)&=0 \\
\mu_s(x,\infty)&=\mu_s^{(0)}(x)\end{aligned}$$ where $\mu_s^{(0)}(x)$ is the solution for $\theta=0$ $$\begin{aligned}
\mu_s^{(0)}(x)=\frac{ej_{0}\lambda}{\sigma_{xx}}e^{-x/\lambda}.\end{aligned}$$ The boundary conditions are written as $$\begin{aligned}
(\partial_x+\theta\partial_y)\mu_s(0,y)&=-\frac{ej_{0}}{\sigma_{xx}} \\
\mu_s(\infty,y)&=0 \\
(\partial_y-\theta\partial_x)\mu_s(x,0)&=0 \\
\mu_s(x,\infty)&=\mu_s^{(0)}(x).\end{aligned}$$ In the following, we derive an approximate analytical solution assuming $\theta\ll1$. The spin accumulation is approximated as $$\begin{aligned}
\mu_s(x,y)\approx\mu_s^{(0)}(x)+\theta\mu_s^{(1)}(x,y).\end{aligned}$$ $\mu_s^{(1)}(x,y)$ is the solution of $$\begin{aligned}
\left(\partial_x^2+\partial_y^2-1/\lambda^2\right)\mu_s^{(1)}(x,y)=0\end{aligned}$$ with boundary conditions $$\begin{aligned}
\partial_x\mu_s^{(1)}(0,y)&=0 \\
\mu_s^{(1)}(\infty,y)&=0 \\
\partial_y\mu_s^{(1)}(x,0)
&=-\frac{ej_{0}}{\sigma_{xx}}e^{-x/\lambda} \\
%\left(=\partial_x\mu_s^{(0)}(x,y)\right) \\
\mu_s^{(1)}(x,\infty)&=0\end{aligned}$$ where $\theta^2$ terms are omitted. We consider Fourier cosine transformation $$\begin{aligned}
\hat{\mu}_s^{(1)}(k,l)
=\frac{2}{\pi}\int^\infty_0\int^\infty_0
\cos (kx)\cos (ly)\mu_s^{(1)}(x,y)dxdy.\end{aligned}$$ We have $$\begin{aligned}
&\int^\infty_0\cos(kx)\partial_x^2\mu_s^{(1)}(x,y)dx \notag \\
&=\left[\cos(kx)\partial_x\mu_s^{(1)}(x,y)\right]^\infty_0
+k\int^\infty_0\sin(kx)\partial_x\mu_s^{(1)}(x,y)dx \notag \\
&=k\left[\sin(kx)\mu_s^{(1)}(x,y)\right]^\infty_0
-k^2\int^\infty_0\cos(kx)\mu_s^{(1)}(x,y)dx \notag \\
&=-k^2\int^\infty_0\cos(kx)\mu_s^{(1)}(x,y)dx,\end{aligned}$$ where we use $\partial_x\mu_s^{(1)}(\infty,y)=0$ and $\mu_s^{(1)}(\infty,y)=0$, and $$\begin{aligned}
&\int^\infty_0\cos(ly)\partial_y^2\mu_s^{(1)}(x,y)dy \notag \\
&=\left[\cos(ly)\partial_y\mu_s^{(1)}(x,y)\right]^\infty_0
+l\int^\infty_0\sin(ly)\partial_y\mu_s^{(1)}(x,y)dy \notag \\
&=-\partial_y\mu_s^{(1)}(x,0)
+l\left[\sin(ly)\mu_s^{(1)}(x,y)\right]^\infty_0 \notag \\
&\hspace{30mm}-l^2\int^\infty_0\cos(ly)\mu_s^{(1)}(x,y)dy \notag \\
&=-\partial_y\mu_s^{(1)}(x,0)
-l^2\int^\infty_0\cos(ly)\mu_s^{(1)}(x,y)dy,\end{aligned}$$ where we use $\partial_y\mu_s^{(1)}(x,\infty)=0$ and $\mu_s^{(1)}(x,\infty)=0$. We define $\hat{s}(k)$ as $$\begin{aligned}
\hat{s}(k)
&:=\sqrt{\frac{2}{\pi}}
\int^\infty_0\cos(kx)\partial_y\mu_s^{(1)}(x,0)dx \notag \\
&=-\sqrt{\frac{2}{\pi}}\frac{ej_{0}}{\sigma_{xx}}
\int^\infty_0\cos(kx)e^{-x/\lambda}dx \notag \\
&=-\sqrt{\frac{2}{\pi}}\frac{ej_{0}}{\sigma_{xx}}
\frac{\lambda}{1+k^2\lambda^2}.\end{aligned}$$ The spin diffusion equation is written as $$\begin{aligned}
(-k^2-l^2-1/\lambda^2)\hat{\mu}_s^{(1)}(k,l)-\sqrt{\frac{2}{\pi}}\hat{s}(k)=0.\end{aligned}$$ The solution is given as $$\begin{aligned}
\hat{\mu}_s^{(1)}(k,l)=-\sqrt{\frac{2}{\pi}}\frac{\hat{s}(k)}{k^2+l^2+1/\lambda^2}.\end{aligned}$$ Finally, we derive $$\begin{aligned}
\mu_s^{(1)}(x,y)=\frac{2}{\pi}\frac{ej_{0}\lambda}{\sigma_{xx}}
\int^\infty_0
\cos(k\tilde{x})
\frac
{e^{-\sqrt{1+k^2}\tilde{y}}}
{(1+k^2)^{3/2}}
dk,\end{aligned}$$ where $\tilde{x}=x/\lambda$ and $\tilde{y}=y/\lambda$, and where we use the following calculations $$\begin{aligned}
\int^\infty_0\frac{\cos(ly)}{k^2+l^2+1/\lambda^2}dl
=\frac{\pi}{2}\frac{\lambda e^{-\sqrt{1+k^2\lambda^2}y/\lambda}}{\sqrt{1+k^2\lambda^2}}\end{aligned}$$ and $$\begin{aligned}
&\frac{2}{\pi}\int^\infty_0\int^\infty_0
\cos(kx)\cos(ly)
\hat{\mu}_s^{(1)}(k,l)dkdl \notag \\
&=\left(\frac{2}{\pi}\right)^{3/2}
\int^\infty_0
\cos(kx)\hat{s}(k)
\left(
-\int^\infty_0
\frac
{\cos(ly)}
{k^2+l^2+1/\lambda^2}
dl
\right)
dk \notag \\
&=-\sqrt{\frac{2}{\pi}}
\int^\infty_0
\cos(kx)\hat{s}(k)
\frac{\lambda e^{-\sqrt{1+k^2\lambda^2}y/\lambda}}{\sqrt{1+k^2\lambda^2}}
dk \notag \\
&=\frac{2}{\pi}\frac{ej_{0}\lambda}{\sigma_{xx}}
\int^\infty_0
\cos\left(k\lambda \frac{x}{\lambda}\right)
\frac
{e^{-\sqrt{1+k^2\lambda^2}y/\lambda}}
{(1+k^2\lambda^2)^{3/2}}
d(k\lambda) \notag \\
&=\frac{2}{\pi}\frac{ej_{0}\lambda}{\sigma_{xx}}
\int^\infty_0
\cos(k\tilde{x})
\frac
{e^{-\sqrt{1+k^2}\tilde{y}}}
{(1+k^2)^{3/2}}
dk,\end{aligned}$$ where $k\lambda$ has been relabeled as $k$ in the final step.
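As a numerical sanity check, the final integral can be evaluated by straightforward quadrature. The C sketch below (a minimal illustration, with arbitrarily chosen grid points and integration parameters) prints $\mu_s^{(1)}$ in units of $ej_0\lambda/\sigma_{xx}$ on a small grid of $(x/\lambda,y/\lambda)$; at $x=y=0$ the integral equals $1$, so the output there should be $2/\pi\approx0.637$.

```c
#include <stdio.h>
#include <math.h>

/* Trapezoidal-rule evaluation of
   mu1(xt,yt) = (2/pi) * int_0^inf cos(k*xt) * exp(-sqrt(1+k^2)*yt)
                                   / (1+k^2)^(3/2) dk,
   with xt = x/lambda, yt = y/lambda, in units of e*j0*lambda/sigma_xx. */
static double mu1(double xt, double yt) {
    const double PI = 3.14159265358979323846;
    const double kmax = 100.0;   /* integrand decays like k^-3, so the tail is negligible */
    const int n = 200000;
    double h = kmax / n, sum = 0.0;
    for (int i = 0; i <= n; ++i) {
        double k = i * h;
        double f = cos(k * xt) * exp(-sqrt(1.0 + k*k) * yt)
                 / pow(1.0 + k*k, 1.5);
        sum += (i == 0 || i == n) ? 0.5 * f : f;   /* trapezoid end-point weights */
    }
    return (2.0 / PI) * sum * h;
}

int main(void) {
    /* print the first-order correction on a small (x/lambda, y/lambda) grid */
    for (int iy = 0; iy <= 2; ++iy)
        for (int ix = 0; ix <= 2; ++ix)
            printf("x/lambda=%d y/lambda=%d mu1=% .6f\n",
                   ix, iy, mu1((double)ix, (double)iy));
    return 0;
}
```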
| |
It’s summer…and it’s HOT. It might even be in the 100s today. It hasn’t been that hot in Michigan in a long time. When it’s hot the best thing to eat is ice cream 🙂 And sometimes it’s nice to have an ice cream treat…that isn’t filled with chemicals and vegetable oils. Yesterday Rebecca and I created a homemade ice cream sandwich. I used my basic vanilla cookie recipe and created a banana cookie. This cookie is ideal because it stays very soft. It’s even easy to eat when frozen. And it just so happens that we made a big batch of strawberry ice cream on Tuesday. Put them together and you’ve got a unique, healthy ice cream treat that you won’t find in any box.
The ice cream sandwiches turned out great! Rebecca scarfed one down for snack last night. I would even feel good about serving them for breakfast 🙂 They are perfect for adults, kids and even the littlest in the family. Abram (13m) loves homemade ice cream and really enjoyed these cookies as well. They are soft enough that he can easily chew them. This is just one variation of many I hope to try!
I also cut open a cookie and put peanut butter on it like a sandwich as part of Rebecca’s lunch yesterday. She really enjoyed it.
This post is linked to Fat Tuesday, Real Food Wednesday, Fight Back Friday and Fresh Bites Friday.
Strawberry Banana Ice Cream Sandwiches
Cookie:
1 cup whole wheat flour
3/8 cup rapadura
2 Tbsp. honey
1/2 cup mashed ripe banana
1 egg
1/4 tsp. baking soda
1/4 tsp. salt
1/2 tsp. vanilla
1/2 tsp. cinnamon (optional)
Ice Cream:
1 cup strawberries
2 1/2 cups cream
1/2 cup maple syrup or honey (or a combination)
1 tsp. vanilla
3 egg yolks
————————————————
Prepare the ice cream:
Blend all ingredients together. Process in ice cream maker (mine takes about 20 minutes in my Kitchen Aid attachment). Store in container in freezer.
Prepare the cookies:
Heat oven to 425. Stir all ingredients together. Drop by spoonful (big scoops if you want big ice cream sandwiches) onto baking sheet. Bake 8 minutes. Let cool completely.
Assemble sandwiches:
Let ice cream soften slightly. Cut open a cookie*. Place a scoop of ice cream on the bottom half. Top with other half of cookie. Place on a wax paper lined baking sheet. Repeat for remaining cookies. Place in freezer. Once ice cream is solid wrap each sandwich individually in plastic wrap, wax paper or parchment paper.
*If you want really big ice cream sandwiches you can do a full cookie on bottom and top. You can also make smaller cookies for mini ice cream sandwiches that are perfect for kids or a small treat for adults.
Ice cream sandwich variations:
Chocolate cookies (omit banana, reduce flour to 3/4 cups, add 3 Tbsp. cocoa and 1/8 cup milk) with peanut butter, chocolate, mint, cookie dough or vanilla ice cream.
Vanilla cookies with chocolate, vanilla, caramel, cookie dough or strawberry ice cream.
Cinnamon cookies with vanilla or pumpkin ice cream.
Banana cookies with chocolate or peanut butter ice cream. | https://justtakeabite.com/2012/06/28/strawberry-banana-ice-cream-sandwiches/ |
Past: February 28 → June 25, 2020. Zhenya Machneva — Galerie GP & N Vallois. Discover in pictures the exhibition Réminiscences of Zhenya Machneva at the GP & N Vallois gallery.
Zhenya Machneva was born in 1988 in Leningrad, a Soviet city whose name existed between 1924 and 1991, also known as St. Petersburg. From the beginning there is the story of a disappearance, of a ghost. During her studies at the Stieglitz State Academy of Art and Design, Zhenya Machneva chose to train in the textile department. She is immediately struck and seduced by weaving techniques. At this time, she is not yet free to choose the subjects she wishes to weave; tapestry is confined to a strictly decorative function, a function and a role that she will shift when she starts working alone in her studio.
On two manual looms, she creates tapestries representing industrial landscapes, factories emptied of their workers, useless machines, patterns and colors. Why would you try to represent by hand a heritage that no one seems to care about anymore? The artist finds in it a family history, as well as the fallen fantasy of an era. She points out that “the Soviet industrial period has enjoyed great glory, but now what we can see are just collapsed dreams.
It seems important to me to collect different objects and different landscapes in the process of disappearing.”
Like an archaeologist from the Soviet industrial era, Zhenya Machneva begins by visiting the factory where her grandfather worked. On site, the machines appear to her as sculptures, organisms and autonomous entities. She needs to «touch her subject.» Before weaving, she sets off to explore deserted factories, abandoned sheds, wastelands turned into landfills. On site, she photographs what she calls «patterns.» The collection of images will give rise to extremely graphic drawings rendered in black and white. The drawings are the sketches from which she will implement the tapestries. Zhenya Machneva creates a contrast between subject and technique. The steel is made visible by cotton, the rate of work at the factory gives way to slowness, while the weight, coldness and rigidity of the buildings are soft and subtle in the weaving. The black and white drawings are transformed into colorful tapestries. Colors are intuitively chosen.
Zhenya Machneva wants to maintain a part of improvisation within a laborious manual process. “I hope that you can feel my energy through the works.” The choice of tapestry is physical. Zhenya Machneva gives the technique an organic dimension to which it is intimately attached. Sitting in front of the loom, the artist tirelessly repeats the same gestures to generate successive frames. Repetition, slowness and loneliness are part of a meditative state in which each cotton thread becomes a mantra. A gestural repetition that echoes that once applied by workers, active in these factories that today are ghosts. The choice of tapestry is also political. If you look at the history of art, tapestry is a medium.
It has an authoritarian, timeless, sensory aspect. Through the thread and the loom, Zhenya Machneva represents the Soviet industrial heritage that has become invisible and unproductive. The motifs, machines and buildings are the archives of a bygone era, a time when industrialization and the figure of the worker were over-glorified. A vanished era of which barely visible ghosts remain. It is then for the artist to embody this heritage to give it a new existence. The making of tapestries is a physical incarnation, but also a metaphorical one. She pays particular attention to patterns, or to the details of machines whose zoomorphism or anthropomorphism she accentuates. She then plays with pareidolia, which brings out familiar faces, skulls and other forms. The artist thus engages a new reading of the woven motifs by opening a fictional space. Through her weft threads and her warp threads, Zhenya Machneva awakens ghosts, revives and makes palpable the poetry of sleeping landscapes.
We all know that Jennifer Lawrence is pretty honest in her interviews, sometimes too honest, but how good is she at lying?
Jennifer stopped by the Tonight Show with Jimmy Fallon to promote X-Men: Days of Future Past and became the latest guest to challenge Jimmy in the hilarious game Box of Lies.
In Box of Lies, one has to pick a box and either decide to tell the other person what’s in the box or lie about it. The opponent must then decide whether the other person is telling the truth or a lie. Let’s just say the objects in the box are quite ridiculous, so choosing truth or lie can get pretty tough.
In between sharing what’s in their boxes, Jennifer and Jimmy did some hilarious accents and impressions. We really wish these two had played to 3 points instead of two.
Watch Jennifer & Jimmy play Box of Lies below! Update: Watch Jennifer talk about her embarrassing moment in front of Jennifer Lopez & more with Jimmy below. | https://www.shineon-media.com/2014/05/15/jennifer-lawrence-tests-her-lying-skills-with-jimmy-fallons-box-of-lies/ |
The strategic design of logistic networks, such as roads, railways or mobile phone networks, is essential for efficiently managing emergency situations. The geographic coordinate systems could be used to produce new traveling salesman problem (TSP) instances with geographic information systems (GIS) features. The current paper introduces a recurrent framework designed for building a sequence of instances in a systematic way. The framework intends to model real-life random adverse events manifested over large areas, such as massive rainfalls or the arrival of a polar front, or targeted relief supply in the early stages of the response. As a proof of concept for this framework, we use the first Romanian TSP instance with the main human settlements, in order to derive several sequences of instances. Nowadays, state-of-the-art algorithms for the TSP are used to solve these instances. A branch-and-cut algorithm delivers the exact integer solutions, using substantial computing resources. An implementation of the Lin–Kernighan heuristic is used to rapidly find very good near-optimal integer solutions to the same instances. The Lin–Kernighan heuristic shows stability on the tested instances. Further work could be done to better exploit GIS features in order to efficiently tackle operations on large areas and to test the solutions delivered by other algorithms on new instances derived using the introduced framework.
Keywords: Emergency management · Traveling salesman problem · GIS · Geographic coordinates
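Neither exact branch-and-cut nor a full Lin–Kernighan implementation fits in a few lines, but a far simpler relative of the latter, nearest-neighbour construction followed by 2-opt improvement, illustrates how such heuristics operate on geographic coordinates with great-circle distances. The five (lat, lon) pairs below are approximate coordinates of a few Romanian cities chosen purely for illustration, not the benchmark instance from the paper.

```c
#include <stdio.h>
#include <math.h>

#define N 5
static const double lat[N] = {44.43, 46.77, 45.75, 47.16, 44.18};
static const double lon[N] = {26.10, 23.60, 21.23, 27.59, 28.63};

/* great-circle (haversine) distance in km between points i and j */
static double hav(int i, int j) {
    const double R = 6371.0, D = 3.14159265358979323846 / 180.0;
    double dphi = (lat[j] - lat[i]) * D, dlam = (lon[j] - lon[i]) * D;
    double a = sin(dphi/2)*sin(dphi/2)
             + cos(lat[i]*D)*cos(lat[j]*D)*sin(dlam/2)*sin(dlam/2);
    return 2.0 * R * asin(sqrt(a));
}

static double tour_len(const int *t) {
    double s = 0.0;
    for (int i = 0; i < N; ++i) s += hav(t[i], t[(i+1)%N]);
    return s;
}

int main(void) {
    int t[N], used[N] = {0};
    t[0] = 0; used[0] = 1;
    for (int i = 1; i < N; ++i) {            /* nearest-neighbour start tour */
        int best = -1;
        for (int j = 0; j < N; ++j)
            if (!used[j] && (best < 0 || hav(t[i-1], j) < hav(t[i-1], best)))
                best = j;
        t[i] = best; used[best] = 1;
    }
    int improved = 1;                         /* 2-opt: reverse a segment if it shortens the tour */
    while (improved) {
        improved = 0;
        for (int i = 0; i < N - 1; ++i)
            for (int j = i + 1; j < N; ++j) {
                if (i == 0 && j == N - 1) continue;   /* full reversal: no change on a cycle */
                int a = t[(i+N-1)%N], b = t[i], c = t[j], d = t[(j+1)%N];
                if (hav(a,c) + hav(b,d) < hav(a,b) + hav(c,d) - 1e-9) {
                    for (int l = i, r = j; l < r; ++l, --r) {
                        int tmp = t[l]; t[l] = t[r]; t[r] = tmp;
                    }
                    improved = 1;
                }
            }
    }
    printf("tour length: %.1f km\ntour: ", tour_len(t));
    for (int i = 0; i < N; ++i) printf("%d ", t[i]);
    printf("\n");
    return 0;
}
```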
Notes
Acknowledgments
The authors would like to thank Professor William Cook and Dr. Vasile Crăciunescu for their support. The authors would also like to thank the reviewers for their suggestions and insightful comments to improve this work. | https://link.springer.com/article/10.1007%2Fs10115-016-0938-8 |
1) Preheat your oven: Gas mark 4; 350F; 180C.
2) Mix the flour, protein powder and ground hazelnuts in a bowl.
3) In another bowl melt the butter and whisk with the egg.
4) Stir the bowl of wet ingredients together with the dry ingredients until you have a smooth dough. If the mixture is too wet and sticky, sprinkle a little more flour mixture until the dough binds.
5) Divide the dough into 5 equal cookie shapes and place on a tray lined with baking paper. Crumble the chocolate and divide evenly among the cookies.
6) Bake for 30 mins on gas mark 4 (350F, 180C).
7) Serve warm, or leave to cool.
Variations: Instead of vanilla flavored protein powder, you can also use other complementary flavors, like chocolate or hazelnut.
Discover the many health benefits of hazelnuts and whether your genotype would benefit from this recipe by clicking here.
Hatching eggs from broiler breeder flocks (Ross 344 X 308SF) at 59 and 55 weeks of age were used in two experiments.
Eggs were stored in a cooler for 1-3 days prior to setting in Petersime incubators under standard single-stage conditions.
Early hatch time was 472-480 h or 471-477 h, middle hatch time was 488-492 or 480-486 h, and late hatch time was 496-510 h or 494-510 h, respectively, in experiments 1 or 2.
Chicks were deemed to be hatched when they exhibited healed navels and dryness about the head and neck. At 510 hours of incubation, hatched chicks were removed from the trays, feather sexed, counted, identified with neck tags, weighed, and placed in floor pens on new wood litter shavings.
Chicks were necropsied and yolk sac weight determined at placement in pens and 24 hours later in experiment 1 and immediately at each hatch time (477 h, 486 h, and 510 h) and also at placement in experiment 2. For each hatch time, chicks were assigned to 8 pens of 10 male and 10 female chicks each or 9 pens of 9 male and 9 female chicks each for a total of 480 or 486 chicks in experiments 1 and 2, respectively. Body weight and feed consumption were determined at 1, 7, 28, and 35 days of age or 7, 14, 21, and 35 days of age in experiments 1 and 2, respectively. Mortality was noted daily.
Data from the 3 (hatch time) x 2 (sex) completely randomised design were subjected to analysis of variance. Percentage yolk was greater in late hatch compared to early and middle hatch chicks at placement and at 1 days in experiment 1. Percentage yolk was similar in all groups in experiment 2 at hatch time but early hatch chicks exhibited less percentage yolk at placement.
Chick body weight was greater at placement in late hatch chicks compared to early hatch chicks in both experiments but this was no longer apparent by 7 days. Body weight was greater in the middle hatch compared to late hatch chicks with early hatch chicks intermediate at 7 and 35 days in experiment 1.
Although early and middle hatch chicks exhibited greater body weight loss between hatch and placement and lower body weight at placement compared to late hatch chicks, early hatch chicks had significantly larger body weight than late hatch chicks, with middle hatch chicks intermediate, at 35 days in experiment 2. Late hatch chicks consumed less feed, exhibited a lower relative growth rate to 7 days, and showed greater cumulative mortality in both experiments. Late hatch chicks had greater percentage yolk sac and body weight at 510 h of incubation, which was followed by reduced feed consumption to 7 days. Mortality and body weight gain were also poorer in late chicks relative to early and middle hatch chicks.
The invention belongs to the field of pesticides, and particularly relates to a rice field herbicide composition and application thereof. The rice field herbicide composition is prepared from a weeding effective quantity of a first active ingredient and a weeding effective quantity of a second active ingredient; the first active ingredient is shown as one of the following general formulas (please see the formulas in the description), wherein R1 represents hydrogen, methyl, ethyl and cyclopropyl, R2 represents methyl, ethyl and isopropyl, and R3, R4, R5, R6, R7 and R8 represent hydrogen, methyl, ethyl, methoxyl, halogen, halogen-substituted methyl and halogen-substituted methoxyl; the second active ingredient is selected from one or more of the following compounds including triazines, nitriles and the like. The composition is low in cost, convenient to use, easy to degrade in the environment and safe to both current-crop rice and next-crop rice. | |
Average MLB fastball speed is 91 mph out of the hand and 83 mph at the plate. For comparison, MLB average exit speed is 103 mph, while bat speed ranges roughly from 70-85 mph.
How fast is a baseball hit?
According to Cherveny, the average swing speed in Major League Baseball games is around 70 miles per hour.
What is the average baseball speed?
The average fastball from an MLB starter in 2018 was 92.3 mph, compared with 93.4 mph for relievers. A team’s average is the average of every fastball thrown by those pitchers last year, as opposed to just an average of each pitcher’s average fastball velocity.
How fast can a human swing a bat?
Based on their testing of professional hitters, Zepp’s app says the typical bat speed at impact is between 70 and 90 mph.
Does a baseball speed up after being hit?
A faster incoming ball recoils faster because the collision is partially elastic. The ball compresses against the bat, and the rebound from that compression makes the outgoing velocity exceed the bat velocity, an effect that grows with the incoming speed.
Who can throw 100 mph?
Closing the game in the 9th inning, Chapman unleashed a 105.1 mph fastball against the Baltimore Orioles. Aroldis Chapman’s fastball is widely regarded as the fastest pitch in MLB today. In fact, even after more than 575 career innings and countless pitches hitting 100-plus mph, he also holds the title this season.
How far can a 15 year old hit a baseball?
A normal 15-year-old kid… who can hit a baseball 500 feet.
How fast should a 17 year old pitch?
Pitching velocity by age in the U.S.
|Age||Average Velocity¹||Your Goal²|
|15||70 MPH||75 MPH|
|16||76 MPH||80 MPH|
|17||80 MPH||85 MPH|
|18||83 MPH||88-90 MPH|
Can anyone throw 90 mph?
If you are a serious baseball player, one who has put in the work over the years and have at least average coordination, speed, and ability, you can absolutely accomplish the feat of throwing 90 mph.
What is the fastest pitch by a 12 year old?
Cuban 12-year-old right-handed pitcher Alejandro Prieto recorded the fastest pitch of the WBSC U-12 Baseball World Cup 2019, at 123 km/h (76.4 mph) in his win against Mexico today in the Super Round.
Can I hit a 90 mph fastball?
A 90-mph fastball can reach home plate in 400 milliseconds — or four-tenths of a second. But a batter has just a quarter-second to identify the pitch, decide whether to swing, and start the process. … It can take nearly 25 milliseconds for the brain’s signals to pulse through the hitter’s body and start his legs moving.
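A rough check of that four-tenths figure, sketched in C: the ~55 ft release distance and the use of a simple average of the out-of-hand and at-plate speeds quoted earlier on this page are assumptions, so this is an estimate rather than a measurement.

```c
#include <stdio.h>

int main(void) {
    const double MPH = 0.44704;            /* m/s per mph */
    double d = 55.0 * 0.3048;              /* assumed release-to-plate distance, m */
    double v = 0.5 * (90.0 + 83.0) * MPH;  /* average of 90 mph out of hand, 83 at plate */
    printf("travel time: %.0f ms\n", 1000.0 * d / v);   /* roughly 430 ms */
    return 0;
}
```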
Who has the fastest swing in MLB?
Giancarlo Stanton owns the 120 MPH exit velocity club
- Giancarlo Stanton, 2020, 121.3 MPH.
- Giancarlo Stanton, 2019, 120.6 MPH.
- Giancarlo Stanton, 2018, 121.7 MPH.
- Gary Sanchez, 2018, 121.1 MPH.
- Giancarlo Stanton, 2017, 122.2 MPH.
- Aaron Judge, 2017, 121.1 MPH.
- Giancarlo Stanton, 2016, 120.1 MPH.
How fast was Barry Bonds’ bat speed?
Bonds whips his bat around about 5 to 10 miles per hour faster than the typical 75 mph swing of Major League hitters, estimates Alan Nathan, a physicist at the University of Illinois who has done some pathbreaking research on exactly what happens when a bat hits a ball.
Do faster pitches get hit harder?
For a batter, there’s another way to understand the conservation of momentum: The faster the pitch and the faster the swing, the farther the ball will fly. A faster pitch is harder to hit than a slower one, but a batter who can do it may score a home run.
Does a baseball bat slow down when it hits a ball?
The answer here is yes, but not much. Upon contact, there is an impulse between the bat and the ball, and this impulse speeds the ball up and slows the bat down. Newton’s 3rd law of motion, the action-reaction of forces, says the ball also pushes back on the bat, so the bat must slow down.
How much force does a 90 mph baseball have?
A force approaching 8,000 pounds is required to change the motion of a 5-ounce baseball traveling 90 miles per hour into a 110-mile-per-hour shot over the center-field fence. | https://wahoosam.net/rules/what-is-the-average-speed-of-a-baseball-being-hit.html |
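An impulse-momentum sketch in C of where a figure of that size comes from; the 0.4 ms contact time is an assumed value, not given in the text, and the force scales inversely with it.

```c
#include <stdio.h>

int main(void) {
    const double MPH = 0.44704;          /* m/s per mph */
    double m  = 5.0 * 0.0283495;         /* 5 oz ball in kg */
    double dv = (90.0 + 110.0) * MPH;    /* 90 mph in, 110 mph out the other way */
    double dt = 0.0004;                  /* assumed bat-ball contact time, s */
    double F  = m * dv / dt;             /* average force, newtons */
    printf("average force: %.0f N = %.0f lbf\n", F, F / 4.44822);  /* ~7,100 lbf */
    return 0;
}
```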
Determining the Size of an Earthquake
Distinguish between intensity scales and magnitude scales.
Seismologists use a variety of methods to determine two fundamentally different measures that describe the size of an earthquake: intensity and magnitude. The first of these to be used was intensity—a measure of the amount of ground shaking at a particular location, based on observed property damage. Later, the development of seismographs allowed scientists to measure ground motion using instruments. This quantitative measurement, called magnitude, relies on data gleaned from seismic records to estimate the amount of energy released at an earthquake’s source.
Intensity Scales
Until the mid-1800s, historical records provided the only accounts of the severity of earthquake shaking and destruction. Perhaps the first attempt to scientifically describe the aftermath of an earthquake came following the great Italian earthquake of 1857. By systematically mapping effects of the earthquake, a measure of the intensity of ground shaking was established. The map generated by this study used lines to connect places of equal damage and hence equal ground shaking. Using this technique, zones of intensity were identified, with the zone of highest intensity located near the center of maximum ground shaking and often (but not always) the earthquake epicenter.
In 1902, Giuseppe Mercalli developed a more reliable intensity scale, which is still used today in a modified form. The Modified Mercalli Intensity scale, shown in Table 1, was developed using California buildings as its standard. For example, on the 12-point Mercalli Intensity scale, when some well-built wood structures and most masonry buildings are destroyed by an earthquake, the affected area is assigned a Roman numeral X (10).
More recently, the U.S. Geological Survey has developed a webpage called “Did You Feel It,” where Internet users enter their zip code and answer questions such as “Did objects fall off shelves?” Within a few hours, a Community Internet Intensity Map, like the one in Figure 1 for the 2011 central Virginia earthquake (M5.8), is generated. Shaking was reported from Maine to Florida, an area occupied by one-third of the U.S. population. Several national landmarks were damaged, including the Washington Monument and the National Cathedral located about 130 kilometers (80 miles) away from the epicenter. Because the crustal rocks east of the Rocky Mountains are cool and strong, earthquakes are felt over a much larger area than those of similar magnitudes in the west (see Figure 1).
Magnitude Scales
To more accurately compare earthquakes around the globe, scientists searched for a way to describe the energy released by earthquakes that did not rely on factors such as building practices, which vary considerably from one part of the world to another. As a result, several magnitude scales were developed.
Richter Magnitude
In 1935 Charles Richter of the California Institute of Technology developed the first magnitude scale to use seismic records. As shown in Figure 2 (top), the Richter scale is calculated by measuring the amplitude of the largest seismic wave (usually an S wave or a surface wave) recorded on a seismogram.
Because seismic waves weaken as the distance between the hypocenter and the seismograph increases, Richter developed a method that accounts for the decrease in wave amplitude with increasing distance.
Theoretically, as long as equivalent instruments are used, monitoring stations at different locations will obtain the same Richter magnitude for each recorded earthquake.
In practice, however, different recording stations often obtain slightly different magnitudes for the same earthquake— a result of the variations in rock types through which the waves travel.
Earthquakes vary enormously in strength, and great earthquakes produce wave amplitudes thousands of times larger than those generated by weak tremors.
To accommodate this wide variation, Richter used a logarithmic scale to express magnitude, in which a 10-fold increase in wave amplitude corresponds to an increase of 1 on the magnitude scale. Thus, the ground-shaking wave amplitude for a magnitude 5 earthquake is 10 times greater than that produced by an earthquake having a Richter magnitude (ML) of 4 (Figure 3).
In addition, each unit of increase in Richter magnitude equates to roughly a 32-fold increase in the energy released. Thus, an earthquake with a magnitude of 6.5 releases 32 times more energy than one with a magnitude of 5.5 and roughly 1000 times (32 × 32) more energy than a magnitude 4.5 quake. A major earthquake with a magnitude of 8.5 releases millions of times more energy than the smallest earthquakes felt by humans (Figure 4).
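These scaling rules can be reproduced in a few lines of C; the roughly 32-fold energy factor per magnitude unit is 10^1.5 ≈ 31.6.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double m1 = 4.5, m2 = 8.5;
    /* amplitude grows 10x per magnitude unit, energy about 10^1.5 (~32x) */
    printf("amplitude ratio: %.0f\n", pow(10.0, m2 - m1));        /* 10,000 */
    printf("energy ratio:    %.0f\n", pow(10.0, 1.5 * (m2 - m1))); /* 1,000,000 */
    return 0;
}
```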
The convenience of describing the size of an earthquake by a single number that can be calculated quickly from seismograms makes the Richter scale a powerful tool. Seismologists have since modified Richter’s work and developed other Richter-like magnitude scales.
Despite its usefulness, the Richter scale is not adequate for describing very large earthquakes. For example, the 1906 San Francisco earthquake and the 1964 Alaska earthquake have roughly the same Richter magnitudes.
However, based on the relative size of the affected areas and the associated tectonic changes, the Alaska earthquake released considerably more energy than the San Francisco quake. Thus, the Richter scale is considered saturated for major earthquakes because it cannot distinguish among them. Despite this shortcoming, Richter-like scales are still used because they can be calculated quickly.
Moment Magnitude
For measuring medium and large earthquakes, seismologists now favor a newer scale, called moment magnitude (MW), which measures the total energy released during an earthquake. Moment magnitude is calculated by determining the average amount of slip on the fault, the area of the fault surface that slipped, and the strength of the faulted rock.
Moment magnitude can also be calculated by modeling data obtained from seismograms. The results are converted to a magnitude number, as in other magnitude scales. As with the Richter scale, each unit increase in moment magnitude equates to roughly a 32-fold increase in the energy released.
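For reference, the standard Hanks-Kanamori conversion from seismic moment to moment magnitude (a textbook relation, not stated above) can be sketched as follows; the moment value is illustrative, chosen to land near the recalculated 1964 Alaska magnitude.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Hanks-Kanamori: Mw = (2/3) * log10(M0) - 10.7, M0 in dyne-cm */
    double M0 = 7.5e29;                        /* illustrative seismic moment */
    double Mw = (2.0 / 3.0) * log10(M0) - 10.7;
    printf("Mw = %.1f\n", Mw);                 /* about 9.2 */
    return 0;
}
```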
Because moment magnitude estimates the total energy released, it is better than the Richter scale for measuring very large earthquakes. Seismologists have used the moment magnitude scale to recalculate the magnitudes of older strong earthquakes. For example, the 1964 Alaska earthquake, originally given a Richter magnitude of 8.3, has since been recalculated using the moment magnitude scale, resulting in an upgrade to MW 9.2. Conversely, the 1906 San Francisco earthquake that was given a Richter magnitude of 8.3 was downgraded to MW 7.9. The strongest earthquake on record is the 1960 Chilean subduction zone earthquake, with a moment magnitude of 9.5. | https://geologyengineering.com/determining-the-size-of-an-earthquake/ |
The red kangaroo is common in arid regions of Australia.
Due to the more mountainous and heavily vegetated environment of Tasmania, red kangaroos are not found in the state. However, the grey kangaroo (also known as the Eastern grey kangaroo or Forester kangaroo) is common, and is a protected species.
Forester kangaroos can reach over 2m (6'7") in height when fully upright, and can jump 8m (26') in a single bound at high speed, although this would not happen so commonly in Tasmania. This type of kangaroo are most common in Tasmania's north-east, but can also be found in central Tasmania and on the east coast. | http://tourtasmania.com/content.php?id=kangaroo |
The pastoral system and the work of the year team plays an integral part in allowing students to both succeed academically and develop as a whole person during their time at Oaks Park High School. In Year 7 it will be key to ensuring a smooth transition.
Tutor Groups
Each year group is divided into tutor groups and on arrival into Year 7 your child will be placed in one of the tutor groups for Year 7. The form tutor will see members of their tutor group everyday for registration first thing in the morning.
The tutor’s role is very important as he or she will monitor the progress of each child and will look after his/her general well-being in the school. Students generally stay in the same group throughout their time in the school and wherever possible remain with the same tutor.
If there are any concerns parents/carers are encouraged to speak to their son/daughter’s tutor in the first instance either by appointment, email or by telephone through the school reception on 0208 647 5842.
Year Group Leader
Form tutors and students are supported by the Year Group Leader who keeps an overview of the progress of the year group and who is also responsible for ensuring students meet the required standards of behaviour, attendance and dress. In addition, a member of the Leadership Team is allocated to take an especial interest in the performance and progress of each year group; they also advise the year team on various matters and support them in their work.
Year Group Manager
Each year group has a Year Group Manager who is responsible for supporting students in a more informal way. If issues arise students may be referred to the Inclusion Assistant who will meet with students and undertake some monitoring to ensure that students cope with life at school and address any barriers to learning. | https://www.oaksparkhigh.org.uk/2370/pastoral-structure-1 |
That testimony bolsters the defense, which claims the kitchen knife was not the weapon used in the bloody 2007 slaying of Knox's British roommate, 21-year-old Meredith Kercher.
As things stand, there's no confirmed DNA belonging to Kercher on the knife; one piece of DNA on its blade that was first attributed to Kercher has been disputed on appeal.
Expert Andrea Berti testified Wednesday that the minute new DNA trace from the knife's handle showed "considerable affinity" with Knox's DNA, while not matching that of Kercher. It also did not match the DNA of Knox's co-defendant Raffaele Sollecito or Rudy Guede, an Ivorian man who has been convicted separately in the brutal slaying.
Knox defense lawyer Carlo Dalla Vedova told The Associated Press that the testimony confirms their contention that the knife was used solely for preparing food. "The report confirms that this is a kitchen knife. It is not a murder weapon," Dalla Vedova said.
Luca Maori, Sollecito's defense lawyer, said the trace's very existence also indicated the knife had not been washed. "It is something very important," he said. "It is absurd to use it for a murder and put it back in the drawer."
The DNA evidence on the knife found in a drawer at Sollecito's place has been among the most hotly contested pieces of evidence in the original trial and now in two appeals.
Knox and Sollecito were convicted in 2009 of murdering Kercher, and sentenced to 26 and 25 years in jail, respectively. The conviction was overturned on appeal in 2011, freeing Knox to return to the United States where she remains for the latest appeal.
Prosecutors have contended the knife was the murder weapon because it matched Kercher's wounds, and presented evidence in the first trial that it contained Kercher's DNA on the blade and Knox's on the handle.
However, a court-ordered review during the first appeal in Perugia, where the murder happened, discredited the DNA evidence. It said there were glaring errors in evidence-collecting and that below-standard testing and possible contamination raised doubts over the DNA traces linked to Kercher on the blade, as well as Sollecito's DNA on Kercher's bra clasp.
Italy's highest court, however, ordered a fresh appeals trial, blasting the acquittal as full of contradictions and questioning failures to retest some of the DNA evidence in light of advanced new technology.
Sollecito addressed the court on Wednesday, as allowed by the Italian judicial system. He said he hadn't taken seriously enough the accusations at the beginning because he was too caught up with his new romance with Knox to grasp what was happening.
"Me and Amanda were living the dawn of a carefree romance and we wanted to be completely isolated in our love nest," Sollecito said.
He struggled with his composure as he pleaded with the court to acquit him. "I hope I'll have the chance to live a life, a life, because at the moment I don't have a real life," he said. "That's what I'm asking you."
The DNA trace is the last new piece of evidence to be entered in the latest trial. Prosecutors begin their summations later this month, followed by the defense in December. A verdict is expected in January. | https://6abc.com/archive/9315565/ |
Q:
derivative of inverse matrix by itself
Let $A$ be a matrix, say a $k\times k$ matrix.
I know that
$$\frac{\partial A^{-1}}{\partial A} = -A^{-2} $$
I do not know how I am supposed to obtain the following results using this fact. I want to know the step of
$$\frac{\partial a^\top A^{-1} b}{\partial A} = -(A^\top)^{-1}ab^\top (A^\top)^{-1} $$
Also, I want to know the solution to
$$\frac{\partial (A^\top)^{-1}ab^\top (A^\top)^{-1} }{\partial A} = ? $$
A:
Start with the defining equation for the matrix inverse and find its differential.
$$\eqalign{
I &= A^{-1}A \cr
0 &= dA^{-1}\,A + A^{-1}\,dA \cr
0 &= dA^{-1} + A^{-1}\,dA\,A^{-1} \cr
dA^{-1} &= -A^{-1}\,dA\,A^{-1} \cr
}$$
Next note the gradient of a matrix with respect to itself.
$$
{\mathcal H}_{ijkl} = \frac{\partial A_{ij}}{\partial A_{kl}} = \delta_{ik}\delta_{jl}
$$
Note that ${\mathcal H}$ is a 4th order tensor with some interesting symmetry properties (isotropic). It is also the identity element for the Frobenius product, i.e. for any matrix $B$
$${\mathcal H}:B=B:{\mathcal H}=B$$
Now we can answer your first question. The function of interest is scalar-valued. Let's find its differential and gradient
$$\eqalign{
\phi &= a^TA^{-1}b \cr &= ab^T:A^{-1} \cr
d\phi &= ab^T:dA^{-1} \cr &= -ab^T:A^{-1}\,dA\,A^{-1} \cr
&= -A^{-T}ab^TA^{-T}:dA \cr
\frac{\partial\phi}{\partial A} &= -A^{-T}ab^TA^{-T} \cr
}$$
Now let's try the second question. This time the function of interest is matrix-valued.
$$\eqalign{
F &= A^{-1}ab^TA^{-1} \cr
dF &= dA^{-1}ab^TA^{-1} + A^{-1}ab^TdA^{-1} \cr
&= -A^{-1}\,dA\,A^{-1}ab^TA^{-1} - A^{-1}ab^TA^{-1}\,dA\,A^{-1} \cr
&= -A^{-1}\,dA\,F - F\,dA\,A^{-1} \cr
&= -\Big(A^{-1}{\mathcal H}F^T + F{\mathcal H}A^{-T}\Big):dA \cr
\frac{\partial F}{\partial A} &= -\Big(A^{-1}{\mathcal H}F^T+F{\mathcal H}A^{-T}\Big) \cr
}$$
This gradient is a 4th order tensor.
If you prefer, you can vectorize the matrices to flatten the result.
$$\eqalign{
{\rm vec}(dF) &= -{\rm vec}(A^{-1}\,dA\,F + F\,dA\,A^{-1}) \cr
&= -(F^T\otimes A^{-1} + A^{-T}\otimes F)\,{\rm vec}(dA) \cr
df &= -(F^T\otimes A^{-1} + A^{-T}\otimes F)\,da \cr
\frac{\partial f}{\partial a} &= -\Big(F^T\otimes A^{-1} + A^{-T}\otimes F\Big) \cr\cr
}$$
In some steps above, a colon was used to denote the Frobenius (double-contraction) product
$$\eqalign{
A &= {\mathcal H}:B &\implies &A_{ij} &= \sum_{kl}{\mathcal H}_{ijkl} B_{kl} \cr \alpha &= H:B &\implies &\alpha &= \sum_{ij}H_{ij} B_{ij} = {\rm Tr}(H^TB) \cr
}$$
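As a numerical sanity check of the first identity, here is a small C program comparing the analytic gradient $-A^{-T}ab^TA^{-T}=-(A^{-T}a)(A^{-1}b)^T$ against central finite differences; the $2\times2$ matrix and vectors are arbitrary test values, and the closed-form $2\times2$ inverse keeps the sketch dependency-free.

```c
#include <stdio.h>

/* phi(A) = a^T A^{-1} b for a 2x2 matrix, using the closed-form inverse */
static double phi(const double A[2][2], const double a[2], const double b[2]) {
    double det = A[0][0]*A[1][1] - A[0][1]*A[1][0];
    double x0 = ( A[1][1]*b[0] - A[0][1]*b[1]) / det;   /* (A^{-1} b)_0 */
    double x1 = (-A[1][0]*b[0] + A[0][0]*b[1]) / det;   /* (A^{-1} b)_1 */
    return a[0]*x0 + a[1]*x1;
}

int main(void) {
    double A[2][2] = {{3.0, 1.0}, {2.0, 4.0}};
    double a[2] = {1.0, 2.0}, b[2] = {0.5, -1.0}, h = 1e-6;
    double det = A[0][0]*A[1][1] - A[0][1]*A[1][0];

    /* u = A^{-T} a and w = A^{-1} b, so the gradient is G = -u w^T */
    double u0 = ( A[1][1]*a[0] - A[1][0]*a[1]) / det;
    double u1 = (-A[0][1]*a[0] + A[0][0]*a[1]) / det;
    double w0 = ( A[1][1]*b[0] - A[0][1]*b[1]) / det;
    double w1 = (-A[1][0]*b[0] + A[0][0]*b[1]) / det;
    double G[2][2] = {{-u0*w0, -u0*w1}, {-u1*w0, -u1*w1}};

    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j) {
            double Ap[2][2] = {{A[0][0],A[0][1]},{A[1][0],A[1][1]}};
            double Am[2][2] = {{A[0][0],A[0][1]},{A[1][0],A[1][1]}};
            Ap[i][j] += h; Am[i][j] -= h;                 /* perturb one entry */
            double fd = (phi(Ap,a,b) - phi(Am,a,b)) / (2*h);
            printf("G[%d][%d] analytic % .6f  numeric % .6f\n", i, j, G[i][j], fd);
        }
    return 0;
}
```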
| |
Biggest Islands In Cape Verde
Santiago Island is Cape Verde's largest and most populous, and its economic, cultural, and governmental center.
An archipelago in the Atlantic Ocean, Cape Verde has ten islands and three islets. The islands are located off the West African coast. The islands have diverse geography, from uninterrupted mountain ranges, plateaus, valleys, and stretches of beaches. These resources have contributed to a boom in tourism on the island, and the subsequent growth of their individual economies. Some of the big islands in the nation include:
Santiago
Santiago is the largest of the islands in Cape Verde, stretching over an area of 991 km2. The island is home to the capital of Cape Verde, Praia. More than half of all Cabo Verdeans reside on this island. The island has a diverse geography, including majestic mountains, beaches, and plateaus. The vegetation ranges from acacia, figs, and euphorbias to more barren regions. The island has three national parks to protect its exotic wildlife and unique flora, the largest of the parks being the Serra Malagueta National Park. The island was a center for the slave trade and has monuments and architecture that attest to its former role in the trade. Santiago is the administrative, cultural, and economic center of the archipelago. Neighborhoods on the island range from suburbs to slums.
Santo Antao
Santo Antao has an area of 779 km2 and is the second largest island in Cape Verde. The island boasts almost 50,000 Cabo Verdeans. The island has an agricultural sector, with the production of coffee, oranges, papaya, maize, and pineapples. The terrain of the island varies from barren to rich vegetation, complete with forests of pine, acacia, dragon trees, and fir. Ribeira Grande hosts the capital of the island, which is lively and bustling with culture. The island is also home to giant mountain ranges, which are the center of tourism on the island. The climate on the island is cool in the central region, dry in the southern region, and humid in the northern part of the island.
Boa Vista
Boa Vista is the third largest island in Cape Verde, with an area of 620 km2. It is the easternmost of all islands in the country. The name translates to beautiful view, and its diverse geography makes the island live up to its name. The terrain of the island is mainly desert with stretches of sandy beaches. The administrative capital of the island is in Sal Rei. The island has abundant species of sea turtles, which have been under threat in the recent years. The animals have been subjected to slaughter, despite a ban criminalizing the act. The government has partnered with environmental organizations to educate local communities in a bid to protect the animals from extinction. The island is the least densely populated of all other islands. Tourism has been growing, and it is now a major industry in the region.
Fogo
Fogo Island lies on an area of 476 km2 and boasts of about 40,000 Cabo Verdeans. The highest point on the island is the volcanic mountain of Pico de Fogo, at 2,829 meters above sea level. The island is volcanically active, and the last volcano erupted in 2014. The geography of the island is dominated by mountains and lush vegetation. Agriculture thrives in the island, with the predominant produce being coffee. Other economic activities on the island include fishing and wine production. The island has also experienced a surge in tourism.
Other islands in Cape Verde, and their sizes include Sao Nicolau (388 km2), Maio (269 km2), Sao Vicente (227 km2), Sal (216 km2), Brava (67 km2), and Santa Luzia (35 km2). Cape Verde has been rising in ranks to become a tourist destination in the recent years. Tourism activities have led to increased economic growth in the region. The task for the government now lies in protecting the island’s fragile ecosystems in the face of increasing human activities.
What is the Biggest Island in Cape Verde?
Santiago Island is Cape Verde's largest and most populous island.
The Biggest Islands In Cape Verde
|Rank||Biggest Islands in Cape Verde||Area|
|1||Santiago||991 square kilometers|
|2||Santo Antao||779 square kilometers|
|3||Boa Vista||620 square kilometers|
|4||Fogo||476 square kilometers|
|5||Sao Nicolau||388 square kilometers|
|6||Maio||269 square kilometers|
|7||Sao Vicente||227 square kilometers|
|8||Sal||216 square kilometers|
|9||Brava||67 square kilometers|
|10||Santa Luzia||35 square kilometers|
Mission Statement:
The Department of Energy is committed to systematically control and reduce the cost of utilities; project future utility costs and select better rates within electric service providers (ESP) in deregulated markets; and plan for replacement of inefficient equipment including building automation control systems.
Department Goals:
The Department of Energy is dedicated to the efficiency of energy use in the Pharr-San Juan-Alamo Independent School District. This is primarily accomplished by using our automated control system. By monitoring the automated control system, labor requirements are reduced and prompt responses to problems are facilitated. Building utility is tracked and discrepancies are investigated to maximize the reduction in energy consumption, utility, and fuel costs without sacrificing human comfort and safety.
The responsibilities of the Department of Energy are to:
- Work toward achieving a cost avoidance on energy usage
- Focus on cost-effective operations and on timely support to schools
- Effectively identify inefficient equipment and acquire with high-energy- efficient ratio (EER) replacements
- Effectively control utility waste
- Plan and forecast future consumption of utilities
- Keep abreast of all innovations that can reduce energy usage in schools
- Work with building principals and committees to determine ways to reduce energy consumption
- Make changes to school cooling plants that will reduce the use of energy
- Keep track of District energy consumption
- Communicate total cost avoidance to staff and the public
Important Information
-
Q: WHY DO WE HAVE TO UNPLUG OUR EQUIPMENT?
A: The answer is Vampire Power.
Q: What is Vampire Power?
A: Vampire Power is the energy consumed by an item when it is off but still plugged into the wall receptacle. Every day we take for granted the energy consumed by items we leave plugged into the wall even if we don’t use them. Day after day we hear about the green movement or energy conservation but pay no attention to it. Some of us may be geared to do our best to conserve, but with life’s busy schedule sometimes we feel we just don’t have the time.
Here are some examples of how much Vampire Power our everyday household or workplace items consume when off but still plugged in:
|Product||Average Watts|
|Mobile phone charger||1.0|
|17” LCD Flat Monitor||1.5|
|17” CRT Monitor (Big TV type)||2.0|
|Laptop||9.0|
|Desktop Tower||3.0|
|Inkjet Printer||1.3|
|Laser Printer||1.6|
|Computer Speakers||1.8|
|Portable Stereo||1.7|
|Television CRT type||3.0|
|Television Big Screen||7.0|
|DVD/VCR||5.0|
To understand what exactly we are talking about when it comes to money, let’s do a small calculation using some of the items above. Let’s say we have one computer system in our classroom, or at home, and you just turned it off for half a day but it is still plugged in. What is it consuming?
First, we have to calculate cost for each item then add them together.
Desktop: 3.0 watts, Flat Monitor: 1.5 watts, Inkjet Printer: 1.3 watts, and Speakers: 1.8 watts
3.0 x 12 hours/day x 30 days/month = 1080 watts/1000 = 1.08 kwh x $0.10 = $0.108/month
1.5 x 12 hours/day x 30 days/month = 540 watts/1000 = 0.54 kwh x $0.10 = $0.054/month
1.3 x 12 hours/day x 30 days/month = 468 watts/1000 = 0.468 kwh x $0.10 = $0.0468/month
1.8 x 12 hours/day x 30 days/month = 648 watts/1000 = 0.648 kwh x $0.10 = $0.0648/month
So our total cost for our computer system is only $0.2736 a month, which won’t break the bank, but if we calculate the cost for our entire district we are looking at $3,556.80/month for approximately 13,000 systems. That comes to approximately $42,681.60 a year just from not unplugging these systems. Imagine the cost for the same system actually left on all day. When we see it in this perspective we should understand why we should unplug our items during times when we won’t be using them. I hope this sheds some light on what Vampire Power can consume.
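The same arithmetic, wrapped in a short C program so other wattages, rates, or system counts can be substituted:

```c
#include <stdio.h>

int main(void) {
    /* standby draw: tower, flat monitor, inkjet printer, speakers (watts) */
    double watts[] = {3.0, 1.5, 1.3, 1.8};
    double rate = 0.10;                 /* $ per kWh */
    double hours = 12.0 * 30.0;         /* 12 h/day for 30 days */
    double monthly = 0.0;
    for (int i = 0; i < 4; ++i)
        monthly += watts[i] * hours / 1000.0 * rate;   /* Wh -> kWh -> $ */
    printf("per system:  $%.4f/month\n", monthly);                  /* $0.2736  */
    printf("district:    $%.2f/month (13,000 systems)\n", monthly * 13000);
    printf("district:    $%.2f/year\n", monthly * 13000 * 12);      /* $42,681.60 */
    return 0;
}
```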
Thank you,
Jaime Barboza
Energy Manager
AC Request
-
AC Requests are to be submitted online through mPulse
Procedures and or recommendations for HVAC requests. Please consider the following when submitting an AC Request:
- Is the request reasonable?
- Does it follow District Conservation Efforts? If it doesn’t, will it be accepted or denied?
- Is my HVAC request submitted within the required time frame (24 hours) before a scheduled event? Phone requests are not permitted.
- The request should include the following:
Name of building
Location eg. Address or Rm. #
Title of Event
Date
Time
Number of participants
Signature of Approval by Principal, Director or Designee as per mpulse access list.
Note:
Not all employees have permission to request AC through mpulse.
Please contact your campus secretary or Department of Energy for additional information if needed. | https://www.psjaisd.us/domain/2474 |
Reverse a string in C using pointers with code example and explanation.
Input: I am coding string reverse!
Output: !esrever gnirts gnidoc ma I
Logic to Reverse a string in C:
Store the string into a buffer. Take two pointers, e.g. a start pointer and an end pointer.
Set the start pointer to the first location of the buffer and the end pointer to the last location of the buffer.
Swap the values pointed to by the two pointers. Move the start pointer one step forward by incrementing it and move the end pointer one step back by decrementing it.
Keep doing the same until the two pointers meet at the middle of the string in the buffer.
C Code to reverse string:
The function void reverse_string(char* str) takes the input string through a pointer and reverses the string in place in the buffer itself.
Read the comments given for each required code statement below.
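One straightforward implementation of the described function, with comments on each step:

```c
#include <stdio.h>
#include <string.h>

/* Reverses the string in place using start/end pointers. */
void reverse_string(char* str) {
    char *start = str;                  /* first character of the buffer  */
    char *end = str + strlen(str) - 1;  /* last character before the '\0' */
    while (start < end) {
        char tmp = *start;              /* swap the two characters ...    */
        *start = *end;
        *end = tmp;
        ++start;                        /* ... then move both pointers    */
        --end;                          /* toward the middle              */
    }
}

int main(void) {
    char buf[] = "I am coding string reverse!";
    reverse_string(buf);
    printf("%s\n", buf);                /* !esrever gnirts gnidoc ma I */
    return 0;
}
```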
The depth of a window arch is the net thickness of the wall. For example, a 2x4 is actually 3½”, and a 2x6 is actually 5½”. Any depth over 5½” will be made as a 2 piece split (reference “Product Specifications” below for more information). Lastly, a window arch depth is limited to 24”. Anything greater than 24” is considered a barrel vault. Enter your depth measurement in inches to the nearest 1/8” (.125). For example, if your window arch is 3½”, you would enter 3.5000.
The rough opening width of a window arch is the span of the opening in-the-clear. In other words, from stud to stud. Enter your Rough Opening width measurement in inches to the nearest 1/8” (.125). For example, if your rough opening width is 3’-0”, you would enter 36.000.
The window width is the interior horizontal span of the window. Most windows are usually made a ½” under the rough opening to ensure an easy fit during installation. Enter your window width measurement in inches to the nearest 1/8” (.125). For example, if your window width is 2’-11 1/2”, you would enter 35.500.
The window height is the interior vertical span of the window. Most windows are usually made a ½” under the rough opening to ensure an easy fit during installation. Enter your window height measurement in inches to the nearest 1/8” (.125). For example, if your window height is 5’-11 1/2”, you would enter 71.500.
The window leg is the vertical span from the interior bottom of the window to the spring point of the arch top. Enter your window leg measurement in inches to the nearest 1/8” (.125). For example, if your window leg is 5’-6 3/4” wide, you would enter 66.750. Please note that most half-circle windows will not have a leg. In this instance, you would enter 0.000 for the window leg.
The total height of your window arch is the total amount of height you’ll need in-the-clear, which includes the thickness of the arch. Mathematically, this is the window height – window leg + ¼” thickness of the window arch cut. For example, if your window height is 5’-11 1/2” and your window leg is 5’-6 3/4”, this gives 4 ¾” + ¼” thickness, or 5” total height.
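The same arithmetic as a tiny C snippet, using the example numbers above:

```c
#include <stdio.h>

int main(void) {
    /* total in-the-clear height = window height - window leg + 1/4" arch thickness */
    double height = 71.5, leg = 66.75, thickness = 0.25;   /* inches */
    printf("total height: %.2f in\n", height - leg + thickness);  /* 5.00 in */
    return 0;
}
```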
The location of a window arch is not an address or zip code. Location is the specific place in the project your window arch will be installed (for example: Dining Rm, Study, #1 Window Arch). Location is a helpful, optional field: we will label each window arch kit with the location you provide, which can be especially helpful if there are multiple window arches in one project.
Window archways solve the problem of poor, crooked and out-of-round reveals for vinyl and aluminum windows. Our window arch kits also provide a weather tight seal and easy insulation.
Measure the window width to the nearest 1/8”.
Measure the window height to the nearest 1/8”.
Measure the window leg to the nearest 1/8”.
Enter the location (place) where the window arch will be installed in your project (for example: Dining Rm, Study, #1 Window Arch).
Fasten each window arch kit half to the marks on the header and stud.
Tailored installation instructions will be supplied with your window archways for easy install.
Drywalling a window arch is comprised of measuring the curvature of the arch, cutting the drywall to the curvature and depth, forming and fastening in place.
For tighter radius window arches, consider using a double layer of either ¼” or ¼” flex drywall.
You can finish your window arch with drywall, plaster or a wood veneer. Whether you want your window archway to have a simple or an ornate finish, the choice is yours.
The rise of your window archway is the window height less the window leg. If your window arch doesn’t have a leg, then your window arch rise would be the window height.
How Long Does it Take to Get My Window Arch?
Window arch kits normally take 3 – 5 business days to manufacture (in most cases sooner). Then add a few days for shipping.
How Do You Ship My Window Archways?
Depending upon the size of your order, most window arch kits ship in boxes via UPS Ground. If you order several arch kits or if you also order a ceiling or wall design kit, it could ship on a pallet via LTL common carrier. If you’d like to know how it will ship before placing your order, simply call us and we’ll let you know. | https://www.archwaysandceilings.com/product/window-arch/ |
The propellant, a neutral gas such as argon or xenon, is injected into a hollow cylinder surfaced with electromagnets. On entering the engine, the gas is first heated to a “cold plasma” by a helicon RF antenna/coupler that bombards the gas with electromagnetic energy, stripping electrons off the propellant atoms and producing a plasma of ions and free electrons. By varying the amount of RF heating energy and plasma, VASIMR is claimed to be capable of generating either low-thrust, high–specific impulse exhaust or relatively high-thrust, low–specific impulse exhaust. The second phase of the engine is a strong solenoid-configuration electromagnet that channels the ionized plasma, acting as a convergent-divergent nozzle like the physical nozzle in conventional rocket engines.
A second coupler, known as the Ion Cyclotron Heating (ICH) section, emits electromagnetic waves in resonance with the orbits of ions and electrons as they travel through the engine. Resonance is achieved through a reduction of the magnetic field in this portion of the engine that slows the orbital motion of the plasma particles. This section further heats the plasma to greater than 1,000,000 K (1,000,000 °C; 1,800,000 °F), about 173 times the temperature of the Sun's surface.
The path of ions and electrons through the engine approximates lines parallel to the engine walls; however, the particles actually orbit those lines while traveling linearly through the engine. The final, diverging, section of the engine contains an expanding magnetic field that ejects the ions and electrons from the engine at velocities as great as 50,000 m/s (180,000 km/h).
In contrast to the typical cyclotron resonance heating processes, VASIMR ions are immediately ejected from the magnetic nozzle before they achieve thermalized distribution. Based on novel theoretical work in 2004 by Alexey V. Arefiev and Boris N. Breizman of University of Texas at Austin, virtually all of the energy in the ion cyclotron wave is uniformly transferred to ionized plasma in a single-pass cyclotron absorption process. This allows for ions to leave the magnetic nozzle with a very narrow energy distribution, and for significantly simplified and compact magnet arrangement in the engine.
VASIMR does not use electrodes; instead, it magnetically shields plasma from most hardware parts, thus eliminating electrode erosion, a major source of wear in ion engines. Compared to traditional rocket engines with very complex plumbing, high performance valves, actuators and turbopumps, VASIMR has almost no moving parts (apart from minor ones, like gas valves), maximizing long term durability.
According to Ad Astra as of 2015, the VX-200 engine requires 200 kW of electrical power to produce 5 N of thrust, or 40 kW/N. In contrast, the conventional NEXT ion thruster produces 0.327 N with only 7.7 kW, or about 24 kW/N. Electrically speaking, NEXT is almost twice as efficient, and it successfully completed a 48,000-hour (5.5-year) test in December 2009.
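A back-of-envelope check of these specific-power figures in C; the ideal-jet-power line uses P = F·v_e/(2η) with the ~50 km/s exhaust velocity quoted later in this article and an assumed overall efficiency of 0.6 (taken from the projection range mentioned below), so it is an estimate, not a published specification.

```c
#include <stdio.h>

int main(void) {
    /* specific power, kW per newton of thrust */
    printf("VX-200: %.0f kW/N\n", 200.0 / 5.0);     /* 40 kW/N   */
    printf("NEXT:   %.1f kW/N\n", 7.7 / 0.327);     /* ~23.5 kW/N */

    /* ideal electrical power for thrust F at exhaust velocity ve, efficiency eta */
    double F = 5.0, ve = 50000.0, eta = 0.6;        /* eta is an assumed value */
    printf("ideal power for 5 N at 50 km/s, eta=0.6: %.0f kW\n",
           F * ve / (2.0 * eta) / 1000.0);          /* ~208 kW, consistent with 200 kW */
    return 0;
}
```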
New problems also emerge with VASIMR, such as interaction with strong magnetic fields and thermal management. The inefficiency with which VASIMR operates generates substantial waste heat that needs to be channeled away without creating thermal overload and thermal stress. The superconducting electromagnets necessary to contain hot plasma generate tesla-range magnetic fields that can cause problems with other onboard devices and produce unwanted torque by interaction with the magnetosphere. To counter this latter effect, two thruster units can be packaged with magnetic fields oriented in opposite directions, making a net zero-torque magnetic quadrupole.
The required power generation technology for fast interplanetary travel does not currently exist and is not feasible with current state-of-the-art technology.
The first VASIMR experiment was conducted at the Massachusetts Institute of Technology in 1983. Important refinements were introduced in the 1990s, including the use of the helicon plasma source, which replaced the originally envisioned plasma gun and its electrodes, improving durability and lifetime.
As of 2010, Ad Astra Rocket Company (AARC) was responsible for VASIMR development, having signed the first Space Act Agreement on 23 June 2005 to privatize VASIMR technology. Franklin Chang Díaz is Ad Astra's chairman and CEO, and the company has a testing facility in Liberia, Costa Rica, on the campus of Earth University.
In 1998, the first helicon plasma experiment was performed at the Advanced Space Propulsion Laboratory (ASPL). VASIMR experiment (VX) 10 in 1998 achieved a helicon RF plasma discharge of up to 10 kW, and VX-25 in 2002 of up to 25 kW. By 2005, progress at ASPL included full and efficient plasma production and acceleration of the plasma ions with the 50 kW, 0.5 newtons (0.1 lbf) thrust VX-50. Published data on the 50 kW VX-50 showed the electrical efficiency to be 59%, based on a 90% coupling efficiency and a 65% ion speed boosting efficiency.
The 100 kW VASIMR experiment (VX-100) was successfully running by 2007 and demonstrated efficient plasma production with an ionization cost below 100 eV. VX-100 plasma output tripled the prior record of the VX-50.
The VX-100 was expected to have an ion speed boosting efficiency of 80%, but could not achieve this efficiency due to losses from the conversion of DC electric current to radio frequency power and the auxiliary equipment for the superconducting magnet. In contrast, 2009 state-of-the-art, proven ion engine designs such as NASA's High Power Electric Propulsion (HiPEP) operated at 80% total thruster/PPU energy efficiency.
On 24 October 2008, the company announced in a press release that the helicon plasma generation component of the 200 kW VX-200 engine had reached operational status. The key enabling technology, solid-state DC-RF power-processing, reached 98% efficiency. The helicon discharge used 30 kW of radio waves to turn argon gas into plasma. The remaining 170 kW of power was allocated for acceleration of plasma in the second part of the engine, via ion cyclotron resonance heating.
Based on data from VX-100 testing, it was expected that, if room temperature superconductors are ever discovered, the VX-200 engine would have a system efficiency of 60–65% and a potential thrust level of 5 N. Optimal specific impulse appeared to be around 5,000 s using low cost argon propellant. One of the remaining untested issues was whether the hot plasma actually detached from the rocket. Another issue was waste heat management. About 60% of input energy became useful kinetic energy. Much of the remaining 40% is secondary ionizations from plasma crossing magnetic field lines and exhaust divergence. A significant portion of that 40% was waste heat (see energy conversion efficiency). Managing and rejecting that waste heat is critical.
Between April and September 2009, 200 kW tests were performed on the VX-200 prototype with 2 tesla superconducting magnets that are powered separately and not accounted for in any "efficiency" calculations. During November 2010, long-duration, full-power firing tests were performed, reaching steady state operation for 25 seconds and validating basic design characteristics.
Results presented in January 2011 confirmed that the design point for optimal efficiency on the VX-200 is 50 km/s exhaust velocity, or an Isp of 5000 s. The 200 kW VX-200 had executed more than 10,000 engine firings with argon propellant at full power by 2013, demonstrating greater than 70% thruster efficiency relative to RF power input.
In March 2015, Ad Astra announced a $10 million award from NASA to advance the technology readiness of the next version of the VASIMR engine, the VX-200SS to meet the needs of deep space missions. The SS in the name stands for "steady state", as a goal of the long duration test is to demonstrate continuous operation at thermal steady state.
In August 2016, Ad Astra announced completion of the milestones for the first year of its 3-year contract with NASA. This allowed for first high-power plasma firings of the engines, with a stated goal to reach 100 hr and 100 kW by mid-2018. In August 2017, the company reported completing its Year 2 milestones for the VASIMR electric plasma rocket engine. NASA gave approval for Ad Astra to proceed with Year 3 after reviewing completion of a 10-hour cumulative test of the VX-200SS engine at 100 kW. It appears as though the planned 200 kW design is being run at 100 kW for reasons that are not mentioned in the press release.
In August 2019, Ad Astra announced the successful completion of tests of a new-generation radio-frequency (RF) Power Processing Unit (PPU) for the VASIMR engine, built by Aethera Technologies Ltd. of Canada. Ad Astra declared a power of 120 kW and >97% electrical-to-RF power efficiency, and that, at 52 kg, the new RF PPU is about 10x lighter than the PPUs of competing electric thrusters (power-to-weight ratio: 2.31 kW/kg).
VASIMR has a comparatively poor thrust-to-weight ratio, and requires an ambient vacuum.
Proposed applications for VASIMR such as the rapid transportation of people to Mars would require a very high power, low mass energy source, ten times more efficient than a nuclear reactor (see nuclear electric rocket). In 2010, NASA Administrator Charles Bolden said that VASIMR technology could be the breakthrough technology that would reduce the travel time on a Mars mission from 2.5 years to 5 months. However, this claim has not been repeated in the last decade.
In August 2008, Tim Glover, Ad Astra director of development, publicly stated that the first expected application of the VASIMR engine is "hauling things [non-human cargo] from low-Earth orbit to low-lunar orbit," supporting NASA's return-to-the-Moon efforts.
In order to conduct an imagined crewed trip to Mars in 39 days, the VASIMR would require an electrical power level far beyond anything currently possible or predicted.
On top of that, any power generation technology will produce waste heat. The necessary 200 megawatt reactor "with a power-to-mass density of 1,000 watts per kilogram" (Díaz quote) would require extremely efficient radiators to avoid the need for "football-field sized radiators" (Zubrin quote).
The VX-200 will provide the critical data set to build the VF-200-1, the first flight unit, to be tested in space aboard the International Space Station (ISS). The electrical energy will come from ISS at low power level, be stored in batteries and used to fire the engine at 200 kW. | https://www.knowpia.com/knowpedia/Variable_Specific_Impulse_Magnetoplasma_Rocket |
---
abstract: |
Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables.
KEYWORDS: canonical systems, contextuality, dichotomization, direct influences, measurements.
author:
- 'Ehtibar N. Dzhafarov,^1^ Víctor H. Cervantes,^2^ and Janne V. Kujala^3^'
title: Contextuality in Canonical Systems of Random Variables
---
^1^Purdue University, USA, [email protected]\
^2^Purdue University, USA, [email protected]\
^3^University of Jyväskylä, Finland, [email protected]
\[sec: Introduction\]Introduction
=================================
We begin by recapitulating the basics of our theory of “quantum-like” contextuality, and then explain how this theory is developed in this paper. The name of the theory is *Contextuality-by-Default* (CbD), and its recent accounts can be found in Refs. [@DzhafarovKujala(2016)Fortschritte; @DzhafarovKujala(2017)LNCS; @DzhafarovKujala(2016)Context-Content].
We use the following two notation conventions throughout the paper: (1) due to its frequent occurrence we abbreviate the term *random variable* as *rv* (*rvs* in plural); and (2) we unconventionally capitalize the words *conteNt* and *conteXt* to prevent their confusion in reading.
The matrix below represents the smallest possible version of what we call a *cyclic system* [@DzhafarovKujalaLarsson(2015); @KujalaDzhafarov(2016)Proof; @KujalaDzhafarovLar(2015); @KujalaDzhafarov(2015)]:
------------- ------------- -------------------------
$R_{1}^{1}$ $R_{2}^{1}$ $c=1$
$R_{1}^{2}$ $R_{2}^{2}$ $c=2$
$\boxed{{\mathcal{R}}}$
------------- ------------- -------------------------
.
Each of the rvs $R_{q}^{c}$ represents measurements of one of two properties, $q=1$ or $q=2$, under one of two conditions, $c=1$ or $c=2$. The “properties” $q$ can also be called “objects,” “inputs,” “stimuli,” etc., depending on the application, and we refer to $q$ generically as the *conteNt* of the measurement $R_{q}^{c}$. The superscript $c$ in $R_{q}^{c}$ describes how and under what circumstances $q$ is measured, including what other conteNts are measured together with $q$. We refer to $c$ generically (and traditionally) as the *conteXt* of the measurement $R_{q}^{c}$. The conteNt-conteXt pair $\left(q,c\right)$ provides a *unique identification* of $R_{q}^{c}$ within the system of measurements ${\mathcal{R}}$. In addition, being an rv, $R_{q}^{c}$ is characterized by its *distribution*. In this paper, consideration is confined to *categorical rvs*, those with a finite number of values. The term “measurement” is understood very broadly, to include any response to any input or stimulus.
Let us begin with the simplest case of the system ${\mathcal{R}}$, when all four rvs $R_{q}^{c}$ are *binary*. In quantum physics, $R_{q}^{c}$ may describe a measurement of spin along one of two fixed axes, $q=1$ or $q=2$, in a spin-$\nicefrac{1}{2}$ particle. In psychology, $R_{q}^{c}$ may describe a response to one of two Yes-No questions, $q=1$ or $q=2$. In both applications, in conteXt $c=1$ one measures first $q=1$ and then $q=2$; in conteXt $c=2$ the measurements are made in the opposite order. The rvs sharing a conteXt $c$ are recorded in pairs, $\left(R_{1}^{c},R_{2}^{c}\right)$, which means that they are *jointly distributed* and can be viewed as a single (here, four-valued) rv. No such joint distribution is defined for rvs in different conteXts, such as $R_{2}^{1}$ and $R_{1}^{2}$. They are *stochastically unrelated* (to each other): one cannot ask about the probability of an “event” $\left[R_{2}^{1}=x,R_{1}^{2}=y\right]$, as no such “event” is defined. In particular, two conteNt-sharing rvs, $R_{q}^{1}$ and $R_{q}^{2}$, are always stochastically unrelated, hence they can never be considered one and the same rv, even if they are identically distributed (see Ref. [@DzhafarovKujala(2016)Fortschritte] for a detailed probabilistic analysis).
In both applications mentioned, the distributions of $R_{q}^{1}$ and $R_{q}^{2}$ are de facto different. In the quantum-mechanical example, the first spin measurement generally changes the state of the particle [@Bacciagaluppi(2015)]. Assuming identical preparations in both conteXts $c$, therefore, the state of the particle when a $q$-spin is measured first will be different from that when it is measured second. In the behavioral example, one’s response to a question asked second will generally be influenced by the question asked first [@WangBusemeyer2013; @Moore]. This creates obvious *conteXt-dependence* of the measurements, but this is not what we call *contextuality* in our theory. The original meaning of the term in quantum mechanics, when translated into the language of probability theory (as in Refs. [@DzhafarovKujala(2016)Context-Content; @DzhafarovKujala(2016)Fortschritte; @DzhafarovKujalaCervantes(2016)LNCS] and, with caveats, [@Cabello(2013); @KujalaDzhafarovLar(2015); @KurzynskiRamanathanKaszlikowski(2012); @Khr2009; @Khr2005; @Fine1982; @SuppesZanotti1981]), is that measurements of one and the same physical property $q$ have to be represented by different rvs depending on what other properties are being measured together with $q$ even when the laws of physics exclude all *direct interactions* (energy/information transfer) between the measurements. By extension, when such direct interactions are present, as they are in our two applications of the system ${\mathcal{R}}$, we speak of contextuality only if the dependence of $R_{q}^{c}$ on $c$ is greater, in a well-defined sense, than just the changes in its distribution. Contextuality is a non-causal aspect of conteXt-dependence, revealed in the probabilistic relations between different measurements rather than in their individual distributions.
This is how this understanding is implemented in CbD. We characterize the conteXt-induced changes in the individual distributions, i.e., the difference between those of $R_{q}^{1}$ and $R_{q}^{2}$, by *maximally coupling* them. This means that we replace $R_{q}^{1}$ and $R_{q}^{2}$ with jointly distributed $T_{q}^{1}$ and $T_{q}^{2}$ that have the same respective individual distributions, and among all such couplings we find one with the maximal value of $\Pr\left[T_{q}^{1}=T_{q}^{2}\right]$. This maximal coupling $\left(T_{q}^{1},T_{q}^{2}\right)$ always exists and is unique. The next step is to see if there exists an *overall coupling* $S$ of ${\mathcal{R}}$, a jointly distributed quadruple with elements corresponding to those of ${\mathcal{R}}$,
------------- ------------- ------------- --
$S_{1}^{1}$ $S_{2}^{1}$ $c=1$
$S_{1}^{2}$ $S_{2}^{2}$ $c=2$
$\boxed{S}$
------------- ------------- ------------- --
,
such that its rows $\left(S_{1}^{c},S_{2}^{c}\right)$ are distributed as the rows of ${\mathcal{R}}$ and its columns $\left(S_{q}^{1},S_{q}^{2}\right)$ are distributed as the maximal couplings $\left(T_{q}^{1},T_{q}^{2}\right)$ of the columns of ${\mathcal{R}}$. If such a *maximally-connected* coupling $S$ does not exist, one can say that the within-conteXt (row-wise) relations prevent different measurements of the same conteNt (column-wise) from being as close to each other as is allowed by the direct influences alone. Put differently, the relations of $R_{q}^{1}$ and $R_{q}^{2}$ with their same-conteXt counterparts force them, if a joint distribution is imposed on them, to coincide less frequently than if these relations are ignored. The system then is deemed *contextual*. Conversely, if the coupling $S$ above exists, the within-conteXt relations do not make the measurements of $R_{q}^{1}$ and $R_{q}^{2}$ any more dissimilar than required by the direct influences: the system is *noncontextual*.
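For binary rvs the maximal coupling in question has a simple closed form. The following minimal sketch (in Python; the function name and return format are our own, not part of CbD) computes the joint masses of the maximal coupling from the two marginal probabilities:

```python
def maximal_coupling_binary(p, q):
    """Joint masses Pr[T^1 = a, T^2 = b] of the unique maximal coupling
    of two binary rvs with Pr[R^1 = 1] = p and Pr[R^2 = 1] = q."""
    return {
        (1, 1): min(p, q),           # agreement on 1 is made as large as possible
        (0, 0): min(1 - p, 1 - q),   # agreement on 0 is made as large as possible
        (1, 0): max(p - q, 0.0),     # leftover mass goes off the diagonal
        (0, 1): max(q - p, 0.0),
    }

masses = maximal_coupling_binary(0.7, 0.4)
print(masses[(1, 1)] + masses[(0, 0)])  # Pr[T^1 = T^2] = 1 - |p - q| = 0.7
```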
The (non)existence of $S$ is determined by a simple linear programming procedure [@DzhafarovKujalaLarsson(2015); @DzhafarovKujala(2016)Context-Content]: in our example, $S$ has $2^{4}$ possible values, and we find out if they can be assigned nonnegative numbers (probability masses) that sum to the given row-wise probabilities $\Pr\left[R_{1}^{c}=x,R_{2}^{c}=y\right]$ and the computed column-wise probabilities $\Pr\left[T_{q}^{1}=x,T_{q}^{2}=y\right]$. There is also a simple criterion (inequality) for the existence of a solution for this system of equations [@DzhafarovKujalaLarsson(2015); @KujalaDzhafarov(2016)Proof; @KujalaDzhafarovLar(2015)]. Using it one can show, e.g., that in our quantum-mechanical application the system ${\mathcal{R}}$ is always noncontextual, and so it is in the behavioral application if one adopts the model proposed in Ref. [@WangBusemeyer2013] (see Ref. [@DzhafarovZhangKujala(2015)Isthere] for details). Mathematically, however, the system ${\mathcal{R}}$ can be contextual, and if it is, CbD provides a simple way of computing the *degree* of its contextuality [@DzhafarovKujala(2016)Context-Content]: one replaces the probability masses in the above linear programming task with *quasi-probabilities*, allowed to be negative, and finds among the solutions the minimum sum of their absolute values (see Section \[subsec: Degree-of-contextuality\]).
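Concretely, the feasibility test for the system ${\mathcal{R}}$ with binary rvs can be sketched as follows (a minimal Python sketch using scipy; the bunch encoding and function names are our own):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def maximal_coupling_binary(p, q):
    return {(1, 1): min(p, q), (0, 0): min(1 - p, 1 - q),
            (1, 0): max(p - q, 0.0), (0, 1): max(q - p, 0.0)}

def is_noncontextual(bunch1, bunch2):
    # bunch_c[x][y] = Pr[R_1^c = x, R_2^c = y], with x, y in {0, 1}
    states = list(itertools.product([0, 1], repeat=4))  # (s11, s21, s12, s22)
    p11, p21 = bunch1[1][0] + bunch1[1][1], bunch1[0][1] + bunch1[1][1]
    p12, p22 = bunch2[1][0] + bunch2[1][1], bunch2[0][1] + bunch2[1][1]
    conn1 = maximal_coupling_binary(p11, p12)  # column (conteNt) q = 1
    conn2 = maximal_coupling_binary(p21, p22)  # column (conteNt) q = 2
    A, b = [], []
    for x, y in itertools.product([0, 1], repeat=2):
        A.append([float(s[0] == x and s[1] == y) for s in states]); b.append(bunch1[x][y])
        A.append([float(s[2] == x and s[3] == y) for s in states]); b.append(bunch2[x][y])
        A.append([float(s[0] == x and s[2] == y) for s in states]); b.append(conn1[(x, y)])
        A.append([float(s[1] == x and s[3] == y) for s in states]); b.append(conn2[(x, y)])
    res = linprog(np.zeros(len(states)), A_eq=np.array(A), b_eq=np.array(b),
                  bounds=[(0, None)] * len(states), method="highs")
    return res.status == 0  # feasible <=> noncontextual w.r.t. maximal couplings

same = [[0.5, 0.0], [0.0, 0.5]]  # perfect correlation in conteXt c = 1
anti = [[0.0, 0.5], [0.5, 0.0]]  # perfect anticorrelation in conteXt c = 2
print(is_noncontextual(same, same), is_noncontextual(same, anti))  # True False
```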
Although most of these principles and procedures of CbD have been formulated for arbitrary systems of measurements [@DzhafarovKujalaCervantes(2016)LNCS; @DzhafarovKujala(2016)Context-Content], they only work without complications with systems that satisfy the following two constraints: (A) they contain only binary rvs, and (B) there are no more than two rvs sharing a conteNt (i.e., occupying the same column). What we propose in this paper is to always present a system of measurements in a *canonical form*, which is in essence one with the properties A and B. The cyclic systems form a subclass of canonical systems, rich enough to cover most experimental paradigms of traditional interest in quantum-mechanical and behavioral contextuality studies [@DzhafarovKujala(2016)Context-Content; @DzhafarovKujalaCervantes(2016)LNCS; @DzhafarovKujalaCervantesZhangJones(2016); @DzhafarovKujalaLarsson(2015); @DzhafarovZhangKujala(2015)Isthere; @KujalaDzhafarovLar(2015)], but falling far short of satisfactory generality.
What are the complications one faces if a system does not satisfy the properties A and B? Consider the system below, with all its rvs binary but with three rather than two of them in each column:
------------- ------------- --------------------------
$R_{1}^{1}$   $R_{2}^{1}$   $c=1$
$R_{1}^{2}$   $R_{2}^{2}$   $c=2$
$R_{1}^{3}$   $R_{2}^{3}$   $c=3$
$\boxed{{\mathcal{R}}'}$
------------- ------------- --------------------------
How does CbD apply here? In the earlier version of the theory (summarized in Refs. [@DzhafarovKujala(2016)Context-Content; @DzhafarovKujalaCervantes(2016)LNCS]) we computed the couplings $\left(T_{q}^{1},T_{q}^{2},T_{q}^{3}\right)$ of each column that maximize $\Pr\left[T_{q}^{1}=T_{q}^{2}=T_{q}^{3}\right]$. One problem with this approach is that the maximal coupling $\left(T_{q}^{1},T_{q}^{2},T_{q}^{3}\right)$, while it always exists, is not defined uniquely. What should be the contextuality analysis of ${\mathcal{R}}'$ if the within-conteXt (row-wise) distributions are compatible with some but not all combinations of the maximal couplings for the two columns? Should one then speak of partial (non)contextuality? Originally we proposed to consider a system noncontextual if it is compatible with at least one of these pairs of maximal couplings, but in addition to being arbitrary, this leads to another complication: it may then very well happen that the system ${\mathcal{R}}'$ is noncontextual but one of its subsystems, e.g. ${\mathcal{R}}$, is contextual. This is contrary to one’s intuition of noncontextuality.
In the most recent publications therefore [@DzhafarovKujala(2017)LNCS; @DzhafarovKujala(2016)Fortschritte] we modified our approach into “CbD 2.0,” by positing that a coupling for conteNt-sharing measurements should be computed so that it maximizes the probability of coincidence for every *pair* (equivalently, every subset) of them. In our case, this means maximization of $\Pr\left[T_{q}^{1}=T_{q}^{2}\right]$, $\Pr\left[T_{q}^{2}=T_{q}^{3}\right]$, and $\Pr\left[T_{q}^{1}=T_{q}^{3}\right]$ (it is in fact sufficient to maximize only certain pairs rather than all of them, but this is not critical here). Such a coupling $\left(T_{q}^{1},T_{q}^{2},T_{q}^{3}\right)$ is called *multimaximal*. With only binary rvs involved, a multimaximal coupling always exists and is unique; and a subsystem of a noncontextual system then is always noncontextual.
Returning to system ${\mathcal{R}}$, consider now the situation when the measurements involved are not dichotomous. For example, let the two successive spin measurements along axes $q=1$ and $q=2$ be made on a hypothetical spin-$2$ particle, with the measurement outcomes denoted $\left\{ -2,-1,0,1,2\right\} $. In the behavioral application, let the questions asked allow 5 answers each, labeled in the same way. A maximal coupling in this situation exists for each column of ${\mathcal{R}}$, but not uniquely. This takes us back to the problem of what one should do if the row-wise distributions are compatible with some but not all pairs of these maximal couplings. Another problem is even harder. If the system is deemed noncontextual, one may consider it desirable that it remain noncontextual after some of the measurement outcomes are “lumped together.” Thus, one may wish to consider $\left\{ -2,-1,0,1,2\right\} $ in terms of “negative-zero-positive,” lumping together $-2$ with $-1$ and $2$ with $1$. Or one may wish to look at the outcomes in terms of “zero-nonzero.” As it turns out, a noncontextual system may become contextual after such coarsening of some of its measurements.
Both these problems can be resolved if we agree that *every measurement included in the system, empirically recorded or computed from those empirically recorded, should be represented by a set of binary rvs*. Let us denote by $D_{qW}^{c}$ the Bernoulli rv that equals 1 if the value of $R_{q}^{c}$ is within the subset $W$ of its possible values. We call $D_{qW}^{c}$ a *split* (of the original rv). We posit that a measurement with $k$ distinct values should always be represented by $k$ “detectors” of these values, i.e. the splits with one-element subsets $W$. Thus, in our system ${\mathcal{R}}$, each measurement $R_{q}^{c}$ should be replaced with the jointly distributed splits $$\left(D_{q\left\{ -2\right\} }^{c},D_{q\left\{ -1\right\} }^{c},D_{q\left\{ 0\right\} }^{c},D_{q\left\{ 1\right\} }^{c},D_{q\left\{ 2\right\} }^{c}\right).$$ If one is also interested in the coarsening of $R_{q'}^{c}$ into values “negative-zero-positive,” then the list should be expanded into $$\left(D_{q\left\{ -2\right\} }^{c},D_{q\left\{ -1\right\} }^{c},D_{q\left\{ 0\right\} }^{c},D_{q\left\{ 1\right\} }^{c},D_{q\left\{ 2\right\} }^{c},D_{q\left\{ -2,-1\right\} }^{c},D_{q\left\{ 1,2\right\} }^{c}\right).$$ If one wishes to include *all* possible coarsenings of the original rvs in ${\mathcal{R}}$, then the set of binary rvs should consist of *all* possible splits. Since every dichotomization creating a split should be applied to all rvs sharing a conteNt, one ends up replacing the system ${\mathcal{R}}$ with
-------------------------------- ---------- ------------------------------- ----------------------------------- ---------- --------------------------------- ---------- --------------------------------- -------------------------
$D_{1\left\{ -2\right\} }^{1}$ $\cdots$ $D_{1\left\{ 2\right\} }^{1}$ $D_{1\left\{ -2,-1\right\} }^{1}$ $\cdots$ $D_{1\left\{ 1,2\right\} }^{1}$ $\cdots$ $D_{2\left\{ 1,2\right\} }^{1}$ $c=1$
$D_{1\left\{ -2\right\} }^{2}$ $\cdots$ $D_{1\left\{ 2\right\} }^{2}$ $D_{1\left\{ -2,-1\right\} }^{2}$ $\cdots$ $D_{1\left\{ 1,2\right\} }^{2}$ $\cdots$ $D_{2\left\{ 1,2\right\} }^{2}$ $c=2$
$\boxed{{\mathcal{D}}}$
-------------------------------- ---------- ------------------------------- ----------------------------------- ---------- --------------------------------- ---------- --------------------------------- -------------------------
There are $(2^{5}-2)/2=15$ distinct dichotomizations of the set $\left\{ -2,-1,0,1,2\right\} $, and the 15 subsets $W$ in $D_{qW}^{c}$ should be chosen to avoid duplication, such as in $D_{q\left\{ 0,1\right\} }^{c}$ and $D_{q\left\{ -2,-1,2\right\} }^{c}$. Once duplication is prevented, however, all splits of all rvs one is interested in should be included. It is irrelevant that some of them can be presented as functions of the others. In fact, any split of our $R_{q}^{c}$ can be presented as a function of just three splits, chosen, e.g., as $$D_{q'}^{c}=D_{q\left\{ -1,1\right\} }^{c},D_{q''}^{c}=D_{q\left\{ 0,1\right\} }^{c},D_{q'''}^{c}=D_{q\left\{ 2\right\} }^{c}.$$ It is easy to show, however, that in the subsystem
-------------- --------------- ---------------- ----------------------------------------------------- --------------------------
$D_{1'}^{1}$   $D_{1''}^{1}$   $D_{1'''}^{1}$   $f\left(D_{1'}^{1},D_{1''}^{1},D_{1'''}^{1}\right)$   $c=1$
$D_{1'}^{2}$   $D_{1''}^{2}$   $D_{1'''}^{2}$   $f\left(D_{1'}^{2},D_{1''}^{2},D_{1'''}^{2}\right)$   $c=2$
$\boxed{{\mathcal{D}}'}$
-------------- --------------- ---------------- ----------------------------------------------------- --------------------------
of the system ${\mathcal{D}}$, the $f$-transformation of the maximal couplings of the first three columns, since these couplings are not jointly distributed, would not determine the coupling of the fourth column, let alone ensure that this coupling is maximal.
There is no general prescription as to which rvs should or should not be included in the system representing an empirical set of measurements: what one includes (e.g., what coarsenings of the rvs already in play one considers) reflects what aspects of the empirical situation one is interested in. Once a set of rvs is chosen, however, we uniquely form their splits and place them in a canonical system.
The remainder of the paper is organized as follows. In Section \[sec: Formal-Theory\], we present the abstract version of CbD applicable to all possible systems of categorical (and not only categorical) rvs. In Section \[sec: Split-and-Canonical\], we formalize the idea of representing any system of rvs by their splits and applying contextuality analysis to these representations only. In Section \[sec: A-systematic-study\], we investigate the representation of all coarsenings of a single pair of conteNt-sharing rvs by all possible splits. In the concluding section we explain why one might wish to consider only some rather than all possible splits.
\[rem: The-proofs-of\]The proofs of the formal propositions in the paper, unless obvious or referenced as presented elsewhere, are given in the supplementary file S, together with additional theorems and examples.
\[sec: Formal-Theory\]Formal Theory of Contextuality
====================================================
\[subsec: Basic-notions\]Basic notions
--------------------------------------
The definition of a system of rvs requires two nonempty finite sets, a set of *conteNts* $Q$ and a set of *conteXts* $C$. There is a relation $$\Yleft\subseteq Q\times C,$$ such that the projections of $\Yleft$ into $Q$ and $C$ equal $Q$ and $C$, respectively (this means that for every $q\in Q$ there is a $c\in C$, and vice versa, such that $q\Yleft c$). We read both $q\Yleft c$ and $c\Yright q$ as “$q$ *is measured in* $c$.”
A categorical rv is one with a finite set of values and its power set as the codomain sigma-algebra. A system of (categorical) rvs is a double-indexed set (we use calligraphic letters for sets of random variables) $${\mathcal{R}}=\left\{ R_{q}^{c}:q\in Q,c\in C,q\Yleft c\right\} ,\label{eq: cc-system}$$ such that (i) any $R_{q}^{c}$ and $R_{q}^{c'}$ have the same set of possible values; (ii) $R_{q}^{c}$ and $R_{q'}^{c'}$ are jointly distributed if $c=c'$; and (iii) if $c\not=c'$, $R_{q}^{c}$ and $R_{q'}^{c'}$ are *stochastically unrelated* (possess no joint distribution). For any $c\in C$ the subset $${\mathcal{R}}^{c}=\left\{ R_{q}^{c}:q\in Q,q\Yleft c\right\} =R^{c}$$ of ${\mathcal{R}}$ is called a *bunch* (of rvs) corresponding to $c$. Since the elements of a bunch are jointly distributed, the bunch is a (categorical) rv in its own right, so it can also be written as $R^{c}$. Note that we do not distinguish the representations of ${\mathcal{R}}$ as (\[eq: cc-system\]) and as $${\mathcal{R}}=\left\{ R^{c}:c\in C\right\} .$$ (See Refs. [@DzhafarovKujala(2016)Context-Content; @DzhafarovKujala(2016)Fortschritte] for a detailed probabilistic analysis.)
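For computations, a system like this can be stored bunch-by-bunch. The following minimal encoding (in Python; the representation is our own, not part of the theory) keeps, for each conteXt, the joint pmf of its bunch; stochastic unrelatedness across conteXts is reflected in the fact that no joint pmf is stored for rvs from different bunches:

```python
from typing import Dict, Tuple

# conteXt -> joint pmf of its bunch over tuples of values; the tuple
# positions correspond to the conteNts measured in that conteXt.
System = Dict[str, Dict[Tuple[int, ...], float]]

R: System = {
    "c1": {(0, 0): 0.5, (1, 1): 0.5},  # bunch R^1 over conteNts (q1, q2)
    "c2": {(0, 1): 0.5, (1, 0): 0.5},  # bunch R^2 over conteNts (q1, q2)
}
contents = {"c1": ("q1", "q2"), "c2": ("q1", "q2")}
```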
For any $q\in Q$, the subset $${\mathcal{R}}_{q}=\left\{ R_{q}^{c}:c\in C,q\Yleft c\right\}$$ of ${\mathcal{R}}$ is called a *connection* (between the bunches of rvs) corresponding to $q$. Any two elements of a connection are stochastically unrelated, so it is not an rv.
\[subsec: General-definition-of-contextuality\]General definition of (non)contextuality
---------------------------------------------------------------------------------------
A (probabilistic) *coupling* $Y$ of a set of rvs $\left\{ X_{1},\ldots,X_{n}\right\} $ is a set of jointly distributed $\left\{ Y_{1},\ldots,Y_{n}\right\} $ such that $Y_{i}\sim X_{i}$ for $i=1,\ldots,n$. The tilde $\sim$ stands for “has the same distribution as.”
An (overall) coupling $S$ of a system ${\mathcal{R}}$ in (\[eq: cc-system\]) is a coupling of its bunches. That is, it is an rv $$S=\left\{ S^{c}:c\in C\right\}$$ (with jointly distributed components) such that $S^{c}\sim R^{c}$, for any $c\in C$. This implies that $$S^{c}=\left\{ S_{q}^{c}:q\in Q,q\Yleft c\right\}$$ is a set of jointly distributed rvs in a one-to-one correspondence with the identically labeled elements of ${\mathcal{R}}$.
For a given $q\in Q$, a coupling $T_{q}$ of a connection ${\mathcal{R}}_{q}$ is an rv $$T_{q}=\left\{ T_{q}^{c}:c\in C,q\Yleft c\right\}$$ such that $T_{q}^{c}\sim R_{q}^{c}.$ In particular, if $S$ is a coupling of ${\mathcal{R}}$, then $$S_{q}=\left\{ S_{q}^{c}:c\in C,q\Yleft c\right\}$$ is a coupling of ${\mathcal{R}}_{q}$, for any $q\in Q$.
Given a set ${\mathcal{T}}=\left\{ T_{q}:q\in Q\right\} $ of couplings for all connections in a system ${\mathcal{R}}$, the system is said to be *noncontextual with respect to* ${\mathcal{T}}$ if ${\mathcal{R}}$ has a coupling $S$ with $S_{q}\sim T_{q}$ for any $q\in Q$. Otherwise ${\mathcal{R}}$ is said to be *contextual with respect to* ${\mathcal{T}}$.
Put differently, ${\mathcal{R}}$ is noncontextual with respect to ${\mathcal{T}}$ if and only if there is a jointly distributed set $$S=\left\{ S_{q}^{c}:q\in Q,c\in C,q\Yleft c\right\} ,$$ such that, for every $c\in C$, $S^{c}\sim R^{c}$, and for every $q\in Q$, $S_{q}\sim T_{q}$. A coupling $S$ with this property is called ${\mathcal{T}}$-*connected*.
If the couplings $T_{q}$ are characterized by some property ${\mathsf{C}}$ such that one and only one coupling $T_{q}$ satisfies this property for any given connection ${\mathcal{R}}_{q}$, then the definition can be rephrased as follows:
${\mathcal{R}}$ is said to be *noncontextual with respect to* *property* ${\mathsf{C}}$ if it has a ${\mathsf{C}}$-*connected* coupling $S$, defined as one with $S_{q}$ satisfying ${\mathsf{C}}$ for any $q\in Q$. Otherwise ${\mathcal{R}}$ is said to be *contextual with respect to* ${\mathsf{C}}$.
\[rem: multimaximally connected\]In Section \[subsec: Multimaximality-for-splits\] we will use the property of *(multi)maximality* to play the role of ${\mathsf{C}}$, and the couplings in question then are referred to as *(multi)maximally-connected*.
\[subsec: Degree-of-contextuality\]Degree of contextuality
----------------------------------------------------------
A *quasi-distribution* on a finite set $V$ is a function $V\rightarrow\mathbb{R}$ (real numbers) such that the numbers assigned to the elements of $V$ sum to 1. We will refer to these numbers as *quasi-probability masses*. A *quasi-rv* $X$ is defined analogously to an rv but with a quasi-distribution instead of a distribution.
A *quasi-coupling* $X$ of ${\mathcal{R}}$ is defined as a quasi-rv $$X=\left\{ X_{q}^{c}:q\in Q,c\in C,q\Yleft c\right\} ,$$ such that $X^{c}\sim R^{c}$ for every $c\in C$. We have the following results.
\[thm: quasi-couplings\]For any system ${\mathcal{R}}$ and any set ${\mathcal{T}}$ of couplings for the connections of ${\mathcal{R}}$, there is a quasi-coupling $X$ of ${\mathcal{R}}$ such that $X_{q}=\left\{ X_{q}^{c}:c\in C,q\Yleft c\right\} \sim T_{q}$ for any $q\in Q$.
The *total variation* of $X$ is denoted by $\left\Vert X\right\Vert $ and defined as the sum of the absolute values of the quasi-probability masses assigned to all values of $X$.
The total variation $\left\Vert X\right\Vert $ reaches its minimum in the class of all quasi-couplings $X$ satisfying the conditions of Theorem \[thm: quasi-couplings\].
If $\min\left\Vert X\right\Vert $ is 1, then all quasi-probability masses are nonnegative, and the system ${\mathcal{R}}$ is noncontextual with respect to ${\mathcal{T}}$. If $\min\left\Vert X\right\Vert >1$, then the system is contextual with respect to ${\mathcal{T}}$, and $\min\left\Vert X\right\Vert -1$ can be taken as a (universally applicable) measure of the *degree of contextuality*.
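As a sketch of the computation (in Python with scipy; the function name is ours), one can split each quasi-probability mass into a difference of two nonnegative numbers, which turns the total variation into a linear objective. The matrices $A$ and $b$ encode the same equality constraints on the masses as in the feasibility test sketched in the Introduction:

```python
import numpy as np
from scipy.optimize import linprog

def degree_of_contextuality(A, b):
    """min ||X|| - 1 over quasi-couplings X with A m = b, m being the
    vector of quasi-probability masses. Writing m = u - v with u, v >= 0
    makes sum(u + v) equal to the total variation at the optimum."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[1]
    res = linprog(np.ones(2 * n),                   # minimize sum(u) + sum(v)
                  A_eq=np.hstack([A, -A]), b_eq=b,  # A u - A v = b
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.fun - 1.0  # 0 iff the system is noncontextual
```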
\[sec: Split-and-Canonical\]Splits and Canonical Representations
================================================================
\[subsec: Expansions\]Expansions of the original system
-------------------------------------------------------
One is often interested not only in a system of empirically measured rvs ${\mathcal{R}}$ but also in some transformations thereof. Each such transformation $F_{q_{1},\dots,q_{k}}$ is labeled by a set of conteNts, $q_{1},\dots,q_{k}$, and it takes as its arguments the rvs $R_{q_{1}}^{c},\ldots,R_{q_{k}}^{c}$ in each conteXt $c$ such that $c\Yright q_{1},\ldots,q_{k}$. The outcome, $$R_{q^{*}}^{c}=F_{q_{1},\dots,q_{k}}\left(R_{q_{1}}^{c},\ldots,R_{q_{k}}^{c}\right),$$ is an rv interpreted as measuring a new conteNt $q^{*}$ in the conteXt $c$. One is free to choose any such transformations and form the corresponding new conteNts, as there can be no rules mandating what one should be interested in measuring.
Using various transformations to add new conteNts and new rvs to the original system *expands* it into a larger system. Two types of expansions that are of particular interest are *expansion-through-joining* and *expansion-through-coarsening*. Joining is defined as $$R_{q_{1}}^{c},\ldots,R_{q_{k}}^{c}\longmapsto\left(R_{q_{1}}^{c},\ldots,R_{q_{k}}^{c}\right)=R_{q'}^{c},$$ whereas coarsening is a transformation $$R_{q}^{c}\longmapsto F_{q}\left(R_{q}^{c}\right)=R_{q''}^{c}.$$ In fact, any other transformation $F_{q_{1},\dots,q_{k}}\left(R_{q_{1}}^{c},\ldots,R_{q_{k}}^{c}\right)$ can be presented as joining followed by coarsening.
\[exa: Joining\]Consider the system
------------- ------------- ------------- -------------------------
$R_{1}^{1}$   $R_{2}^{1}$   $\cdot$       $c=1$
$R_{1}^{2}$   $R_{2}^{2}$   $\cdot$       $c=2$
$R_{1}^{3}$   $\cdot$       $R_{3}^{3}$   $c=3$
$\cdot$       $R_{2}^{4}$   $R_{3}^{4}$   $c=4$
$\boxed{{\mathcal{R}}}$
------------- ------------- ------------- -------------------------
.
It contains the jointly distributed $R_{1}^{1},R_{2}^{1}$ and also the jointly distributed $R_{1}^{2},R_{2}^{2}$, but in determining the maximal couplings of $R_{1}^{1},R_{1}^{2}$ and of $R_{2}^{1},R_{2}^{2}$ in the first and second columns these row-wise joints are not utilized. In some applications this would be unacceptable (e.g., in the theory of selective influences [@TNHMP; @DK2012JMP] and in the approach advocated by Abramsky and colleagues [@AbramskyBarbosaKishidaLalMansfield(2015); @AbramskyBrandenburger(2011)] this is never acceptable), and then the following expansion has to be used:
------------- ------------- ------------- ------------------------------------ -----------------------------
$R_{1}^{1}$   $R_{2}^{1}$   $\cdot$       $\left(R_{1}^{1},R_{2}^{1}\right)$   $c=1$
$R_{1}^{2}$   $R_{2}^{2}$   $\cdot$       $\left(R_{1}^{2},R_{2}^{2}\right)$   $c=2$
$R_{1}^{3}$   $\cdot$       $R_{3}^{3}$   $\cdot$                              $c=3$
$\cdot$       $R_{2}^{4}$   $R_{3}^{4}$   $\cdot$                              $c=4$
$\boxed{{\mathcal{R}}^{*}}$
------------- ------------- ------------- ------------------------------------ -----------------------------
.$\square$
\[exa: coarsening\] If $V$ is a set of possible values of $R_{q}^{c}$, then $U=F_{q}\left(V\right)$ is the set of possible values of the rv $R_{q^{*}}^{c}=F_{q}\left(R_{q}^{c}\right).$ This rv is a coarsening of $R_{q}^{c}$. Note that any rv is its own coarsening. Since the way one labels the values of $U$ is usually irrelevant, each such function $F_{q}$ can be presented as a partition of $V$. Consider, e.g., the “mini”-system
------------- -------------------------
$R_{q}^{1}$   $c=1$
$R_{q}^{2}$   $c=2$
$\boxed{{\mathcal{R}}}$
------------- -------------------------
,
and let the two rvs take values on $\left\{ 1,2,3,4,5\right\} $. If these values are considered ordered, $1<\ldots<5$, one may be interested in all possible partitions of $\left\{ 1,2,3,4,5\right\} $ into subsets of consecutive numbers, such as $\left\{ 12\:|\:34\:|\:5\right\} $, $\left\{ 1\:|\:2345\right\} $, etc. There are 15 such partitions (counting $\left\{ 1\:|\:2\:|\:3\:|\:4\:|\:5\right\} $ that defines the original rvs $R_{q}^{c}$, but excluding the trivial partition $\left\{ 12345\right\} $). If the values $1,2,3,4,5$ are treated as unordered labels, one might consider all possible nontrivial partitions, such as $\left\{ \left\{ 14\right\} ,\left\{ 25\right\} ,\left\{ 3\right\} \right\} $, $\left\{ \left\{ 145\right\} ,\left\{ 23\right\} \right\} $, etc. There are 51 such partitions. In either of these two coarsening schemes the partitions can be ordered in some way, and the respective expanded systems then become
------------- --------------- ---------- ------------------------------- --------------------------
$R_{q}^{1}$ $R_{q1'}^{1}$ $\cdots$ $R_{q14'}^{1}$ $c=1$
$R_{q}^{2}$ $R_{q1'}^{2}$ $\cdots$ $R_{q14'}^{2}$ $c=2$
$\boxed{{\mathcal{R}}'}$
------------- --------------- ---------- ------------------------------- --------------------------
and$\quad$
------------- ---------------- ---------- ----------------- ---------------------------
$R_{q}^{1}$ $R_{q1''}^{1}$ $\cdots$ $R_{q50''}^{1}$ $c=1$
$R_{q}^{2}$ $R_{q1''}^{2}$ $\cdots$ $R_{q50''}^{2}$ $c=2$
$\boxed{{\mathcal{R}}''}$
------------- ---------------- ---------- ----------------- ---------------------------
\[rem: support small\]Although the number of the states (combinations of the values of the elements) of the bunch $R^{c}$ in ${\mathcal{R}}'$ and especially in ${\mathcal{R}}^{''}$ is very large, the support of each bunch (the set of the states with nonzero probabilities) has the same size as that of the initial random variable $R_{q}^{c}$ in ${\mathcal{R}}$ (i.e., in our example, it cannot exceed 5). This follows from the facts that each event $R_{q}^{c}=x$ uniquely defines the state of $R^{c}$ in ${\mathcal{R}}'$ and in ${\mathcal{R}}^{''}$, and that $\sum_{x}\Pr\left[R_{q}^{c}=x\right]=1$. $\square$
\[subsec: Dichotomizations-and-splits\]Dichotomizations and canonical/split representations
-------------------------------------------------------------------------------------------
A *dichotomization* of a set $V$ is a function $f:V\rightarrow\left\{ 0,1\right\} $. Applying such an $f$ to an rv $R$ with the set of possible values $V$, we get a binary rv $f\left(R\right)$. We call this $f\left(R\right)$ a *split* of the original $R$.
If $R_{q}^{c}$ is an element of a system ${\mathcal{R}}$, let us agree to identify $f\left(R_{q}^{c}\right)$ as $D_{qW}^{c}$, where $W=f^{-1}\left(1\right)$, with the understanding that $D_{qW}^{c}$ and $D_{q\left(V-W\right)}^{c}$ are indistinguishable. To make the choice definitive, we always choose $W$ as the smaller of $W$ and $V-W$; in the case they have the same number of elements, we order the elements of $V$, say $1<2<\ldots<k$, and then choose $W$ as lexicographically preceding $V-W$.
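This labeling convention is easy to mechanize. A minimal sketch (in Python; the function name is ours) enumerates the canonical labels $W$ for all distinct splits of $V=\left\{ 1,\ldots,k\right\} $:

```python
from itertools import combinations

def canonical_split_labels(k):
    """Canonical W for each of the 2**(k - 1) - 1 distinct splits of
    V = {1, ..., k}: the smaller of W and V - W, with ties between
    equal-size sets broken lexicographically."""
    V = tuple(range(1, k + 1))
    labels = set()
    for m in range(1, k):
        for W in combinations(V, m):
            comp = tuple(v for v in V if v not in W)
            labels.add(min((W, comp), key=lambda s: (len(s), s)))
    return sorted(labels, key=lambda s: (len(s), s))

print(len(canonical_split_labels(5)))  # 2**4 - 1 = 15
```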
With $V=\left\{ 1,2,\ldots,k\right\} $, the jointly distributed set of splits $$\left\{ D_{q\left\{ 1\right\} }^{c},D_{q\left\{ 2\right\} }^{c},\ldots,D_{q\left\{ k\right\} }^{c}\right\}$$ is called the *split representation* of $R_{q}^{c}$. If $k=2$, then $R_{q}^{c}$ is its own split representation, because $D_{q\left\{ 1\right\} }^{c}$ and $D_{q\left\{ 2\right\} }^{c}$ are indistinguishable.
The system ${\mathcal{D}}$ obtained from a system ${\mathcal{R}}$ by replacing each of its elements by its split representations is called the *canonical* (or *split*) *representation* of ${\mathcal{R}}$.
\[exa: Joining splits\] Let all rvs in ${\mathcal{R}}$ be binary, $0/1$, whence $\left(R_{1}^{1},R_{2}^{1}\right)$ and $\left(R_{1}^{2},R_{2}^{2}\right)$ in ${\mathcal{R}}^{*}$ have 4 values each: $00,01,10,11$. Replacing them with the split representations and observing that the first three columns do not change, we get the following canonical representation of ${\mathcal{R}}^{*}$:
----------------------- ----------------------- ----------------------- --------------------------------- --------------------------------- --------------------------------- --------------------------------- -----------------------------
$D_{1}^{1}=R_{1}^{1}$   $D_{2}^{1}=R_{2}^{1}$   $\cdot$                 $D_{12\left\{ 00\right\} }^{1}$   $D_{12\left\{ 01\right\} }^{1}$   $D_{12\left\{ 10\right\} }^{1}$   $D_{12\left\{ 11\right\} }^{1}$   $c=1$
$D_{1}^{2}=R_{1}^{2}$   $D_{2}^{2}=R_{2}^{2}$   $\cdot$                 $D_{12\left\{ 00\right\} }^{2}$   $D_{12\left\{ 01\right\} }^{2}$   $D_{12\left\{ 10\right\} }^{2}$   $D_{12\left\{ 11\right\} }^{2}$   $c=2$
$D_{1}^{3}=R_{1}^{3}$   $\cdot$                 $D_{3}^{3}=R_{3}^{3}$   $\cdot$                           $\cdot$                           $\cdot$                           $\cdot$                           $c=3$
$\cdot$                 $D_{2}^{4}=R_{2}^{4}$   $D_{3}^{4}=R_{3}^{4}$   $\cdot$                           $\cdot$                           $\cdot$                           $\cdot$                           $c=4$
$\boxed{{\mathcal{D}}^{*}}$
----------------------- ----------------------- ----------------------- --------------------------------- --------------------------------- --------------------------------- --------------------------------- -----------------------------
.$\square$
\[exa: coarsening splits\] For the system ${\mathcal{R}}'$, it is clear that the split representations of the 15 coarsenings of $R_{q}^{c}$ multiply overlap: e.g., $D_{q\left\{ 3\right\} }^{1}$ belongs to the split representations of $R_{q}^{1}$ and of the coarsenings defined by the partitions $\left\{ 12\:|\:3\:|\:45\right\} $, $\left\{ 1\:|\:2\:|\:3\:|\:45\right\} $, and $\left\{ 12\:|\:3\:|\:4\:|\:5\right\} $. Following our rules, $W$ in the splits $D_{qW}^{c}$ comprising the split representation of ${\mathcal{R}}'$ are (when written as strings) $1,2,3,4,5,12,23,34,45,$ and $15$ (note that, e.g., the split of the coarsening {1|23|4|5} with $W=\{1,23\}$ should be denoted $D_{q\{1,23\}}^{1}$ according to our definitions, but this is the same random variable as $D_{q\{45\}}^{1}$ which we have included in the list). For the system ${\mathcal{R}}''$ the canonical representation, obviously, consists of all possible splits of $R_{q}^{c}$. It will be the target of the analysis presented in Section \[sec: A-systematic-study\].$\square$
\[subsec: Multimaximality-for-splits\]Multimaximality for canonical representations
-----------------------------------------------------------------------------------
If each connection in a canonical representation ${\mathcal{D}}$ contains just two rvs, one can compute unique maximal couplings for all of these connections. The determination of whether ${\mathcal{D}}$ is (non)contextual can then proceed in compliance with the general theory presented in Section \[subsec: General-definition-of-contextuality\], and amounts to determining if ${\mathcal{D}}$ has a *maximally-connected* coupling $S$ (see Remark \[rem: multimaximally connected\]). If no such coupling exists, the computation of the degree of contextuality in ${\mathcal{D}}$ can be done in compliance with Section \[subsec: Degree-of-contextuality\].
In a more general case, however, with an arbitrary number of rvs in each connection, maximal couplings should be replaced with what we call *multimaximal couplings* [@DzhafarovKujala(2016)Fortschritte; @DzhafarovKujala(2017)LNCS].
A coupling $T_{q}$ of a connection ${\mathcal{D}}_{q}$ of a split representation ${\mathcal{D}}$ is called *multimaximal* if, for any $c,c'\in C$ such that $c,c'\Yright q$, $\Pr\left[T_{q}^{c}=T_{q}^{c'}\right]$ is maximal over all possible couplings of ${\mathcal{D}}_{q}$. (If the connection contains two rvs, its multimaximal coupling is simply maximal.)
A multimaximal coupling is known to have the following properties.
[Multimax1:]{}
: The multimaximal coupling exists and is unique for any connection ${\mathcal{D}}_{q}$ ([@DzhafarovKujala(2017)LNCS] Corollary 1).
[Multimax2:]{}
: $T_{q}$ is a multimaximal coupling of ${\mathcal{D}}_{q}$ if and only if any subset of $T_{q}$ is a maximal coupling for the corresponding subset of ${\mathcal{D}}_{q}$ ([@DzhafarovKujala(2017)LNCS] Theorem 5; [@DzhafarovKujala(2016)Fortschritte] Theorem 2.3).
[Multimax3:]{}
: In a connection ${\mathcal{D}}_{q}$, if $\left\{ c_{1},\ldots,c_{n}\right\} $ is the set of all $c\Yright q$ enumerated so that $$\Pr\left[D_{q}^{c_{1}}=1\right]\leq\ldots\leq\Pr\left[D_{q}^{c_{n}}=1\right],$$ then $T_{q}$ is a multimaximal coupling of ${\mathcal{D}}_{q}$ if and only if $\Pr\left[T_{q}^{c_{i}}=T_{q}^{c_{i+1}}\right]$ is maximal for $i=1,\ldots,n-1$, over all possible couplings of ${\mathcal{D}}_{q}$ ([@DzhafarovKujala(2016)Fortschritte] Theorem 2.3).
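For binary rvs these properties suggest a simple explicit construction: a single uniform random variable shared by all conteXts yields the multimaximal coupling, since every pair of its components then coincides with the maximal probability $1-\left|p-p'\right|$. A minimal sketch (in Python; ours) of this standard comonotone construction:

```python
import random

def sample_multimaximal(ps, rng=random.Random(0)):
    """One sample from the multimaximal coupling of binary rvs with
    Pr[D^c = 1] = ps[c]: a single uniform U drives all of them, so a
    pair differs only when U falls between its two p-values."""
    u = rng.random()
    return [int(u <= p) for p in ps]

print(sample_multimaximal([0.2, 0.5, 0.9]))
```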
\[sec: A-systematic-study\]The Largest Canonical Representation of a Two-Element Connection
===========================================================================================
We consider here the case when one is interested in all possible coarsenings of the rvs in a system. The canonical/split representation of the system then contains all splits of all rvs. We will investigate in detail a fragment of the original (expanded) system involving just two $k$-valued rvs within a single connection:
------------- -------------------------
$R_{1}^{1}$   $c=1$
$R_{1}^{2}$   $c=2$
$\boxed{{\mathcal{R}}}$
------------- -------------------------
The canonical system with all splits of these $k$-valued rvs is
---------- ------------------ ------------------ ---------- ------------------------------------ -------------------------
$D^{1}:$   $D_{W1}^{1}$       $D_{W2}^{1}$       $\cdots$   $D_{W\left(2^{k-1}-1\right)}^{1}$    $c=1$
$D^{2}:$   $D_{W1}^{2}$       $D_{W2}^{2}$       $\cdots$   $D_{W\left(2^{k-1}-1\right)}^{2}$    $c=2$
$\boxed{{\mathcal{D}}}$
---------- ------------------ ------------------ ---------- ------------------------------------ -------------------------
where $W1$, $W2$, etc. are the subsets $f^{-1}\left(1\right)$ chosen as explained in Section \[subsec: Dichotomizations-and-splits\] from the $2^{k-1}-1$ distinct dichotomizations $f$ of $\left\{ 1,\ldots,k\right\} $. The number $2^{k-1}-1$ is arrived at by taking the number of all subsets, subtracting $2$ improper subsets, and dividing by 2 because one chooses only one of $W$ and $\left\{ 1,2,\ldots,k\right\} -W$. The goal is to determine whether ${\mathcal{D}}$ is contextual. If it is, then any canonical system that includes ${\mathcal{D}}$ as its subsystem (i.e., represents an original system with ${\mathcal{R}}$ as part of its connection) is contextual.
The two original rvs have distributions $$\Pr\left[R_{1}^{1}=i\right]=p_{i},\;\Pr\left[R_{1}^{2}=i\right]=q_{i}\;,i=1,2,\ldots,k.\label{eq: constraints for two bunches}$$ A state (or value) of a bunch in the system ${\mathcal{D}}$ is a vector of $2^{k-1}-1$ zeroes and ones. However, the support of each of the bunches in system ${\mathcal{D}}$ consists of at most $k$ corresponding states, and we can enumerate them by any $k$ symbols, say, $1,2,\ldots,k$, as in the original variable: $$\Pr\left[D^{1}=i\right]=p_{i},\;\Pr\left[D^{2}=i\right]=q_{i}\;,i=1,2,\ldots,k,$$
As a result, ${\mathcal{D}}=\left\{ D^{1},D^{2}\right\} $ has $k^{2}$ possible states that we can denote $ij$, with $i,j\in\left\{ 1,2,\ldots,k\right\} $. A coupling $S=\left(S_{q}^{1},S_{q}^{2}\right)$ of ${\mathcal{D}}$ assigns probabilities $$r_{ij}=\Pr\left[S_{q}^{1}=i,S_{q}^{2}=j\right],\;i,j\in\left\{ 1,\ldots,k\right\} ,\label{eq: def of rij}$$ to these $k^{2}$ states so that they satisfy $2k$ linear constraints imposed by (\[eq: constraints for two bunches\]), $$\sum_{j=1}^{k}r_{ij}=p_{i},\;\sum_{i=1}^{k}r_{ij}=q_{j},\;i,j\in\left\{ 1,\ldots,k\right\} .\label{eq: bunch 1 of 1-2}$$ If $S$ is maximally-connected, then it should also satisfy $2^{k-1}-1$ linear constraints imposed by the maximal couplings of the corresponding connections. Specifically, if $W=\left\{ i_{1},\ldots,i_{m}\right\} \subset\left\{ 1,\ldots,k\right\} $, then the maximal coupling $\left(S_{W}^{1},S_{W}^{2}\right)$ of $\left(D_{W}^{1},D_{W}^{2}\right)$ is distributed as $$\left.\begin{array}{c}
\Pr\left[S_{W}^{1}=1\right]=\Pr\left[D_{W}^{1}=1\right]=p_{i_{1}}+p_{i_{2}}+\ldots+p_{i_{m}}\\
\Pr\left[S_{W}^{2}=1\right]=\Pr\left[D_{W}^{2}=1\right]=q_{i_{1}}+q_{i_{2}}+\ldots+q_{i_{m}}\\
\Pr\left[S_{W}^{1}=S_{W}^{2}=1\right]=\min\left(p_{i_{1}}+p_{i_{2}}+\ldots+p_{i_{m}},q_{i_{1}}+q_{i_{2}}+\ldots+q_{i_{m}}\right)
\end{array}\right].\label{eq: maximal for connections}$$ Let us use the term $m$-split to designate any split $D_{W}$ with an $m$-element set $W$ ($m\leq k/2$). Thus, $D_{W}$ with $W=\left\{ i\right\} $ is a 1-split, with $W=\left\{ i,j\right\} $ it is a 2-split, and the higher-order splits appear beginning with $k>5$. Theorem \[thm: higher-order splits\] and its corollaries below show that in determining whether the system ${\mathcal{D}}$ is contextual one needs to consider only the 1-splits and 2-splits. Let us use the term *1-2 system* for this subsystem of ${\mathcal{D}}$. An overall coupling $S$ of ${\mathcal{D}}$ contains as its part a maximally-connected coupling of the 1-2 system if and only if the probabilities $r_{ij}$ in (\[eq: def of rij\]) satisfy (\[eq: maximal for connections\]) for $m=1$ and $m=2$: $$r_{ii}=\min\left(p_{i},q_{i}\right),\;i\in\left\{ 1,\ldots,k\right\} \label{eq: 1 of 1-2}$$ and $$r_{ii}+r_{ij}+r_{ji}+r_{jj}=\min\left(p_{i}+p_{j},q_{i}+q_{j}\right),\;i,j\in\left\{ 1,\ldots,k\right\} ,i<j.\label{eq: 2 of 1-2}$$ That is, a maximally-connected coupling of the 1-2 system is described by the $3k+\binom{k}{2}$ linear equations (\[eq: bunch 1 of 1-2\])-(\[eq: 1 of 1-2\])-(\[eq: 2 of 1-2\]). We have therefore the following necessary condition for noncontextuality of ${\mathcal{D}}$.
\[thm: 1-2 =000023 constraints\]If the system ${\mathcal{D}}$ is noncontextual, then the $3k+\binom{k}{2}$ linear equations (\[eq: bunch 1 of 1-2\])-(\[eq: 1 of 1-2\])-(\[eq: 2 of 1-2\]) are satisfied.
\[rem: Rank\]Note that $3k+\binom{k}{2}<k^{2}$ for $k>5$. (For completeness only, Theorem \[thm: rank\] in the supplementary file S shows that the rank of this system of equations is $2k-1+\binom{k}{2}$.)
\[thm: higher-order splits\]In a maximally-connected coupling $S$ of ${\mathcal{D}}$ with $k>5$, the distributions of the 1-splits and 2-splits uniquely determine the probabilities of all higher-order splits. Specifically, for any $2<m\leq k/2$, and any $W=\left\{ i_{1},\ldots,i_{m}\right\} \subset\left\{ 1,\ldots,k\right\} $, the probability that the corresponding $m$-split equals 1 is $$\begin{array}{l}
\min\left(p_{i_{1}}+p_{i_{2}}+\ldots+p_{i_{m}},q_{i_{1}}+q_{i_{2}}+\ldots+q_{i_{m}}\right)=\sum_{j=1}^{m}\min\left(p_{i_{j}},q_{i_{j}}\right)\\
+\sum_{j=1}^{m-1}\sum_{j'=j+1}^{m}\left[\min\left(p_{i_{j}}+p_{i_{j'}},q_{i_{j}}+q_{i_{j'}}\right)-\min\left(p_{i_{j}},q_{i_{j}}\right)-\min\left(p_{i_{j'}},q_{i_{j'}}\right)\right].
\end{array}\label{eq: relation}$$
It is easy to find numerical examples of the distributions of $R_{1}^{1}$ and $R_{1}^{2}$ for which (\[eq: relation\]) is violated (see Example \[exa: Relation may be violated\] in the supplementary file S). As shown below, however, (\[eq: relation\]) cannot be violated if a maximally-connected coupling for the 1-2 system exists. It follows from the fact that the statement of Theorem \[thm: 1-2 =000023 constraints\] can be reversed: (\[eq: bunch 1 of 1-2\])-(\[eq: 1 of 1-2\])-(\[eq: 2 of 1-2\]) imply that ${\mathcal{D}}$ is noncontextual. We establish this fact by first characterizing the distributions of $R_{1}^{1}$ and $R_{1}^{2}$ for a noncontextual 1-2 system (Theorem \[thm: 1-2 system\] with Corollary \[cor: 1-2-system\]), and then showing that (\[eq: relation\]) always holds for such distributions (Theorem \[thm: exists unique\]).
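The identity (\[eq: relation\]) is also easy to probe numerically. In the minimal sketch below (Python; the two distributions are an arbitrary choice satisfying Corollary \[cor: 1-2-system\], with $p_{i}<q_{i}$ for a single $i$), the identity holds for every 3-element $W$ with $k=6$:

```python
from itertools import combinations

p = [0.35, 0.20, 0.15, 0.12, 0.10, 0.08]
q = [0.25, 0.20, 0.15, 0.12, 0.10, 0.18]  # p_i < q_i only at the last i

def lhs(W):
    return min(sum(p[i] for i in W), sum(q[i] for i in W))

def rhs(W):  # right-hand side of (eq: relation)
    s = sum(min(p[i], q[i]) for i in W)
    s += sum(min(p[i] + p[j], q[i] + q[j]) - min(p[i], q[i]) - min(p[j], q[j])
             for i, j in combinations(W, 2))
    return s

for W in combinations(range(6), 3):  # all m-splits with m = 3 <= k/2
    assert abs(lhs(W) - rhs(W)) < 1e-12
```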
\[thm: 1-2 system\]A maximally-connected coupling for a 1-2 system is unique if it exists. In this coupling, the only states $ij$ in (\[eq: def of rij\]) that may have nonzero probabilities assigned to them are the diagonal states $\left\{ 11,22,\ldots,kk\right\} $ and either the states $\left\{ i1,i2,\ldots,ik\right\} $ for a single fixed $i$ or the states $\left\{ 1j,2j,\ldots,kj\right\} $ for a single fixed $j$ ($i,j=1,\ldots,k$).
Assuming, with no loss of generality, that the single fixed $i$ or the single fixed $j$ in the formulation above is $2,$ the theorem says that the nonzero probabilities assigned to the states of the maximally-connected coupling (shown below for $k=4$) could only occupy the cells marked with asterisks:
----- -------- -------- -------- --------
      $1$      $2$      $3$      $4$
$1$   $\ast$   $\ast$   $0$      $0$
$2$   $0$      $\ast$   $0$      $0$
$3$   $0$      $\ast$   $\ast$   $0$
$4$   $0$      $\ast$   $0$      $\ast$
----- -------- -------- -------- --------

$\quad$or

----- -------- -------- -------- --------
      $1$      $2$      $3$      $4$
$1$   $\ast$   $0$      $0$      $0$
$2$   $\ast$   $\ast$   $\ast$   $\ast$
$3$   $0$      $0$      $\ast$   $0$
$4$   $0$      $0$      $0$      $\ast$
----- -------- -------- -------- --------

.
\[cor: 1-2-system\]The 1-2 system for the original rvs $R_{1}^{1},R_{1}^{2}$ has a maximally-connected coupling if and only if either $p_{i}>q_{i}$ for no more than one $i$ (this single possible $i$ being the single fixed $i$ in the formulation of the theorem), or $p_{j}<q_{j}$ for no more than one $j$ (this single possible $j$ being the single fixed $j$ in the formulation of the theorem), $i,j\in\left\{ 1,\ldots,k\right\} $.
The relationship between $\left(p_{1},\ldots,p_{k}\right)$ and $\left(q_{1},\ldots,q_{k}\right)$ described in this corollary is some form of stochastic dominance for categorical rvs, but it does not seem to have been previously identified. We propose to say that $R_{1}^{1}$ *nominally dominates* $R_{1}^{2}$ if $p_{i}<q_{i}$ for no more than one value of $i=1,\ldots,k$ (i.e., $p_{i}\geq q_{i}$ for at least $k-1$ of them). Two categorical rvs nominally dominate each other if and only if they are identically distributed or their distributions differ in exactly two values of $i$ (as is always the case for distinct distributions when $k=2$). Using this notion, and combining Corollary \[cor: 1-2-system\] with Theorems \[thm: 1-2 =000023 constraints\] and \[thm: 1-2 system\], we get the main result of this section.
\[thm: exists unique\]The system ${\mathcal{D}}$ is noncontextual if and only if its 1-2 subsystem is noncontextual, i.e., if and only if one of $R_{1}^{1}$ and $R_{1}^{2}$ nominally dominates the other.
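In computational terms, Theorem \[thm: exists unique\] reduces the contextuality analysis of ${\mathcal{D}}$ to a one-line test (a Python sketch; the function names are ours):

```python
def nominally_dominates(p, q):
    """p nominally dominates q iff p_i < q_i for at most one index i."""
    return sum(pi < qi for pi, qi in zip(p, q)) <= 1

def all_splits_noncontextual(p, q):
    return nominally_dominates(p, q) or nominally_dominates(q, p)

print(all_splits_noncontextual([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))            # True
print(all_splits_noncontextual([0.4, 0.4, 0.1, 0.1], [0.1, 0.1, 0.4, 0.4]))  # False
```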
Concluding remarks
==================
Contextuality analysis of an empirical situation involves the following sequence of steps:
$$\textnormal{empirical measurements}\;\rightarrow\;\textnormal{initial system of rvs}\;\rightarrow\;\textnormal{expanded system of rvs}\;\rightarrow\;\textnormal{canonical/split representation}$$ In the initial system, measurements are represented by rvs each of which generally has multiple values. Expansion means adding to the system new conteNts with corresponding connections (conteNt-sharing rvs) computed as functions of the existing connections. In a canonical representation of the system all rvs are binary, and the connections are coupled multimaximally, meaning essentially that one deals with their elements pairwise. The issue of contextuality is reduced to that of compatibility of the unique couplings for pairs of conteNt-sharing rvs with the known distributions of the conteXt-sharing bunches of rvs. Coupling the connections multimaximally ensures that a noncontextual system has all its subsystems noncontextual too.
The canonical system of rvs is uniquely determined by the expanded system, but the latter is inherently non-unique, it depends on what aspects of the empirical situation one wishes to include in the system. Thus, it is one’s choice rather than a general rule whether one considers a multi-valued measurement as representable by *all* or only *some* of its possible coarsenings. If one chooses all coarsenings, the split/canonical representation involves all dichotomizations, and then Theorem \[thm: exists unique\] says that the canonical system is noncontextual only if, for any pair of rvs $R_{q}^{c},R_{q}^{c'}$ in the expanded system, one of them, say $R_{q}^{c}$, “nominally dominates” the other. This domination means that $\Pr\left[R_{q}^{c}=x\right]<\Pr\left[R_{q}^{c'}=x\right]$ holds for no more than one value $x$ of these rvs: a stringent necessary condition for noncontextuality, likely to be violated in many empirical systems.
This is of special interest for contextuality studies outside quantum physics. Historically, the search for non-quantum contextual systems was motivated by the possibility of applying quantum-theoretic formalisms in such fields as biology [@Asanoetal2015], psychology [@WangBusemeyer2013; @BusemeyerBruza2012], economics [@HavenKhrennikov2012], and political science [@Khrennikova]. In CbD, the notion of contextuality is not tied to quantum formalisms in any special way. The possibility of non-quantum contextual systems here is motivated by treating contextuality as an abstract probabilistic issue: there are no a priori reasons why a system of rvs describing, say, human behavior could not be contextual if it is qualitatively (i.e., up to specific probability values) the same as a contextual one describing particle spins. Nevertheless, all systems with dichotomous responses known to us that have been investigated for potential contextuality (with the exception of one very recent experiment) have been found to be noncontextual [@DzhafarovKujalaCervantesZhangJones(2016); @DzhafarovZhangKujala(2015)Isthere; @CervantesDzhafarov2017]. The use of canonical representations with dichotomizations of multiple-choice responses offers new possibilities.
In some cases, however, the use of all possible dichotomizations is not justifiable. Notably, if the values of an rv are linearly ordered, $x_{1}<x_{2}<\ldots<x_{N}$, it may be natural to only allow dichotomizations $f$ with $f^{-1}\left(1\right)$ containing several successive values, $\left\{ x_{l},x_{l+1},\ldots,x_{L}\right\} $, for some $l,L\in\left\{ 1,\ldots,N\right\} $. An even stronger restriction would be to only allow “cuts,” with $f^{-1}\left(1\right)=\left\{ x_{l},x_{l+1},\ldots,x_{N}\right\} $ or $\left\{ x_{1},x_{2},\ldots,x_{l-1}\right\} $.

Stronger restrictions on possible dichotomizations translate into stronger restrictions on the pairs $R_{q}^{c},R_{q'}^{c}$ whose canonical representation is contextual. This fact is especially important if one considers expanding CbD beyond categorical rvs. Thus, it is easy to see that if one considers all possible dichotomizations of two conteNt-sharing rvs with continuous densities on the set of real numbers, then the system will be contextual whenever the two distributions are not identical. Let the densities of these rvs be $f\left(x\right)$ and $g\left(x\right)$ shown in the graphic above. If the set of all splits of these rvs forms a noncontextual system, then any discretization of these rvs should satisfy Corollary \[cor: 1-2-system\] to Theorem \[thm: 1-2 system\]. That is, for any $k>2$ and any partition $H_{1},\ldots,H_{k}$ of the set of reals into intervals, we should have either $$\begin{array}{c}
\int_{H_{i}}f\left(x\right)dx<\int_{H_{i}}g\left(x\right)dx\textnormal{ for no more than one of \ensuremath{i=1,\ldots,k},}\\
\textnormal{or}\\
\int_{H_{i}}f\left(x\right)dx>\int_{H_{i}}g\left(x\right)dx\textnormal{ for no more than one of \ensuremath{i=1,\ldots,k}}.
\end{array}\label{eq: continuous}$$ This is, however, impossible unless $f\left(x\right)=g\left(x\right)$. If they are different, then $f$ exceeds $g$ on some interval, and $g$ exceeds $f$ on some other interval. If we take any two subintervals within each of these intervals (in the graphic they are denoted by $A,B$ and $C,D$), any partition $H_{1},\ldots,H_{k}$ that includes $A,B,C,D$ will violate (\[eq: continuous\]). The development of the theory of canonical representations with variously restricted sets of splits is a task for future work.
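The argument above is easy to replicate numerically: bin any two unequal densities into intervals and apply the criterion of Corollary \[cor: 1-2-system\]. A Python sketch, using two unit-variance normal densities as a stand-in for the $f$ and $g$ of the graphic (our choice of example, not the paper's):

```python
from math import erf, sqrt

def norm_cdf(x, mu):
    return 0.5 * (1 + erf((x - mu) / sqrt(2)))

def bin_probs(mu, edges):
    # Probabilities of the intervals cut by `edges`, plus the two tails.
    cuts = [float("-inf")] + edges + [float("inf")]
    return [norm_cdf(b, mu) - norm_cdf(a, mu) for a, b in zip(cuts, cuts[1:])]

edges = [-2, -1, 0, 1, 2]
p, q = bin_probs(0.0, edges), bin_probs(0.5, edges)
print(sum(pi < qi for pi, qi in zip(p, q)),
      sum(pi > qi for pi, qi in zip(p, q)))
# Both counts exceed 1: neither binned distribution nominally dominates
# the other, so the system of all splits is contextual.
```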
#### Data accessibility. {#data-accessibility. .unnumbered}
See Remark \[rem: The-proofs-of\].
#### Competing interests. {#competing-interests. .unnumbered}
We have no financial or non-financial competing interests.
#### Authors’ contributions. {#authors-contributions. .unnumbered}
All authors significantly contributed to the development of the theory and drafting of the paper.
#### Acknowledgments. {#acknowledgments. .unnumbered}
We have greatly benefited from discussions with Matt Jones, Samson Abramsky, Rui Soares Barbosa, and Pawel Kurzynski.
#### Funding statement. {#funding-statement. .unnumbered}
This research has been supported by AFOSR grant FA9550-14-1-0318.
[10]{} Abramsky S, Brandenburger A 2011 The sheaf-theoretic structure of non-locality and contextuality. New J. Phys. 13, 113036-113075.
Abramsky S, Barbosa RS, Kishida K, Lal R, Mansfield, S 2015 Contextuality, cohomology and paradox. Comp. Sci. Log. 2015: 211-228.
Asano M, Basieva I, Khrennikov A, Ohya M, Tanaka Y, Yamato I 2015 Quantum information biology: from information interpretation of quantum mechanics to applications in molecular biology and cognitive psychology. Found. Phys, 45, 1362-1378.
Bacciagaluppi G 2015 Leggett-Garg inequalities, pilot waves and contextuality. Int. J. Quant. Found. 1, 1-17.
Busemeyer JR, Bruza PD. 2012 Quantum Models of Cognition and Decision. Cambridge, UK: Cambridge University Press.
Cabello A 2013 Simple explanation of the quantum violation of a fundamental inequality. Phys. Rev. Lett., 110, 060402.
Cervantes VH, Dzhafarov EN 2017 Advanced analysis of quantum contextuality in a psychophysical double-detection experiment. To be published in Journal of Mathematical Psychology.
Dzhafarov EN, Kujala JV. 2012 Selectivity in probabilistic causality: Where psychology runs into quantum physics. J. Math. Psychol. 56, 54-63.
Dzhafarov EN, Kujala, JV. 2014 Contextuality is about identity of random variables. Phys. Scripta T163, 014009.
Dzhafarov EN, Kujala JV 2016 Probability, random variables, and selectivity. In W. Batchelder, H. Colonius, E.N. Dzhafarov, J. Myung (Eds) pp. 85-150. The New Handbook of Mathematical Psychology. Cambridge University Press.
Dzhafarov EN, Kujala JV 2016 Context-content systems of random variables: The contextuality-by-default theory. J. Math. Psych. 74, 11-33.
Dzhafarov EN, Kujala JV 2017 Probabilistic foundations of contextuality. To be published in Fort. Phys. - Prog. Phys.
Dzhafarov EN, Kujala JV 2017 Contextuality-by-Default 2.0: Systems with binary random variables. In J.A. de Barros, B. Coecke, E. Pothos (Eds.) Lect. Not. Comp. Sci. 10106, 16-32.
Dzhafarov EN, Kujala, JV, Cervantes VH 2016 Contextuality-by-Default: A brief overview of ideas, concepts, and terminology. In H. Atmanspacher, T. Filk, E. Pothos (Eds.) Lect. Not. Comp. Sci. 9535, 12-23.
Dzhafarov EN, Kujala JV, Cervantes VH, Zhang R, Jones M 2016 On contextuality in behavioral data. Phil. Trans. Roy. Soc. A 374: 20150234.
Dzhafarov EN, Kujala JV, Larsson J-Å 2015 Contextuality in three types of quantum-mechanical systems. Found. Phys. 45, 762-782.
Dzhafarov EN, Zhang R, Kujala JV 2016 Is there contextuality in behavioral and social systems? Phil. Trans. Roy. Soc. A 374, 20150099.
Fine A. 1982 Joint distributions, quantum correlations, and commuting observables. J. Math. Phys. 23, 1306-1310. (doi:10.1063/1.525514)
Haven E, Khrennikov A. 2012 Quantum Social Science. Cambridge, UK: Cambridge University Press.
Khrennikov A. 2005 The principle of supplementarity: A contextual probabilistic viewpoint to complementarity, the interference of probabilities, and the incompatibility of variables in quantum mechanics. Found. Phys., 35, 1655 - 1693.
Khrennikov A 2009 Contextual approach to quantum formalism. Springer: Berlin.
Khrennikov A 2010 Ubiquitous Quantum Structure: From Psychology to Finance. Springer: Berlin.
Khrennikova P, Haven E 2016 Instability of political preferences and the role of mass media: a representation in quantum framework. Phil. Trans. Roy. Soc. A 374, 20150106.
Kujala JV, Dzhafarov EN 2015 Probabilistic Contextuality in EPR/Bohm-type systems with signaling allowed. In E.N. Dzhafarov, S. Jordan, R. Zhang, V.H. Cervantes (Eds.) Contextuality from Quantum Physics to Psychology, pp. 287-308. New Jersey: World Scientific.
Kujala JV, Dzhafarov EN 2016 Proof of a conjecture on contextuality in cyclic systems with binary variables. Found. Phys. 46, 282-299.
Kujala JV, Dzhafarov EN, Larsson J-Å 2015 Necessary and sufficient conditions for maximal noncontextuality in a broad class of quantum mechanical systems. Phys. Rev. Lett. 115, 150401.
Kurzynski P, Ramanathan R, Kaszlikowski D 2012 Entropic test of quantum contextuality. Phys. Rev. Lett. 109, 020404.
Moore DW. 2002 Measuring new types of question-order effects. Public Opin. Quart. 66, 80-91.
Suppes P, Zanotti M. 1981 When are probabilistic explanations possible? Synthese 48, 191-199.
Wang Z, Busemeyer JR. 2013 A quantum question order model supported by empirical tests of an a priori and precise prediction. Top. Cogn. Sci. 5, 689-710.
Supplementary Text to “Contextuality in Canonical Systems of Random Variables” by Ehtibar N. Dzhafarov, Víctor H. Cervantes, and Janne V. Kujala (Phil. Trans. Roy. Soc. A xxx, 10.1098/rsta.2016.0389)
=======================================================================================================================================================================================================
\[thm: rank\]The rank of the system of linear equations (\[eq: bunch 1 of 1-2\])-(\[eq: 1 of 1-2\])-(\[eq: 2 of 1-2\]) is $2k-1+\binom{k}{2}$.
This system of linear equations can be written as $$\mathbf{M}\times\mathbf{X}=\mathbf{P},$$ where $$\mathbf{P}^{T}=\left(\begin{array}{l}
\overset{k}{\overbrace{p_{1},\ldots,p_{k}}},\overset{k}{\overbrace{q_{1},\ldots,q_{k}}},\overset{k}{\overbrace{\min\left(p_{1},q_{1}\right),\ldots,\min\left(p_{k},q_{k}\right)}},\\
\overset{\binom{k}{2}}{\overbrace{\min\left(p_{1}+p_{2},q_{1}+q_{2}\right),\ldots,\min\left(p_{k-1}+p_{k},q_{k-1}+q_{k}\right)}}
\end{array}\right),$$ $$\mathbf{X}^{T}=\left\{ x_{ij}:i,j\in\left\{ 1,\ldots,k\right\} \right\} ,$$ and $\mathbf{M}$ is a Boolean matrix. The $\left(k+k+k+\binom{k}{2}\right)$ rows of matrix **$\mathbf{M}$** correspond to the elements of $\mathbf{P}$ and can be labeled as $$\left(\overset{k}{\overbrace{\mathbf{r}_{1\cdot},\ldots,\mathbf{r}{}_{k\cdot}}},\overset{k}{\overbrace{\mathbf{r}_{\cdot1},\ldots,\mathbf{r}_{\cdot k}}},\overset{k}{\overbrace{\mathbf{r}_{11},\ldots,\mathbf{r}_{kk}}},\overset{\binom{k}{2}}{\overbrace{\mathbf{r}_{12},\ldots,\mathbf{r}_{k-1,k}}}\right),$$ whereas the $k^{2}$ columns of $\mathbf{M}$ correspond to the elements of $\mathbf{X}$ and can be labeled as $$\left\{ \mathbf{c}_{ij}:i,j\in\left\{ 1,\ldots,k\right\} \right\} .$$ Thus, if $k=4$, the matrix $\mathbf{M}$ is $$\begin{array}{ccccccccccccccccc}
\begin{array}{cc}
& \mathbf{c}\\
\mathbf{r}
\end{array} & 11 & 12 & 13 & 14 & 21 & 22 & 23 & 24 & 31 & 32 & 33 & 34 & 41 & 42 & 43 & 44\\
1\cdot & 1 & 1 & 1 & 1\\
2\cdot & & & & & 1 & 1 & 1 & 1\\
3\cdot & & & & & & & & & 1 & 1 & 1 & 1\\
4\cdot & & & & & & & & & & & & & 1 & 1 & 1 & 1\\
\cdot1 & 1 & & & & 1 & & & & 1 & & & & 1\\
\cdot2 & & 1 & & & & 1 & & & & 1 & & & & 1\\
\cdot3 & & & 1 & & & & 1 & & & & 1 & & & & 1\\
\cdot4 & & & & 1 & & & & 1 & & & & 1 & & & & 1\\
11 & 1\\
22 & & & & & & 1\\
33 & & & & & & & & & & & 1\\
44 & & & & & & & & & & & & & & & & 1\\
12 & 1 & 1 & & & 1 & 1\\
13 & 1 & & 1 & & & & & & 1 & & 1\\
14 & 1 & & & 1 & & & & & & & & & 1 & & & 1\\
23 & & & & & & 1 & 1 & & & 1 & 1\\
24 & & & & & & 1 & & 1 & & & & & & 1 & & 1\\
34 & & & & & & & & & & & 1 & 1 & & & 1 & 1
\end{array}$$ We will continue to illustrate the steps of the proof using this matrix. We begin by adding to $\mathbf{M}$ the row $\mathbf{r}_{all}$ with all cells equal to 1, and denote the new matrix $\mathbf{M'}$. $$\begin{array}{ccccccccccccccccc}
all & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{array}$$ This does not change the rank of the matrix since $\mathbf{r}_{all}$ is the sum of all $\mathbf{r}_{\cdot i}$. Then we observe that the rows $\mathbf{r}_{k\cdot}$, $\mathbf{r}_{\cdot k}$, and all $\mathbf{r}_{ik}$ with $i<k$ can be deleted as they are linear combinations of the remaining rows of $\mathbf{M'}$. Indeed, it can be checked directly that $$\mathbf{r}_{k\cdot}=\mathbf{r}_{all}-\sum_{i=1}^{k-1}\mathbf{r}_{i\cdot},$$ $$\mathbf{r}_{\cdot k}=\mathbf{r}_{all}-\sum_{i=1}^{k-1}\mathbf{r}_{\cdot i},$$ $$\left(\mathbf{r}_{ik}-\mathbf{r}_{ii}-\mathbf{r}_{kk}\right)=\left(\mathbf{r}_{i\cdot}-\mathbf{r}_{ii}\right)+\left(\mathbf{r}_{\cdot i}-\mathbf{r}_{ii}\right)-\sum_{l<i}\left(\mathbf{r}_{li}-\mathbf{r}_{ll}-\mathbf{r}_{ii}\right)-\sum_{l>i}^{l<k}\left(\mathbf{r}_{il}-\mathbf{r}_{ii}-\mathbf{r}_{ll}\right),$$ for all $i<k$. Moreover, one can also delete $\mathbf{r}_{kk}$, because $$\sum_{i<j<k}\left(\mathbf{r}_{ij}-\mathbf{r}_{ii}-\mathbf{r}_{jj}\right)+\sum_{i<k}\left(\mathbf{r}_{ik}-\mathbf{r}_{ii}-\mathbf{r}_{kk}\right)+\sum_{i<k}\mathbf{r}_{ii}+\mathbf{r}_{kk}=\mathbf{r}_{all}.$$ Let the resulting matrix be $\mathbf{M''}$: $$\begin{array}{ccccccccccccccccc}
1\cdot & 1 & 1 & 1 & 1\\
2\cdot & & & & & 1 & 1 & 1 & 1\\
3\cdot & & & & & & & & & 1 & 1 & 1 & 1\\
\cdot1 & 1 & & & & 1 & & & & 1 & & & & 1\\
\cdot2 & & 1 & & & & 1 & & & & 1 & & & & 1\\
\cdot3 & & & 1 & & & & 1 & & & & 1 & & & & 1\\
11 & 1\\
22 & & & & & & 1\\
33 & & & & & & & & & & & 1\\
12 & 1 & 1 & & & 1 & 1\\
13 & 1 & & 1 & & & & & & 1 & & 1\\
23 & & & & & & 1 & 1 & & & 1 & 1\\
all & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{array}$$
This matrix contains $$\overset{initial}{3k+\binom{k}{2}}-\underset{\mathbf{r}_{k\cdot},\mathbf{r}_{\cdot k},\mathbf{r}_{kk}}{\underbrace{3}}-\overset{all\:\mathbf{r}_{ik},i<k}{\overbrace{\left(k-1\right)}}+\underset{\mathbf{r}_{all}}{\underbrace{1}}=2k-1+\binom{k}{2}$$ rows. We prove that this matrix is of full row rank. Consider the equation $$\sum_{all\:\mathbf{r}\textnormal{ in }\mathbf{M}''}\alpha_{\mathbf{r}}\mathbf{r}=0.$$ We use the following principle: if a row $\mathbf{r}$ intersects a column whose only nonzero entry is in the row $\mathbf{r}$, then $\alpha_{\mathbf{r}}=0$, and we can delete the row $\mathbf{r}$ from the matrix, decreasing the row rank of the matrix by 1. The following statements can be directly verified.
$\mathbf{r}_{all}$ can be deleted because column $\mathbf{c}_{kk}$ has its only 1 in $\mathbf{r}_{all}$.

Then each of $\mathbf{r}_{\cdot i}$ can be deleted because the column $\mathbf{c}_{ki}$ has its only 1 in $\mathbf{r}_{\cdot i}$ ($i=1,\ldots,k-1$).

Then each of $\mathbf{r}_{i\cdot}$ can be deleted because the column $\mathbf{c}_{ik}$ has its only 1 in $\mathbf{r}_{i\cdot}$ ($i=1,\ldots,k-1$).

Then each of $\mathbf{r}_{ij}$ can be deleted because the column $\mathbf{c}_{ji}$ has its only 1 in $\mathbf{r}_{ij}$ ($i,j\in\left\{ 1,\ldots,k-1\right\} ,i<j$).
This leaves only $\mathbf{r}_{11},\ldots,\mathbf{r}_{\left(k-1\right)\left(k-1\right)}$ that are obviously linearly independent.
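As a sanity check on this computation, one can build $\mathbf{M}$ for small $k$ and row-reduce it over the rationals. A brute-force Python sketch (ours; the row construction follows the labeling above):

```python
from fractions import Fraction as F
from itertools import combinations
from math import comb

def build_M(k):
    cols = [(i, j) for i in range(k) for j in range(k)]   # columns c_ij
    rows = []
    rows += [[1 if c[0] == i else 0 for c in cols] for i in range(k)]    # r_{i.}
    rows += [[1 if c[1] == j else 0 for c in cols] for j in range(k)]    # r_{.j}
    rows += [[1 if c == (i, i) else 0 for c in cols] for i in range(k)]  # r_{ii}
    rows += [[1 if c[0] in (i, j) and c[1] in (i, j) else 0 for c in cols]
             for i, j in combinations(range(k), 2)]                      # r_{ij}, i<j
    return rows

def rank(M):
    # Plain Gaussian elimination in exact rational arithmetic.
    M = [[F(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

for k in (2, 3, 4, 5):
    print(k, rank(build_M(k)), 2 * k - 1 + comb(k, 2))  # the two numbers agree
```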
------------------------------------------------------------------------
In a maximally-connected coupling $S$ of ${\mathcal{D}}$ with $k>5$, the distributions of the 1-splits and 2-splits uniquely determine the probabilities of all higher-order splits. Specifically, for any $2<m\leq k/2$, and any $W=\left\{ i_{1},\ldots,i_{m}\right\} \subset\left\{ 1,\ldots,k\right\} $, the probability that the corresponding $m$-split equals 1 is $$\begin{array}{l}
\min\left(p_{i_{1}}+p_{i_{2}}+\ldots+p_{i_{m}},q_{i_{1}}+q_{i_{2}}+\ldots+q_{i_{m}}\right)=\sum_{j=1}^{m}\min\left(p_{i_{j}},q_{i_{j}}\right)\\
+\sum_{j=1}^{m-1}\sum_{j'=j+1}^{m}\left[\min\left(p_{i_{j}}+p_{i_{j'}},q_{i_{j}}+q_{i_{j'}}\right)-\min\left(p_{i_{j}},q_{i_{j}}\right)-\min\left(p_{i_{j'}},q_{i_{j'}}\right)\right].
\end{array}\label{eq: relation-1}$$
From (\[eq: 1 of 1-2\]) and (\[eq: 2 of 1-2\]), $$\begin{array}{cccc}
r_{12}+r_{21} & = & \min\left(p_{1}+p_{2},q_{1}+q_{2}\right)-\min\left(p_{1},q_{1}\right)-\min\left(p_{2},q_{2}\right)\\
\vdots & \vdots & \vdots\\
r_{ij}+r_{ji} & = & \min\left(p_{i}+p_{j},q_{i}+q_{j}\right)-\min\left(p_{i},q_{i}\right)-\min\left(p_{j},q_{j}\right) & (i<j).\\
\vdots & \vdots & \vdots\\
r_{\left(k-1\right)k}+r_{k\left(k-1\right)} & = & \min\left(p_{k-1}+p_{k},q_{k-1}+q_{k}\right)-\min\left(p_{k-1},q_{k-1}\right)-\min\left(p_{k},q_{k}\right)
\end{array}$$ Consider an $m$-split with $2<m\leq k/2$, and assume without loss of generality that $W=\left(1,\ldots,m\right)$. We have $$\sum_{i=1}^{m}\sum_{j=1}^{m}r_{ij}=\min\left(p_{1}+\ldots+p_{m},q_{1}+\ldots+q_{m}\right).$$ The left-hand-side sum can be presented as $$\begin{array}{l}
\sum_{i=1}^{m}r_{ii}+\sum_{i=1}^{m-1}\sum_{j=i+1}^{m}\left(r_{ij}+r_{ji}\right)\\
\\
=\sum_{i=1}^{m}\min\left(p_{i},q_{i}\right)+\sum_{i=1}^{m-1}\sum_{j=i+1}^{m}\left[\min\left(p_{i}+p_{j},q_{i}+q_{j}\right)-\min\left(p_{i},q_{i}\right)-\min\left(p_{j},q_{j}\right)\right],
\end{array}$$ whence we get (\[eq: relation-1\]).
------------------------------------------------------------------------
\[exa: Relation may be violated\] If
$$\begin{array}{c|c|c|c|c|c|c}
R_{1}^{1}= & 1 & 2 & 3 & 4 & 5 & 6\\
\textnormal{prob. mass }p= & .6 & .1 & .1 & .2 & 0 & 0
\end{array}\quad
\begin{array}{c|c|c|c|c|c|c}
R_{1}^{2}= & 1 & 2 & 3 & 4 & 5 & 6\\
\textnormal{prob. mass }q= & .2 & .3 & .4 & .1 & 0 & 0
\end{array},$$
then $$\begin{array}{l}
\overset{.8}{\overbrace{\min\left(p_{1}+p_{2}+p_{3},q_{1}+q_{2}+q_{3}\right)}}\\
\\
\not=\left.\begin{array}{rll}
\min\left(p_{1},q_{1}\right) & & .2\\
+\min\left(p_{2},q_{2}\right) & & .1\\
+\min\left(p_{3},q_{3}\right) & & .1\\
+\min\left(p_{1}+p_{2},q_{1}+q_{2}\right)-\min\left(p_{1},q_{1}\right)-\min\left(p_{2},q_{2}\right) & & .5-.2-.1\\
+\min\left(p_{1}+p_{3},q_{1}+q_{3}\right)-\min\left(p_{1},q_{1}\right)-\min\left(p_{3},q_{3}\right) & & .6-.2-.1\\
+\min\left(p_{2}+p_{3},q_{2}+q_{3}\right)-\min\left(p_{2},q_{2}\right)-\min\left(p_{3},q_{3}\right) & & .2-.1-.1
\end{array}\right\} =.9\\
\\
\end{array}\hfill\square$$
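The violation can be confirmed mechanically, in exact arithmetic (a sketch of ours, not part of the example):

```python
from fractions import Fraction as F
from itertools import combinations

p = [F(6, 10), F(1, 10), F(1, 10), F(2, 10), F(0), F(0)]
q = [F(2, 10), F(3, 10), F(4, 10), F(1, 10), F(0), F(0)]
W = [0, 1, 2]   # zero-based indices of the 3-split {1,2,3}

lhs = min(sum(p[i] for i in W), sum(q[i] for i in W))
rhs = sum(min(p[i], q[i]) for i in W) + sum(
    min(p[i] + p[j], q[i] + q[j]) - min(p[i], q[i]) - min(p[j], q[j])
    for i, j in combinations(W, 2))
print(lhs, rhs)   # 4/5 9/10 -- the relation fails, so no such coupling exists
```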
------------------------------------------------------------------------
A maximally-connected coupling for a 1-2 system is unique if it exists. In this coupling, the only pairs of $ij$ in (\[eq: def of rij\]) that may have nonzero probabilities assigned to them are the diagonal states $\left\{ 11,22,\ldots,kk\right\} $ and either the states $\left\{ i1,i2,\ldots,ik\right\} $ for a single fixed $i$ or the states $\left\{ 1j,2j,\ldots,kj\right\} $ for a single fixed $j$ ($i,j=1,\ldots,k$).
(The matrices illustrating the proof are shown for $k>6$ but the theorem is valid for all $k>1$.) If the only nonzero entries in the matrix are in the main diagonal, the theorem is trivially true. Assume therefore that $r_{ij}>0$ for some $i\not=j$. Without loss of generality, we can assume that $r_{12}>0$ and $p_{1}+p_{2}\leq q_{1}+q_{2}$. Indeed, if some $r_{ij}>0$, we can always rename the values so that $i=1$ and $j=2$; and if $p_{1}+p_{2}>q_{1}+q_{2}$, then we can simply rename all $p$s into $q$s and vice versa. In the following we will use the expression “$r_{ij}$ is $p$-minimized” if $p_{i}+p_{j}\leq q_{i}+q_{j}$, and “$r_{ij}$ is $q$-minimized” if $p_{i}+p_{j}\geq q_{i}+q_{j}$ (in both cases, $i\not=j$).
We have (the empty cells are those whose value is to be determined later)
---------- ---------- ------------ ---------- ---------- ---------- ---------- ---------------------- ----------
$1$ $r_{11}$ $r_{12}>0$ $p_{1}$
$2$ $r_{21}$ $r_{22}$ $p_{2}$
$3$ $r_{33}$
$4$ $r_{44}$
$5$ $r_{55}$
$6$ $r_{66}$
$\vdots$ $\vdots\vdots\vdots$ $\vdots$
$q_{1}$ $q_{2}$ $\ldots$
---------- ---------- ------------ ---------- ---------- ---------- ---------- ---------------------- ----------
From (\[eq: 1 of 1-2\])-(\[eq: 2 of 1-2\]), $r_{11}+r_{12}+r_{21}+r_{22}=\min\left(p_{1}+p_{2},q_{1}+q_{2}\right)$, and since $r_{12}$ is $p$-minimized, $r_{11}+r_{12}+r_{21}+r_{22}=p_{1}+p_{2}$. This means
---------- --------------------------- --------------------------- ---------- ---------- ---------- ---------- ---------------------- -----------------------
$1$ $r_{11}$ $r_{12}>0$ $0$ $0$ $0$ $0$ **$\mathbf{0}$** $p_{1}=r_{11}+r_{12}$
$2$ $r_{21}$ $r_{22}$ $0$ $0$ $0$ $0$ **$\mathbf{0}$** $p_{2}=r_{21}+r_{22}$
$q_{1}\geq r_{11}+r_{21}$ $q_{2}\geq r_{12}+r_{22}$ $\ldots$
---------- --------------------------- --------------------------- ---------- ---------- ---------- ---------- ---------------------- -----------------------
.
We also should have
$$\begin{array}{c|c|c|c|c|c|c|c||c}
1 & r_{11} & r_{12}>0 & 0 & 0 & 0 & 0 & \mathbf{0} & p_{1}=r_{11}+r_{12}\\
2 & 0 & r_{22} & 0 & 0 & 0 & 0 & \mathbf{0} & p_{2}=r_{22}\\
3 & 0 & & r_{33} & & & & & \\
4 & 0 & & & r_{44} & & & & \\
5 & 0 & & & & r_{55} & & & \\
6 & 0 & & & & & r_{66} & & \\
\vdots & \mathbf{0} & & & & & & \vdots\vdots\vdots & \vdots\\
 & q_{1}=r_{11} & q_{2}\geq r_{12}+r_{22} & & & & & \ldots & 
\end{array}$$
because $r_{11}=\min\left\{ p_{1},q_{1}\right\} $ and $r_{11}<p_{1}$.
Generalizing, we have established the following rules:
(R1) If $r_{ij}>0$ and it is $p$-minimized, then all non-diagonal elements in the rows $i$ and $j$ are zero except for $r_{ij}$, and all non-diagonal elements in the column $i$ are zero.
(R2) (By symmetry, on exchanging $p$s and $q$s) If $r_{ij}>0$ and it is $q$-minimized, then all non-diagonal elements in the columns $i$ and $j$ are zero except for $r_{ij}$, and all non-diagonal elements in the row $j$ are zero.
Returning to our special arrangement of the rows and columns, let us prove now that all $r_{1j}$ with $j>2$ are $q$-minimized. Assume the contrary, and with no loss of generality, let $r_{15}=0$ be $p$-minimized. This would mean that $$r_{15}+r_{51}=p_{1}+p_{5}-r_{11}-r_{55}=r_{12}+p_{5}-r_{55}=0,$$ which could only be true if $r_{12}=0$, which it is not.
$$\begin{array}{c|c|c|c|c|c|c|c||c}
1 & r_{11} & r_{12}>0 & \underset{q-min}{0} & \underset{q-min}{0} & \underset{q-min}{0} & \underset{q-min}{0} & \underset{q-min}{\mathbf{0}} & p_{1}=r_{11}+r_{12}\\
2 & 0 & r_{22} & 0 & 0 & 0 & 0 & \mathbf{0} & p_{2}=r_{22}\\
3 & 0 & & r_{33} & & & & & \\
4 & 0 & & & r_{44} & & & & \\
5 & 0 & & & & r_{55} & & & p_{5}\\
6 & 0 & & & & & r_{66} & & \\
\vdots & \mathbf{0} & & & & & & \vdots\vdots\vdots & \vdots\\
 & q_{1}=r_{11} & q_{2}\geq r_{12}+r_{22} & & & & & \ldots & 
\end{array}$$
Generalizing, we have established two additional rules:
(R3) If $r_{ij}$ and $r_{ij'}$ are both $p$-minimized (for pairwise distinct $i,j,j'$), then they are both zero (because if one of them is not, say $r_{ij}>0$, then $r_{ij'}=0$ and it must be $q$-minimized).
(R4) (By symmetry, on exchanging $p$s and $q$s) If $r_{ij}$ and $r_{i'j}$ are both $q$-minimized (for pairwise distinct $i,i',j$), then they are both zero.
Returning to our special arrangement of the rows and columns, it follows that nowhere in the matrix can we have $r_{ij}>0$ ($i>2$) which is $q$-minimized. Indeed, if $j>2$, then this would have contradicted R4 (because the zeros in the first row are all $q$-minimized), and if $j=2$, it would have contradicted R2 (because $r_{12}>0$).
Let us prove now that if $j>2$ and $i>2$ and $i\not=j$, then there is no $r_{ij}>0$ that is $p$-minimized. Assume the contrary: $r_{ij}>0$ and $p$-minimized, and consider $r_{2i},r_{i2}$. With no loss of generality, let $\left(i,j\right)=\left(4,6\right)$. In accordance with R1, we fill in the 4th and the 6th rows with zeros, and we fill in the 4th column with zeros too:
$$\begin{array}{c|c|c|c|c|c|c|c||c}
1 & r_{11} & r_{12}>0 & 0 & 0 & 0 & 0 & \mathbf{0} & p_{1}=r_{11}+r_{12}\\
2 & 0 & r_{22} & 0 & 0 & 0 & 0 & \mathbf{0} & p_{2}=r_{22}\\
3 & 0 & & r_{33} & 0 & & & & \\
4 & 0 & 0 & 0 & r_{44} & 0 & r_{46}>0 & \mathbf{0} & p_{4}=r_{44}+r_{46}\\
5 & 0 & & & 0 & r_{55} & & & \\
6 & 0 & 0 & 0 & r_{64}=0 & 0 & r_{66} & \mathbf{0} & p_{6}=r_{66}\\
\vdots & \mathbf{0} & & & \mathbf{0} & & & \vdots\vdots\vdots & \vdots\\
 & q_{1}=r_{11} & q_{2}\geq r_{12}+r_{22} & & q_{4}=r_{44} & & q_{6}\geq r_{46}+r_{66} & \ldots & 
\end{array}$$
Then $r_{24},r_{42}$ are both zero, whence $\min\left(p_{2}+p_{4},q_{2}+q_{4}\right)$ must equal $r_{22}+r_{44}$ for the coupling to be maximal. But $$\min\left(p_{2}+p_{4},q_{2}+q_{4}\right)=\min\left(r_{22}+r_{44}+r_{46},r_{12}+r_{22}+r_{44}+x\right)>r_{22}+r_{44},$$ where $x\geq0$, since both $r_{12}$ and $r_{46}$ are positive, a contradiction.
We come to the conclusion that the only positive non-diagonal elements in the matrix can be in the column $2$ (and they are all $p$-minimized).
$$\begin{array}{c|c|c|c|c|c|c|c||c}
1 & r_{11} & r_{12}>0 & 0 & 0 & 0 & 0 & \mathbf{0} & p_{1}=r_{11}+r_{12}\\
2 & 0 & r_{22} & 0 & 0 & 0 & 0 & \mathbf{0} & p_{2}=r_{22}\\
3 & 0 & r_{32}\geq0 & r_{33} & 0 & 0 & 0 & \mathbf{0} & p_{3}=r_{32}+r_{33}\\
4 & 0 & r_{42}\geq0 & 0 & r_{44} & 0 & 0 & \mathbf{0} & p_{4}=r_{42}+r_{44}\\
5 & 0 & r_{52}\geq0 & 0 & 0 & r_{55} & 0 & \mathbf{0} & p_{5}=r_{52}+r_{55}\\
6 & 0 & r_{62}\geq0 & 0 & 0 & 0 & r_{66} & \mathbf{0} & p_{6}=r_{62}+r_{66}\\
\vdots & \mathbf{0} & \vdots & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \vdots\vdots\vdots & \vdots\\
 & q_{1}=r_{11} & q_{2}\geq r_{12}+r_{22} & q_{3}=r_{33} & q_{4}=r_{44} & q_{5}=r_{55} & q_{6}=r_{66} & \ldots & 
\end{array}$$
Generalizing, let $r_{ij}>0$ and $i\not=j$. Then, if $r_{ij}$ is $p$-minimized, all non-diagonal elements of the matrix outside column $j$ are zero (and the non-diagonal elements in the $j$th column are $p$-minimized); if $r_{ij}$ is $q$-minimized, then all non-diagonal elements of the matrix outside row $i$ are zero (and the non-diagonal elements in the $i$th row are $q$-minimized).
It is easy to check that such a construction is always internally consistent.
------------------------------------------------------------------------
The 1-2 system for the original rvs $R_{1}^{1},R_{1}^{2}$ has a maximally-connected coupling if and only if either $p_{i}>q_{i}$ for no more than one $i$ (this single possible $i$ being the single fixed $i$ in the formulation of the theorem), or $p_{j}<q_{j}$ for no more than one $j$ (this single possible $j$ being the single fixed $j$ in the formulation of the theorem), $i,j\in\left\{ 1,\ldots,k\right\} $.
The “only if” part is obvious. To demonstrate the “if” part, consider (without loss of generality) the arrangement
$$\begin{array}{c|c|c|c|c|c|c|c||c}
1 & & & & & & & \ldots & p_{1}\geq q_{1}\\
2 & & & & & & & \ldots & p_{2}\\
3 & & & & & & & \ldots & p_{3}\geq q_{3}\\
4 & & & & & & & \ldots & p_{4}\geq q_{4}\\
5 & & & & & & & \ldots & p_{5}\geq q_{5}\\
6 & & & & & & & \ldots & p_{6}\geq q_{6}\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\vdots\vdots & \vdots\\
 & q_{1} & q_{2}\geq p_{2} & q_{3} & q_{4} & q_{5} & q_{6} & \ldots & 
\end{array}$$
and fill it in as
$$\begin{array}{c|c|c|c|c|c|c|c||c}
1 & q_{1} & p_{1}-q_{1} & 0 & 0 & 0 & 0 & \mathbf{0} & p_{1}\geq q_{1}\\
2 & 0 & p_{2} & 0 & 0 & 0 & 0 & \mathbf{0} & p_{2}\\
3 & 0 & p_{3}-q_{3} & q_{3} & 0 & 0 & 0 & \mathbf{0} & p_{3}\geq q_{3}\\
4 & 0 & p_{4}-q_{4} & 0 & q_{4} & 0 & 0 & \mathbf{0} & p_{4}\geq q_{4}\\
5 & 0 & p_{5}-q_{5} & 0 & 0 & q_{5} & 0 & \mathbf{0} & p_{5}\geq q_{5}\\
6 & 0 & p_{6}-q_{6} & 0 & 0 & 0 & q_{6} & \mathbf{0} & p_{6}\geq q_{6}\\
\vdots & \mathbf{0} & \vdots & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \vdots\vdots\vdots & \vdots\\
 & q_{1} & q_{2}\geq p_{2} & q_{3} & q_{4} & q_{5} & q_{6} & \ldots & 
\end{array}$$
with the empty cells filled in with zeros. Check that (a) all rows sum to the marginals; (b) the second column sums to $$\sum_{i=1}^{k}p_{i}-\left(\sum_{i=1}^{k}q_{i}-q_{2}\right)=q_{2};$$ (c) the rest of the columns sum to the marginals; (d) all $r_{ii}$ are $\min\left(p_{i},q_{i}\right)$; and (e) for all pairs $r_{ij}$ ($i\not=j$) the sums $r_{ii}+r_{ij}+r_{ji}+r_{jj}$ equal $\min\left(p_{i}+p_{j},q_{i}+q_{j}\right)$. The latter is proved by considering first all $j\not=2$, where it is obvious, and then $j=2$ where the computation is, for $i\not=2$, $$r_{ii}+r_{i2}+r_{2i}+r_{22}=q_{i}+\left(p_{i}-q_{i}\right)+0+p_{2}=p_{i}+p_{2},$$ as it should be because the values in the second column are to be $p$-minimized.
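This construction is straightforward to verify programmatically. A Python sketch in exact arithmetic, assuming, as in the corollary, that $p_{i}\geq q_{i}$ for all values except value 2 (which is index 1 in the zero-based code; the example distributions are ours):

```python
from fractions import Fraction as F
from itertools import combinations

def coupling(p, q, j):
    # r[i][i] = q_i for i != j, r[j][j] = p_j, r[i][j] = p_i - q_i;
    # all remaining cells are zero, exactly as in the table above.
    k = len(p)
    r = [[F(0)] * k for _ in range(k)]
    for i in range(k):
        if i == j:
            r[i][i] = p[i]
        else:
            r[i][i] = q[i]
            r[i][j] = p[i] - q[i]
    return r

p = [F(3, 10), F(1, 10), F(2, 10), F(4, 10)]
q = [F(2, 10), F(4, 10), F(2, 10), F(2, 10)]
r = coupling(p, q, j=1)
k = len(p)
assert all(sum(r[i]) == p[i] for i in range(k))                  # row marginals
assert all(sum(row[i] for row in r) == q[i] for i in range(k))   # column marginals
assert all(r[i][i] == min(p[i], q[i]) for i in range(k))         # 1-splits maximal
assert all(r[i][i] + r[i][j] + r[j][i] + r[j][j] == min(p[i] + p[j], q[i] + q[j])
           for i, j in combinations(range(k), 2))                # 2-splits maximal
print("all checks pass")
```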
------------------------------------------------------------------------
The system ${\mathcal{D}}$ is noncontextual if and only if its 1-2 subsystem is noncontextual, i.e., if and only if one of $R_{1}^{1}$ and $R_{1}^{2}$ nominally dominates the other.
The “only if” part is Theorem \[thm: 1-2 =000023 constraints\]. All we need to prove the “if” part is to check that the relation (\[eq: relation-1\]) holds. Assume the arrangement is as in the previous corollary. Consider first any set $i_{1},\ldots,i_{m}$ that does not include 2: $$\min\left(p_{i_{1}}+p_{i_{2}}+\ldots+p_{i_{m}},q_{i_{1}}+q_{i_{2}}+\ldots+q_{i_{m}}\right)=q_{i_{1}}+q_{i_{2}}+\ldots+q_{i_{m}},$$ $$\sum_{j=1}^{m}\min\left(p_{i_{j}},q_{i_{j}}\right)=q_{i_{1}}+q_{i_{2}}+\ldots+q_{i_{m}},$$ $$\min\left(p_{i_{j}}+p_{i_{j'}},q_{i_{j}}+q_{i_{j'}}\right)-\min\left(p_{i_{j}},q_{i_{j}}\right)-\min\left(p_{i_{j'}},q_{i_{j'}}\right)=0.$$ So, (\[eq: relation-1\]) holds. If one of the indices (let it be $i_{1}$) is 2, then $$q_{2}+q_{i_{2}}+\ldots+q_{i_{m}}=\left(p_{2}+\sum_{x\not=2}\left(p_{x}-q_{x}\right)\right)+q_{i_{2}}+\ldots+q_{i_{m}}\geq p_{2}+p_{i_{2}}+\ldots+p_{i_{m}},$$ so $$\min\left(p_{2}+p_{i_{2}}+\ldots+p_{i_{m}},q_{2}+q_{i_{2}}+\ldots+q_{i_{m}}\right)=p_{2}+p_{i_{2}}+\ldots+p_{i_{m}}.$$ We also have
$$\sum_{j=1}^{m}\min\left(p_{i_{j}},q_{i_{j}}\right)=p_{2}+q_{i_{2}}+\ldots+q_{i_{m}},$$ and for any $j\not=2,j'\not=2$, $$\min\left(p_{i_{j}}+p_{i_{j'}},q_{i_{j}}+q_{i_{j'}}\right)-\min\left(p_{i_{j}},q_{i_{j}}\right)-\min\left(p_{i_{j'}},q_{i_{j'}}\right)=0,$$ $$\min\left(p_{2}+p_{i_{j}},q_{2}+q_{i_{j}}\right)-\min\left(p_{2},q_{2}\right)-\min\left(p_{i_{j}},q_{i_{j}}\right)=p_{i_{j}}-q_{i_{j}}.$$ Since index $i_{1}=2$ is paired with each of $i_{2},\ldots,i_{m}$ only once, the right-hand side in (\[eq: relation-1\]) is $$p_{2}+q_{i_{2}}+\left(p_{i_{2}}-q_{i_{2}}\right)+\ldots+q_{i_{m}}+\left(p_{i_{m}}-q_{i_{m}}\right)=p_{2}+p_{i_{2}}+\ldots+p_{i_{m}}.$$
| |
Listen up: It's National Fire Prevention Week
The Baker City Fire Department, a member of the Baker County Interagency Fire Prevention Team, along with fire departments and fire districts across the county, is teaming up with the National Fire Protection Association (NFPA) to promote this year’s Fire Prevention Week campaign, “Learn the Sounds of Fire Safety.”
National Fire Prevention Week is Oct. 3-9.
It’s important to learn the different sounds of smoke alarms and carbon monoxide alarms.
“When an alarm makes noise — a beeping sound or a chirping sound — you must take action,” said Lt. Ben Decker of the Baker City Fire Department. “Make sure everyone in the home understands the sounds of the alarms and knows how to respond. To learn the sounds of your specific smoke and carbon monoxide alarms, check the manufacturer’s instructions on testing.”
The Baker City Fire Department and Baker County Interagency Fire Prevention Team share these tips:
• A continuous set of three loud beeps — beep, beep, beep — means smoke or fire. Get out and call 911, and remain outside.
• A single chirp every 30 or 60 seconds means the battery is weak and must be changed.
• All smoke alarms must be replaced after 10 years.
• Chirping that continues after the battery has been replaced means the alarm is at the end of its life and should be replaced.
• Make sure your smoke and CO alarms meet the needs of all your family members, including those with sensory or physical disabilities.
To find out more about National Fire Prevention Week history, programs and activities, go to www.fpw.org, or contact your local fire agency.
| |
Q:
On proving the convergence of $1/n^2\sum_{1\le k\le n}\varphi(k)$
Let $$\Phi_n=\frac{1}{n^2}\sum_{k=1}^n\varphi(k).$$ How one can show that $\Phi_n$ is convergent sequence?
(Here, $\varphi$ denotes the Euler's totient function.)
And please, without any monster asymptotic estimations.
Thank you!
EDIT:
Can we actually prove the convergence without finding the limit?
A:
$\sum_{k=1}^n\varphi(k)$ counts the pairs of integers $a,b$ with $a \le b \le n$ that are relatively prime. So if we ignore the $a \le b$ condition, the thing we are taking the limit of is just the probability that two numbers $a, b$ taken at random from $\{1,\ldots,n\}$ are relatively prime. For large enough $n$, the probability that both numbers are divisible by a prime $p$ is very close to $1/p^2$. Now doing an inclusion-exclusion over primes we get: $P(a,b \ \text{have a common factor}) = \sum_{p}p^{-2} - \sum_{p_1,p_2}p_1^{-2}p_2^{-2} + \sum_{p_1,p_2,p_3}p_1^{-2}p_2^{-2}p_3^{-2}-\ldots$. So $P(a,b \text{ are relatively prime}) = (1-2^{-2})(1-3^{-2})\cdots = 1/\zeta(2) = 6/\pi^2$. Halving this to take into account the $a \le b$ condition gives the desired result, $3/\pi^2$. Yea, I'm being a bit fuzzy on the convergence details but this is definitely the idea.
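If you want to see the convergence numerically before believing the heuristic, here is a quick check with a standard totient sieve (Python):

```python
from math import pi

def phi_ratio(n):
    # phi[m] = Euler's totient of m, computed by a sieve.
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:                 # p is prime
            for m in range(p, n + 1, p):
                phi[m] -= phi[m] // p
    return sum(phi[1:]) / n ** 2

for n in (100, 10_000, 1_000_000):
    print(n, phi_ratio(n))
print("3/pi^2 =", 3 / pi ** 2)   # the ratios approach 0.30396...
```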
| |
The Koniambo project, a nickel mine in New Caledonia, off the east coast of Australia, will be constructed on site using tower systems to lift massive prefabricated modules that have been built in China.
The load-in will be carried out by Sarens using SPMTs. The modules will then travel in four shipments to a wharf near the mine site, from which they will be transported to site on SPMTs with more than 120 axles and erected with the help of a tower lift and skidding system.
Sarens will lift modules weighing 2,500 tonnes up to 60 m high and units weighing 2,200 tonnes to 100 m. First, a jacking system will be used to overcome the height of the main module lifting beams before the tower lifts take place.
This jacking operation will be followed by a SARtower pushing lift/jack process, to a maximum height of 67 m. This will be followed by a skidding operation. The main processing plant comprises a stick-built furnace flanked by a 2,450 tonne module measuring 20 x 35 x 31 m and a 2,700 tonne module measuring 20 x 35 x 35 m.
A further three modules will be stacked above the furnace, those being: 2,300 tonnes, 21 x 35 x 15 m, directly above the furnace; 2,500 tonnes, 28 x 30 x 25 m, in the middle of the three modules; and 2,200 tonnes, 28 x 30 x 25 m, the uppermost module, installed 67 m above the ground. | https://www.khl.com/news/sarens-takes-on-koniambo/50297.article |
There are a wide range of gemstones used in jewellery, with each having its own characteristic colour – or, in some cases, a range of colours. The origin of these colours has a chemical basis, and the precise colour can vary depending on the chemical composition of the gemstone. Interestingly, many minerals are actually colourless in their pure form, and it is the inclusion of impurities in their structure which leads to their colouration.
Generally speaking, we observe an object as coloured when it absorbs some wavelengths of visible light, but not others. Different colours of light have different wavelengths, so the exact wavelengths that are absorbed will affect the colour that the object appears. For example, an object that absorbs all wavelengths of visible light that pass through it, but does not absorb red light, will appear red.
Why does this absorption of light occur in the first place? This is dependent on the elements present in the structure of the gemstone. Some elements don’t lead to the absorption of visible light – for example, compounds containing metals from group 1 in the Periodic Table are commonly colourless. Conversely, the transition metals (the large group of metals in the centre of the Periodic Table) are capable of absorbing coloured light.
Transition metals have this capability because they have electrons in d orbitals. Orbitals are essentially regions of space around an atom in which electrons can be found; they can have different shapes and energy levels. The d orbitals in transition elements are partially filled, and this means the unpaired electrons therein are capable of absorbing visible light in order to promote the electrons to a higher energy level. When they do this, the wavelength of light they absorb is removed from the light completely. They later fall back down from this ‘excited state’, releasing the excess energy as heat.
Different transition metals are capable of absorbing different wavelengths of visible light, thus giving the wide range of colours seen in gemstones. The transition metals may be part of the chemical formula of the mineral, or they may be present in the mineral as impurities. Even small amounts of these transition metal impurities, where a transition metal atom sits in the place of another atom which would usually occupy that position in the structure, can lead to an intense colouration.
The origin of colour in gemstones is not always down to the presence of transition metals, however. The transfer of electrons between ions in a gemstone’s structure, as a result of the absorption of wavelengths of visible light, can also be responsible in some cases. In sapphires, this is the case, with the colour a result of charge transfer between iron 2+ ions and titanium 4+ ions. The absence of an ion in a specific location in the structure, or the presence of a foreign non-transition metal ion, can also lead to colouration, as can simple diffraction of light through the crystal’s structure.
There are also examples of variations of colour within the same gemstone. The prime example of this is alexandrite. Alexandrite appears green in daylight, but red in incandescent light. This is due to the fact that natural light is richer in green light, to which our eyes are more sensitive, so we perceive the gem as green. Incandescent light, on the other hand, is richer in red light, leading to more red light being reflected, and our eyes perceiving the gem as red.
If this has piqued your interest and you’d like to read into the subject in a little more detail, I’ve included some interesting links with more complex explanations of the source of colour in gemstones below. Many of the gemstones included in the chart can be found in a wide range of colours; for example, garnets, although commonly the well-known red colour, can also be found in many other varieties. There are also several other causes of colour, rather than the main causes detailed here, so it’s a very varied area of chemistry!
The graphic in this article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. See the site’s content usage guidelines. | https://www.compoundchem.com/2014/06/29/what-causes-the-colour-of-gemstones/ |
Mednafen is a popular game console emulator, especially for the Sega Genesis/Mega Drive. It contains many emulation engines, which allows us to play games from different old consoles, even the Sony PlayStation. Its companion GUI package, called Mednaffe, makes it easy to configure a lot of consoles at once. But, unfortunately, Mednaffe still doesn't give access to many settings, among them the choice of gamepad type (3-button, 4-button, 6-button).
By default, a 3-button gamepad is used, so how do we get a 6-button one?
A graphical way is described at the end of this article!
Enable 6-button gamepad in mednafen
First, open our mednafen config; on Linux it is located by default at:
~/.mednafen/mednafen.cfg
Then let’s open this file and find the following lines:
;Input device for Virtual Port 1
md.input.port1 gamepad
Here it says that a gamepad (3-button) is connected to port 1. To enable 6-button, we need to replace it with gamepad6:
;Input device for Virtual Port 1
md.input.port1 gamepad6
This will make 6-button gamepad on our port 1.
It is recommended to change the gamepad on port 2 to 6-button as well; otherwise I had bugs with gamepad control in some games.
All we need to do is edit this line:
;Input device for Virtual Port 2
md.input.port2 gamepad
Deal with it as with the first one, just change the type of gamepad to gamepad6.
After we save our changes, they will take effect immediately. So, our first gamepad and the second one in mednafen on Sega Genesis and Mega Drive will now become 6-button.
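For the lazy, the two edits can be scripted. A small Python one-off that should do it (back up your config first; it only touches the two lines shown above):

```python
import re
from pathlib import Path

cfg = Path.home() / ".mednafen" / "mednafen.cfg"
text = cfg.read_text()
# Turn "md.input.port1 gamepad" / "md.input.port2 gamepad" into gamepad6,
# leaving ports already set to gamepad6 (or anything else) untouched.
text = re.sub(r"^(md\.input\.port[12])[ \t]+gamepad[ \t]*$",
              r"\1 gamepad6", text, flags=re.MULTILINE)
cfg.write_text(text)
```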
How to configure the X, Y, Z control buttons, which are missing in the Mednaffe settings?
Well, this is really crazy. I couldn’t find an easy way. So, all buttons are stored in the following format, for example A-button:
;md, Virtual Port 1, 6-Button Gamepad: A
md.input.port1.gamepad6.a keyboard 0x0 89
This is A-button value (89) for the first 6-button controller (Port 1) in Sega Genesis\Mega Drive. We can find similar lines for all buttons in the config file.
We are interested in the last value. This is the code of a keyboard key. In my case, 89 means the 1 (End) key on the NumPad. So, the 2 (Down) key on the NumPad is 90 and 3 (PgDn) is 91. This is for my keyboard; for your keyboard the values may differ.
How do we find this value? We can go to the graphical interface called Mednaffe and customize the visible buttons there (A, B, C buttons). Then open the config file and see the changed values (A, B, C buttons). Then paste the temporary code (A, B, C) into the desired button (X, Y, Z).
So, we can assign X button with A-button value:
;md, Virtual Port 1, 6-Button Gamepad: X
md.input.port1.gamepad6.x keyboard 0x0 89
That means our X button will correspond to key 1 (End) on the keyboard numpad. After that, we can change the A-button value in Mednaffe (GUI) to another key.
Update! How to enable 6-button input and assign X, Y, Z buttons for Sega
I finally found how to do it without a headache. First, we need to switch the gamepad ports to 6-button mode.
Run any Sega game with mednafen or mednaffe.
When the game has started, press F1 and the mednafen help will appear:
As we can see, we can configure the ports. First we need to change the type of the port: we need to make it a 6-button gamepad.
From the help, this is done with the Ctrl + SHIFT + [Port number] shortcut. For example, to configure the type of port 1, use Ctrl + SHIFT + 1. Press it repeatedly until we see “6-button gamepad,” as in the image below:
Second, we need to assign the buttons. For example, to configure the first port, use the ALT+SHIFT+1 shortcut.
Close the help and press ALT+SHIFT+1:
Each button is asked about twice; the second question confirms the changes for the current button. So, if you misclicked, the third question (3) will help you (the questions are marked as (1), (2), (3)… in the image above).
BOISE — Bogus Basin announced that public transportation, i.e. bus service, is back for holiday and weekend skiers and snowboarders, according to a press release. Buses will run daily through Sunday, January 6, which marks the end of the winter holiday break. After that, the service will be available on weekends and holidays only for the remainder of the season.
Local commuters were disappointed to learn at the beginning of the season that bus service to Bogus Basin had been discontinued. Officials at the area have been working to find a new partner to shuttle guests to the mountain on weekends and holidays.
Bus tickets to Bogus Basin — both round trip and one way — will cost $10, cash only. Passengers will pay when they board the bus. The bus route has changed and will include three stops at new locations.
Although one trip up and down the mountain per day is planned for now, additional trips may be added if ridership warrants. For more information, visit bogusbasin.org or call 208-332-5330. | https://apnews.com/a81bd3eec4b242798ed693764aab3a80 |
The Sleepy Sloth Who Wanted to Fly
There was once a sloth named Mabel who lived in the jungle. Due to her chronic exhaustion, Mabel frequently daydreamed about being a bird and flying all over the world.
Mabel witnessed a mystical bird flying high above the jungle and became curious as to how it achieved such a feat. She sought counsel from the elderly tree.
The sloth woke up the old tree and asked, "Oh wise old tree, how can I soar like the wonderful bird?"
To be successful, the tree advised, she must have faith in herself and the strength to take risks.
Mabel then left on her adventure. A friendly squirrel she saw along the road offered to show her the way.
The sloth asked the squirrel, "Could you assist me in discovering a means of flight similar to that of the enchanted bird?"
Curious, the squirrel responded: "In a word, yes! I have an idea that might help you out."
The squirrel led her along an uncharted trail in the bush.
A cruel vulture they encountered soon after informed them they were useless at flying.
“You two are nothing but a couple of foolish animals,” the vulture said cruelly.
It seemed like there was no hope for Mabel and the squirrel.
To Mabel's relief, a kind firefly stepped in and reassured her that she, too, could soar through the air if she just had faith in herself.
A magical butterfly, after being guided there by the firefly, consented to aid Mabel.
“Will you help me soar like the mythical bird?” the sloth asked the butterfly.
Answering with a touch of magic, the butterfly said: "Yes! If you're confident in yourself, I can teach you to soar."
A magical dust that the butterfly sprinkled on Mabel gave her the ability to fly.
Mabel thanked the butterfly and the firefly, then dusted herself and took to the air.
With the wind in her hair and the sun on her back, Mabel soared to new heights. To be so unrestricted and joyful was an experience she had never before had.
Mabel boarded a plane and traveled the world, finally seeing all the stunning sights about which she had only fantasized.
Mabel eventually made her way back to the jungle, where she expressed her gratitude to the sage old tree. After then, Mabel was no longer Mabel the sloth; she was Mabel the daring aviator!
This story's takeaway is simple: If you have faith in yourself, you can do anything. | https://chooz.co/the-sleepy-sloth-who-wanted-to-fly/ |
Many social media sites let you categorize your posts by using hashtags. Hashtags consist of a pound sign (#) followed by a key word(s). These subject tags can be searched on sites such as Twitter, Facebook, Google+, Instagram, YouTube, Pinterest, Flickr, and Tumblr to find information about a given topic. For example, #LDS and #LatterDaySaints are common hashtags that people add to posts about the Church and #GeneralConference is a common hashtag for general conference.
Below is a list of popular hashtags used to connect and share gospel-related content.
You can include more than one hashtag in a post, but don’t get carried away. For example, including both #GeneralConference and #Christian would be appropriate when tweeting a quote from conference that referred to Jesus Christ. Using the hashtag #Christian may help it be seen by more than just the Latter-day Saint community.
Learn more about using hashtags on Twitter and Facebook.
See the hashtag recommendations on the Church’s Newsroom.
Feel free to download the image below and share it with your friends on social media. | https://lds365.com/lds-hashtags/ |
Manhattan, and Waverly, New York, 1920s-1942. Manoil produced hollow-cast toy soldiers, also known as dime store soldiers.
The American Manoil Manufacturing Company was a metal and plastic toy company. By 1940, Manoil figures were in such demand that the company had to move to a larger production facility on Providence Street, Waverly, NY. Production lasted until April 1, 1942. Unlike many other manufacturers, Manoil was not able to secure any defense contracts; production resumed after the war, though without the vigor or popularity that the company had experienced pre-war. Its prominence was from 1937-1941, when it produced hollow-cast toy soldiers (sometimes called dime store soldiers) along with toy airplanes and cars.
Maurice Manoil (1896-1974) and his brother Jack (1902-1955) produced a variety of items from 1927 until they began making toys in 1934.
After producing die-cast toy cars, Manoil began to produce toy soldiers in 1935. They were sculpted by Walter Baetz. | https://www.fabtintoys.com/manoil/ |
PROBLEM TO BE SOLVED: To provide a universal joint preventing damage during manufacturing and looseness during use.
SOLUTION: The center D of a spider 22 of the universal joint 4 is offset from both center positions G, H between end faces 27, 28 of two pairs of trunnions 251, 261; 252, 262. Thus, relatively-long long trunnions 251, 252 and relatively-short short trunnions 261, 262 are provided opposing to each other and sandwiching the center D of the spider 22. A radial gap of a bearing 32 supporting the short trunnions 261, 262 is set relatively small, and a radial gap of a bearing 31 supporting the long trunnions 251, 252 is set relatively large.
COPYRIGHT: (C)2009,JPO&INPIT | |
The First Personality Reading Based On
The Sacred Geometry Of Your Name Symbols.
All numbers that reduce to 6 are obtained from the following four forms:
• 15 = 1 + 5 = 6
• 24 = 2 + 4 = 6
• 33 = 3 + 3 = 6
• 60 = 6 + 0 = 6
The number 60 is the rarest of all, appearing in only 10% of cases.
The Hexagon, associated with the number 60, represents the "Emotional Body".
The Hexagon, whose length on each side is equal to the radius of the circle, is considered for this reason as the symbol of Perfection.
Perfect number for Pythagoras, the Hexagon represents Creation, the archetype of the Soul of the World. Sign of Harmony and complementarity of polarities.
You aspire to find happiness in love. Idealist and perfectionist, you attach great importance to your body and you blossom totally in the family and in the artistic fields.
This symbol is closely related to the Vesica Piscis, the archetype of Love. It represents its materialization in the Human World. The Word made flesh. Your exacerbated emotionality could push you to fall into sentimentality.
« Et Verbum caro factum est »
And the Word was made flesh,
and dwelt among us,
full of grace and truth.
John 1:14
"I wish a passionate love. I'm in search of my soul mate."
Your symbol is located in the Human World (on the second line, which represents the Self and the close relational circle), and in the attribute of the Body and its means of expression (in the third column, the one that evokes the relationship between the Self and society/universe).
Place in the matrix :
Location in the matrix: 6
Center row - Third column
60 = 6 + 0 = 6
Because it is a materialization of the number 6 (symbolic of Creation), it is the emblem of the incarnation of Life (and associated notions: maternity, home, creativity, ...). Because each side measures the length of the ray, the figure expresses the harmony of proportions: it is the image of beauty and art.
Several ways exist to express the qualities of the hexagon, but some are more promising than others, because they allow an alignment of energies favorable to the realization of one's vocation. Such is the case, in particular, of the "arrow" formed by the numbers 1, 6 and 7.
The "arrow 167" is that of the creator (1), revealed to the public (7) thanks to his creation (6). | https://geo-numerology.com/symbolism-of-the-number-60 |
Q:
Is it possible to get Nth Fibonacci number in sublinear time?
I was researching the topic of Fibonacci numbers and asymptotic complexity of generating them. Coming across a seemingly paradoxical conclusion, I've decided to check out if you agree with my explanation.
The naive algorithm runs in $O(n)$, if we ignore the cost of addition.
The Binet's formula or matrix exponentiation method should both theoretically run in $O(\lg(n))$ (since exponentiation of both matrices and real numbers takes $O(\lg(n))$ steps)
The problem arises when you analyze the size of the Nth Fibonacci number, noting that after the first few members of the sequence, the ratio between consecutive numbers is at least 1.5 (picked arbitrarily; I assume it can be easily proved by induction).
We can then bound the number to be at least as big as: $c_1*(1.5)^n$.
Its logarithm gives us the number of digits the Nth Fibonacci number has ($c_2*n$).
Am I right to assume that you can't print out (or calculate) a linear number of digits in sublinear time?
My explanation of this "paradox" is that I forgot to add multiplication costs to the 2nd algorithm (Binet/matrix), making its complexity $n\lg(n)$.
I've found that the naive algorithm (1st) runs better for very small inputs, and the 2nd algorithm runs better for bigger ones (python3).
Is my explanation of complexity correct, and should the naive algorithm get better running time at even larger inputs ($n>10^9$ or such)?
I do not consider this to be a duplicate question, since it adresses the problem of arbitary values and arbitary integer aritmetic.
A:
Let's assume that you could store such large Fibonacci numbers. In that case, the length of the $n^{th}$ Fibonacci number is $\lfloor n \log_{10}\phi+1\rfloor$. That is, the length of the $n^{th}$ Fibonacci number is $\Theta(n)$.
If you go by the naive method, then calculating the $n^{th}$ Fibonacci number would cost you $\Theta(n^2)$ time.
In comparison, matrix exponentiation will take $O(n^2\log n)$ time, assuming you use schoolbook long multiplication to multiply numbers. (A slightly more refined analysis shows that it is in fact $\Theta(n^2)$, by summing a geometric series.) But if instead we use Karatsuba multiplication, the running time to compute the $n$th Fibonacci number using matrix exponentiation becomes $O(n^{1.585}\log n)$. (Analogous to before, a slightly more refined analysis shows that the running time is in fact $\Theta(n^{1.585})$.) Thus, if you use sophisticated multiplication algorithms, matrix exponentiation would be better for large $n$ compared to the naive algorithm.
In short: which method is better depends on which multiplication algorithm is used.
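For completeness, here is a fast-doubling sketch in Python (equivalent to matrix exponentiation, one of several $O(\log n)$-multiplication schemes); its big-integer products are exactly where the cost analyzed above comes from, and CPython itself uses Karatsuba for large ints:

```python
def fib(n: int) -> int:
    """Fast doubling: O(log n) big-integer multiplications."""
    def fd(k):
        if k == 0:
            return (0, 1)                # (F(0), F(1))
        a, b = fd(k >> 1)                # (F(m), F(m+1)), m = k // 2
        c = a * (2 * b - a)              # F(2m)   = F(m) * (2*F(m+1) - F(m))
        d = a * a + b * b                # F(2m+1) = F(m)^2 + F(m+1)^2
        return (d, c + d) if k & 1 else (c, d)
    return fd(n)[0]

print(fib(10))                  # 55
print(fib(10**6).bit_length())  # 694241 bits: the size grows linearly in n
```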
| |
Q:
How to evaluate this infinite product: $\prod\limits_{n=1}^{\infty }{\left( 1-\frac{1}{{{2}^{n}}} \right)}$
How to evaluate this one
$$\prod\limits_{n=1}^{\infty }{\left( 1-\frac{1}{{{2}^{n}}} \right)}$$
A:
Hint(From Complex Analysis by Lars V. Ahlfors, $3$rd Edition, Page-$192$, Theorem-$5$):The infinite product $\Pi (1+a_n)$ with $1+a_n\neq 0$ converges simultaneously with the series $\sum_{1}^{\infty} \log(1+a_n)$ whose terms represent the values of the principal branch of the logarithm.
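Numerically the convergence is obvious, since $\log(1-2^{-n})=O(2^{-n})$ makes the log-series converge absolutely; a quick check in Python:

```python
prod = 1.0
for n in range(1, 31):
    prod *= 1 - 2.0 ** (-n)
    if n in (5, 10, 20, 30):
        print(n, prod)
# The partial products decrease and stabilize near 0.288788...
```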
| |
Since 2007, Active Earth has completed over 2,000 independent environmental investigations in Southwestern B.C. and Vancouver Island. Our investigation process is efficient, thorough, and cost-effective. All investigations are completed by highly-skilled team members with experience in a wide range of site conditions.
Our top priorities are to determine the likely worst-case conditions, to accurately assess environmental liabilities, and to provide results that are directly relevant to each project. This approach prioritizes the long-term interests of our clients.
Active Earth carries out Stage 1 and 2 Preliminary Site Investigations (also referred to as Phase I and II Environmental Site Assessments) and Detailed Site Investigations across a range of industries. Clients include those in commercial, industrial, mining, and forestry sectors as well as federal, provincial, and municipal governments and First Nations. Active Earth has successfully addressed contaminants associated with historic filling, automotive fuelling and servicing, leakage from storage tanks, mining and blasting, electrical equipment (PCBs), metalworking and machining, sandblasting, wood preservatives, coal processing, and numerous other industrial activities.
Remediation and Soil Relocation
Active Earth is an industry leader in completing site remediation under Contaminated Soil Relocation Agreements. This approach provides substantial savings to our clients, who also benefit from our in-depth knowledge of the excavation and disposal industry in B.C.
Our remediation expertise ranges from simple excavation-and-disposal projects to complex multi-year programs. Active Earth’s remediation techniques include excavation, bio-remediation, air-sparging, soil vapour extraction, chemical oxidation, and risk assessment.
We specialize in cost-benefit analysis, detailed remediation planning, and remediation using both Approvals in Principle and Independent Remediation to obtain Certificates of Compliance and environmental closure.
Regulatory Approvals and Permits
Active Earth’s team of technical experts has a deep understanding of the key environmental Acts and Regulations in BC, and a thorough knowledge of local government bylaws. We have a strong reputation with various regulatory bodies and decision makers, and a demonstrated history of completing applications for approvals and permits in a timely and cost-effective manner.
Where applications require input from multiple parties, we possess the project management skills to effectively assemble and coordinate the required teams. Through regular, brief progress meetings, we efficiently track the progress of multiple concurrent deliverables, and ensure that potential conflicts are identified and resolved as intelligently and economically as possible. This approach results in improved designs, more comprehensive application packages, and quicker regulatory review timelines.
Our regulatory expertise includes, but is not limited to:
- BC Environmental Management Act: Site Profiles, Certificates of Compliance, Approvals in Principle, Determinations, Releases, Background Concentration Approvals, Notifications of Independent Remediation, Notice of Likely or Actual Migration, Site Risk Classifications, Waste Discharge Authorizations, etc.
- BC Water Sustainability Act: Water Well Licenses, Changes In and About a Stream, Short Term Use Approvals, etc.
- Local government bylaws: stormwater management and discharge approvals, groundwater dewatering and treatment approvals, soil removal permits, fill placement permits, etc.
- BC Mines Act: Reclamation Plans, annual monitoring for permit compliance, etc.
- Vancouver Fraser Port Authority: Project Environmental Reviews
- Disposal at Sea Regulations: Site-Specific Disposal Permits, Multi-Site Annual Disposal Permits, Letters of Confirmation, etc.
Disposal at Sea
Active Earth completes more Disposal at Sea Approvals for land-based excavations than any other engineering firm in Western Canada. To facilitate these approvals, our team completes all aspects of the regulatory process on behalf of our clients, including technical site reviews, applications to Environment Canada, and the collection of confirmatory soil samples.
The handling of clean soil via Disposal at Sea can often save significant transportation and disposal costs for our clients. By using the services of Active Earth, our clients benefit from our experience and positive reputation with Environment Canada. Our clients also enjoy some of the lowest laboratory prices available for Disposal at Sea soil testing.
Environmental Management Plans and Monitoring
Active Earth has extensive experience in developing clear, efficient, and responsible Environmental Management Plans (EMPs) for gravel extraction and aggregate mining projects in B.C. Our Senior Hydrogeologists have developed dozens of EMPs and Monitoring Programs to ensure that gravel/aggregate mining programs meet all environmental regulatory requirements and do not adversely impact groundwater or surface water in the area. Active Earth completes regular monitoring and regulatory reporting for numerous gravel/aggregate mining projects in B.C. Our Senior Hydrogeologists routinely speak at public hearings on behalf of our gravel/aggregate clients.
Active Earth also develops EMPs for various construction and remediation projects. Our EMPs cover waste management as well as assessment and mitigation measures aimed at protecting soil, groundwater, surface water, and air quality. Our EMPs summarize the roles and responsibilities of the relevant parties and are written clearly and concisely. Considered “living documents”, our EMPs are updated and refined regularly as project conditions change.
While acting as the Environmental Monitors (EM) on construction or remediation projects, we ensure all parties are familiar with the relevant environmental issues and mitigation requirements. Our EM staff are recognized for their professionalism, expertise, communication skills, and collaborative problem-solving approach.
Landfills
Active Earth has completed over 30 landfill-related projects throughout B.C. Our team includes leading experts in waste management, landfill design, and landfill management and monitoring. Successful past projects include the evaluation of proposed landfill sites, conceptual/detailed landfill designs, and ongoing monitoring and assessment of existing landfills.
We work in rural and urban areas helping to provide effective waste disposal options for small-scale domestic wastes, contaminated soils, industrial and medical wastes, wood waste, and large-scale municipal solid waste. Our assessments often evaluate the risks to local and regional groundwater and surface water quality, landfill gas emissions, soil cover assessment and design, human health and ecological risk assessment, and development and implementation of monitoring plans. The Active Earth team also prepares and implements responsible and cost-effective closure plans for landfills at or near capacity. | https://activeearth.ca/our-services/environmental/ |
Poetry, since time immemorial, has been a way for people to express their thoughts and opinions in a romantic and aesthetic manner. Unlike most other art forms, it requires no formal training, and, extending John Green's beautiful line about pain, it can be said that 'poetry just demands to be felt'.
The roots of poetry can be traced to the folklore that used to be sung in villages with distinct rhyming schemes. Later writing also reflects a heavy influence of oral poetry and hymns, signifying that the origins of poetry are closely linked to music.
In India and neighboring countries, the ghazal is a prevalent form of sung poetry, but various other forms dominate in different parts of the world, such as the sonnet, haiku, limerick, and tanka.
However, the new emoticon generation seems to have forgotten the beauty of words, or the use of words at all. Symbols, commonly called emojis, are taking over the world, eliminating the need for words; still, a few categories of people continue to believe in the power words hold, such as linguists, poets, and lovers.
The poems written in the new era may dwindle in number, but as long as humanity remains there will always be a need to express, and when that need arises, a poem will always be born.
As rightfully said in Dead Poets Society, “Medicine, law, business, engineering, these are noble pursuits and necessary to sustain life. But poetry, love, beauty, romance, these are what we stay alive for.”
Coming back from the philosophical aspect of poetry, a shift is seriously needed to turn the attention of the youth from entertainment to poetry, as studies show that written content is diminishing and the people who still write are unable to reach a good audience because of the algorithms of different social media platforms.
Thus, it can be said that poetry is becoming a lost art, and preserving its beauty is our responsibility. As readers, we must support writers before the AI shift completely wipes authentic writers from the world. | https://reflections.live/articles/1874/poetry-an-emotion-not-an-emoticon-an-article-by-mannat-arora-7056-l9zi3175.html
We’re all busy just trying to survive. A bit of pressure from this or that just adds stress to your day. It molds you, shapes your character; you adapt and keep moving. Well, with trees, it’s not a whole lot different. A tree that grows up sheltered in a secluded area with good soil, protected from the harsh environment, grows tall and straight. Other trees (the 99%) grow up in less favorable spots: the side of an embankment, poor soil conditions, high winds. It all adds to the stress of growing up straight and tall. Trees, just like everything else, adapt and carry on.
-
Getting Out the Ash Logs
With one large ash tree cut and hauled to the road, we spent another afternoon doing the same with an even bigger ash tree. This one was located just down from the first. This time we decided to try a slightly different approach.
-
Milling Up Spruce For Lumber
The other day Jim told me he talks to the trees!! It might be time to make him an appointment 😉 Probably only a concern if he thinks they answer. Before we could start milling the hemlock and pine logs we got a few weeks ago, we had to mill the spruce we got earlier in the year.
-
Hemlock and Pine Logs Arrived
Why is it Jim is never home when an order of logs or something of that nature arrives? Not sure, but he missed all the excitement. We weren’t expecting them so soon, but sure enough, here they are.
-
Purpose of Moisture Meter Readings
The moisture content of materials plays a role in woodworking, especially when your woodworking projects will be used indoors. Generally speaking, most woodworkers prefer their wood to have a moisture content of about 8%. For furniture making, I can see the importance of that. For crafts, however, I feel you can go with a higher moisture content.
-
Milling Lumber for the Kiln Shed
When we built the walls for the 10 x 20 foot kiln shed, we milled the 2 x 4 lumber and built a wall. Then repeat, repeat, repeat until the four walls were up. Now we are on to the roof trusses and decided to mill all of the lumber first.
-
Uses for Wooden Cookie Slices
Honestly, I had never heard of the term cookies when referring to wood before. All my cookies come out of the oven. Always learning something new. These cookie slices were cut on the sawmill.
-
Kiln Shed: The Walls
On Mondays post I showed you the beginnings of the kiln shed. Last fall we prepared the base, no easy task getting things level, and last weekend Jim got started on the walls.
-
Sugar Maple Cookies
I milled some of the smaller sugar maple limbs for live edge slabs. The limbs had a curve to them, so I cut them live edge to get the most I could from each one. The limbs were about 9-10 inches wide. (This post originally aired Jul 07 2017 but has been updated (see bottom) as of Feb 15 2018.)
-
Bleaching Wood Slabs
Hey there, happy Friday. Jim had cut these pieces of maple on the sawmill a few weeks ago and stood them up in the workshop while deciding what to do with them. Keep in mind our kiln isn’t set up yet, in fact the building that will house the kiln isn’t built yet. He was in the workshop this morning and took a look at the maple only to find out it was starting to go fuzzy. Normally the wood would be put through the planer anyways but these pieces were too wide for our current machine. | https://woodchuckcanuck.com/work/work-portfolio/sawmill/milling-lumber/page/2/ |
Being away for a few days has me catching up to get these route updates, well, up to date.
Lufthansa:
Frankfurt-Tripoli effective February 2 is resuming 3x/week service.
Frankfurt-Tripoli effective March 25 will increase to 1x/day.
Austrian:
Vienna-Tripoli effective March 25 is resuming 5x/week service.
Thai:
Bangkok-Colombo effective April 16 is reduced from 4x/week to 3x/week.
Bangkok-Auckland effective April 16-June 14 is reduced from 5x/week to 4x/week.
Bangkok-Frankfurt effective May 1-June 25 is reduced from 14x/week to 12x/week.
Bangkok-Sydney effective June 18 increases from 11x/week to 14x/week.
Bangkok-Perth effective June 16 is reduced from 6x/week to 5x/week.
Bangkok-Stockholm effective May 1-June 15 is reduced from 1x/day to 5x/week.
Bangkok-Seoul-Los Angeles effective May 1 begins 4x/week service.
Singapore:
Singapore-Adelaide effective July 2012 will increase from 1x/day to 10x/week.
Ethiopian:
Addis Ababa–Maputo effective March 25 increases from 4x/week to 6x/week.
Addis Ababa–Ouagadougou–Abidjan effective March 25 increases from 4x/week to 5x/week.
Addis Ababa–Seychelles effective March 25 will begin 4x/week service.
Addis Ababa-Dammam effective February 2 will begin 3x/week service.
United:
Washington Dulles-Honolulu effective June 8 will begin 1x/day service.
Seasonal Routes effective between June 7 to August 27:
Houston-Rapid City: Daily Service.
San Francisco-Jackson Hole: Daily service.
Denver-Fairbanks: Daily service.
Houston-Jackson Hole: 2x/week service. | https://lufthansaflyer.boardingarea.com/star-alliance-route-updates-lufthansa-thai-singapore-ethiopian-united/ |
TECHNICAL FIELD
BACKGROUND ART
CITATION LIST
Patent Literatures
SUMMARY OF INVENTION
Technical Problem
Solution to Problem
Advantageous Effects of Invention
DESCRIPTION OF EMBODIMENTS
REFERENCE SIGNS LIST
The present invention relates to a concentration controller, a gas control system, a deposition apparatus, a concentration control method, and a program for a concentration controller used in, for example, a semiconductor manufacturing process.
In the past, as equipment for controlling the concentration of gas used in a semiconductor manufacturing process or the like, as disclosed in Patent Literature 1, there has been a concentration controller used together with a vaporization apparatus for vaporizing liquid or solid material.
The vaporization apparatus intermittently supplies carrier gas to a vaporization tank storing the liquid or solid material, thereby intermittently leads out material gas produced by vaporization of the material from the vaporization tank, and supplies mixed gas produced by mixing diluent gas with the led-out material gas to a chamber or the like used in, for example, a semiconductor manufacturing process. In such a configuration, a supply period of supplying the mixed gas and a stop period of stopping the supply are repeated, and the concentration controller controls the flow rates of the carrier gas and diluent gas during the supply period, and thereby performs feedback control so that the concentration of the material gas contained in the mixed gas comes close to a preset target concentration.
However, in the above-described configuration, since the supply period and the stop period are repeated, the concentration of the material gas in the vaporization tank gradually increases during the stop period, and therefore there is the problem that the concentration of the material gas in the mixed gas overshoots immediately after the start of the supply period.
Patent Literature 1: Japanese Unexamined Patent Application Publication No. 2014-224307
Therefore, the present invention has been made in order to solve the above-described problem, and a main object thereof is to control the flow rates of carrier and diluent gases so that the concentration of material gas can be prevented from overshooting immediately after the start of a supply period.
That is, the concentration controller according to the present invention is one used in a vaporization apparatus that includes: a vaporization tank that stores liquid or solid material; a carrier gas supply path that supplies carrier gas to the vaporization tank; a material gas lead-out path through which material gas produced by vaporization of the material and led out of the vaporization tank flows; a diluent gas supply path that merges with the material gas lead-out path and supplies diluent gas to the material gas lead-out path; a flow rate control device provided in at least one of the carrier gas supply path and the diluent gas supply path; and a concentration monitor provided on the downstream side of the merging point with the diluent gas supply path in the material gas lead-out path, and repeats supplying and stopping supplying the material gas.
In addition, the concentration controller includes: a concentration calculation part that calculates the concentration of the material gas on the basis of an output signal from the concentration monitor; and a set flow rate calculation part that, on the basis of actual concentration calculated by the concentration calculation part at a first predetermined time point during a supply period (hereinafter referred to as a first supply period) when the material gas is supplied, an actual flow rate outputted from the flow rate control device at the first predetermined time point during the first supply period or the set flow rate of the flow rate control device at the first predetermined time point during the first supply period, and a preset target concentration, calculates the initial set flow rate of the flow rate control device during an initial interval that is a period from the start of a supply period (hereinafter referred to as a second supply period) after the first supply period to a second predetermined time point.
The concentration controller configured as described above makes it possible to, in the flow rate control device during the initial interval of the second supply period, set the initial set flow rate calculated on the basis of the actual concentration calculated at the first predetermined time point during the first supply period, the actual flow rate outputted from the flow rate control device at the first predetermined time point during the first supply period or the set flow rate of the flow rate control device at the first predetermined time point during the first supply period, and the preset target concentration, and therefore overshoot immediately after the start of the second supply period can be reduced.
In order to more surely reduce the overshoot immediately after the start of the second supply period, it is preferable that during the initial interval of the second supply period, control is performed so that a flow rate through the flow rate control device becomes equal to the initial set flow rate.
It is preferable that the concentration controller is configured to, after the initial interval of the second supply period, perform feedback control by controlling the set flow rate of the flow rate control device so that the actual concentration calculated by the concentration calculation part comes close to the target concentration.
In such a configuration, even after the initial interval of the second supply period, the control can be stabilized.
Specific embodiments include one in which the set flow rate calculation part calculates the initial set flow rate during the initial interval of the second supply period by multiplying the ratio of the target concentration to the actual concentration calculated by the concentration calculation part at the first predetermined time point during an initial interval of the first supply period by the actual flow rate outputted from the flow rate control device at the first predetermined time point during the initial interval of the first supply period.
Also, other embodiments include one further including a flow rate-concentration relationship data storage part that stores flow rate-concentration relationship data indicating the relationship between the flow rate of the carrier gas or the flow rate of the diluent gas, and the concentration of the material gas flowing on the downstream side of the merging point with the diluent gas supply path in the material gas lead-out path, in which the set flow rate calculation part calculates the initial set flow rate during the initial interval of the second supply period with use of the flow rate-concentration relationship data.
It is preferable that the set flow rate calculation part calculates an average initial set flow rate resulting from averaging initial set flow rates respectively calculated for multiple supply periods when the material gas has already been supplied.
By setting the average initial set flow rate in the flow rate control device, respective errors of the calculated initial set flow rates can be reduced, and the overshoot immediately after the start of the second supply period can be more surely reduced.
When the relationship between the flow rate and the concentration does not match a calculation expression for the initial set flow rate, an increase in initial concentration and a decrease in initial concentration may be alternately repeated during successive supply periods.
Therefore, it is preferable that the average initial set flow rate is one resulting from averaging the initial set flow rates calculated by the set flow rate calculation part respectively for successive even-numbered supply periods.
In such a configuration, since the successive even-numbered estimated flow rates are averaged, increases in initial concentration and decreases in initial concentration can be cancelled out.
Also, the gas control system according to the present invention includes a vaporization apparatus that includes: a vaporization tank that stores liquid or solid material; a carrier gas supply path that supplies carrier gas to the vaporization tank; a material gas lead-out path through which material gas produced by vaporization of the material and led out of the vaporization tank flows; a diluent gas supply path that merges with the material gas lead-out path and supplies diluent gas to the material gas lead-out path; a flow rate control device provided in at least one of the carrier gas supply path and the diluent gas supply path; and a concentration monitor provided on the downstream side of the merging point with the diluent gas supply path in the material gas lead-out path, and repeats supplying and stopping supplying the material gas, a concentration calculation part that calculates the concentration of the material gas on the basis of an output signal from the concentration monitor, and a set flow rate calculation part that, on the basis of initial concentration calculated by the concentration calculation part at a first predetermined time point during a supply period (hereinafter referred to as a first supply period) when the material gas is supplied, an actual flow rate outputted from the flow rate control device at the first predetermined time point during the first supply period or the set flow rate of the flow rate control device at the first predetermined time point during the first supply period, and a preset target concentration, calculates the initial set flow rate of the flow rate control device during an initial interval that is a period from the start of a supply period (hereinafter referred to as a second supply period) after the first supply period to a second predetermined time point.
A deposition apparatus including the above-described gas control system and a chamber to be supplied with the material gas is also included in the present invention.
The concentration control method according to the present invention is one used for a vaporization apparatus that includes: a vaporization tank that stores liquid or solid material; a carrier gas supply path that supplies carrier gas to the vaporization tank; a material gas lead-out path through which material gas produced by vaporization of the material and led out of the vaporization tank flows; a diluent gas supply path that merges with the material gas lead-out path and supplies diluent gas to the material gas lead-out path; a flow rate control device provided in at least one of the carrier gas supply path and the diluent gas supply path; and a concentration monitor provided on the downstream side of the merging point with the diluent gas supply path in the material gas lead-out path, and repeats supplying and stopping supplying the material gas. In addition, the concentration control method includes: a step of calculating the concentration of the material gas on the basis of an output signal from the concentration monitor; and a step of, on the basis of initial concentration calculated at a first predetermined time point during a supply period (hereinafter referred to as a first supply period) when the material gas is supplied, an actual flow rate outputted from the flow rate control device at the first predetermined time point during the first supply period or the set flow rate of the flow rate control device at the first predetermined time point during the first supply period, and a preset target concentration, calculating the initial set flow rate of the flow rate control device during an initial interval that is a period from the start of a supply period (hereinafter referred to as a second supply period) after the first supply period to a second predetermined time point.
The program for a concentration controller according to the present invention is one used for a vaporization apparatus that includes: a vaporization tank that stores liquid or solid material; a carrier gas supply path that supplies carrier gas to the vaporization tank; a material gas lead-out path through which material gas produced by vaporization of the material and led out of the vaporization tank flows; a diluent gas supply path that merges with the material gas lead-out path and supplies diluent gas to the material gas lead-out path; a flow rate control device provided in at least one of the carrier gas supply path and the diluent gas supply path; and a concentration monitor provided on the downstream side of the merging point with the diluent gas supply path in the material gas lead-out path, and repeats supplying and stopping supplying the material gas. In addition, the program instructs a computer to fulfill functions as: a concentration calculation part that calculates the concentration of the material gas on the basis of an output signal from the concentration monitor; and a set flow rate calculation part that, on the basis of initial concentration calculated by the concentration calculation part at a first predetermined time point during a supply period (hereinafter referred to as a first supply period) when the material gas is supplied, an actual flow rate outputted from the flow rate control device at the first predetermined time point during the first supply period or the set flow rate of the flow rate control device at the first predetermined time point during the first supply period, and a preset target concentration, calculates the initial set flow rate of the flow rate control device during an initial interval that is a period from the start of a supply period (hereinafter referred to as a second supply period) after the first supply period to a second predetermined time point.
Such gas control system, deposition apparatus, concentration control method, and program for a concentration controller can produce the same working effect as that of the above-described concentration controller.
According to the present invention configured as described above, in a configuration adapted to intermittently supply the material gas, the flow rates of the carrier and diluent gases can be controlled so as to suppress the concentration of the material gas from overshooting immediately after the start of a supply period.
In the following, one embodiment of the concentration controller according to the present invention will be described with reference to the drawings.
A gas control system 100 of the present embodiment is one incorporated in, for example, a semiconductor manufacturing line or the like to supply a predetermined concentration of gas to a chamber CH used in the semiconductor manufacturing process. In addition, together with the chamber CH, the gas control system 100 constitutes a deposition apparatus used in semiconductor manufacturing or the like.
Specifically, as illustrated in FIG. 1, the gas control system 100 includes: a vaporization tank 10 that stores liquid or solid material; a carrier gas supply path L1 that supplies carrier gas to the vaporization tank 10; a material gas lead-out path L2 that leads out material gas produced by vaporization of the material from the vaporization tank 10; a diluent gas supply path L3 that supplies diluent gas for diluting the material gas to the material gas lead-out path L2; a switching mechanism 20 for switching between supplying the material gas to the chamber CH and stopping the supply; and a concentration controller 30 that controls the concentration of the material gas to be supplied to the chamber CH.
The carrier gas supply path L1 is provided with a first flow rate control device 40 for controlling the flow rate of the carrier gas. The first flow rate control device 40 is a mass flow controller including: a flow rate sensor of, for example, a thermal type; a flow rate regulation valve such as a piezo valve; and a control circuit including a CPU, a memory, and the like.
The material gas lead-out path L2 is provided with a concentration monitor 50 for measuring the concentration of the material gas contained in mixed gas. The concentration monitor 50 in the present embodiment is one using the principle that the concentration (vol %) of the material gas contained in the mixed gas is represented by the ratio of the pressure (partial pressure) of the material gas contained in the mixed gas to the pressure (total pressure) of the mixed gas, and specifically, includes: a pressure meter 51 for measuring the total pressure; and a partial pressure meter 52 for measuring the partial pressure using, for example, a non-dispersive infrared absorption method.
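In code, the monitor's principle reduces to a single ratio; here is a minimal sketch (the function and variable names are illustrative, not taken from the patent):

```python
def material_gas_concentration(partial_pressure, total_pressure):
    """Concentration (vol %) of the material gas in the mixed gas,
    taken as the ratio of its partial pressure to the total pressure."""
    if total_pressure <= 0:
        raise ValueError("total pressure must be positive")
    return 100.0 * partial_pressure / total_pressure

# e.g. 2.5 kPa of material gas in 100 kPa of mixed gas -> 2.5 vol %
print(material_gas_concentration(2_500, 100_000))
```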
The diluent gas supply path L3 is connected on the upstream side of the concentration monitor 50 in the material gas lead-out path L2, and provided with a second flow rate control device 60 for controlling the flow rate of the diluent gas. As with the first flow rate control device 40, the second flow rate control device 60 is a mass flow controller including: a flow rate sensor of, for example, a thermal type; a flow rate regulation valve such as a piezo valve; and a control circuit including a CPU, a memory, and the like.
The switching mechanism 20 includes multiple valves V1 to V3 that receive a valve switching signal to open/close, and by opening/closing these valves V1 to V3, for example, at timing preset by a user, supplying and stopping supplying the carrier gas to the vaporization tank 10 are repeated. As a result, as illustrated in FIG. 2, the material gas is intermittently led out of the vaporization tank 10 and intermittently supplied to the chamber CH. That is, the gas control system 100 of the present embodiment is configured to repeat a supply period of supplying the material gas (specifically, the mixed gas produced by mixing the material gas and the diluent gas) to the chamber CH and a stop period of stopping the supply.
The switching mechanism 20 here includes: a bypass flow path L4 connecting between the carrier gas supply path L1 and the material gas lead-out path L2; a first valve V1 provided on the downstream side of a connection point with the bypass flow path L4 in the carrier gas supply path L1; a second valve V2 provided on the upstream side of a connection point with the bypass flow path L4 in the material gas lead-out path L2; and a third valve V3 provided in the bypass flow path L4.
In addition, the supply period is started by opening the first and second valves V1 and V2 and closing the third valve V3, whereas the stop period is started by closing the first and second valves V1 and V2 and opening the third valve V3.
The concentration controller 30 is one that controls the flow rates of the carrier gas and diluent gas and thereby controls the concentration of the material gas to be supplied to the chamber CH.
Specifically, the concentration controller 30 is a computer having a CPU, a memory, an AC/DC converter, input means, and the like. In addition, the CPU executes a program stored in the memory, and thereby, as illustrated in FIG. 3, the concentration controller 30 has functions as a target concentration reception part 31, a concentration calculation part 32, and a set flow rate calculation part 33.
The target concentration reception part 31 is one that receives a target concentration signal indicating a target concentration inputted by user's input operation through the input means such as a keyboard or transmitted from another device.
The concentration calculation part 32 is one that, on the basis of output signals outputted from the concentration monitor 50, calculates the concentration (hereinafter also referred to as actual concentration) of the material gas contained in the mixed gas. Specifically, the concentration calculation part 32 acquires the output signals respectively from the pressure meter 51 and the partial pressure meter 52, and calculates the ratio of the partial pressure detected by the partial pressure meter 52 to the total pressure detected by the pressure meter 51 as the actual concentration (vol %) of the material gas contained in the mixed gas.
The set flow rate calculation part 33 is one that acquires an actual concentration signal outputted from the concentration calculation part 32 and the target concentration signal received by the target concentration reception part 31, and calculates a set flow rate (hereinafter referred to as an FB set flow rate) for feedback control of the first flow rate control device 40.
Further, in the present embodiment, as illustrated in FIG. 4, the set flow rate calculation part 33 is configured to, on the basis of the target concentration Cg received by the target concentration reception part 31, the actual concentration C1 calculated by the concentration calculation part 32 at a predetermined time point during an initial interval of a supply period (hereinafter referred to as a first supply period) when the material gas is supplied, and the actual flow rate Q1 of the carrier gas outputted from the first flow rate control device 40 at the predetermined time point during the initial interval of the first supply period, calculate the initial set flow rate Q2 of the first flow rate control device 40 during an initial interval of a supply period (hereinafter referred to as a second supply period) after the first supply period.
Note that "the initial interval of the first supply period" refers to a period from the start of the first supply period to a first predetermined time point after a preset first predetermined time T1 has passed. The first predetermined time point is set to a time point when the peak of the actual concentration first appears during the first supply period or set to around the time, and here set to a time point when the first peak of the actual concentration finishes falling.
Also, "the initial interval of the second supply period" refers to a period from the start of the second supply period to a second predetermined time point after a preset second predetermined time T2 has passed, and here the second predetermined time T2 is set to the same time as the first predetermined time T1. That is, as with the first predetermined time point, the second predetermined time point is set to a time point when the peak of the actual concentration first appears during the second supply period or set to around the time, and here set to a time point when the first peak of the actual concentration finishes falling.
Specifically, the set flow rate calculation part 33 acquires the actual concentration C1 calculated by the concentration calculation part 32 at the first predetermined time point during the initial interval of the first supply period and the actual flow rate Q1 of the carrier gas calculated by the CPU of the first flow rate control device at the first predetermined time point during the initial interval of the first supply period, and calculates the initial set flow rate Q2 during the second supply period using a predetermined calculation expression.
In the present embodiment, on the assumption that there is a proportional relationship between the concentration of the material gas contained in the mixed gas and the flow rate of the carrier gas, the set flow rate calculation part 33 uses the following calculation expression to calculate the initial set flow rate Q2 during the second supply period after the first supply period.
Initial set flow rate Q2 = Actual flow rate Q1 × (Target concentration Cg / Actual concentration C1)
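In code, this correction is a single proportional rescaling; the sketch below assumes, as the patent does here, that concentration is proportional to the carrier-gas flow (names are illustrative, not from the patent):

```python
def initial_set_flow_rate(actual_flow_q1, actual_conc_c1, target_conc_cg):
    """Q2 = Q1 * (Cg / C1): rescale the carrier-gas flow observed at the
    first predetermined time point so that the next supply period starts
    near the target concentration."""
    return actual_flow_q1 * (target_conc_cg / actual_conc_c1)

# Overshoot example: target 3.0 vol %, but 3.6 vol % measured at 90 sccm.
print(initial_set_flow_rate(90.0, 3.6, 3.0))  # -> 75.0 sccm for the next period
```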
The set flow rate calculation part 33 in the present embodiment outputs the initial set flow rate during the initial interval of the second supply period calculated in accordance with the above-described calculation expression to the first flow rate control device 40. On the other hand, after the initial interval of the second supply period, the set flow rate calculation part 33 outputs the above-described FB set flow rate to the first flow rate control device 40.
In the present embodiment, as described above, control is performed so that the total flow rate of the carrier gas flow rate and the diluent gas flow rate becomes constant, and the set flow rate calculation part 33 subtracts the set flow rate to be outputted to the first flow rate control device 40 from the predetermined total flow rate, and calculates and outputs the set flow rate of the second flow rate control device 60.
Next, the operation of the concentration controller 30 of the present embodiment will be described with reference to the flowchart of FIG. 5. Note that in the following description of the operation, for the purpose of convenience, the first supply period and the second supply period are simply referred to as supply periods, and the first predetermined time and the second predetermined time are set to the same time, and therefore simply referred to as a predetermined time.
First, when the operation of the gas control system 100 is started, the concentration controller 30 determines whether or not the current time point is within the supply period. Here, the set flow rate calculation part 33 acquires the valve switching signal and, on the basis of the signal, determines whether or not the current time point is within the supply period (S1).
When it is determined in S1 that the current time point is not within the supply period, i.e., when it is determined that the current time point is within the stop period, the set flow rate calculation part 33 sets the flow rates of the first flow rate control device 40 and the second flow rate control device 60 to initial set flow rates calculated in S5 described later, and keeps the initial set flow rates (S2). In addition, during the stop period until the first supply period is started after the start of the operation of the gas control system 100, for example, a user keeps output of preset set flow rates respectively for the first flow rate control device 40 and the second flow rate control device 60.
On the other hand, when it is determined in S1 that the current time point is within the supply period, the set flow rate calculation part 33 determines whether or not the current time point is the predetermined time point or later (S3).
When it is determined in S3 that the current time point is not the predetermined time point or later, i.e., when it is determined that the current time point is within the initial interval of the supply period, the set flow rate calculation part 33 keeps output of the set flow rate set for the first flow rate control device 40, i.e., keeps output of the initial set flow rate regardless of the actual concentration (S2).
When it is determined in S3 that the current time point is the predetermined time point or later, the set flow rate calculation part 33 determines whether or not the current time point is the predetermined time point (S4).
When it is determined in S4 that the current time point is the predetermined time point, the set flow rate calculation part 33 calculates an initial set flow rate during the next supply period (S5). In addition, a detailed calculation method is as described above.
On the other hand, when it is determined in S4 that the current time point is not the predetermined time point, without calculating the above-described initial set flow rate, the set flow rate calculation part 33 outputs the FB set flow rate to the first flow rate control device 40 to feedback-control the concentration of the material gas (S6).
In addition, in S2 and S6, the set flow rate calculation part 33 subtracts the set flow rate to be outputted to the first flow rate control device 40 from the predetermined total flow rate, and calculates and outputs the set flow rate of the second flow rate control device 60.
After that, it is determined whether or not a control end signal by the concentration controller 30 has been received (S7), and until the control end signal is received, S1 to S7 are repeated. When the control end signal is received, the operation is ended.
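One way to read steps S1 to S7 as a loop is the toy simulation below. It is a schematic sketch, not the patent's implementation: the vaporization tank is reduced to a constant gain K that the controller does not know, and the feedback step is a simple proportional correction.

```python
TARGET_CG = 3.0     # target concentration Cg (vol %)
TOTAL_FLOW = 500.0  # carrier + diluent flow, held constant (sccm)
T1 = 5              # predetermined time point (ticks after period start)
K = 0.036           # toy plant: concentration = K * carrier flow (unknown to controller)

q_init = 100.0      # preset set flow rate for the very first supply period
for period in range(1, 4):
    q = q_init
    print(f"period {period}: start-of-period concentration {K * q:.2f} vol %")
    for t in range(15):                      # S1: ticks within a supply period
        c = K * q                            # concentration monitor reading
        if t < T1:                           # S3: inside the initial interval,
            q = q_init                       # S2: hold the initial set flow rate
        else:
            if t == T1:                      # S4/S5: Q2 = Q1 * (Cg / C1), used
                q_init = q * TARGET_CG / c   # from the *next* supply period on
            q += 2.0 * (TARGET_CG - c)       # S6: simple feedback correction
        diluent = TOTAL_FLOW - q             # complementary diluent set flow (S2/S6)
```

Running it shows the first period starting at 3.60 vol % and the later periods starting at 3.00 vol %, which is exactly the overshoot suppression the flowchart is after.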
In the gas control system 100 according to the present embodiment configured as described above, as illustrated in FIG. 4, the set flow rate calculation part 33 calculates the initial set flow rate Q2 during the second supply period on the basis of the target concentration Cg, the actual flow rate Q1 at the first predetermined time point during the initial interval of the first supply period, and the actual concentration C1 at the first predetermined time point during the initial interval of the first supply period, and outputs the initial set flow rate Q2 to the first flow rate control device for the initial interval of the second supply period. Accordingly, the initial concentration C2 during the second supply period can be brought close to the target concentration Cg, and therefore overshoot immediately after the start of the second supply period can be suppressed.
Note that the present invention is not limited to the above-described embodiment.
For example, the set flow rate calculation part 33 in the above-described embodiment calculates the initial set flow rate of the first flow rate control device 40 during the second supply period, but may calculate the initial set flow rate of the second flow rate control device 60 during the second supply period. Specific configurations include one in which, on the basis of the target concentration, the actual concentration calculated by the concentration calculation part 32 at the predetermined time point during the initial interval of the first supply period, and the actual flow rate of the diluent gas outputted from the second flow rate control device 60 at the predetermined time point during the initial interval of the first supply period, the set flow rate calculation part 33 calculates the initial set flow rate of the second flow rate control device 60 during the initial interval of the second supply period.
In this case, the initial set flow rate of the first flow rate control device is only required to be a flow rate obtained by subtracting the initial set flow rate of the second flow rate control device from the predetermined total flow rate.
Also, to calculate the initial set flow rate Q2 during the second supply period, without limitation to the calculation expression in the above-described embodiment, for example, in place of the actual flow rate Q1, the initial set flow rate Q1′ of the first flow rate control device 40 at the predetermined time point during the initial interval of the first supply period may be used with the following calculation expression.
Initial set flow rate Q2 = Initial set flow rate Q1′ × (Target concentration Cg / Actual concentration C1)
Further, the calculation expression for the initial set flow rate Q2 does not have to be limited to that in the above-described embodiment.
For example, an embodiment can be cited in which the concentration controller 30 further includes a flow rate-concentration relationship data storage part for storing flow rate-concentration relationship data indicating the relationship between the flow rate of the carrier or diluent gas and the concentration of the material gas flowing on the downstream side of the merging point with the diluent gas supply path in the material gas lead-out path, and the set flow rate calculation part 33 uses the flow rate-concentration relationship data to calculate the initial set flow rate during the second supply period.
The flow rate-concentration relationship data may be, for example, a calculation expression applicable when there is no proportional relationship between the concentration of the material gas contained in the mixed gas and the flow rate of the carrier gas or diluent gas, such as a nonlinear calculation expression different from the calculation expression in the above-described embodiment.
Even when there is no proportional relationship between the concentration of the material gas contained in the mixed gas and the flow rate of the carrier gas or diluent gas, by using such flow rate-concentration relationship data, the initial set flow rate during the second supply period can be accurately calculated.
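One plausible realization of such data is a calibration table inverted by interpolation; this sketch uses made-up numbers and is not from the patent:

```python
import bisect

# Hypothetical calibration: carrier flow (sccm) -> measured concentration (vol %),
# nonlinear and sorted so the concentration column is increasing.
FLOW_POINTS = [50.0, 75.0, 100.0, 125.0]
CONC_POINTS = [1.4, 2.2, 3.2, 4.4]

def flow_for_target_concentration(target):
    """Invert the table by linear interpolation between bracketing points:
    return the carrier flow expected to produce the target concentration."""
    i = bisect.bisect_left(CONC_POINTS, target)
    i = max(1, min(i, len(CONC_POINTS) - 1))
    c0, c1 = CONC_POINTS[i - 1], CONC_POINTS[i]
    f0, f1 = FLOW_POINTS[i - 1], FLOW_POINTS[i]
    return f0 + (f1 - f0) * (target - c0) / (c1 - c0)

print(flow_for_target_concentration(3.0))  # -> 95.0 sccm
```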
The calculation timing of the initial set flow rate by the set flow rate calculation part 33 is not limited to the predetermined time point in the above-described embodiment. For example, as long as the actual concentration C1 and the actual flow rate Q1 at the predetermined time point are temporarily stored in the memory, the calculation may be performed after the predetermined time point.
The above-described embodiment is configured to, at the end of the first supply period, output the initial set flow rate during the next second supply period. However, the output timing is not limited to this, but may be, for example, any timing during the stop period from the first supply period to the next second supply period, or may be, for example, immediately after the start of the second supply period as long as the overshoot can be reduced.
The set flow rate calculation part 33 may be configured to calculate an average initial set flow rate obtained by averaging initial set flow rates respectively calculated for multiple supply periods when the material gas has already been supplied.
In such a configuration, by setting the average initial set flow rate in the flow rate control device during the initial interval of the supply period, the errors of the respective calculated initial set flow rates can be reduced, and therefore the overshoot during the initial interval of the supply period can be more surely reduced.
Also, when the relationship between the flow rate and the concentration does not match the calculation expression for the initial set flow rate, an increase in initial concentration and a decrease in initial concentration may be alternately repeated during successive supply periods.
For this reason, it is preferable that the set flow rate calculation part 33 calculates the above-described average initial set flow rate by averaging initial set flow rates respectively calculated for successive even-numbered supply periods including a supply period immediately before a supply period for which an initial set flow rate is to be set.
In such a configuration, increases in initial concentration and decreases in initial concentration during successive supply periods are averaged and cancelled out, and therefore the initial set flow rate for bringing the actual concentration close to the target concentration can be more accurately set.
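A minimal sketch of that even-count averaging (the helper is hypothetical, not from the patent):

```python
def average_initial_set_flow(history, count=4):
    """Average the initial set flow rates from the last `count` supply periods.
    An even `count` cancels an alternating increase/decrease pattern
    between successive periods."""
    if count % 2 != 0 or len(history) < count:
        raise ValueError("need an even count and enough history")
    return sum(history[-count:]) / count

# Alternating over/under estimates around a true value of 80 sccm:
print(average_initial_set_flow([84.0, 76.0, 83.0, 77.0]))  # -> 80.0
```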
The concentration controller 30 of the above-described embodiment is configured so that the set flow rate calculation part 33 calculates and outputs both the FB set flow rate and the initial set flow rate. However, the concentration controller may include the function of calculating the FB set flow rate and the function of calculating the initial set flow rate separately.
Also, for example, when the actual concentration during the initial interval of the first supply period exceeds a preset threshold value, the set flow rate calculation part 33 may calculate the initial set flow rate during the second supply period on the basis of the actual concentration (threshold value) at the time and the actual flow rate at the time. In this case, a time point when the actual concentration exceeds the threshold value is the predetermined time point.
The first flow rate control device 40 and the second flow rate control device 60 may be mass flow controllers including differential pressure type flow rate sensors.
Also, as the first flow rate control device 40 and the second flow rate control device 60, flow rate control valves such as piezo valves may be used without using mass flow controllers.
The concentration monitor 50 may be one that directly measures the concentration of the material gas by, for example, ultrasonic waves. Also, the concentration monitor 50 may be adapted to include a function as the concentration calculation part 32, and output the concentration to the concentration controller 30.
Although the gas control system 100 of a bubbler type (bubbling type) is described, the control method may be basically any method, and only has to be one using carrier gas. For example, the gas control system may be one not including the second flow rate control device 60. As the gas control system 100 in this case, for example, one configured to provide a valve such as a piezo valve downstream of the concentration monitor 50 in the material gas lead-out path L2 and perform pressure control of the supply amount of the material gas by controlling the opening level of the valve can be cited. Further, the present invention is also applicable to a gas control system of a DLI type (direct vaporization type) that carries material in a liquid state and vaporizes it near a use point (e.g., a deposition chamber) to perform flow rate control.
The concentration controller of the above-described embodiment is one that suppresses the concentration of the material gas from overshooting immediately after the start of the supply period. However, the concentration controller of the present invention may be one that suppresses the concentration of the material gas from undershooting immediately after the start of the supply period.
Besides, it should be appreciated that the present invention is not limited to the above-described embodiments, but can be variously modified without departing from the scope thereof.
100: Gas control system
10: Vaporization tank
L1: Carrier gas supply path
L2: Material gas lead-out path
L3: Diluent gas supply path
30: Concentration controller
31: Target concentration reception part
32: Concentration calculation part
33: Set flow rate calculation part
50: Concentration monitor
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a diagram schematically illustrating the overall configuration of a gas control system of the present embodiment;
FIG. 2 is a graph illustrating the intermittent flow of material gas in the gas control system of the same embodiment;
FIG. 3 is a functional block diagram illustrating functions of a concentration controller of the same embodiment;
FIG. 4 is a graph for explaining an initial set flow rate calculation method of the same embodiment; and
FIG. 5 is a flowchart illustrating the operation of the concentration controller of the same embodiment. | |
Healthcare data breaches are scary… scarier than you might think. Our healthcare records contain highly sensitive information, including personal health details and private billing data. And yet, despite how closely guarded this information should be, nearly 41,335 pieces of protected health information (PHI) are breached every day.
Every.
Single.
Day.
That’s nearly twenty-nine every minute, meaning there have been numerous breaches since you started reading. At that rate, over 1.2 million records are breached in a typical month, more than double the population of Wyoming (just over 580,000). Healthcare cannot afford to bleed private medical information. Worse yet? The bleeding is getting worse. And it costs billions.
Over 90% of major healthcare organizations suffered a data breach within the past two years. You can read the details here. That breaks down to $6.2 billion in damages. 11 million patient records were breached in June of 2016 alone, making it the worst month for information security in an already bad year. Even NFL teams have been victims of healthcare data breaches, with thousands of players’ records stolen in April of 2016. It’s not just the big guys either. The organizations that are most susceptible to a criminal attack, unfortunately, are often the least likely to have the resources needed to address cyber threats and keep their patient data safe. That’s not good.
But why is this happening? The reasons are numerous, but here’s a quick summary.
- Information is not secured: 74% of mobile medical devices are not encrypted. Only 21% of BYOD devices are scanned before connecting to a network.
- Users are uneducated. 47% of healthcare providers are not confident in their own ability to keep patient data secure, even though over 91% of them use some sort of cloud-based service.
- Technology moves too fast. By the time standards are adopted by providers, new threats will develop and even newer standards will be required. That’s why, despite record losses in 2015, 2016 looks to be even worse.
Simply put, healthcare data breaches are far too common. So how do organizations address the challenge?
Learn more about HITRUST.
Remind me again… what is HITRUST certification?
HITRUST is a comprehensive and verifiable security framework for healthcare organizations. It was developed by healthcare and IT professionals to address risks associated with information security. HITRUST’s (Health Information Trust Alliance) Common Security Framework (CSF) provides a robust and detailed roadmap and features controls necessary for managing the many healthcare information security compliance requirements outlined by state and federal laws as well as other standards and compliance organizations. While HIPAA requirements provide a framework for healthcare security and privacy, HITRUST goes well beyond that by identifying specific business practices, processes, and systems and ensuring they are implemented through a certified third party that actually goes on-site to investigate, interview, assess, and validate evidence of proper implementation and compliance. There is a big difference between an organization that self-assesses its operations and calls itself HIPAA compliant and an organization that has completed a HITRUST self-assessment, provided evidence of implementation of literally hundreds of requirements, and then had each individual requirement investigated and approved by a certified auditor and ultimately HITRUST.
Do I need HITRUST certification?
It’s not a requirement, but if you are trusting third parties or vendors with sensitive information you probably don’t want to take any chances. Trusting a third-party vendor that is HITRUST Certified to protect your sensitive PHI saves your practice the time and cost it takes to obtain the certification. If you are ever in a situation where you have to respond to a data breach incident, be assured, you won’t want to do it again. With the trends in data security, HITRUST will soon become the industry standard for verifying an organization’s ability to effectively handle sensitive information. Below are five reasons why HITRUST is becoming a necessity for healthcare information processing companies.
What are the benefits of HITRUST?
- Prevents data exposures
- Combats against attacks
- Establishes industry-wide reliability
- Promotes transparency
- Builds trust
1. Prevents Data Exposures
When PHI gets breached, it’s more than just an invasion of privacy. It costs organizations time, money, and their reputation. Millions of individuals have their personal data compromised each year, resulting in billions of dollars spent on remediation, fines, and penalties.
HITRUST’s robust controls help organizations identify risks and prevent compliance issues.
2. Combats Against Attacks
The leading cause of data breaches is deliberate, malicious attacks from hackers. Check out this list of breaches affecting 500 or more individuals. Look under “Type of Breach”. See a pattern? These aren’t dumb people, and they capitalize on dumb mistakes.
One of the most prominent tools in a hacker’s arsenal is Ransomware, malicious software that blocks access to a computer system or certain elements of a system until a ransom is paid. Just like a typical ransom, even when payment is delivered, a safe return is not guaranteed. Even worse than real-world ransoms? The attacker never has to set foot near the site of the breach. With cybercrime at an all-time high, the use of ransomware by hackers increasing, and networks under constant attack, something must be done. HITRUST takes into account the many reasons for breaches and addresses processes and security measures to reduce exposure and risk.
3. Establishes Industry-Wide Reliability
HIPAA is a great foundation. I am sure all of the organizations that have experienced data breaches were “HIPAA compliant.” HITRUST goes beyond this with a comprehensive security framework that is audited, certified and verifiable.
4. Promotes Transparency
The HITRUST CSF (Common Security Framework) is a standardized approach for healthcare organizations to follow in mitigating information security risks. When one organization tells another that it is HITRUST certified, that entity can be assured of the level of information protection being applied. The CSF makes it easy for an organization to understand and verify another organization’s stance and status as they relate to healthcare information security.
5. Builds Trust
When it comes to HITRUST, all you need to know is in the name. With media reports of security breaches undermining consumer confidence regarding the handling of PHI, organizations need to know they can trust their vendors, patients need to know they can trust healthcare providers, and members need to know they can trust their insurance companies. HITRUST is not only a means for an organization to ensure they are handling information properly but also a way to convey trust to the parties they do business with.
How do I become HITRUST certified?
HITRUST certification is a long, costly, and comprehensive endeavor. While the benefit of HITRUST is to help ensure an organization’s stance on information security, it may not be feasible for some organizations to invest in the certification. Depending on the type of organization, it may make more sense to rely instead on vendors that are HITRUST certified; if you are entrusting sensitive information to a third party, how do you really know the information is effectively protected? MailMyStatements is one of the first healthcare and patient billing and payments services to become HITRUST certified and is trusted by healthcare clients nationwide. To learn more about our services, please request a demo today.
Hugh Sullivan is the CEO of MailMyStatements, an industry-leading healthcare billing and payments company. He has over 25 years of experience as a seasoned healthcare executive, was the co-founder of ENS Health, a highly successful national healthcare electronic data interchange company, and has served in various leadership roles within Optum, a UnitedHealth Group company. Considered an industry thought leader, Hugh is an expert in using health IT to improve healthcare information exchange, which can enhance the quality of care, improve efficiency, and reduce costs.
The government announced today that it is raising the subsidy levels for projects to upgrade sewage treatment and disposal systems in small, rural communities with high levels of deprivation to up to ninety per cent of the costs of the project.
Prime Minister Helen Clark and Associate Health Minister Pete Hodgson said more than eighty such communities with a total population of about 30,000 would be able to benefit from the increased rates of subsidy.
“The government will pay a greater share of this vital work because we are strongly committed to improving public health services and environmental standards,” Helen Clark said.
“The Sanitary Works Subsidy Scheme (Sewerage) currently provides a subsidy of up to fifty per cent of the capital cost of providing a sewerage system in a community with a population of 2000 or less. The rate of subsidy declines in a straight line to ten per cent for communities of 10,000 people.
“The subsidy scheme began on 1 July 2003, with $150 million being made available in Vote Health at the rate of $15 million a year over 10 years.
“Even with a fifty per cent government subsidy, however, there are small communities with a high deprivation index which can’t afford to establish reticulated sewerage schemes.
“Under the new rules being announced today, the maximum subsidy level under the scheme will rise to ninety per cent. The high rates of subsidy will be related to the level of deprivation in communities with deprivation indices averaging over seven out of a maximum of ten.”
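On the figures quoted above, the existing sliding scale amounts to a straight-line interpolation between fifty per cent at a population of 2000 and ten per cent at 10,000. The sketch below is added for illustration only and is not drawn from the scheme's official documentation:

```python
# Illustrative only: the straight-line subsidy rate implied by the figures
# quoted in the release (50% up to a population of 2,000, falling linearly
# to 10% at a population of 10,000).
def subsidy_rate(population: int) -> float:
    if population <= 2000:
        return 0.50
    if population >= 10000:
        return 0.10
    return 0.50 - 0.40 * (population - 2000) / 8000

print(subsidy_rate(6000))  # 0.30, i.e. a 30% subsidy for a community of 6,000
```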
Associate Health Minister Pete Hodgson said the public health benefits of proper sewage treatment and disposal would include a reduction in the incidence of water-borne diseases and contamination, which are a significant source of preventable illness.
“The correlation between a good sewerage system and a healthy community has led the government to prioritise funding support for communities whose sewage treatment and disposal needs upgrading.
“While the subsidy scheme was initially heavily subscribed, applications have more recently tapered off. Forty-four applications, involving $73.9 million worth of subsidies, have been given provisional approval.
“To determine the reasons for the slowing down in applications, a survey of communities experiencing unsanitary conditions was carried out on behalf of the Ministry of Health.
“It showed that the most deprived communities could not afford the local share of funding and were therefore reluctant to adopt reticulated sewerage.
“Increasing the level of subsidy will enable us to target hard-pressed communities with high health risks,” Pete Hodgson said. | http://www.scoop.co.nz/stories/PA0507/S00509.htm |
Q:
Can I Misty Step into Midair?
If I had a flying speed due to being an Aarakocra or having fly cast on me, would I be able to use misty step to teleport into midair rather than staying on the ground?
A:
Yes, so long as you can see it
The description of misty step (PHB, pg. 260) says:
Briefly surrounded by silvery mist, you teleport up to 30 feet to an unoccupied space that you can see.
So long as you can see the point in the air that you wish to teleport to (and it's no more than 30 feet away from you), you should be able to teleport to it, and your flying speed from being an Aarakocra or from being affected by fly should keep you in midair.
A:
Anyone can Misty Step into midair.
Misty step just says:
Briefly surrounded by silvery mist, you teleport up to 30 feet to an unoccupied space that you can see.
Nothing in the spell says that space has to be on the ground.
A:
Yes. The spell Misty Step does what the description states.
As long as the space is within range, unoccupied, and you can see it, it is a valid misty step destination. Flying does not matter.
you teleport up to 30 feet to an unoccupied space that you can see.
sep.ellipse_coeffs(a, b, theta)

Convert from ellipse axes and angle to coefficient representation.

Parameters:
- a, b, theta : array_like
  Ellipse(s) semi-major axis, semi-minor axis, and position angle, respectively. The position angle is in radians, counterclockwise from the positive x axis to the major axis, and lies in the range [-pi/2, pi/2].

Returns:
- cxx, cyy, cxy : array_like
  Coefficient representation of the ellipse(s).
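A minimal usage sketch (illustrative; it assumes the sep package is installed and uses only the signature documented above):

```python
import numpy as np
import sep

a, b = 3.0, 1.5        # semi-major and semi-minor axes (e.g., pixels)
theta = np.pi / 6      # position angle in radians, within [-pi/2, pi/2]

cxx, cyy, cxy = sep.ellipse_coeffs(a, b, theta)
print(cxx, cyy, cxy)

# sep documents a companion function, ellipse_axes, as the inverse
# conversion; if available in your version, a round trip should
# recover (a, b, theta):
# a2, b2, theta2 = sep.ellipse_axes(cxx, cyy, cxy)
```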
based approach to work, setting Parma in a new direction. The 1990 agreement formally documented their joint priorities of team-based work groups, extensive employee training, and a supportive working environment. The assistant personnel director for hourly employment, Bill Marsh, felt that, although this was another positive step in their ongoing relationship with Local 1005, the negotiating process seemed more “traditional” than the previous negotiation in 1987. Bob Lintz, the plant manager, agreed. Unexpectedly, the new Shop Committee chairman, who is Local 1005’s prime negotiator, had introduced over 600 demands at the start of Parma’s local contract negotiation. Even though management and the union were still able to finalize an agreement quickly, the tension created by the enormous list of demands still lingered. It could destroy the collaborative relationship that had been built over the past decade between management and the union leadership as well as the openness that Bob Lintz had managed to foster between himself and the hourly employees.
To implement this agreement, Parma’s top management and Local 1005 created the Team Concept Implementation Group, or TCIG, to introduce this new Team Concept, and spent $40 million on extensive training of the entire workforce in problem solving, group dynamics, and effective communication skills. By 1990, the Team Concept had empowered hourly employees to assume more responsibility in their jobs, to focus on problem-solving and work-related matters, and to move beyond status differences exemplified by position titles or neckties.
him about every detail, they did support his efforts out of loyalty to him and to his relationship with Bob.
Bob and his managers are concerned about the tension that has been created by the new Shop Committee chairman’s large number of demands, especially because the union made only about 100 demands during the previous contract negotiations. Roger had publicly endorsed this new chairman of the Shop Committee, yet management was not certain that he would continue Roger’s strategy of collaboration within the union and between management and the union. With several new individuals in the union leadership, Parma’s management also had to consider the possibility that the entire union leadership was actually becoming more adversarial, especially as the two political factions within the union continued to compete for support among members of Local 1005. Relations between hourly and salaried employees on the production floor could also suffer.
Here are the answers for Octordle #394, released on February 22, 2023, along with some hints to help you solve the words.
UPDATE: Click here for the hints and the answers to Octordle 395!
Octordle is a difficult game in which players must guess eight five-letter words at the same time while only having thirteen guesses! The game functions similarly to Wordle in that there are no clues to assist you in guessing the words, but once you have guessed a word, the tiles will change colour.
The colours will show you whether you guessed the correct letters and whether they are in the correct order. There is no correct or incorrect way to play the game, but we recommend first guessing words that cover as many different letters of the alphabet as possible in as few guesses as you can.
This will help you see which letters appear in each word so you can solve the words quickly and efficiently. Some words may have repeated letters, so make sure you keep that in mind when guessing.
As this puzzle is tricky, we do have some hints to help you if you are struggling, all of which can be seen below.
Octordle 394 Words Hints Today (February 22, 2023)
Here are the clues we have for all the eight words for Octordle 394 today.
Hint 1: There is a Y in words 3, 4 and 5.
Hint 2: There is an F in word 6 only.
Hint 3: There is a W in word 3 only.
Hint 4: There is a C in words 6, 7 and 8.
Hint 5: There is a repeated letter in word 1, 2, 4 and 5.
Hint 6: There are no double letters in any words today.
Hint 7: Here are the starting letters of each word:
- Word 1: S
- Word 2: T
- Word 3: G
- Word 4: R
- Word 5: T
- Word 6: C
- Word 7: M
- Word 8: C
Hint 8: Here is a little description or clue for all of the words:
- Word 1: to begin or set out, as on a journey or activity.
- Word 2: the trunk of the human body.
- Word 3: nervously awkward and ungainly.
- Word 4: try (a defendant or case) again.
- Word 5: a private romantic rendezvous between lovers.
- Word 6: (with reference to a part of the body) make or become sore by rubbing against something.
- Word 7: a single instruction that expands automatically into a set of instructions to perform a particular task.
- Word 8: yearn to possess (something, especially something belonging to another).
What is the Octordle 394 Answer Today? (2/22/23)
Here are all of the answers for Octordle 394 released today on February 22nd, 2023:
- Word 1: START
- Word 2: TORSO
- Word 3: GAWKY
- Word 4: RETRY
- Word 5: TRYST
- Word 6: CHAFE
- Word 7: MACRO
- Word 8: COVET
Let us know in the comments section below if you managed to get some or all of the Octordle words today. Click here for the hints and the answers to Octordle 395! | https://fortniteinsider.com/daily-octordle-answers-394-february-22-2023-hints-and-solutions-2-22-23/ |
The start-up behavior of many linear integrated circuits is determined by external component selection. This is particularly true for a shunt regulator that operates over a wide range of voltages and has various types of external components interfaced with it. In order to allow flexibility for the end-user, it is desirable to have the shunt regulator circuit operate over a large range of external factors, such as input voltage, and components such as input and output capacitors, etc.
FIG. 1 shows the fundamentals of a typical closed loop type linear shunt regulator 10. The regulator has a device 12 such as a transistor for shunting excess bias current to ground to maintain regulation. The transistor is of suitable capacity for the amount of current being regulated. Input voltage Vin is supplied through a resistor R1 to the regulating device 12 drain. The source of the device 12 is connected to ground. An output bypass capacitor C1 has one end connected to the junction of the device 12 drain and R1 and the other end connected to ground. The regulator output voltage Vout is taken at the upper end of C1, which is included to reduce noise and improve transient response. The resistor R1 sets a value of a current IBIAS that is determined by the formula:
IBIAS = (Input Voltage - Output Voltage) / R1
The regulator 10 is a closed loop type circuit that has an error amplifier 14 whose output is connected to the gate electrode of the regulating device 12 to control its conduction state. The negative (inverting) input to the error amplifier 14 is a highly stable reference voltage, Vref, from a bandgap core 16, which is the target value for the regulator output voltage Vout. Many conventional circuits are known for implementing the bandgap core 16. The bandgap core 16 is connected between a point of Vout and ground to receive an operating voltage. The error amplifier 14 positive (non-inverting) input is the voltage Vout, which completes the closed loop. The regulator 10 operates so that if the voltage Vout exceeds the reference voltage Vref applied to the error amplifier 14, then the error amplifier 14 produces an output signal that causes device 12 to conduct current, dissipating the excess as heat. This regulates the output voltage Vout to the target value.
When a regulator circuit such as that shown in FIG. 1 is turned on, it takes some time for the circuit components to stabilize so that it will be operating in a linear range of the error amplifier 14 in which it is able to regulate Vout. The start-up requirements of a typical shunt regulator such as that of FIG. 1 are principally determined by the values of the bypass capacitor C1 and the bias current setting resistor R1. If C1 ranges from 10 pF to 10 μF and IBIAS ranges from 10 μA to 10 mA, then the start-up time constant can vary by as much as nine orders of magnitude. The start-up time constant is defined as the time required for the regulation loop to reach its linear operating range as set by the external factors and component values. In practical terms, this means that the shunt regulator must be able to respond in nanoseconds to prevent large amounts of Vout overshoot from the target value when C1 is small and IBIAS is large. Conversely, the regulator must avoid motor-boating and metastable states with long start-up time constants of seconds due to a large C1 value and a small IBIAS value.
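As a rough illustration of that nine-order-of-magnitude spread (a back-of-the-envelope estimate only, not a formula from the disclosure), the start-up time can be approximated as the time needed for IBIAS to charge C1 toward the output voltage; the 2.5 V output assumed below is hypothetical:

```python
# Back-of-the-envelope estimate: start-up time ~ C1 * Vout / IBIAS.
# Illustrative only; the text above defines the start-up time constant as
# the time for the regulation loop to reach its linear operating range.
def startup_time_s(c1_farads: float, vout_volts: float, ibias_amps: float) -> float:
    return c1_farads * vout_volts / ibias_amps

VOUT = 2.5  # assumed target output voltage (not specified in the text)

fast = startup_time_s(10e-12, VOUT, 10e-3)  # small C1, large IBIAS: ~2.5 ns
slow = startup_time_s(10e-6, VOUT, 10e-6)   # large C1, small IBIAS: ~2.5 s
print(fast, slow)  # the two extremes span roughly nine orders of magnitude
```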
Conventional shunt regulators allow the start-up to be controlled by the error amplifier steady-state regulation loop. The minimum start-up time constant that avoids Vout overshoot is limited by the amount of time it takes for the regulation loop to become biased into its linear operating region and also by the bandwidth of the error amplifier. This generally requires tens to hundreds of microseconds. Very large start-up time constants (seconds) increase the likelihood that the loop will motor-boat or get trapped in a metastable state, because the loop is held out of its linear regulating region for long portions of the start-up time and its ability to control the output voltage varies.
The steady-state regulation loop is typically optimized and compensated for steady-state operation and not for start-up behavior. The steady state regulation loop requires certain bias conditions and a certain amount of time to behave in a predictable, well controlled manner. Either one or both of these factors are not met over certain ranges of start-up time constants.
Accordingly, a need exists to provide a shunt regulator that does not have the above-described disadvantages of the prior art circuits, even though the external factors and components of the circuit might vary over a large range.
| |
For most of us there are difficult times in life when we feel the need for help to work through our experiences. It takes courage to seek support at these times.
There can also be times when we just want to work through something, to talk to someone who will listen to us without judgement; someone who will help us to explore and understand our life through our own experience.
If you are experiencing difficulties, counselling can support you to find ways to cope and give you choice in your life.
I offer a safe, supportive, friendly and confidential space for you to explore your feelings. A place to express yourself and to uncover your concerns and anxieties, and to move towards a more positive future.
Whether you are experiencing:
fear
loss
grief
trauma
depression
anger
a phobia
addiction
relationship difficulties
self-harm
abuse
If you are having relationship difficulties, with either a current or past relationship, you may wish to come as a couple, or as an individual. A ‘couple’ can be partners (I work with mixed gender and same gender couples), friends, parents and siblings, colleagues or any relationship you are struggling with.
You can feel supported while you explore your deepest feelings and experiences. | http://wise-counselling.com/ |
---
abstract: 'Semiconducting quantum wires defined within two-dimensional electron gases and strongly coupled to thin superconducting layers have been extensively explored in recent experiments as promising platforms to host Majorana bound states. We study numerically such a geometry, consisting of a quasi-one-dimensional wire coupled to a disordered three-dimensional superconducting layer. We find that, in the strong coupling limit of a sizable proximity-induced superconducting gap, all transverse subbands of the wire are significantly shifted in energy relative to the chemical potential of the wire. For the lowest subband, this band shift is comparable in magnitude to the spacing between quantized levels that arise due to the finite thickness of the superconductor (which typically is $\sim500$ meV for a 10-nm-thick layer of Aluminum); in higher subbands, the band shift is much larger. Additionally, we show that the width of the system, which is usually much larger than the thickness, and moderate disorder within the superconductor have almost no impact on the induced gap or band shift. We provide a detailed discussion of the ramifications of our results, arguing that a huge band shift and significant renormalization of semiconducting material parameters in the strong-coupling limit make it challenging to realize a topological phase in such a setup, as the strong coupling to the superconductor essentially metallizes the semiconductor. This metallization of the semiconductor can be tested experimentally through the measurement of the band shift.'
author:
- Christopher Reeg
- Daniel Loss
- Jelena Klinovaja
bibliography:
- 'bibStrongCouplingDisorder.bib'
title: 'Metallization of Rashba wire by superconducting layer in the strong-proximity regime'
---
Introduction {#secIntro}
============
The search for Majorana fermions in various condensed matter systems has intensified considerably in recent years [@Alicea:2012]. Among the most promising proposals for realizing these exotic states are those that involve coupling a conventional superconductor to a topological insulator [@Fu:2008; @Fu:2009; @Hasan:2010; @Qi:2011; @Liu:2011; @Wiedenmann:2016], an atomic magnetic chain [@NadjPerge:2014; @Ruby:2015; @Pawlak:2016; @Klinovaja:2013; @Vazifeh:2013; @Braunecker:2013; @NadjPerge:2013; @Pientka:2013; @Awoga:2017], or a semiconductor with strong spin-orbit coupling [@Sato:2009; @Sato:2009b; @Lutchyn:2010; @Oreg:2010; @Sau:2010; @Alicea:2010; @Chevallier:2012; @Halperin:2012; @Sticlet:2012; @Prada:2012; @Dominguez:2012; @Klinovaja:2012; @Nakosai:2013; @DeGottardi:2013; @Weithofer:2013; @Weithofer:2014; @Vernek:2014; @Maier:2014; @Thakurathi:2015; @Dmytruk:2015; @Dominguez:2017; @Maska:2017]. The first generation of experiments on semiconductor/superconductor hybrid structures, which showed zero-bias peaks in the tunneling conductance of nanowires, was plagued by significant subgap conductance [@Mourik:2012; @Deng:2012; @Das:2012; @Churchill:2013; @Finck:2013; @Takei:2013; @Stanescu:2014]. This led to the development of epitaxially grown thin shells of superconducting Aluminum (Al) that form a very strong and uniform contact with InAs or InSb nanowires, thus ensuring robust proximity couplings and hard induced superconducting gaps that are nearly as large as the gap of the Al layer [@Chang:2015; @Albrecht:2016; @Deng:2016; @Gazibegovic:2017; @Zhang:2017_2; @Vaitiekenas:2017; @Deng:2017]. The epitaxial growth of Al has also been extended to InAs two-dimensional electron gases (2DEGs), with the hope that such systems can be used to form complex networks of Majorana fermions [@Kjaergaard:2016; @Shabani:2016; @Kjaergaard:2017; @Suominen:2017; @Nichele:2017].
With the experiments shifting to the strong-coupling regime, the proximity effect in topological setups has gained renewed attention. It is well known that as the coupling between the superconductor and semiconductor is enhanced, the electron wave function acquires a larger weight within the superconductor, thus leading to a larger proximity-induced gap ($E_g$) and a renormalization of semiconducting material parameters such as $g$-factor, spin-orbit splitting ($E_{so}$), and effective mass ($m^*$) [@Sau:2010prox; @Stanescu:2010; @Potter:2011; @Tkachov:2013; @Zyuzin:2013; @Cole:2015; @vanHeck:2016; @Hell:2017; @Stanescu:2017; @Liu:2017; @Reeg:2017_2]. This result can be obtained analytically in the limit of a single 1D or 2D semiconducting subband coupled to a clean 3D bulk superconductor by “integrating out" the superconducting degrees of freedom to obtain an effective self-energy describing induced superconductivity. In this description, the relevant superconducting energy scale determining the strength of the proximity effect is the gap $\Delta$ [@strongcoupling], and all physics of the proximity effect should occur on this small energy scale. Despite such theories being employed frequently to describe the experiments utilizing thin epitaxial superconducting layers [@Cole:2015; @vanHeck:2016; @Deng:2016; @Zhang:2017_2; @Stanescu:2017; @Liu:2017; @Hell:2017], their applicability is unclear because the experimental setup consists of multiple wire channels and a thin disordered superconducting layer (rather than a clean bulk superconductor).
![\[experiment\] A quantum wire is lithographically defined within a semiconducting 2DEG and coupled to a superconducting layer with width $W$ and thickness $d$, as studied in Refs. [@Suominen:2017; @Nichele:2017]. ](experiment.pdf){width="\linewidth"}
The finite thickness ($d$) of the superconducting layer was incorporated analytically in Ref. [@Reeg:2017_3] in an attempt to better describe the experimental setup. This finite thickness introduces a large energy scale given by the level spacing $\pi \hbar v_{Fs}/d$ ($v_{Fs}$ is the Fermi velocity of the superconductor), and it was shown that the relevant superconducting energy scale determining the strength of the proximity effect in this case is the level spacing rather than the gap. Thus, in the thin-layer limit, $\hbar v_{Fs}/d\gg\Delta$, a much stronger proximity coupling is required in order to open a gap in the wire. Most importantly, in addition to the usual parameter renormalization, the strong-coupling limit is also accompanied by a large band shift in the semiconducting wire that is comparable to the level spacing $\pi \hbar v_{Fs}/d$ [@Reeg:2017_3]. In stark contrast, such a band shift is completely absent in the case of a bulk superconductor [@Sau:2010prox; @Stanescu:2010; @Potter:2011; @Tkachov:2013; @Zyuzin:2013; @Cole:2015; @vanHeck:2016; @Hell:2017; @Stanescu:2017; @Liu:2017; @Reeg:2017_2].
However, Ref. [@Reeg:2017_3] studied an idealized model of a single 1D semiconducting subband coupled to a clean 2D superconductor. The conclusion that a very strong proximity coupling is required to open a sizable gap is strongly dependent on the assumption of momentum conservation within the tunneling process. In a realistic experimental setup with superconductor thickness $d\sim10$ nm and width $W\sim50$ nm [@Suominen:2017; @Nichele:2017] (see Fig. [\[experiment\]]{}), there are $\sim10^4$ occupied superconducting subbands; in the presence of unavoidable disorder within the superconducting layer, which breaks translational symmetry and therefore removes momentum conservation, one could expect that subbands of the wire can much more easily couple to the many superconducting subbands that lie at the Fermi level, thus significantly reducing the coupling strength required to open a sizable gap in the wire. Additionally, if there is a large band shift that exceeds the transverse level spacing of the wire, then higher subbands become increasingly important and a single-band analytical model like that studied in Ref. [@Reeg:2017_3] is insufficient. While there have been several works to investigate the stability of topological superconducting phases in multiband wires [@Wimmer:2010; @Stanescu:2011; @Potter:2010; @Potter:2011b; @Lutchyn:2011; @Lutchyn:2011b; @Zhou:2011; @Law:2011; @Kells:2012; @Pientka:2012; @Gibertini:2012; @Rainis:2013; @Rieder:2014; @Pekerten:2017], there are no systematic studies of the proximity effect in the strong-coupling limit. It is the goal of this work to provide such a study. In this paper, we numerically study the proximity effect in a quasi-1D quantum wire that is defined within a semiconducting 2DEG and strongly coupled to a thin disordered superconducting layer with thickness $d$ and width $W$, as shown in Fig. \[experiment\]. First, we show that in the strong-coupling limit, which is characterized by the wire having a proximity-induced gap $E_g$ that is comparable to the gap $\Delta$ of the superconductor, all transverse subbands of the wire are significantly shifted with respect to their positions in the absence of coupling, and all semiconducting material parameters (such as effective mass $m^*$, spin-orbit splitting $E_{so}$, and $g$-factor) are significantly renormalized toward their values in the superconductor. Next, we study in detail the role played by both the finite width $W$ of the system and disorder within the superconductor. We find, quite surprisingly, that neither the finite width $W$ nor moderate levels of disorder have a substantial impact on the proximity effect. For the specific case of an InAs quantum wire coupled to a thin epitaxial layer of Al, we show that the semiconducting wire becomes metallized by the superconductor. This metallization is characterized by the occupation of many transverse subbands in the wire (thus pushing the system far from the desired 1D limit) and a significant renormalization of the semiconducting material parameters. Additionally, we discuss the challenges involved in realizing a topological phase in the metallized limit, arguing that the ability to do so is unlikely but is also largely device dependent, and propose how to experimentally test our theory. 
Our results presented in this work suggest that it is more promising to search for Majorana fermions in systems with a weak proximity coupling (such as nanowires sputtered by a thick superconducting slab [@Mourik:2012]) than in systems with strong proximity coupling accompanied by a large band shift.
While our focus is directed primarily toward engineering topological superconductivity in 1D systems, the study of a strong proximity coupling between a semiconductor with Rashba spin-orbit interaction (SOI) and an $s$-wave superconductor has far-reaching consequences. In particular, we expect that our results can be extended to studying the proximity effect in topological insulator surface states [@Fu:2008; @Fu:2009; @Hasan:2010; @Qi:2011; @Liu:2011; @Crepin:2015; @Dayton:2016; @Wiedenmann:2016; @Charpentier:2017; @Cayao:2017], odd-frequency triplet superconductivity [@BlackSchaffer:2012; @Asano:2013; @Bergeret:2013; @Bergeret:2014; @Reeg:2015; @Hart:2016; @Yang:2017] and magnetoelectric effects [@Edelstein:2003; @Bobkova:2017] induced by SOI, superconducting spintronics [@Buzdin:2005; @Bergeret:2005; @Eschrig:2011; @Linder:2015], Cooper pair splitting [@Byers:1995; @Choi:2000; @Deutscher:2000; @Lesovik:2001; @Recher:2001; @Yeyati:2007; @Shekhter:2016], as well as various aspects of the superconductor-insulator transition [@Bottcher:2017].
The remainder of the paper is organized as follows. In Sec. \[secModel\], we describe our numerical tight-binding simulation. The results of our calculation for a disordered 3D system are presented in Sec. \[secNumerics\], where we justify that such a system can be realistically described by a clean 2D model. In Sec. \[secEstimates\], we provide a numerical calculation specific to epitaxial Al/InAs experimental setups. We also argue that the metallization of InAs inhibits the ability to observe a 1D topological phase in such a setup and discuss how to experimentally test our theory. Our conclusions are summarized in Sec. \[secConclusion\].
Model {#secModel}
=====
We consider the geometry sketched in Fig. \[experiment\], which consists of a quasi-1D semiconducting quantum wire of width $W$ with Rashba SOI, assumed to be lithographically defined within a 2DEG similarly to the devices studied in Refs. [@Suominen:2017; @Nichele:2017], tunnel coupled to a superconducting layer of width $W$ and thickness $d$. We do not consider explicitly the finite thickness of the 2DEG, as we assume that the subband spacing arising from the finite thickness is very large (for the experimental thickness $\sim5$ nm [@Shabani:2016], this is a valid assumption). We describe this setup by a tight-binding Hamiltonian, assuming for now that the system is clean and translationally invariant along its length. The total tight-binding Hamiltonian is given by $H=\sum_kH_{k}$, where $k$ is a conserved momentum along the wire axis; for a given momentum, we consider $$\label{Htot}
H_k=H_k^w+H_k^s+H_k^t.$$
The Hamiltonian of the wire is given by $$\begin{aligned}
H_k^w&=\sum_{x=1}^{W/a}\biggl[b_{x,k}^\dagger\{\xi_{k}^w+(\alpha/a)\sin(ka)\sigma_x-\Delta_Z\sigma_y\}b_{x,k} \\
&-\{b_{x,k}^\dagger(t_w-i\alpha\sigma_y/2a) b_{x+1,k}+H.c.\}\biggr],
\end{aligned}$$ where $b_{x,k}=(b_{x,k,\uparrow},b_{x,k,\downarrow})^T$ is a spinor, $b_{x,k,\sigma}$ annihilates a state of momentum $k$ and spin $\sigma$ at position $x$ within the wire, $t_w$ is the hopping matrix element, and $a$ is the lattice constant. In addition, the Hamiltonian contains a Rashba SOI term [@Bychkov:1984; @rashba] characterized by the SOI constant $\alpha$ as well as a Zeeman term $\Delta_Z=|g|\mu_BB/2$ caused by an external magnetic field of strength $B$ applied along the wire axis ($g$ is the $g$-factor of the wire and $\mu_B$ is the Bohr magneton). The SOI term induces a spin-orbit splitting $E_{so}$ on each transverse subband of the wire, which is defined as the difference in energy between the crossing point of spin-split bands at $k=0$ and the bottom of the band. Due to the finite width $W$, the spin-orbit splitting is different for each transverse subband; in the limit $W/a=1$, the splitting is given by the usual expression $E_{so}=m^*\alpha^2/2\hbar^2=\alpha^2/4t_wa^2$ [@Bychkov:1984; @Rainis:2013], where $m^*$ is the effective mass of the wire (in terms of tight-binding parameters, $m^*=\hbar^2/2t_wa^2$). We take $\xi_k^w=2t_w\{1-\cos(ka)-(1+\alpha^2/8t_w^2a^2)\cos[\pi W/(W+1)]\}-\mu_w$, such that the chemical potential of the wire $\mu_w$ is measured from the Rashba crossing point (at $k=0$) of the lowest transverse subband.
The Hamiltonian of the superconducting layer is $$\begin{aligned}
H_k^s&=\sum_{x=1}^{W/a}\sum_{z=1}^{d/a}\biggl[c^\dagger_{x,z,k}(\xi_{k}^s-\Delta_Z^s\sigma_y)c_{x,z,k}-\{t_sc_{x,z,k}^\dagger c_{x+1,z,k} \nonumber\\
&+t_sc_{x,z,k}^\dagger c_{x,z+1,k}+\Delta c_{x,z,-k,\downarrow}^\dagger c_{x,z,k,\uparrow}^\dagger+H.c.\}\biggr],\end{aligned}$$ where $c_{x,z,k,\sigma}$ annihilates a state of momentum $k$ and spin $\sigma$ at position $(x,z)$ within the superconductor, $t_s$ is the hopping matrix element, and $\Delta$ is the pairing potential. The external magnetic field is incorporated in the superconductor through the Zeeman term $\Delta_Z^s=(2/|g|)\Delta_Z$, and we take $\xi_k^s=2t_s\{2-\cos[\pi W/(W+1)]-\cos(ka)\}-\mu_s$, such that the chemical potential of the superconductor $\mu_s$ is measured from the bottom of the lowest subband.
Finally, tunneling between the wire and superconductor, which is assumed to preserve spin and momentum, is described by $$\label{Ht}
H_k^t=-t\sum_{x=1}^{W/a}\{c_{x,1,k}^\dagger b_{x,k}+H.c.\},$$ where $t$ is real and denotes the tunneling strength.
Our model assumes that the chemical potential in the system is fixed externally, and hence any change in particle number can be attributed to the attached leads. However, as any change in particle number that may occur is small compared to the total number of particles in the system (see Sec. \[secTopological\]), we expect a negligible deviation from what would be obtained in the case of fixed particle number.
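To make the construction above concrete, the following is a minimal numerical sketch of the $W/a=1$ limit of the model (illustrative only, not the code used to produce the figures; the Zeeman terms are omitted and parameter values follow the caption of Fig. \[specweak\]). The wire contributes a single site per momentum slice, coupled to a chain of $d/a$ superconducting sites, and the resulting Bogoliubov-de Gennes matrix is diagonalized at fixed $k$:

```python
# Minimal sketch of the W/a = 1 limit: BdG spectrum at fixed momentum k.
# Energies are in units of t_s; the transverse offset terms vanish here
# since cos(pi*W/(W+1)) = 0 for W/a = 1.
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
isy = np.array([[0, 1], [-1, 0]], dtype=complex)  # i*sigma_y (singlet pairing)

def h_normal(k, a, t_w, mu_w, alpha, t_s, mu_s, t, Nz):
    """Normal-state Bloch Hamiltonian: site 0 is the wire, sites 1..Nz
    the superconducting layer (2x2 spin block per site)."""
    N = 1 + Nz
    h = np.zeros((2 * N, 2 * N), dtype=complex)
    # Wire on-site energy plus Rashba term ~ sin(ka) sigma_x.
    h[0:2, 0:2] = ((2 * t_w * (1 - np.cos(k * a)) - mu_w) * s0
                   + (alpha / a) * np.sin(k * a) * sx)
    # Superconductor on-site energies.
    for i in range(1, N):
        h[2*i:2*i+2, 2*i:2*i+2] = (2 * t_s * (2 - np.cos(k * a)) - mu_s) * s0
    # Wire-superconductor tunneling t, and hopping t_s inside the layer.
    h[0:2, 2:4] = -t * s0
    h[2:4, 0:2] = -t * s0
    for i in range(1, N - 1):
        h[2*i:2*i+2, 2*(i+1):2*(i+1)+2] = -t_s * s0
        h[2*(i+1):2*(i+1)+2, 2*i:2*i+2] = -t_s * s0
    return h

def bdg_eigenvalues(k, Delta, **pars):
    """Particle-hole symmetric BdG spectrum at momentum k."""
    hk, hmk = h_normal(k, **pars), h_normal(-k, **pars)
    N = hk.shape[0] // 2
    # Pairing acts on superconductor sites only (site index >= 1).
    D = np.kron(np.diag([0.0] + [Delta] * (N - 1)), isy)
    H = np.block([[hk, D], [D.conj().T, -hmk.conj()]])
    return np.linalg.eigvalsh(H)

pars = dict(a=1.0, t_w=5.0, mu_w=0.0, alpha=0.05, t_s=1.0, mu_s=0.1,
            t=0.035, Nz=42)
ks = np.linspace(0.0, 0.35, 141)  # covers wire and superconductor Fermi points
Eg = min(bdg_eigenvalues(k, Delta=1e-4, **pars)[2 * (pars["Nz"] + 1)]
         for k in ks)             # lowest non-negative eigenvalue over k
print(f"induced gap E_g = {Eg:.3e} t_s")
```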
Numerical results {#secNumerics}
=================
In this section, we present results obtained numerically. For now, we do not attempt to explicitly model any existing experimental setup; due to the very short Fermi wavelength of the metal, doing so would be extremely expensive computationally. Rather, we focus on deducing various numerical trends that arise when keeping the physical separation of energy scales intact (e.g. $\mu_s\gg\mu_w$). As we will see, these results will allow us to make more quantitative predictions about the experimental setup in the following section.
Strong coupling limit {#secStrongCoupling}
---------------------
First, we study the transition from the weak-coupling regime, characterized by a proximity-induced gap $E_g\ll\Delta$, to the strong-coupling regime, characterized by $E_g\sim\Delta$, focusing on the behavior of subbands that originate in the wire (*i.e.*, those subbands that have zero weight in the superconductor in the limit $t=0$) as a function of tunneling strength $t$. We obtain the spectrum $E(k)$ numerically from Eq. . As the Fermi wavelength of the superconductor is much smaller than that of the semiconductor, the spectrum consists primarily of subbands originating in the superconductor; wire subbands are distinguished by their appreciable spin-orbit splitting $E_{so}$ and small effective mass $m^*$ (see for example Fig. \[specweak\]).
![\[specweak\] (a) Excitation spectrum in the absence of tunneling ($t=0$) obtained numerically from Eq. . The two lowest subbands of the wire are distinguished by color, and black curves correspond to subbands of the superconductor. (b) If a weak tunnel coupling is turned on ($t/t_s=0.035$), the subbands of the wire undergo a substantial shift in energy ($\delta E_n\sim10\Delta$). Tight-binding parameters are fixed to $d/a=42$, $W/a=175$, $\mu_s/t_s=0.1$ [@tb], $\Delta/t_s=10^{-4}$, $t_w/t_s=5$, $\mu_w=0$, $\alpha/(at_s)=0.05$, $\Delta_Z=0$.](specweak.pdf){width="\linewidth"}
In the absence of tunneling \[Fig. \[specweak\](a)\], the spectrum of the wire at $k=0$ is given by $$\label{spectrum}
E_n(0)=E_1(0)+\frac{\hbar^2\pi^2}{2m^*W^2}(n^2-1),$$ where $n\in\mathbb{Z}^+$ is a subband index and $E_n(0)$ is the energy of the $n$th subband at $k=0$. In the presence of a weak tunnel coupling \[Fig. \[specweak\](b)\], the superconductor induces a small gap and, even more strikingly, a very substantial energy shift on the subbands of the wire. We define the band shift of the $n$th subband of the wire, which is a function of tunneling $t$, as $$\delta E_n=|E_n^t (0)-E_n^{t=0}(0)|.$$ In the weak-coupling limit of Fig. \[specweak\](b), the band shift $\delta E_n\sim10\Delta$ is more than two orders of magnitude larger than the induced proximity gap ($E_g\sim0.03\Delta$).
The evolution of the wire spectrum from the weak- to the strong-coupling limit as a function of $t$ is shown in Fig. \[specvst\]. To reach the limit of strong coupling (defined such that $E_g\sim\Delta$), the tunneling strength must be made comparable to $t_s$ \[see Fig. \[specvst\](a)\]; therefore, a substantial gap $E_g$ can be induced only if there is an extremely high-quality semiconductor/superconductor interface. When such strong tunneling is present, we also observe a very large energy shift in all subbands of the wire \[see Fig. \[specvst\](b)\], with the bottom of each subband saturating to a different energy at large $t$. This band shift is significantly larger for higher subbands, and, as a result, it requires a larger tunneling strength for higher subbands to reach their saturation positions. Crucially, for all values of $t$, we find that each band is shifted such that Eq. remains satisfied \[see Fig. \[specvst\](b) inset\], provided that we allow the effective mass $m^*$ to acquire a $t$-dependence. The effective mass $m^*$, which can be found by fitting Fig. \[specvst\](b) to Eq. , increases as a function of $t$ as the bands originating from the semiconductor acquire a larger weight inside the superconductor \[Fig. \[specvst\](c)\]. Additionally, the spin-orbit splitting $E_{so}$ \[Fig. \[specvst\](d)\] and $g$-factor \[Fig. \[specvst\](e)\] (extracted from the Zeeman splitting at $k=0$ [@gfactor]) of each subband are reduced as a function of $t$. All parameters of the semiconducting wire saturate to their corresponding values within the superconductor in the limit of large $t$ ($m^*/m_e\to1$, $E_{so}\to0$, and $|g|\to2$). However, similarly to the band shifts, this parameter renormalization is not the same for all subbands, as higher subbands require a larger tunneling strength to become fully renormalized.
![\[specvst\] (a) Proximity-induced gap $E_g$ increases as a function of tunneling strength $t$, with a large gap being induced only for $t\sim t_s$. (b) The four lowest transverse subbands of the wire (distinguished by color) shift in energy as $t$ is increased. The energy of each subband is measured at $k=0$, as schematically indicated by gray dashed subbands, and the energy of occupied bands is negative. Inset: Subband energy increases quadratically with index $n$ (evaluated at $t=0.25t_s$), in agreement with Eq. . (c) The effective mass $m^*$, which is obtained by fitting panel (b) to Eq. , increases as a function of $t$. In the limit of large $t$, the mass approaches that of the superconductor ($m^*/m_e\to1$). (d-e) The spin-orbit splitting $E_{so}$ and $g$-factor $|g|$ of each subband are reduced as a function of $t$, also approaching their values in the superconductor ($E_{so}\to0$ and $|g|\to2$) in the limit of large $t$. All parameters are the same as in Fig. \[specweak\] \[with $|g|=15$ in panel (e)\].](specvst.pdf){width="\linewidth"}
Role of finite width $W$ {#secFiniteW}
------------------------
To elucidate the dependence of the spectrum on the finite width $W$, we present a comparison of the cases $W/a=1$ (henceforth referred to as a 2D geometry) and $W/a\gg1$ (henceforth referred to as a 3D geometry) in Fig. \[finiteW\]. We find that both the induced gap $E_g$ \[Fig. \[finiteW\](a)\] and the energy of the lowest transverse subband at $k=0$, $E_1(0)$ \[Fig. \[finiteW\](b)\], are identical in the 2D and 3D cases. In fact, we find that $E_g$ is completely independent of $W$ over several orders of magnitude \[Fig. \[finiteW\](a) inset\], suggesting that the width $W$ plays a rather trivial role in the proximity effect. To better understand these results, we plot the spectrum explicitly in Fig. \[finiteW\](c-d). In the 3D limit, despite there being several transverse superconducting subbands at low energies (and thus a significantly reduced level spacing in the superconductor), these subbands do not couple to the lowest subband of the wire, as evidenced by the absence of anticrossings in the spectrum. As a result, the spectrum of the lowest wire subband is virtually unchanged as the width $W$ is increased. We provide a detailed analytical justification of this numerical result in Appendix \[AppendixA\].
The fact that the spectrum of the lowest wire subband is independent of $W$ can be understood in a rather simple way. In the limit $W\to\infty$, it is possible to define an additional conserved momentum $k_x$. Therefore, the spectrum in the limit $W\to\infty$ is the same as that for $W\to0$, with the simple replacement $k\to\sqrt{k^2+k_x^2}$. As the spectrum is identical for $W\to\infty$ and $W\to0$, it is unsurprising that it is independent of $W$ for intermediate widths as well. Note that such an argument cannot be made if we take $d\to\infty$, as the semiconductor/superconductor interface always breaks translational invariance in the $z$-direction.
![\[finiteW\] (a) The induced gap $E_g$ is the same for both widths $W/a=1$ (black) and $W/a=175$ (red) at all values of $t$. Inset: $E_g$ (blue) is independent of width $W$ over several orders of magnitude. (b) Energy of lowest subband at $k=0$, $E_1(0)$, is also the same for $W/a=1$ and $W/a=175$ at all values of $t$. (c-d) Excitation spectrum for $W/a=1$ and $W/a=175$, respectively. The spectrum of the lowest wire subband $E_1(k)$ (blue) is virtually unchanged as the width $W$ is increased and does not couple to low-energy superconducting subbands (black) appearing for $W/a=175$. All parameters are the same as in Fig. \[specweak\].](finiteW.pdf){width="\linewidth"}
Based on Fig. \[finiteW\], we conclude that the finite width $W$ of the system only introduces a finite level spacing between transverse subbands but otherwise has no effect on the induced gap or band shift. It is very computationally expensive to treat the finite width $W$ explicitly as we have done to this point; thus, we forgo doing so in the calculations that follow. A 2D calculation can be performed to reliably reproduce the behavior of the lowest subband, and by calculating the effective mass in this subband (for a given $t$), we can deduce the transverse level spacing in the 3D limit. The only drawback to using such an approach is that it does not fully capture the weaker parameter renormalization in higher transverse subbands, as shown in Fig. \[specvst\] and as discussed in Sec. \[secStrongCoupling\]. However, this is a relatively minor omission and should not affect any of our results qualitatively.
Effect of Disorder {#secDisorder}
------------------
So far we have considered only clean, translationally invariant systems; however, the superconductor that is utilized in any realistic experimental setup will inevitably be disordered. As discussed in [Sec. \[secIntro\]]{}, the breaking of translational symmetry by disorder could allow a stronger proximity coupling between the wire and superconductor at lower energies (due to the fact that the superconductor has several occupied subbands at the Fermi level). In this section, we study the influence of various types of disorder on the induced gap in the wire, thereby relaxing the requirement of momentum conservation imposed in Eq. . While moderate chemical potential disorder within the superconductor has very little effect on the proximity gap, we find that both interface inhomogeneity and strong surface disorder lead to a small enhancement of the proximity gap. For computational reasons, all disorder calculations are performed in a 2D geometry ($W/a=1$) and without SOI ($\alpha=0$).
### Disorder in superconductor {#secBulkDisorder}
First, we incorporate disorder within the superconductor as random on-site Gaussian-distributed fluctuations in the chemical potential $\mu_s$ and pairing potential $\Delta$, $\mu_s\to\mu_s+\delta\mu_s$ and $\Delta\to\Delta+\delta\Delta$. Fluctuations are taken to have standard deviation $\sigma_{\mu(\Delta)}$ and zero mean, $\langle\delta\mu_s\rangle=\langle\delta\Delta\rangle=0$. The wire is taken to be clean. Furthermore, we consider a finite length $L$ of our system chosen such that, in the absence of disorder, we reproduce the proximity gap previously obtained in the $L\to\infty$ limit in which the momentum $k$ is conserved. We find that the induced gap is largely unaffected by moderate disorder in both the weak-coupling \[Fig. \[disorder\](a)\] and strong-coupling \[Fig. \[disorder\](b)\] limits. The gap is enhanced and the sharp interference peaks, which arise due to the finite thickness of the superconducting layer [@Reeg:2017_3], are smeared only when fluctuations in the chemical potential become comparable to the chemical potential itself, $\sigma_\mu\sim\mu_s$.
![\[disorder\] Induced gap $E_g$ as a function of superconductor thickness $d$ in a disordered 2D tight-binding model (with length $L/a=3\times10^4$) with various disorder strengths $\sigma_{\mu(\Delta)}$, plotted for (a) $t=0.05t_s$ and (b) $t=0.25t_s$. In both the weak- and strong-coupling limits, the gap is largely unaffected by disorder unless chemical potential fluctuations in the superconductor are extreme ($\sigma_\mu=1000\Delta=\mu_s$). Tight-binding parameters are fixed to $d/a=42$, $W/a=1$, $\mu_s/t_s=0.1$, $\Delta/t_s=10^{-4}$, $t_w/t_s=5$, $\mu_w/\Delta=1$, $\alpha=0$, $\Delta_Z=0$.](disorder.pdf){width="\linewidth"}
The fact that the well-pronounced interference peaks \[see Fig. \[disorder\]\] survive in the dirty limit can be understood straightforwardly on physical grounds. Due to the large mismatch in effective mass and Fermi momentum between the wire ($k_{Fw}$) and superconductor ($k_{Fs}$), the relevant superconducting trajectories that are responsible for inducing a gap have momentum along the wire axis $k\lesssim k_{Fw}\ll k_{Fs}$. However, these trajectories are nearly perpendicular to the semiconductor/superconductor interface and are ballistic, since typical values for the mean free path $\ell$ of (bulk) Al are larger than the thickness of the superconducting film ($\ell\gtrsim d$). More quantitatively, it was shown in Ref. [@Reeg:2017_3] that the relevant energy scale determining the tunneling strength needed to induce a sizable gap in the wire in the clean limit is the level spacing of the superconducting layer, $\pi\hbar v_{Fs}/d$. On the other hand, in the dirty limit the relevant scale is given by the Thouless energy $\hbar D/d^2$ [@Usadel:1970; @Belzig:1996; @Belzig:1999; @Reeg:2014], where $D\sim v_{Fs}\ell$ is the diffusion coefficient. However, as $\ell\sim d$, the two energy scales are comparable ($\hbar v_{Fs}/d\sim\hbar D/d^2$). Disorder therefore does not qualitatively change the behavior of the induced gap by introducing a low-energy scale unless $d\gg\ell$. The bulk limit of the superconductor, where the induced gap $E_g$ no longer depends on $d$, is reached only for $d\gg\xi_\text{dirty}$ (or, equivalently, $\hbar D/d^2\ll\Delta$), where $\xi_\text{dirty}=\sqrt{\ell\xi_\text{clean}}$ is the effective coherence length of the superconductor in the dirty limit [@bulkdisorder]. We note that these physical arguments do not rely on the width $W$ of the system being negligibly small, and hence we expect them to hold also in the 3D limit.
### Disorder in tunneling
Next, we incorporate possible interface inhomogeneity through fluctuations in the tunneling strength $t\to t+\delta t$ (which again are Gaussian distributed with standard deviation $\sigma_t$ and zero mean, $\langle\delta t\rangle=0$). As shown in Fig. \[disorder2\](a), fluctuations in the tunneling amplitude lead to an increase in the induced gap. This is a reflection of the finite level spacing within the superconductor. When the length $L$ of the system is finite, the momentum along this direction becomes quantized. If the tunnel barrier is uniform along the interface between the two materials, then only subbands in the wire and superconductor with the same quantum number can couple (see also discussion in Appendix \[AppendixA\]), but inhomogeneity can lead to nonzero matrix elements between states with different quantum numbers and, hence, an increase in the gap of the wire. However, as we observe, interface fluctuations must be comparable to the tunneling strength in order to induce a qualitative change to the behavior of the gap in the clean case; in the epitaxial Al devices, which were developed specifically to have a very homogeneous interface, this seems an unlikely scenario.
### Strong surface disorder
![\[disorder2\] (a) Induced gap $E_g$ for various strengths of tunneling fluctuations: $\sigma_t=0$ (black), $\sigma_t=0.2t$ (blue), $\sigma_t=t$ (green), and $\sigma_t=2t$ (red). The gap is largely unaffected unless fluctuations become comparable to tunneling strength $t$. (b) Strong surface disorder (green) slightly enhances the induced gap compared with the clean limit (black). Surface disorder is incorporated through chemical potential fluctuations with $\sigma_\mu=1000\Delta$ on the five sites furthest from the interface and with $\sigma_\mu=10\Delta$ on the remaining sites within the superconductor. Tight-binding parameters are the same as in Fig. \[disorder\].](disorder2.pdf){width="\linewidth"}
Finally, we investigate the effects of strong surface disorder of the superconducting layer, which could be present due to an oxidized surface layer. We model this scenario by taking very large chemical potential fluctuations on the five sites furthest from the interface and moderate fluctuations on the remaining sites. We find that surface disorder can modestly enhance the size of the induced gap away from resonance and broaden the resonance peak \[see Fig. \[disorder2\](b)\]. This behavior can be understood in the following way. As explained previously, in the clean limit only trajectories within the superconductor with $k\ll k_{Fs}$ can open a gap in the wire. In the presence of strong surface scattering, trajectories that begin with momentum $k\sim k_{Fs}$ can scatter at the surface into trajectories with $k\ll k_{Fs}$. Therefore, strong surface scattering allows for more trajectories to open a gap and the magnitude of the gap is increased. However, this rough surface essentially sets an upper bound on the mean free path, such that ${\ell}\sim d$. As explained in [Sec. \[secNumerics\]\[secBulkDisorder\]]{}, these values of $\ell$ are not expected to have a substantial effect on the induced gap.
Experimental Consequences {#secEstimates}
=========================
In Sec. \[secNumerics\], we argued that a 3D geometry with various types of disorder present can actually be very well described by a clean 2D model. The elimination of the finite width $W$ allows us to explore a much larger region of parameter space in a tight-binding calculation. In this section, we explicitly model the experimental setup of an InAs 2DEG coupled to an epitaxial Al layer of thickness $d\sim10$ nm [@Nichele:2017; @Suominen:2017]. We also provide a discussion of the feasibility of utilizing such a setup to realize a topological phase. In addition, we propose an experimental test of our theory.
Proximity-induced gap and band shift {#sec2D}
------------------------------------
As a starting point for our calculation, we note that all proximity-induced gaps that have been observed in epitaxial systems are a sizable fraction of the Al gap, $E_g\sim\Delta$ [@Gazibegovic:2017; @Zhang:2017_2; @Kjaergaard:2016; @Shabani:2016; @Kjaergaard:2017; @Suominen:2017; @Nichele:2017; @Vaitiekenas:2017; @Deng:2017]. We thus assume that the system is in the strong-coupling limit; *i.e.*, that the tunneling strength $t$ is large enough such that a sizable gap is induced for all $d\sim10$ nm.
![\[deltamuvsd\] (a) Proximity-induced gap $E_g$ and (b) energy of lowest subband at $k=0$, $E_1(0)$, for parameters corresponding to epitaxial Al/InAs. The tunneling strength $t=0.1t_s$ is chosen large enough such that a sizable gap $E_g$ is induced for all values of $d$. In the strong-coupling limit, the wire subband undergoes a huge band shift $\delta E_1=|E_1(0)|\sim200$ meV. The tight-binding parameters are set to $t_s=117$ eV (corresponding to lattice spacing $a=0.175$ Å), $\mu_s=11.7$ eV, $\Delta=250$ $\mu$eV, $t_w=50t_s$ (corresponding to $m^*=0.02m_e$), $\mu_w=0$, $\alpha=0.42$ eV Å(corresponding to $E_{so}=250$ $\mu$eV), and $\Delta_Z=0$.](deltamuvsd.pdf){width="\linewidth"}
When $t$ is made large enough to satisfy the strong-coupling condition \[see Fig. \[deltamuvsd\](a)\], we find that the wire subband undergoes a huge energy shift. Consistent with the analytical results of Ref. [@Reeg:2017_3], the magnitude of this band shift is comparable to the level spacing in the superconducting layer, $\pi\hbar v_F^\text{Al}/d\sim500$ meV (with $v_F^\text{Al}=2\times10^6$ m/s), and is very sensitive to the thickness $d$, varying between $E_1(0)\in(-400\text{ meV}, 50\text{ meV})$ with a period that is half of the Fermi wavelength of Al ($\approx2$ Å) \[see Fig. \[deltamuvsd\](b)\]. Additional results of this calculation are provided in Appendix \[AppendixB\]. Unsurprisingly, the large band shift in the strong-coupling limit is accompanied by significant renormalizations of the effective mass ($m^*\sim0.3m_e$), spin-orbit splitting ($E_{so}\sim10$ $\mu$eV), and $g$-factor ($|g|\sim2$).
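For orientation, the level-spacing scale quoted above follows directly from $\hbar\approx658$ meV fs and $v_F^\text{Al}=2\times10^6$ m/s $=2$ nm/fs: $$\frac{\pi\hbar v_F^\text{Al}}{d}\approx\frac{\pi\times(658\text{ meV fs})\times(2\text{ nm/fs})}{10\text{ nm}}\approx0.4\text{ eV},$$ consistent with the $\sim500$ meV scale and with the range of band shifts shown in Fig. \[deltamuvsd\](b).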
Metallization and impact on topological superconductivity {#secTopological}
---------------------------------------------------------
Using the results of the 2D calculation of Sec. \[sec2D\], we present a schematic illustration of the 3D spectrum renormalization in Fig. \[renormalization\]. Due to the extreme sensitivity of the induced gap and band shift to the thickness of the superconducting layer $d$, we are only able to provide a qualitative picture of this spectrum renormalization in a typical device.
![\[renormalization\] Schematic illustration of the metallization of the Rashba wire. (a) Spectrum of the wire in the absence of tunneling. We assume the chemical potential to be tuned to the crossing point of the lowest subband and take the wire to have a small effective mass $m^*\sim0.02m_e$ and large spin-orbit splitting $E_{so}\sim250$ $\mu$eV. (b) Renormalization of the spectrum in the strong-coupling limit. The lowest wire subband experiences a large band shift \[taken from Fig. \[deltamuvsd\] to be $\delta E_1\sim-200$ meV\], and renormalization of the effective mass ($m^*\sim0.2m_e$) leads to the occupation of many wire subbands. The subbands of the wire also have a significantly reduced spin-orbit splitting $E_{so}\sim25$ $\mu$eV.](renormalization.pdf){width="\linewidth"}
We assume that the quantum wire has width $W=50$ nm [@Nichele:2017] and that the chemical potential of the bare wire (without the superconducting layer) is tuned to the Rashba crossing point at $k=0$ of the lowest subband (though, as shown in Appendix \[AppendixB\], our final results are independent of this assumption). Taking $m^*=0.02m_e$ for the mass of (bare) InAs, the spectrum at $k=0$ in the absence of tunneling is given by \[Fig. \[renormalization\](a)\] $$E_n(0)=(7.5\text{ meV})(n^2-1).$$ In the strong-coupling limit, based on Fig. \[deltamuvsd\], we take an intermediate value for the band shift of $ \delta E_1\sim200$ meV; in this case, the mass is renormalized to $m^*\sim0.2m_e$ (see Appendix \[AppendixB\]) and the spectrum at $k=0$ is given by \[Fig. \[renormalization\](b)\] $$\label{spectrumlarget}
E_n(0)=(-200\text{ meV})+(0.75\text{ meV})(n^2-1).$$ Figure \[renormalization\](b) illustrates the central finding of our work. Due to the large band shift, which acts as an effective enhancement of the chemical potential of the wire, and increase in effective mass, many additional transverse wire subbands become occupied and the semiconductor is essentially metallized by the superconductor. While there are certainly more electrons in the wire in the metallized limit (compared to the bare wire), as noted previously, these electrons can be supplied to the system by external leads, and the number of electrons added to the wire is negligible compared to the total number of electrons in the system. We also note that states belonging to the spectrum of Fig. \[renormalization\](b) are delocalized throughout the superconductor, thus helping to reduce the redistribution of charge into the wire.
The Zeeman splitting required to drive the system through a topological phase transition is given by $$\Delta_Z=\sqrt{\mu_\text{min}^2+E_g^2},$$ where $\Delta_Z$ includes the renormalized $g$-factor of the wire and $|\mu_\text{min}|=\min[|E_n(0)|]$ is the effective chemical potential of the transverse subband that lies closest to the Fermi level. Therefore, to reach a topological phase in the system, the chemical potential must ideally be tuned to the Rashba crossing point of one of the transverse wire subbands, such that $\mu_\text{min}=0$. In this special case, it is possible to reach a topological phase before destroying superconductivity in the Al shell; taking for example a renormalized $g$-factor of $|g|\sim5$ (see Appendix \[AppendixB\]) and an induced gap $E_g\sim200$ $\mu$eV, the topological phase can be reached with a field strength $B\sim1.5$ T, which is smaller than the critical field of a thin Al layer (which is Clogston-limited [@Clogston:1962], $B_c=\Delta/(\sqrt{2}\mu_B)\sim3$ T). Therefore, the renormalization of the semiconductor alone does not make the topological phase inaccessible *a priori*. However, it is very unlikely that the limit $\mu_\text{min}=0$ will be satisfied in practice; as we have seen, the position of the Fermi level is entirely determined by the large band shift, which is extremely sensitive to $d$ and is therefore highly device dependent. The worst-case scenario corresponds to a maximal band shift $|E_1(0)|\sim400$ meV that places the Fermi level at the midpoint between the two closest transverse subbands. For a maximal band shift, the transverse subband spacing in the vicinity of the Fermi level can be estimated as $(\pi\hbar/W)\sqrt{2|E_1(0)|/m^*}\sim35$ meV, thus placing an upper bound of $|\mu_\text{min}|\sim17.5$ meV. In this case, the field strength that would be required to reach a topological phase is $B=2|\mu_\text{min}|/(|g|\mu_B)\sim120$ T. Of course, such an unrealistically large value for the required magnetic field simply means that superconductivity in Al will be destroyed before reaching the topological phase [@fielddependence]. Due to the relative lack of control over the band shift in the limit of a thin superconducting layer, the field strength required to reach a topological phase can lie anywhere in the range between 1.5 T and 120 T with roughly equal probability. Given that superconductivity in the Al layer is destroyed at $B_c\sim3$ T, it is thus very challenging to reach the topological phase in such a device. In order to reliably produce a topological phase, one needs to be able to control the chemical potential over a range of $\sim10-20$ meV to offset large $|\mu_\text{min}|$ [@orbital]. While current experiments on 2DEGs [@Suominen:2017; @Nichele:2017] do not have gates available to tune the chemical potential, even if such gates were implemented we expect that screening effects arising from the strong coupling and close proximity to a metal will not allow for such a large range of tunability.
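The two field-strength estimates above follow from the Zeeman condition $|g|\mu_B B/2=\sqrt{\mu_\text{min}^2+E_g^2}$ (the factor of 2 is inferred from the expression $B=2|\mu_\text{min}|/(|g|\mu_B)$ used in the text). A minimal sketch of the arithmetic:

```python
import math

MU_B = 5.788e-5  # Bohr magneton in eV/T

def required_field(mu_min_eV, gap_eV, g):
    """Field at which |g| mu_B B / 2 = sqrt(mu_min^2 + E_g^2)."""
    return 2.0 * math.sqrt(mu_min_eV**2 + gap_eV**2) / (abs(g) * MU_B)

print(required_field(0.0, 200e-6, g=5))      # ~1.4 T, ideal case mu_min = 0
print(required_field(17.5e-3, 200e-6, g=5))  # ~121 T, worst-case band shift
```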
Nevertheless, even if a subband is shifted to the ideal position $\mu_\text{min}=0$, we must stress that this is no guarantee that one will observe well-separated Majorana fermions in the system. First, the spin-orbit splitting is renormalized to a prohibitively small value $E_{so}\sim25$ $\mu$eV. As it is the SOI that is responsible for inducing $p$-wave pairing in the wire, the localization length of the Majorana wave function could (possibly greatly) exceed the length of the wire. Second, it is unclear whether 1D topological physics would be observed in the metallized case due to the presence of many occupied transverse levels.
We note that our main result, namely the metallization of the wire depicted in Fig. [\[renormalization\]]{}, is independent of our choice $\mu_w=0$. We show in Appendix [\[AppendixC\]]{} that the spectrum in the strong-coupling limit is independent of $\mu_w$. Hence, our result holds even if there are several occupied transverse subbands at $t=0$. Additionally, we checked that our results also hold in the presence of Fermi surface anisotropy within the superconductor, which we implement by allowing for anisotropic hopping $t_{s,z}=2t_{s,y}$.
Controlling the band shift
--------------------------
![\[shiftlarged\] (a) The band shift is significantly reduced by increasing the thickness $d$. (b) When $d$ is increased, the crossover from weak-coupling ($E_g\ll\Delta$) to strong-coupling ($E_g\sim\Delta$) occurs at much smaller $t$. Tight-binding parameters are the same as in Fig. \[deltamuvsd\].](shiftlarged.pdf){width="\linewidth"}
As suggested in Ref. [@Reeg:2017_3], the detrimental band shift can be reduced by increasing the thickness of the superconducting layer. More specifically, while the band shift still exhibits oscillations on the scale of the Fermi wavelength as in Fig. \[deltamuvsd\](b), the oscillation amplitude and typical magnitude (i.e., the average over a single oscillation) are reduced with increasing $d$ [@Reeg:2017; @Schrade:2017]. We confirm this numerically, as shown in Fig. \[shiftlarged\](a). Therefore, the size of the band shift can be tuned experimentally by varying the thickness $d$ of the superconducting layer, and measuring a sharp decrease in the band shift (for example, by angle-resolved photoemission spectroscopy) in systems with larger thickness would constitute a clear experimental verification of our theory. The band shift could also be controlled through the addition of a tunnel barrier between the superconductor and 2DEG, with the band shift decreasing in magnitude as the thickness of the barrier layer is increased.
Even though increasing the thickness $d$ of the superconducting layer can reduce the band shift, this is not necessarily beneficial for inducing a topological phase. As shown in Fig. \[shiftlarged\](b), increasing $d$ also shifts the crossover from weak-coupling ($E_g\ll\Delta$) to strong-coupling ($E_g\sim\Delta$) to significantly smaller $t$. The tunneling strength is a property of the interface, so it should not be affected by the thickness of the superconducting layer. Therefore, if tunneling is strong enough to induce a sizable gap for $d\sim10$ nm as it is for the epitaxial interface, the system will be deep within the strong-coupling regime if the thickness $d$ is increased. In this limit, all semiconducting properties are completely eliminated by the strong coupling to the superconductor and it is challenging to realize a topological phase [@Potter:2011].
Conclusion {#secConclusion}
==========
We have studied the proximity effect in a quasi-1D quantum wire (defined within a 2DEG) strongly coupled to a thin disordered superconducting layer. We showed that, even in the strong-coupling limit, the behavior of the lowest transverse subband in such a system can be very well described by a single 1D channel coupled to a clean 2D superconductor, as studied analytically in Ref. [@Reeg:2017_3]. Utilizing this result, we found that if the proximity-induced gap in an epitaxial Al/InAs heterostructure is comparable to the gap of Al (as observed experimentally), the semiconductor is metallized by the superconductor. Not only do the subbands of the wire undergo a huge band shift $\sim200$ meV, which leads to the occupation of many transverse levels and effectively places the wire far from the 1D limit, but the semiconducting properties that are attractive for realizing a topological phase (large $g$-factor, large spin-orbit splitting, small effective mass) are also significantly renormalized toward their metallic values. We argued that this metallization effect makes it challenging to realize a topological phase in an epitaxial Al/InAs setup, with the ability to do so being largely device dependent. We also proposed that our theory can be verified experimentally by observing a decrease in the magnitude of the band shift when the thickness $d$ of the superconducting layer is increased.
Despite the recent emphasis on electron-electron interaction effects inside hexagonal nanowires [@Antipov:2018; @Woods:2018; @Mikkelsen:2018], we do not consider such effects in our model. Most importantly, interaction effects give rise to a nontrivial spatial profile of the electrostatic potential across the diameter of such nanowires, which spans roughly $50-100$ nm ([*i.e.*]{}, interactions give rise to a band bending effect within the semiconductor). In the setup that we consider, where a quantum wire is defined within a 2DEG, there is no spatial extent over which such a profile can develop (the thickness of the 2DEG is only $\sim5$ nm [@Shabani:2016]). Additionally, as the states in the wire are in such close proximity to a metal, one expects that interactions in the wire are heavily screened [@Woods:2018]. We also neglect potential Luttinger liquid effects that can suppress the induced superconducting gap in both clean and disordered nanowires [@Gangadharaiah:2011; @Stoudenmire:2011; @Disorder_RG]. It is worth noting that Ref. [@Antipov:2018] suggests that in the strong-coupling limit states in the wire are highly localized near the interface; thus, our model may be applicable to the hexagonal nanowire case as well. However, this result was obtained by treating the superconductor simply as a boundary condition in the Poisson equation [@Antipov:2018]; as pointed out in Ref. [@Woods:2018], such a treatment does not adequately describe the strong-coupling limit where the states in the nanowire are strongly affected by the presence of the superconductor (for example, the significant reduction in the transverse level spacing of the wire by the proximity effect, one of the key results of our work, is not captured).
We find experimental support for our theory in Refs. [@Deng:2016; @Gazibegovic:2017; @Zhang:2017_2]. Possible evidence of Majorana fermions in the form of zero-bias conductance peaks has been experimentally observed in both the nanowire [@Deng:2016; @Zhang:2017_2; @Vaitiekenas:2017; @Deng:2017] and 2DEG [@Suominen:2017; @Nichele:2017] epitaxial geometries. In both cases, these zero-bias peaks emerge at finite magnetic field strength from coalescing Andreev bound states that originate from quantum dot-like normal sections at the system ends. For this reason, there has been a significant debate over how to differentiate between zero-bias peaks arising from such Andreev bound states and those arising from Majorana fermions [@Deng:2017; @Zhang:2017_2; @Nichele:2017; @Setiawan:2017; @Liu:2017; @Hell:2017; @Rosdahl:2018]. However, interestingly, it has been observed that in the absence of any quantum dot physics, where there are no Andreev bound states at low field strengths, there is no topological gap-closing transition or zero-bias peak emergence with increasing field strength before superconductivity in Al is destroyed [@note]. Although this is not definitive confirmation of our model description, it is consistent with the chemical potential being tuned between transverse subbands of the wire (and thus outside of the regime for which topological superconductivity can be induced), which we expect to be the case in a majority of devices.
The metallization of the semiconductor discussed in the present work is a direct consequence of the extremely high-quality interface provided by the epitaxial growth of Al on InAs. In order to more reliably induce 1D topological superconducting phases, one should instead seek a weaker proximity coupling to a superconductor with a larger gap, such as NbTi, which has a gap $\Delta\sim2$ meV that is an order of magnitude larger than that of Al.
We thank M. Leuenberger, C. Marcus, D. Maslov, F. Nichele, J. Nygård, and Y. Volpez for helpful discussions. This work was supported by the Swiss National Science Foundation and the NCCR QSIT.
Induced gap independent of width {#AppendixA}
================================
In Sec. \[secFiniteW\], we showed numerically that the proximity-induced gap is independent of the width $W$ of the quasi-1D quantum wire. In this section, we support our numerical calculations analytically by determining the induced gap within second-order perturbation theory in the weak-coupling limit. In the tunneling-Hamiltonian approach, the tunneling amplitude between a given subband of the wire and a given subband of the superconductor is $$t=\int_0^Wdx\int_{-d_w}^{d}dz\,\psi^*_w(x,z)V(z)\psi_s(x,z),$$
![\[real2D\] (a) Proximity-induced gap $E_g$, (b) energy of lowest subband at $k=0$, $E_1(0)$, (c) effective mass $m^*$, (d) spin-orbit splitting $E_{so}$, and (e) $g$-factor plotted as a function of tunneling strength $t$. Tight-binding parameters are the same as those in Fig. \[deltamuvsd\], with $d=9.08$ nm (corresponding to $d/a=519$).](real2D.pdf){width="\linewidth"}
where $d_w$ is the finite thickness of the wire, $\psi_{w(s)}(x,z)$ is the wave function of a given subband in the wire (superconductor), and $V(z)$ is a barrier potential that we assume is uniform along the interface. \[Note that this is not the same $t$ that was introduced in Eq. .\] Given that the wave functions are separable in the coordinates $(x,z)$, the integral over $z$ simply yields the tunneling amplitude $t_0$ in the limit $W=0$, $$\label{t2}
t=t_0\int_0^Wdx\,\psi_w^*(x)\psi_{s}(x).$$ Quantization along the width gives the wave functions $\psi_{w,s}(x)=\sqrt{2/W}\sin(\pi n_{w,W}x/W)$, where $n_{w,W}\in\mathbb{Z}^+$ are the quantum numbers for transverse subbands in the wire and superconductor, respectively. Evaluating the integral in Eq. , we find that only transverse subbands with the same quantum number couple to each other, $$t=t_0\delta_{n_w,n_W}.$$ To second order in tunneling, the induced gap on a given wire subband (characterized by $n_w$) takes the form $$\label{pt}
E_{g,n_w}(d,W)\propto\sum_{n_d,n_W}\frac{|t|^2}{E_{n_d,n_W}},$$ where $n_d$ and $n_W$ are quantum numbers characterizing the spectrum of the superconductor (due to the finite thickness $d$ and finite width $W$, respectively), which is given by $$\label{Esc}
E_{n_d,n_W}=\sqrt{\left(\mu_s-\frac{\hbar^2\pi^2n_d^2}{2m_sd^2}-\frac{\hbar^2\pi^2n_W^2}{2m_sW^2}\right)^2+\Delta^2}.$$ In Eq. we neglect the momentum dependence of the spectrum, as we assume that only momenta $k\ll k_{Fs}$ are relevant.
As the quantum wire has at most only a few occupied subbands, this restricts $n_W\sim1$. Furthermore, since relevant $n_d\sim50$ (determined by requiring $\mu_s\sim\hbar^2\pi^2n_d^2/2m_sd^2$ and taking $\mu_s\sim10$ eV and $d\sim10$ nm) and $W\gg d$, we have $n_W^2/W^2\ll n_d^2/d^2$. Provided that $|\mu_s-\hbar^2\pi^2n_d^2/2m_sd^2|\gg\hbar^2\pi^2n_W^2/2m_sW^2$, which is true for almost all $d$, the term containing $W$ in Eq. is negligible. Performing the sum over $n_W$ then gives $$E_{g,n_w}(d,W)\propto\sum_{n_d}\frac{|t_0|^2}{E_{n_d}},$$ where $$E_{n_d}=\sqrt{\left(\mu_s-\frac{\hbar^2\pi^2n_d^2}{2m_sd^2}\right)^2+\Delta^2}$$ is the spectrum of the superconductor in the limit $W=0$. We see that both $W$ and $n_w$ have dropped out of the expression for the gap completely. Hence, the induced gap is the same for all subbands of the wire and is independent of the width $W$.
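This $W$-independence can also be checked numerically by evaluating the second-order sum with and without the $n_W$ term in the denominator. A minimal sketch, assuming illustrative parameters $\mu_s=10$ eV, $d=10$ nm, $W=50$ nm, $\Delta=0.2$ meV, and $m_s=m_e$:

```python
import numpy as np

HBAR2_OVER_2ME = 3.81      # hbar^2 / (2 m_e) in eV * angstrom^2
MU_S, DELTA = 10.0, 2e-4   # chemical potential and gap of the SC (eV)
D, W = 100.0, 500.0        # SC thickness and wire width in angstroms

n_d = np.arange(1, 200)

def gap_sum(include_W_term, n_W_max=3):
    """Sum over SC levels of 1/E, proportional to E_g up to |t_0|^2."""
    total = 0.0
    for n_W in range(1, n_W_max + 1):
        eps = MU_S - HBAR2_OVER_2ME * np.pi**2 * n_d**2 / D**2
        if include_W_term:
            eps = eps - HBAR2_OVER_2ME * np.pi**2 * n_W**2 / W**2
        total += np.sum(1.0 / np.sqrt(eps**2 + DELTA**2))
    return total

full, no_W = gap_sum(True), gap_sum(False)
print(abs(full - no_W) / full)  # tiny relative change: the W term is negligible
```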
![\[shiftvsmu\] Energy of lowest subband at $k=0$, $E_1(0)$, plotted as a function of tunneling strength $t$ for various wire chemical potentials $\mu_w$. While the band shift $\delta E_1$ is dependent on $\mu_w$, the bottom of the band always approaches the same energy in the strong-coupling limit. This indicates that the spectrum is determined entirely by the superconductor in this limit. Tight-binding parameters are the same as in Fig. \[deltamuvsd\].](shiftvsmu.pdf){width=".8\linewidth"}
Additional calculations for epitaxial Al/InAs {#AppendixB}
=============================================
In Sec. \[sec2D\], we discussed how the proximity-induced gap and band shift behave as a function of superconductor thickness $d$ in the strong-coupling limit. Here, we demonstrate that the parameters of the wire are significantly renormalized by the tunnel coupling. Our results are displayed in Fig. \[real2D\]. For realistic experimental parameters, we find even more drastic changes in semiconducting properties than in Fig. \[specvst\]. For $t=0.1t_s$, which was the tunneling strength used to generate Fig. \[deltamuvsd\], the effective mass of the lowest subband is $m^*\sim0.3m_e$, the spin-orbit splitting is $E_{so}\sim10$ $\mu$eV, and the $g$-factor is $|g|\sim2$. \[While the effective mass was previously deduced by fitting to Eq. , here we determine it as $m^*=\hbar^2k_{so}^2/(2E_{so})$, where $k_{so}$ is the momentum at which the band attains its minimum.\] As shown in Sec. \[secStrongCoupling\], the higher subbands that lie closer to the chemical potential will have a slightly weaker parameter renormalization, which is why we quote slightly different values while making estimates in Sec. \[secTopological\].
Strong-coupling spectrum independent of wire chemical potential $\mu_w$ {#AppendixC}
=======================================================================
In all calculations of the main text, we have assumed that the chemical potential of the wire is tuned to the Rashba crossing point of the lowest transverse subband ($\mu_w=0$) at $t=0$. In reality, however, it is not known how many transverse subbands are occupied in the wire and it is possible that $\mu_w$ takes a different value. It is thus important to test whether our main result, namely the shift of the lowest transverse subband to large energies induced by the superconductor in the strong-coupling limit, is affected by our choice of $\mu_w$.
The energy of the lowest subband at $k=0$, $E_1(0)$, is plotted as a function of $t$ for various $\mu_w$ (ranging between 0 and 175 meV) in Fig. \[shiftvsmu\] for the 2D (single subband) case. While the band shift $\delta E_1$ is dependent on $\mu_w$, $E_1(0)$ converges to the same energy regardless of the initial position of the wire chemical potential $\mu_w$.
![\[deltamumanybands\] (a) Energy of four lowest transverse subbands at $k=0$, $E_n(0)$, plotted as a function of tunneling strength $t$. Parameters are the same as in Fig. \[specweak\], with $\mu_w=150\Delta$. Comparing with Fig. \[specvst\](b) (which has $\mu_w=0$), we see that all four subbands approach the same energy in the limit of large $t$ regardless of $\mu_w$. (b) Effective mass $m^*$ for both $\mu_w=150\Delta$ (black curve) and $\mu_w=0$ \[red curve, corresponding to Fig. \[specvst\](c)\]. Because the band shifts are smaller when $\mu_w=150\Delta$, the mass is renormalized more quickly as a function of $t$.](deltamumanybands.pdf){width="\linewidth"}
Similarly, the energy of the four lowest transverse subbands at $k=0$, $E_n(0)$, is plotted in Fig. \[deltamumanybands\](a) as a function of $t$ for the 3D case when there are several occupied subbands in the limit $t=0$. Comparing with Fig. \[specvst\](b), we find that the energies of all four subbands approach the same values in the strong-coupling limit. However, because the band shifts are smaller for the case of several occupied subbands, the material parameters are more quickly renormalized as a function of $t$ \[e.g., renormalization of the effective mass $m^*$ is shown in Fig. \[deltamumanybands\](b)\].
These results indicate that the spectrum of the system is determined entirely by the superconductor in the strong-coupling limit. Thus, regardless of how many subbands are occupied in the wire to begin with, the metallization picture presented in Fig. \[renormalization\](b) still holds.
| |
Year 3 Objectives: Measurement

MEASUREMENT Objective 1m: Measure, compare, add and subtract: lengths (m/cm/mm); mass (kg/g); volume and capacity (l/ml)
- Practise using appropriate tools to measure distances and weight
- Recognise 1 m as having 100 cm
- Know that 50 cm is ½ a metre
- Measure to the nearest metre a distance of up to 10 m
- Measure accurately a distance of up to 30 cm using a ruler
- Measure a distance of up to 5 m using a tape measure, giving the answer in m and cm
- Recognise 1 kg as having 1000 g
- Know that 500 g is ½ a kg
- Measure to the nearest kg a weight of up to 10 kg on a weighing machine
- Measure accurately a weight of up to 500 g on a weighing scale
- Measure a weight of up to 5 kg using a weighing machine, giving the answer in kg and g

MEASUREMENT Objective 2m: Measure the perimeter of simple 2D shapes
- Know the term 'perimeter'
- Know that the perimeter is the distance around the four sides of a rectangle
- Know that the perimeter is the distance around the outside of any shape
- Measure accurately each side of a 2D shape and add up all the sides to find the perimeter

Objective 3m: Add and subtract amounts of money to give change, using both £ and p in practical contexts
- Add any two amounts of money using notes and coins
- Sort out an amount of money by organising it into sets of the same coins and then making up sets of pounds, etc.
- From a given amount, give change from £1, £5, £10
Year 3 Objectives: Measurement 2

MEASUREMENT Objective 4m: Tell and write the time from an analogue clock, using Roman numerals I to XII, and 12-hour & 24-hour clocks
- Recognise all Roman numerals from I to XII and their associated place on a clock
- Can tell the time on an analogue clock and write down its equivalent, e.g. ten past two can be written as 2:10
- Understand the 24-hour system, e.g. 2pm is 1400 hours

Objective 5m: Estimate and read time with increasing accuracy to the nearest minute
- Revise reading the time in five-minute intervals
- Read the time to one-minute intervals
- Estimate the time to the nearest five minutes, e.g. it is almost ten past three

MEASUREMENT Objective 6m: Record and compare time in terms of seconds, minutes, hours
- Know that 60 seconds is one minute and that 60 minutes is one hour
- Know that quarter past is 15 minutes past; and that half past is 30 minutes past
- Know that 90 seconds is a minute and a half
- Know that 75 minutes is one hour and a quarter

Objective 7m: Use vocabulary such as: o'clock, am, pm, morning, afternoon, noon and midnight
- Know that am represents time from midnight to noon
- Know that pm represents time from noon to midnight
Year 3 Objectives: Measurement 4

MEASUREMENT Objective 8m: Know the number of seconds in a minute; minutes in an hour; and the number of days in each month, year and leap year
- Know that 60 minutes make 1 hour; and that 60 seconds make 1 minute
- Know that the number of days per month varies between 28 and 31
- Know that the number of days in a year varies between 365 and 366, and know the term leap year
- Know the rhyme associated with days of the month

Objective 9m: Compare durations of events, e.g. calculate time taken by particular events or tasks
- Measure in minutes and seconds any duration using a stop watch or hand-held clock
- Know that certain events last a given time: e.g. lunch hour = 60 minutes, playtime = quarter of an hour, football match = 90 minutes
| http://slideplayer.com/slide/3135244/
Greek Mathematics: Archimedes - Cubic and Biquadratic Equations
They were very upset when I said the development of the greatest importance to mathematics in Europe was the discovery by Tartaglia that you can solve a cubic equation: although it is of little use in itself, the discovery must have been psychologically wonderful because it showed that a modern man could do something no ancient Greek could do. It therefore helped in the Renaissance, which was freeing man from the intimidation of the ancients. What the Greeks are learning in school is to be intimidated into thinking that they have fallen so far below their ancestors. [From: "Cubic Equations, or Where Did the Examination Question Come From?" by H. B. Griffiths and A. E. Hirst (quoting Richard Feynman reporting on a 1980 visit to Greece), American Mathematical Monthly, February 1994, pp. 151-161]
The three most famous mathematical problems of the ancient Greeks were:

- Squaring the circle
- Doubling the cube
- Trisecting an arbitrary angle
We know that these problems cannot be solved using Plato and Euclid's restriction that only a compass and a straightedge should be used. While this seems to be an academic restriction, it is a "scientific" approach not found in Babylonian or Egyptian mathematics. Pappus, one of the last ancient Greek mathematicians, called a problem that can be solved by compass and straightedge a plane problem. Further, a problem that requires one or more conic sections was called a solid problem, and the other problems were called linear problems.
Archimedes of Syracuse provided a non-Platonic solution for the angle trisection problem.
Take a circle K and an angle MON with 0° < λ < 90°. Archimedes says without proof that a point S exists such that the line NS cuts the line l0 through MM' in Q with QS = r, where r is the circle radius. Then the angle SQO is λ/3, and this can be algebraically described as a solution of the cubic equation:
x³ - 3r²x - 2r³cos(λ) = 0
In this way Archimedes provided probably one of the simplest geometric “solutions” of the angle trisection problem.
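The connection can be verified directly: substituting x = 2r·cos(λ/3) into the cubic and applying the triple-angle identity cos(λ) = 4cos³(λ/3) − 3cos(λ/3) gives zero identically. A quick numerical check (with illustrative values for r and λ):

```python
import math

r = 1.0
lam = math.radians(75)  # any angle with 0° < λ < 90°

x = 2 * r * math.cos(lam / 3)  # root supplied by the trisection construction
residual = x**3 - 3 * r**2 * x - 2 * r**3 * math.cos(lam)
print(residual)  # ~1e-16, i.e. zero up to floating-point rounding
```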
Note that the geometric construction requires a marked straightedge, which is not allowed by idealists like Plato. The construction requires marking a distance equal to the circle radius on the straightedge; that is, the straightedge must be notched in two places. The Greeks called such constructions neusis (νεύσης) or "verging" constructions.
In On Spirals, according to Heath, Archimedes says the following: if N is some point on a circle K, with N not equal to M (see the figure above), and a distance α is given, then:
A line l exists through N that cuts the circle K in a point S and the line l0 outside the circle in Q such that QS = α, where l0 is the line through MM'.
In fact, another line l' exists that goes through the point N and intersects the circle in S' and the line l0 in Q' such that Q'S' = QS = α.
The algebraic analog of this geometric problem can be formulated as a bi-quadratic equation:
According to the statement of Archimedes, for α < 0 this equation has exactly one real root in (−∞, −r) and exactly one real root in (r, +∞). Even if Archimedes did not provide a proof for these geometrical problems, he knew indirectly that for specific equations of third and fourth degree real solutions always exist, even though he did not use algebraic notation.
The Greeks therefore considered problems involving third- and fourth-order polynomials, but mainly geometrically. The lack of an algebraic notation, which appeared only very late with Diophantus, can probably be explained by the discovery of the irrationals. To some extent, the Platonic ideal was also responsible for the focus on geometric problems. In addition, the complex and sometimes non-standardized notation for numbers could explain why geometry was more advanced than algebra.
For more information see: Constructions Using a Compass and Twice-Notched Straightedge (PDF File)
See also:
- Archimedes and the Palimpsest
- Archimedes Mathematics
- Archimedes and Combinatorial Problems (The loculus of Archimedes)
- Archimedes the Arbelos and the Salinon
- Archimedes Semiregular Convex Solids
LINKS
Angle Trisection by Archimedes
Thirteen elementary straightedge and compass constructions
References
H.-W. Alten, A. Djafari Naini, M. Folkerts, H. Schlosser, K.-H. Schlote, H. Wußing, 4000 Jahre Algebra, Springer Verlag (in German). | http://www.hellenicaworld.com/Greece/Science/en/ArchimedesEquations.html
Findings from a new North Dakota State University survey reveal that the majority of students identifying as liberal or liberal-leaning are not proud of America.
In response to the question "Are you proud to be American?" 57 percent of liberal-identifying students answered 'no'. This is in contrast to the 73 percent majority of conservatives who answered 'yes' to the same question. | https://georgiastarnews.com/tag/north-dakota-state-university/
What is Floating Stock?
Floating stock is defined as the aggregate number of a company's shares that are available for trading in the open market. It represents the number of outstanding shares available to the public for trading and does not include closely held shares or restricted stock.
A company with a low number of shares available has a low float, and it may be difficult to find sellers or buyers due to fewer shares available to trade. Hence, a small float stock will usually have more volatility than a large float stock.
The floating stock of a company may vary over time. If a company sells additional shares to secure more capital, the floating stock increases. Conversely, if the company buys back shares, the outstanding stock will decrease; hence, the percentage of floating stock will decrease.
Summary
- Floating stock signifies the aggregate shares of a stock of a company that is open for the public to trade.
- A large floating stock number reflects a higher availability of shares for trading and makes it easier for investors to buy or sell. Hence, institutional investors are attracted to large floating stocks.
- Floating stock level helps to define a stock’s liquidity and volatility.
Formula for Calculating Floating Stock
The number of outstanding shares of a company does not always represent the floating stock amount. The following formula can be used to find the floating stock figure:
Floating Stock = Outstanding Shares – Restricted Shares – Institution-owned Shares – ESOPs
Where:
- Restricted shares cannot be traded until the lock-up period after the initial public offering (IPO) is over. The shares are non-transferable.
- ESOP is an employee stock ownership plan in a company through which the employees get an ownership interest.
For example, a company may have 5 million outstanding shares. However, out of the 5 million shares, 3.5 million shares are owned by some large institutions, management owns 0.5 million shares, and 0.3 million shares are contributed to ESOP.
Hence, the floating stock is only 0.7 million (5 million – 3.5 million – 0.5 million – 0.3 million). The floating stock as a percentage of outstanding stock is 14% (0.7 million / 5 million = 0.14).
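A minimal sketch of the same calculation in code, using the hypothetical figures from the example above (management's holding is treated as the restricted/insider block):

```python
def floating_stock(outstanding, restricted, institution_owned, esop):
    # Floating stock = outstanding shares minus all closely held shares
    return outstanding - restricted - institution_owned - esop

outstanding = 5_000_000
institution_owned = 3_500_000
restricted = 500_000  # management-owned (insider) shares
esop = 300_000

flt = floating_stock(outstanding, restricted, institution_owned, esop)
print(flt)                      # 700000 shares
print(100 * flt / outstanding)  # 14.0 -> float as % of outstanding
```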
Features of a Floating Stock
- The floating stock number of a company’s stock helps investors understand how many shares are available to them for trading in the market.
- A higher percentage of floating stock indicates a lower amount of controlled shares or large blocks owned by institutions, management or other insiders.
- The amount of floating stock helps to define a stock’s liquidity and volatility.
- A large floating stock number reflects the high availability of shares for trading. Hence, it makes buying and selling easier, thus attracting a larger pool of investors. Institutional investors seek to invest in large blocks of a company’s stocks with a larger float. However, the share price will not be affected much by these large purchases.
- Companies with a high floating stock have share prices that are highly sensitive to company or industry news. This volatility and liquidity allows more opportunity to buy and sell the stock.
- The floating stock number reflects the shares of a company’s particular stock owned by the public. Companies may decide to increase or decrease that amount depending on their goals.
Limitations of a Floating Stock
- Floating stock with a small float will have fewer investors since the low availability of stocks discourages investors from investing. This lack of availability may discourage many investors despite the company’s business prospects.
- In an attempt to increase the floating stock, a company may issue extra shares even if additional capital is not required. Such an action will lead to stock dilution, much to the dismay of the existing shareholders.
Additional Resources
CFI is the official provider of the Capital Markets & Securities Analyst (CMSA)™ certification program, designed to transform anyone into a world-class financial analyst.
In order to help you become a world-class financial analyst and advance your career to your fullest potential, these additional resources will be very helpful. | https://corporatefinanceinstitute.com/resources/knowledge/trading-investing/floating-stock/
A rose by any other name would smell as sweet — except maybe “corpse flower.”
But that name is already spoken for by the titan arum, a species native to the equatorial rain forests of Sumatra that when it blooms emits an odor like rotting meat.
The College of Biological Sciences Conservatory on the University of Minnesota’s St. Paul campus announced Monday morning that its resident corpse flower is blooming, and the public is invited to view it — and smell it, though the odor had significantly ebbed by Monday afternoon. The conservatory — which is normally open to the public only on weekdays between 9 a.m. and 3:30 p.m. — was open till 9 p.m. Monday. It is reverting to its regular schedule Tuesday.
“Botanical gardens around the world build entire festivals around this single plant,” conservatory curator Lisa Aston Philander said in a news release. “Tens of thousands of visitors show up just to inhale this awful ‘carrion’ smell.”
The flower, which can reach heights of 6 feet, emits its signature stench to attract pollinators, such as sweat bees. As of Monday, the flower was 5 feet tall.
Corpse flower blooms last only a few days, and they don’t come around very often — the conservatory’s corpse flower last bloomed seven years ago.
The conservatory is located at 1534 Lindig St. in St. Paul. | https://www.twincities.com/2016/02/01/corpse-flower-blooming-stpaul/ |
Position summary:
The state events coordinator is responsible for choosing the date and location of state meetings, in consultation with the president and the program vice president.
Please keep a record of your activities during the year and upgrade this description as needed.
Leadership skills: good organizational skills, ability to set and meet deadlines, knowledge of equipment needed for meetings, some knowledge of conference contracts, and ability to communicate regularly with the state board and any involved committees.
Major duties:
• Serve on the AAUW of Oregon Board of Directors.
• Research and prepare a list of venues around the state that meet needs of the organization.
• Recommend to the president the time and place for state meetings.
• Develop a suitable standard contract from which to negotiate with venues.
• Negotiate contract with venue for state president to sign.
• Serve as contact person with the facility for all needs related to the meeting, including the number of sleeping rooms and meeting rooms, audio visual equipment and meals.
• Be responsible for on-site liaisons with the facility during the meeting to help solve problems related to the facility.
• Arrange for display of the state banners and on-site signage.
• Delegate to assistants as needed.
• Recommend appointment of an on-site facilities coordinator, if one is desired.
• Verify terms of agreement for the meeting rooms, ensuring that rooms are appropriate for the planned programs.
• Determine technology needs for each meeting room and arrange for equipment.
• Identify and meet with individual(s) who will provide experienced technical assistance.
• Determine and have rooms set up according to program needs.
• Provide a diagram for each session so facility will have written set-up requirements.
• Business session equipment: three microphones, one for podium and two floor microphones; projector and laptop computer; screen; skirted head table on a raised platform that seats at least three.
• Arrange for other equipment as requested for program needs, such as easels, chart paper and markers.
• Verify terms of agreement for the sleeping rooms, ensuring that rooms are reserved per contract.
• Verify complimentary sleeping room(s) and reservations.
• Store the equipment owned and used by the state at state meetings, which may include microphone/speaker systems, a digital projector, plus a cart, and cases for everything.
• Perform such other duties as may be assigned by the president, the executive committee or the board of directors, or as specified in the bylaws.
Time commitment: Attendance at all state board meetings, and major time commitment in the 3-4 months before the state annual meeting.
Resources: Consult with your predecessor; AAUW of Oregon State Annual Meeting Guide;
AAUW of Oregon Bylaws and Policies; AAUW of Oregon Directory; state website: aauwor.aauw.net; national website: AAUW.org. | https://aauw-or.aauw.net/position-descriptions/position-description-events-coordinator/ |
Under the Fair Labor Standards Act (FLSA), employees who work more than 40 hours per week must be paid overtime unless they fall under one of several exemptions. Recently, the U.S. Department of Labor’s (DOL) Wage and Hour Division issued a new fact sheet that discusses the applicability of the “white collar” exemptions of the FLSA to jobs that are common in higher education institutions. These positions include teachers, coaches, professional employees, administrative employees, graduate teaching assistants, research assistants, and student residential assistants.
Student-employees are addressed at length in the fact sheet. The DOL notes that many working students are hourly, nonexempt workers who generally do not work more than 40 hours a week. Some students, however, receive a salary or other non-hourly compensation while working as teaching assistants, research assistants, or residential assistants. The fact sheet notes that teaching assistants whose primary duties are teaching qualify for the teacher exemption. The DOL does not generally consider research and residential assistants to be employees when they are enrolled in educational programs, and therefore they are usually not entitled to minimum wages or overtime compensation.
This fact sheet emphasizes the distinctive nature of many higher education positions and the need for a tailored analysis of higher education employees’ duties to determine whether or not they are exempt from the FLSA’s overtime and minimum wage provisions.
Client Tip: The misclassification of employees is a commonly-litigated area of employment law, and violations can result in severe financial penalties. Therefore, it is important to evaluate an employee’s actual job duties, qualifications, and education for determining whether an employee is exempt under any of the “white collar” exemptions. Additionally, the employee must also meet the salary basis test, currently a salary at a rate not less than $455 per week. Additional fact sheets addressing other wage and hour topics can be found on the DOL’s website. | https://campuscounselnewengland.com/2018/06/12/department-of-labor-fact-sheet-higher-education-institutions-and-overtime-pay-under-the-flsa/ |
DESCRIPTION An immaculately presented two bedroom SEMI-DETACHED home located within a cul-de-sac close to the centre of Storrington village. Internally the property has been subject to complete modernisation to a high standard throughout. Features include: sitting room with open fireplace, superb RE-FITTED KITCHEN/BREAKFAST ROOM, utility room, two bedrooms, re-fitted bathroom suite, newly installed uPVC double glazed windows and gas central heating system. Outside there are attractive gardens with off-road parking leading to a DETACHED GARAGE.
SITTING ROOM 12' 3" x 12' 0" (3.73m x 3.66m) Radiator, uPVC double glazed windows, feature open fireplace with stone hearth and mantel over, recessed built-in display shelving with built-in storage cupboards under, wood style flooring.
UTILITY ROOM 7' 0" x 6' 8" (2.13m x 2.03m) Space and plumbing for washing machine and wall-mounted combination boiler, radiator.
FIRST FLOOR LANDING Access to loft space, double glazed window.
BEDROOM ONE 15' 0" x 10' 4" (4.57m x 3.15m) uPVC double glazed windows, built-in wardrobe cupboards.
BEDROOM TWO 10' 0" x 7' 6" (3.05m x 2.29m) uPVC double glazed windows, built-in wardrobe cupboards.
RE-FITTED BATHROOM Fitted suite comprising: inset bath with central taps and shower attachment with fitted independent shower unit with folding Perspex and chrome screen, inset wash hand basin with toiletries cupboards under, low level flush w.c., extractor, uPVC double glazed windows, concealed spot lighting, heated chrome towel rail. | http://www.fowlersonline.co.uk/residential-sales/100074003539 |
Modern neural language models (LMs) have grabbed much attention in recent years, due in part to their massive sizes and the resources (time, money, data) required to derive them, and in part to their unprecedented performance on language understanding and generation tasks. It is clear that massive LMs are a required component of any future natural language system, are critical to improving existing applications such as search, and enable new applications previously beyond the reach of technology.
Modern LMs also make incredibly silly mistakes that no five-year-old would ever make. One take on this is that all limitations will fade away as model sizes, training data and training time increase, as they surely will. An alternative take is that this is wishful thinking, and that the models require thoughtful guidance in order for them to approach human-level linguistic performance. This talk discusses this latter perspective.
Web search has transformed how we access all kinds of information and has become a core fabric of everyday life. It is used to find information, buy things, plan travel, understand medical conditions, monitor events, etc. Search in other domains has not received nearly the same attention so our experiences in web search shape our thinking about search more generally even when the scenarios are quite different. This is especially true for email search. Although email was initially designed to facilitate asynchronous communication, it has also become a large repository of personal information. The volume of email continues to grow in both consumer and enterprise settings, and search plays a key role in getting back to needed information. Email search is, however, very different than Web search on many dimensions -- the content being sought is personal and private, metadata such as who sent it or when it was sent is plentiful and important, search intentions are different, people know a lot about what they are looking for, etc. Given these differences, new approaches are required. In this talk I will summarize research we have done using large-scale behavioral logs and complementary qualitative methods to characterize what people are looking for, what they know about what they are looking for, and how this interacts with email management practices. I will then describe several opportunities to help people articulate their information needs and design interfaces and interaction techniques to support this. Finally, I will conclude by pointing to new frontiers in email management and search.
The recent availability of diverse health data resources on large cohorts of human individuals presents many challenges and opportunities. I will present our work aimed at developing machine learning algorithms for predicting future onset of disease and identifying causal drivers of disease based on nationwide electronic health record data as well as data from high-throughput omics profiling technologies such as genetics, microbiome, and metabolomics. Our models provide novel insights into potential drivers of obesity, diabetes, and heart disease, and identify hundreds of novel markers at the microbiome, metabolite, and immune system level. Overall, our predictive models can be translated into personalized disease prevention and treatment plans, and to the development of new therapeutic modalities based on metabolites and the microbiome.
Most work to date on mitigating the COVID-19 pandemic is focused urgently on biomedicine and epidemiology. Yet, pandemic-related policy decisions cannot be made on health information alone. Decisions need to consider the broader impacts on people and their needs. Quantifying human needs across the population is challenging as it requires high geo-temporal granularity, high coverage across the population, and appropriate adjustment for seasonal and other external effects. Here, we propose a computational methodology, building on Maslow's hierarchy of needs, that can capture a holistic view of relative changes in needs following the pandemic through a difference-in-differences approach that corrects for seasonality and volume variations. We apply this approach to characterize changes in human needs across physiological, socioeconomic, and psychological realms in the US, based on more than 35 billion search interactions spanning over 36,000 ZIP codes over a period of 14 months. The analyses reveal that the expression of basic human needs has increased exponentially while higher-level aspirations declined during the pandemic in comparison to the pre-pandemic period. In exploring the timing and variations in statewide policies, we find that the durations of shelter-in-place mandates have influenced social and emotional needs significantly. We demonstrate that potential barriers to addressing critical needs, such as support for unemployment and domestic violence, can be identified through web search interactions. Our approach and results suggest that population-scale monitoring of shifts in human needs can inform policies and recovery efforts for current and anticipated needs.
Political campaigns are increasingly turning to targeted advertising platforms to inform and mobilize potential voters. The appeal of these platforms stems from their promise to empower advertisers to select (or "target") users who see their messages with great precision, including through inferences about those users' interests and political affiliations. However, prior work has shown that the targeting may not work as intended, as platforms' ad delivery algorithms play a crucial role in selecting which subgroups of the targeted users see the ads. In particular, the platforms can selectively deliver ads to subgroups within the target audiences selected by advertisers in ways that can lead to demographic skews along race and gender lines, and do so without the advertiser's knowledge. In this work we demonstrate that ad delivery algorithms used by Facebook, the most advanced targeted advertising platform, shape the political ad delivery in ways that may not be beneficial to the political campaigns and to societal discourse. In particular, the ad delivery algorithms lead to political messages on Facebook being shown predominantly to people who Facebook thinks already agree with the ad campaign's message even if the political advertiser targets an ideologically diverse audience. Furthermore, an advertiser determined to reach ideologically non-aligned users is non-transparently charged a high premium compared to their more aligned competitor, a difference from traditional broadcast media. Our results demonstrate that Facebook exercises control over who sees which political messages beyond the control of those who pay for them or those who are exposed to them. Taken together, our findings suggest that the political discourse's increased reliance on profit-optimized, non-transparent algorithmic systems comes at a cost of diversity of political views that voters are exposed to. Thus, the work raises important questions of fairness and accountability desiderata for ad delivery algorithms applied to political ads.
The rising ubiquity of social media presents a platform for individuals to express suicide ideation, instead of traditional, formal clinical settings. While neural methods for assessing suicide risk on social media have shown promise, a crippling limitation of existing solutions is that they ignore the inherent ordinal nature across fine-grain levels of suicide risk. To this end, we reformulate suicide risk assessment as an Ordinal Regression problem, over the Columbia-Suicide Severity Scale. We propose SISMO, a hierarchical attention model optimized to factor in the graded nature of increasing suicide risk levels, through a soft probability distribution, since not all wrong risk levels are equally wrong. We establish the face value of SISMO for preliminary suicide risk assessment on real-world Reddit data annotated by clinical experts. We conclude by discussing the empirical, practical, and ethical considerations pertaining to SISMO in a larger picture, as a human-in-the-loop framework
Scalability and accuracy are well recognized challenges in deep extreme multi-label learning where the objective is to train architectures for automatically annotating a data point with the most relevant subset of labels from an extremely large label set. This paper develops the DeepXML framework that addresses these challenges by decomposing the deep extreme multi-label task into four simpler sub-tasks each of which can be trained accurately and efficiently. Choosing different components for the four sub-tasks allows DeepXML to generate a family of algorithms with varying trade-offs between accuracy and scalability. In particular, DeepXML yields the Astec algorithm that could be 2-12% more accurate and 5-30x faster to train than leading deep extreme classifiers on publicly available short text datasets. Astec could also efficiently train on Bing short text datasets containing up to 62 million labels while making predictions for billions of users and data points per day on commodity hardware. This allowed Astec to be deployed on the Bing search engine for a number of short text applications ranging from matching user queries to advertiser bid phrases to showing personalized ads where it yielded significant gains in click-through-rates, coverage, revenue and other online metrics over state-of-the-art techniques currently in production. DeepXML's code is available at https://github.com/Extreme-classification/deepxml.
We present a neural semi-supervised learning model termed Self-Pretraining. Our model is inspired by the classic self-training algorithm. However, as opposed to self-training, Self-Pretraining is threshold-free, it can potentially update its belief about previously labeled documents, and can cope with the semantic drift problem. Self-Pretraining is iterative and consists of two classifiers. In each iteration, one classifier draws a random set of unlabeled documents and labels them. This set is used to initialize the second classifier, to be further trained by the set of labeled documents. The algorithm proceeds to the next iteration and the classifiers' roles are reversed. To improve the flow of information across the iterations and also to cope with the semantic drift problem, Self-Pretraining employs an iterative distillation process, transfers hypotheses across the iterations, utilizes a two-stage training model, uses an efficient learning rate schedule, and employs a pseudo-label transformation heuristic. We have evaluated our model in three publicly available social media datasets. Our experiments show that Self-Pretraining outperforms the existing state-of-the-art semi-supervised classifiers across multiple settings. Our code is available at https://github.com/p-karisani/self-pretraining .
Extreme multi-label classification (XML) involves tagging a data point with its most relevant subset of labels from an extremely large label set, with several applications such as product-to-product recommendation with millions of products. Although leading XML algorithms scale to millions of labels, they largely ignore label metadata such as textual descriptions of the labels. On the other hand, classical techniques that can utilize label metadata via representation learning using deep networks struggle in extreme settings. This paper develops the DECAF algorithm that addresses these challenges by learning models enriched by label metadata that jointly learn model parameters and feature representations using deep networks and offer accurate classification at the scale of millions of labels. DECAF makes specific contributions to model architecture design, initialization, and training, enabling it to offer up to 2-6% more accurate prediction than leading extreme classifiers on publicly available benchmark product-to-product recommendation datasets, such as LF-AmazonTitles-1.3M. At the same time, DECAF was found to be up to 22x faster at inference than leading deep extreme classifiers, which makes it suitable for real-time applications that require predictions within a few milliseconds. The code for DECAF is available at the following URL: https://github.com/Extreme-classification/DECAF
Product query classification is the basic component for query understanding, which aims to classify the user queries into multiple categories under a predefined product category taxonomy for the E-commerce search engine. It is a challenging task due to the tremendous number of product categories. A slight modification to a query can change its corresponding categories entirely, e.g., appending "button" to the query "shirt". The problem is more severe for the tail queries which lack enough supervision information from customers. Motivated by this phenomenon, this paper proposes to model the contrasting/similar relationships between such similar queries. Our framework is composed of a base model and an across-context attention module. The across-context attention module plays the role of deriving and extracting external information from these variant queries by predicting their categories. We conduct both offline and online experiments on the real-world E-commerce search engine. Experimental results demonstrate the effectiveness of our across-context attention module.
This paper explores and offers guidance on a specific and relevant problem in task design for crowdsourcing: how to formulate a complex question used to classify a set of items. In micro-task markets, classification is still among the most popular tasks. We situate our work in the context of information retrieval and multi-predicate classification, i.e., classifying a set of items based on a set of conditions. Our experiments cover a wide range of tasks and domains, and also consider crowd workers alone and in tandem with machine learning classifiers. We provide empirical evidence into how the resulting classification performance is affected by different predicate formulation strategies, emphasizing the importance of predicate formulation as a task design dimension in crowdsourcing.
Despite the impressive prediction performance of deep neural networks (DNNs) in various domains, it is now well known that a set of DNN models trained with the same model specification and the exact same training data can produce very different prediction results. People have relied on the state-of-the-art ensemble method to estimate prediction uncertainty. However, ensembles are expensive to train and serve for web-scale traffic systems.
In this paper, we seek to advance the understanding of prediction variation estimated by the ensemble method. Through empirical experiments on two widely used benchmark datasets Movielens and Criteo in recommender systems, we observe that prediction variations come from various randomness sources, including training data shuffling, and random initialization. When we add more randomness sources to ensemble members, we see higher prediction variations among these ensemble members, and more accurate mean prediction. Moreover, we propose to infer prediction variation from neuron activation strength and demonstrate its strong prediction power. Our approach provides a simple way for prediction variation estimation, and opens up new opportunities for future work in many interesting areas (e.g., model-based reinforcement learning) without relying on serving expensive ensemble models.
This paper connects equal opportunity to popularity bias in implicit recommenders to introduce the problem of popularity-opportunity bias. That is, conditioned on user preferences that a user likes both items, the more popular item is more likely to be recommended (or ranked higher) to the user than the less popular one. This type of bias is harmful, exerting negative effects on the engagement of both users and item providers. Thus, we conduct a three-part study: (i) By a comprehensive empirical study, we identify the existence of the popularity-opportunity bias in fundamental matrix factorization models on four datasets; (ii) coupled with this empirical study, our theoretical study shows that matrix factorization models inherently produce the bias; and (iii) we demonstrate the potential of alleviating this bias by both in-processing and post-processing algorithms. Extensive experiments on four datasets show the effective debiasing performance of these proposed methods compared with baselines designed for conventional popularity bias.
Due to the advances in deep learning, visually-aware recommender systems (RS) have recently attracted increased research interest. Such systems combine collaborative signals with images, usually represented as feature vectors outputted by pre-trained image models. Since item catalogs can be huge, recommendation service providers often rely on images that are supplied by the item providers. In this work, we show that relying on such external sources can make an RS vulnerable to attacks, where the goal of the attacker is to unfairly promote certain pushed items. Specifically, we demonstrate how a new visual attack model can effectively influence the item scores and rankings in a black-box approach, i.e., without knowing the parameters of the model. The main underlying idea is to systematically create small human-imperceptible perturbations of the pushed item image and to devise appropriate gradient approximation methods to incrementally raise the pushed item's score. Experimental evaluations on two datasets show that the novel attack model is effective even when the contribution of the visual features to the overall performance of the recommender system is modest.
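A hedged sketch of the underlying black-box attack pattern follows: the gradient of the (unknown) item score with respect to the pushed item's image is approximated by symmetric finite differences over random directions, and the perturbation is kept within a small budget. The score function, budget `eps`, and step size are placeholders, not the paper's exact method.

```python
# Hedged sketch of black-box score-raising via gradient approximation.
import numpy as np

def score(img):  # stand-in for the recommender's black-box item score
    return -np.sum((img - 0.7) ** 2)

def estimate_grad(img, n_samples=50, sigma=1e-2):
    # Symmetric finite differences over random directions (NES-style).
    g = np.zeros_like(img)
    for _ in range(n_samples):
        u = np.random.normal(size=img.shape)
        g += (score(img + sigma * u) - score(img - sigma * u)) / (2 * sigma) * u
    return g / n_samples

img = np.random.rand(8, 8)
orig = img.copy()
eps, lr = 0.05, 1e-3  # imperceptibility budget and step size (assumed values)
for _ in range(100):
    img = img + lr * estimate_grad(img)
    img = np.clip(img, orig - eps, orig + eps)  # keep the perturbation small
print("score gain:", score(img) - score(orig))
```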
In a collaborative-filtering recommendation scenario, biases in the data will likely propagate in the learned recommendations. In this paper we focus on the so-called mainstream bias: the tendency of a recommender system to provide better recommendations to users who have a mainstream taste, as opposed to non-mainstream users. We propose NAECF, a conceptually simple but effective idea to address this bias. The idea consists of adding an autoencoder (AE) layer when learning user and item representations with text-based Convolutional Neural Networks. The AEs, one for the users and one for the items, serve as adversaries to the process of minimizing the rating prediction error when learning how to recommend. They enforce that the specific unique properties of all users and items are sufficiently well incorporated and preserved in the learned representations. These representations, extracted as the bottlenecks of the corresponding AEs, are expected to be less biased towards mainstream users, and to provide more balanced recommendation utility across all users. Our experimental results confirm these expectations, significantly improving the recommendations for non-mainstream users while maintaining the recommendation quality for mainstream users. Our results emphasize the importance of deploying extensive content-based features, such as online reviews, in order to better represent users and items to maximize the de-biasing effect.
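Under our reading of the abstract, the training objective couples a rating-prediction loss with the two autoencoder reconstruction losses. A minimal PyTorch sketch follows, with the text-based CNN encoders replaced by random stand-in embeddings and the trade-off weight `lam` assumed.

```python
# Hedged sketch of an NAECF-style joint objective: rating loss plus
# user/item autoencoder reconstruction losses on the learned representations.
import torch
import torch.nn as nn

dim = 32
user_ae = nn.Sequential(nn.Linear(dim, 8), nn.ReLU(), nn.Linear(8, dim))
item_ae = nn.Sequential(nn.Linear(dim, 8), nn.ReLU(), nn.Linear(8, dim))

u = torch.randn(16, dim, requires_grad=True)  # stand-in for user text-CNN output
v = torch.randn(16, dim, requires_grad=True)  # stand-in for item text-CNN output
ratings = torch.randn(16)

pred = (u * v).sum(dim=-1)                    # simple dot-product rating predictor
mse = nn.functional.mse_loss(pred, ratings)
recon = (nn.functional.mse_loss(user_ae(u), u)
         + nn.functional.mse_loss(item_ae(v), v))
lam = 0.5                                     # trade-off weight (assumed)
loss = mse + lam * recon                      # AEs preserve user/item specifics
loss.backward()
print(loss.item())
```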
Users of recommendation systems usually focus on one topic at a time. When finishing reading an item, users may want to access more relevant items related to the last read one as extended reading. However, it is hard for conventional recommendation systems to provide continuous extended reading on such relevant items, since the main recommendation results must remain diversified. In this paper, we propose a new task named recommendation suggestion, which aims to (1) predict whether users want extended reading, and (2) provide appropriate relevant items as suggestions. These recommended relevant items are arranged in a relevant box and instantly inserted below the clicked item in the main feed. The challenge of recommendation suggestion on relevant items is that it should further consider semantic relevance and information gain besides CTR-related factors. Moreover, the real-time relevant box insertion may also harm the overall performance when users do not want extended reading. To address these issues, we propose a novel Real-time Relevant recommendation Suggestion (R3S) framework, which consists of an Item recommender and a Box trigger. We extract features from multiple aspects including feature interaction, semantic similarity and information gain as different experts, and propose a new Multi-critic Multi-gate Mixture-of-Experts (M3oE) strategy to jointly consider different experts with multi-head critics. In experiments, we conduct both offline and online evaluations on a real-world recommendation system with detailed ablation tests. The significant improvements in item/box related metrics verify the effectiveness of R3S. Moreover, we have deployed R3S on WeChat Top Stories, where it affects millions of users. The source code is available at https://github.com/modriczhang/R3S.
Reinforcement Learning (RL) techniques have been sought after as the next-generation tools to further advance the field of recommendation research. Different from classic applications of RL, recommender agents, especially those deployed on commercial recommendation platforms, have to operate in extremely large state and action spaces, serving a dynamic user base in the order of billions, and a long-tail item corpus in the order of millions or billions. The (positive) user feedback available to train such agents is extremely scarce in retrospect. Improving the sample efficiency of RL algorithms is thus of paramount importance when developing RL agents for recommender systems. In this work, we present a general framework to augment the training of model-free RL agents with auxiliary tasks for improved sample efficiency. More specifically, we opt to add additional tasks that predict users' immediate responses (positive or negative) toward recommendations, i.e., user response modeling, to enhance the learning of the state and action representations for the recommender agents. We also introduce a tool based on gradient correlation analysis to guide the model design. We showcase the efficacy of our method in offline experiments, learning and evaluating agent policies over hundreds of millions of user trajectories. We also conduct live experiments on an industrial recommendation platform serving billions of users and tens of millions of items to verify its benefit.
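The gradient-correlation tool can be illustrated as follows: compute gradients of the main loss and the auxiliary user-response loss with respect to the shared layers and measure their cosine similarity; alignment suggests the auxiliary task helps the main one. This sketch assumes toy losses and a single shared linear layer; it is not the authors' implementation.

```python
# Hedged sketch of a gradient-correlation diagnostic for auxiliary tasks.
import torch
import torch.nn as nn

shared = nn.Linear(16, 16)
main_head, aux_head = nn.Linear(16, 1), nn.Linear(16, 1)
x = torch.randn(32, 16)
r = torch.randn(32, 1)                             # e.g., long-term return target
y = torch.randint(0, 2, (32, 1)).float()           # immediate user response label

h = shared(x)
main_loss = nn.functional.mse_loss(main_head(h), r)
aux_loss = nn.functional.binary_cross_entropy_with_logits(aux_head(h), y)

g_main = torch.autograd.grad(main_loss, shared.parameters(), retain_graph=True)
g_aux = torch.autograd.grad(aux_loss, shared.parameters())
flat = lambda gs: torch.cat([g.flatten() for g in gs])
corr = nn.functional.cosine_similarity(flat(g_main), flat(g_aux), dim=0)
print("gradient cosine similarity:", corr.item())
```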
Personalized recommender systems rely on knowledge of user preferences to produce recommendations. While those preferences are often obtained from past user interactions with the recommendation catalog, in some situations such observations are insufficient or unavailable. The most widely studied case is with new users, although other similar situations arise where explicit preference elicitation is valuable. At the same time, a seemingly disparate challenge is that there is a well-known popularity bias in many algorithmic approaches to recommender systems. The most common way of addressing this challenge is diversification, which tends to be applied to the output of a recommender algorithm, prior to items being presented to users. We tie these two problems together, showing a tight relationship. Our results show that popularity bias in preference elicitation contributes to popularity bias in recommendation. In particular, most elicitation methods directly optimize only for the relevance of recommendations that would result from collected preferences. This focus on recommendation accuracy biases the preferences collected. We demonstrate how diversification can instead be applied directly at elicitation time. Our model diversifies the preferences elicited using Multi-Armed Bandits, a classical exploration-exploitation framework from reinforcement learning. This leads to a broader understanding of users' preferences, and improved diversity and serendipity of recommendations, without necessitating post-hoc debiasing corrections.
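As a hedged illustration of bandit-based elicitation, the sketch below runs UCB over hypothetical item groups, spreading elicitation questions across groups rather than repeatedly probing only the most rewarding one. The groups and the reward model are toy assumptions.

```python
# Hedged sketch: UCB over item groups as arms for diversified elicitation.
import numpy as np

n_groups, horizon = 5, 200
true_informativeness = np.array([0.8, 0.5, 0.4, 0.3, 0.2])  # assumed
counts, means = np.zeros(n_groups), np.zeros(n_groups)

rng = np.random.default_rng(1)
for t in range(1, horizon + 1):
    ucb = means + np.sqrt(2 * np.log(t) / np.maximum(counts, 1e-9))
    arm = int(np.argmax(ucb))
    reward = rng.binomial(1, true_informativeness[arm])  # useful feedback?
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]
print("probes per group:", counts)  # exploration spreads questions across groups
```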
The topology of the hyperlink graph among pages expressing different opinions may influence the exposure of readers to diverse content. Structural bias may trap a reader in a 'polarized' bubble with no access to other opinions. We model readers' behavior as random walks. A node is in a 'polarized' bubble if the expected length of a random walk from it to a page of different opinion is large. The structural bias of a graph is the sum of the radii of highly-polarized bubbles. We study the problem of decreasing the structural bias through edge insertions. 'Healing' all nodes with high polarized bubble radius is hard to approximate within a logarithmic factor, so we focus on finding the best k edges to insert to maximally reduce the structural bias. We present RePBubLik, an algorithm that leverages a variant of the random walk closeness centrality to select the edges to insert. RePBubLik obtains, under mild conditions, a constant-factor approximation. It reduces the structural bias faster than existing edge-recommendation methods, including some designed to reduce the polarization of a graph.
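The bubble-radius notion can be estimated by simple Monte-Carlo simulation, as in the following sketch on a toy five-node path graph with two opinions; the graph and opinion labels are illustrative stand-ins.

```python
# Hedged sketch: expected random-walk length from a node until first hitting
# a page of the opposite opinion ("polarized bubble radius").
import random

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
opinion = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B"}

def bubble_radius(start, walks=2000, max_len=100):
    total = 0
    for _ in range(walks):
        node, steps = start, 0
        while opinion[node] == opinion[start] and steps < max_len:
            node = random.choice(adj[node])
            steps += 1
        total += steps
    return total / walks

print(bubble_radius(0))  # deep inside the "A" bubble -> large radius
print(bubble_radius(2))  # adjacent to the other opinion -> small radius
```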
Graph Neural Networks (GNNs) have achieved tremendous success in various real-world applications due to their strong ability in graph representation learning. GNNs explore the graph structure and node features by aggregating and transforming information within node neighborhoods. However, through theoretical and empirical analysis, we reveal that the aggregation process of GNNs tends to destroy node similarity in the original feature space. There are many scenarios where node similarity plays a crucial role. Thus, it has motivated the proposed framework SimP-GCN that can effectively and efficiently preserve node similarity while exploiting graph structure. Specifically, to balance information from graph structure and node features, we propose a feature similarity preserving aggregation which adaptively integrates graph structure and node features. Furthermore, we employ self-supervised learning to explicitly capture the complex feature similarity and dissimilarity relations between nodes. We validate the effectiveness of SimP-GCN on seven benchmark datasets including three assortative and four disassortative graphs. The results demonstrate that SimP-GCN outperforms representative baselines. Further probing shows various advantages of the proposed framework. The implementation of SimP-GCN is available at https://github.com/ChandlerBang/SimP-GCN.
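A minimal sketch, under our reading of the abstract, of the feature-similarity-preserving aggregation: per-node learned scores blend propagation over the original adjacency with propagation over a kNN graph built from node features. Shapes and the balance scorer are illustrative.

```python
# Hedged sketch: blend structure-based and feature-similarity-based
# aggregation with a per-node learned balance score.
import torch
import torch.nn as nn
import torch.nn.functional as F

n, d, k = 100, 16, 5
X = torch.randn(n, d)
A = (torch.rand(n, n) < 0.05).float()            # toy adjacency

Xn = F.normalize(X, dim=1)
sim = Xn @ Xn.T
knn_idx = sim.topk(k + 1, dim=1).indices[:, 1:]  # kNN feature graph (skip self)
A_f = torch.zeros(n, n).scatter_(1, knn_idx, 1.0)

norm = lambda M: M / M.sum(dim=1, keepdim=True).clamp(min=1)
s = torch.sigmoid(nn.Linear(d, 1)(X))            # balance score (random init here)
H = s * (norm(A) @ X) + (1 - s) * (norm(A_f) @ X)  # blended aggregation
print(H.shape)
```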
Graph convolutional networks (GCNs), aiming to obtain node embeddings by integrating high-order neighborhood information through stacked graph convolution layers, have demonstrated great power in many network analysis tasks such as node classification and link prediction. However, a fundamental weakness of GCNs, namely topological limitations, including over-smoothing and local homophily of topology, limits their ability to represent networks. Existing studies addressing these topological limitations typically focus only on the convolution of features over network topology, which inevitably relies heavily on network structure. Moreover, most networks are text-rich, so it is important to integrate not only document-level information but also the local text information, which is particularly significant yet often ignored by existing methods. To overcome these limitations, we propose BiTe-GCN, a novel GCN architecture modeled via bidirectional convolution of topology and features on text-rich networks. Specifically, we first transform the original text-rich network into an augmented bi-typed heterogeneous network, capturing both the global document-level information and the local text-sequence information from texts. We then introduce discriminative convolution mechanisms, which perform convolution on this augmented bi-typed network, realizing the convolutions of topology and features together in the same system and automatically learning the different contributions of these two parts (i.e., the network part and the text part) for the given learning objectives. Extensive experiments on text-rich networks demonstrate that our new architecture outperforms state-of-the-art methods by a significant margin. Moreover, this architecture can also be applied to several e-commerce search scenarios, such as JD search; experiments on the JD dataset show the superiority of the proposed architecture over related methods.
One fundamental problem in causal inference is to learn the individual treatment effects (ITE) -- assessing the causal effects of a certain treatment (e.g., prescription of medicine) on an important outcome (e.g., cure of a disease) for each data instance -- but the effectiveness of most existing methods is often limited due to the existence of hidden confounders. Recent studies have shown that the auxiliary relational information among data can be utilized to mitigate the confounding bias. However, these works assume that the observational data and the relations among them are static, while in reality both will continuously evolve over time; we refer to such data as time-evolving networked observational data.
In this paper, we make an initial investigation of ITE estimation on such data. The problem remains difficult due to the following challenges: (1) modeling the evolution patterns of time-evolving networked observational data; (2) controlling the hidden confounders with current data and historical information; (3) alleviating the discrepancy between the control group and the treated group. To tackle these challenges, we propose a novel ITE estimation framework, the Dynamic Networked Observational Data Deconfounder (DNDC), which aims to learn representations of hidden confounders over time by leveraging both current networked observational data and historical information. Additionally, a novel adversarial learning based representation balancing method is incorporated toward unbiased ITE estimation. Extensive experiments validate the superiority of our framework when measured against state-of-the-art baselines. The implementation is available at https://github.com/jma712/DNDC.
The goal of influence maximization is to select a set of seed users that will optimally diffuse information through a network. In this paper, we study how applying traditional influence maximization algorithms affects the balance between different audience categories (e.g., gender breakdown) who will eventually be exposed to a message. More specifically, we investigate how structural homophily (i.e., the tendency to connect to similar others) and influence diffusion homophily (i.e., the tendency to be influenced by similar others) affect the balance among the activated nodes. We find that even under mild levels of homophily, the balance among the exposed nodes is significantly worse than the balance among the overall population, resulting in a significant disadvantage for one group. To address this challenge, we propose an algorithm that jointly maximizes the influence and balance among nodes while still preserving the attractive theoretical guarantees of the traditional influence maximization algorithms. We run a series of experiments on multiple synthetic and four real-world datasets to demonstrate the effectiveness of the proposed algorithm in improving the balance between different categories of exposed nodes.
Heterogeneous information networks consist of multiple types of edges and nodes, which have a strong ability to represent the rich semantics underpinning network structures. Recently, the dynamics of networks has been studied in many tasks such as social media analysis and recommender systems. However, existing methods mainly focus on the static networks or dynamic homogeneous networks, which are incapable or inefficient in modeling dynamic heterogeneous information networks. In this paper, we propose a method named Dynamic Heterogeneous Information Network Embedding (DyHINE), which can update embeddings when the network evolves. The method contains two key designs: (1) A dynamic time-series embedding module which employs a hierarchical attention mechanism to aggregate neighbor features and temporal random walks to capture dynamic interactions; (2) An online real-time updating module which efficiently updates the computed embeddings via a dynamic operator. Experiments on three real-world datasets demonstrate the effectiveness of our model compared with state-of-the-art methods on the task of temporal link prediction.
A/B tests have been widely adopted across industries as the gold standard that guides decision making. However, the long-term true north metrics we ultimately want to drive through an A/B test may take a long time to mature. In these situations, a surrogate metric which predicts the long-term metric is often used instead to conclude whether the treatment is effective. However, because the surrogate rarely predicts the true north perfectly, a regular A/B test based on surrogate metrics tends to have a high false positive rate, and the treatment variant deemed favorable by the test may not be the winning one. In this paper, we discuss how to adjust the A/B testing comparison to ensure experiment results are trustworthy. We also provide practical guidelines on the choice of good surrogate metrics. To provide a concrete example of how to leverage surrogate metrics for fast decision making, we present a case study on developing and evaluating the predicted confirmed hire surrogate metric in the LinkedIn job marketplace.
In many industry settings, online controlled experimentation (A/B testing) has been broadly adopted as the gold standard to measure product or feature impacts. Most research has primarily focused on user engagement type metrics, specifically measuring treatment effects at the mean (average treatment effects, ATE), and only a few have focused on performance metrics (e.g., latency), where treatment effects are measured at quantiles. Measuring quantile treatment effects (QTE) is challenging due to difficulties such as the dependency introduced by clustered samples, scalability issues, and density bandwidth choices. In addition, previous literature has mainly focused on QTE at pre-defined locations, such as P50 or P90, which does not always convey the full picture. In this paper, we propose a novel scalable non-parametric solution, which can provide a continuous range of QTE with point-wise confidence intervals while circumventing density estimation altogether. Numerical results show high consistency with traditional methods utilizing asymptotic normality. An end-to-end pipeline has been implemented at Snap Inc., providing daily insights on key performance metrics at a distributional level.
It is often critical for prediction models to be robust to distributional shifts between training and testing data. From a causal perspective, the challenge is to distinguish the stable causal relationships from the unstable spurious correlations across shifts. We describe a causal transfer random forest (CTRF) that combines existing training data with a small amount of data from a randomized experiment to train a model which is robust to feature shifts and therefore transfers to a new target distribution. Theoretically, we justify the robustness of the approach against feature shifts with knowledge from causal learning. Empirically, we evaluate the CTRF using both synthetic data experiments and real-world experiments in the Bing Ads platform, including a click prediction task and the context of an end-to-end counterfactual optimization system. The proposed CTRF produces robust predictions and outperforms the compared baseline methods in the presence of feature shifts.
E-commerce business is revolutionizing our shopping experiences by providing convenient and straightforward services. One of the most fundamental problems is how to balance demand and supply in market segments to build an efficient platform. While conventional machine learning models have achieved great success on data-sufficient segments, they may fail in a large portion of segments in E-commerce platforms, where there are not sufficient records to train well-performing models. In this paper, we tackle this problem in the context of market segment demand prediction. The goal is to facilitate the learning process in the target segments by leveraging the learned knowledge from data-sufficient source segments. Specifically, we propose a novel algorithm, RMLDP, to incorporate a multi-pattern fusion network (MPFN) with a meta-learning paradigm. The multi-pattern fusion network considers both local and seasonal temporal patterns for segment demand prediction. In the meta-learning paradigm, transferable knowledge is regarded as the model parameter initialization of MPFN, which is learned from diverse source segments. Furthermore, we capture the segment relations by combining data-driven segment representation and segment knowledge graph representation, and tailor the segment-specific relations to customize transferable model parameter initialization. Thus, even with limited data, the target segment can quickly find the most relevant transferred knowledge and adapt to the optimal parameters. We conduct extensive experiments on two large-scale industrial datasets. The results justify that our RMLDP outperforms a set of state-of-the-art baselines. Besides, RMLDP has been deployed in Taobao, a real-world E-commerce platform. The online A/B testing results further demonstrate the practicality of RMLDP.
Consumer lending services are escalating in E-commerce platforms due to their capability to enhance buyers' purchasing power, improve average order value, and increase the platforms' revenue. Credit risk forecasting and credit limit setting are two fundamental problems in E-commerce/online consumer lending services. Currently, the majority of institutions rely on two-separate-step methods to resolve them: first, build a rating model to evaluate credit risk, and then design heuristic strategies to set credit limits, which requires a large amount of prior knowledge and lacks theoretical justification. In this paper, we propose an end-to-end multi-view and multi-task learning based approach named MvMoE (Multi-view-aware Mixture-of-Experts network) to solve these two problems simultaneously. First, a multi-view network with a hierarchical attention mechanism is constructed to distill users' heterogeneous financial information into shared hidden representations. Then, we jointly train the two tasks with a view-aware multi-gate mixture-of-experts network and a subsequent progressive network to improve their performance. On a real-world dataset containing 5.44 million users, we investigate the effectiveness of MvMoE. Experimental results show that the proposed model improves AP by over 5.60% on credit risk forecasting and MAE by over 9.52% on credit limit setting compared with conventional methods. Meanwhile, MvMoE has good interpretability, which better meets the imperative demands of the financial industry.
Algorithmic recommendations shape music consumption at scale, and understanding the impact of various algorithmic models on how content is consumed is a central question for music streaming platforms. The ability to shift consumption towards less popular content and towards content different from users' typical historic tastes not only affords the platform ways of handling issues such as filter bubbles and popularity bias, but also contributes to maintaining the healthy and sustainable consumption patterns necessary for overall platform success.
In this work, we view diversity as an enabler for shifting consumption and consider two notions of music diversity, based on taste similarity and popularity, and investigate how four different recommendation approaches optimized for user satisfaction fare on diversity metrics. To investigate how ranker complexity influences diversity, we use two well-known rankers and propose two new models of increased complexity: a feedback-aware neural ranker and a reinforcement learning (RL) based ranker. We demonstrate that our models lead to gains in satisfaction, but at the cost of diversity. Such a trade-off between model complexity and diversity necessitates explicitly encoding diversity in the modeling process, for which we consider four types of approaches: interleaving based, submodularity based, interpolation, and RL reward modeling based. We find that our reward modeling based RL approach achieves the best trade-off between optimizing the satisfaction metric and surfacing diverse content, thereby enabling consumption shifting at scale. Our findings have implications for the design and deployment of practical approaches for music diversification, which we discuss at length.
Spectral clustering is widely used in modern data analysis. Spectral clustering methods speed up the computation and keep useful information by reducing dimensionality. Recently, graph signal filtering (GSF) has been introduced to further speed up the dimensionality reduction process by avoiding solving eigenvectors.
In this work, we first prove that a non-ideal filter affects not only the accuracy of GSF but also its calculation speed. We then propose a modified Kernel Polynomial Method (KPM) to help GSF set the filter properly and effectively. We combine the main steps of KPM and GSF and propose a novel spectral clustering method: Chebyshev Accelerated Spectral Clustering (CASC). In CASC, we take advantage of the excellent properties of Chebyshev polynomials: compared with other spectral clustering methods using GSF, CASC spends negligible time on estimating the eigenvalues and achieves the same accuracy with less computation. Experiments on synthetic and real-world data show that CASC is both accurate and fast.
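A hedged sketch of the GSF idea that CASC builds on: low-pass filter random signals with a Chebyshev polynomial approximation of an ideal filter on the normalized Laplacian (no eigendecomposition), then run k-means on the filtered signals. The toy two-block graph, cutoff, and polynomial order below are assumptions, not the paper's settings.

```python
# Hedged sketch: Chebyshev-filtered random signals + k-means, avoiding
# explicit eigenvector computation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 60
A = (rng.random((n, n)) < 0.02).astype(float)      # sparse background
A[:30, :30] = rng.random((30, 30)) < 0.3           # dense block 1
A[30:, 30:] = rng.random((30, 30)) < 0.3           # dense block 2
A = np.triu(A, 1); A = A + A.T
d = np.maximum(A.sum(1), 1)
L = np.eye(n) - A / np.sqrt(np.outer(d, d))        # normalized Laplacian

# Chebyshev coefficients of an ideal low-pass filter on the rescaled spectrum.
K, Np = 30, 2000
theta = np.pi * (np.arange(Np) + 0.5) / Np
resp = (np.cos(theta) <= -0.5).astype(float)       # keep eigenvalues < 0.5
c = [(2 - (k == 0)) / Np * (resp * np.cos(k * theta)).sum() for k in range(K)]

Lt = L - np.eye(n)                                 # spectrum mapped to [-1, 1]
R = rng.normal(size=(n, 8))                        # random probe signals
T_prev, T_cur = R, Lt @ R
filtered = c[0] * T_prev + c[1] * T_cur
for k in range(2, K):                              # Chebyshev recurrence
    T_prev, T_cur = T_cur, 2 * Lt @ T_cur - T_prev
    filtered += c[k] * T_cur
labels = KMeans(n_clusters=2, n_init=10).fit_predict(filtered)
print(labels[:30], labels[30:])                    # the two blocks should separate
```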
Product embeddings have been heavily investigated in the past few years, serving as the cornerstone for a broad range of machine learning applications in e-commerce. Despite the empirical success of product embeddings, little is known about how and why they work from the theoretical standpoint. Analogous results from natural language processing (NLP) often rely on domain-specific properties that are not transferable to the e-commerce setting, and the downstream tasks often focus on different aspects of the embeddings. We take an e-commerce-oriented view of the product embeddings and reveal a complete theoretical view from both the representation learning and the learning theory perspective. We prove that product embeddings trained by the widely-adopted skip-gram negative sampling algorithm and its variants are sufficient dimension reduction regarding a critical product relatedness measure. The generalization performance in the downstream machine learning task is controlled by the alignment between the embeddings and the product relatedness measure. Following the theoretical discoveries, we conduct exploratory experiments that support our theoretical insights for the product embeddings.
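The analyzed algorithm, skip-gram negative sampling (SGNS), can be run on behavior sequences with off-the-shelf tooling; the toy sessions and hyperparameters below are assumptions for illustration only.

```python
# Hedged sketch: SGNS product embeddings trained on toy purchase sessions
# using gensim's Word2Vec (sg=1 selects skip-gram; negative=5 sets sampling).
from gensim.models import Word2Vec

sessions = [
    ["phone", "phone_case", "screen_protector"],
    ["phone", "charger", "phone_case"],
    ["laptop", "laptop_bag", "mouse"],
    ["laptop", "mouse", "usb_hub"],
] * 50  # repeat so the toy corpus is large enough to train on

model = Word2Vec(sessions, vector_size=32, window=3, sg=1, negative=5,
                 min_count=1, epochs=20, seed=7)
print(model.wv.most_similar("phone", topn=2))  # related accessories rank high
```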
The cold-start problem is a fundamental challenge for recommendation tasks. Although recent advances in Graph Neural Networks (GNNs) incorporate high-order collaborative signals to alleviate the problem, the embeddings of cold-start users and items are not explicitly optimized, and cold-start neighbors are not handled during the graph convolution in GNNs. This paper proposes to pre-train a GNN model before applying it for recommendation. Unlike the goal of recommendation, the pre-training GNN simulates cold-start scenarios from the users/items with sufficient interactions and takes embedding reconstruction as the pretext task, such that it can directly improve the embedding quality and can be easily adapted to new cold-start users/items. To further reduce the impact of cold-start neighbors, we incorporate a self-attention-based meta aggregator to enhance the aggregation ability of each graph convolution step, and an adaptive neighbor sampler to select effective neighbors according to the feedback from the pre-training GNN model. Experiments on three public recommendation datasets show the superiority of our pre-trained GNN model against the original GNN models on user/item embedding inference and the recommendation task.
There are many scenarios where short- and long-term causal effects of an intervention are different. For example, low-quality ads may increase short-term ad clicks but decrease long-term revenue via reduced clicks. This work therefore studies the problem of long-term effect, where the outcome of primary interest, or primary outcome, takes months or even years to accumulate. The observational study of long-term effect presents unique challenges. First, the confounding bias causes large estimation error and variance, which can further accumulate towards the prediction of primary outcomes. Second, short-term outcomes are often directly used as a proxy of the primary outcome, i.e., the surrogate. Nevertheless, this method entails the strong surrogacy assumption that is often impractical. To tackle these challenges, we propose to build connections between long-term causal inference and sequential models in machine learning. This enables us to learn surrogate representations that account for the temporal unconfoundedness and circumvent the stringent surrogacy assumption by conditioning on the inferred time-varying confounders. Experimental results show that the proposed framework outperforms the state-of-the-art.
Recently pre-trained language representation models such as BERT have shown great success when fine-tuned on downstream tasks including information retrieval (IR). However, pre-training objectives tailored for ad-hoc retrieval have not been well explored. In this paper, we propose Pre-training with Representative wOrds Prediction (PROP) for ad-hoc retrieval. PROP is inspired by the classical statistical language model for IR, specifically the query likelihood model, which assumes that the query is generated as the piece of text representative of the "ideal" document. Based on this idea, we construct the representative words prediction (ROP) task for pre-training. Given an input document, we sample a pair of word sets according to the document language model, where the set with higher likelihood is deemed as more representative of the document. We then pre-train the Transformer model to predict the pairwise preference between the two word sets, jointly with the Masked Language Model (MLM) objective. By further fine-tuning on a variety of representative downstream ad-hoc retrieval tasks, PROP achieves significant improvements over baselines without pre-training or with other pre-training methods. We also show that PROP can achieve exciting performance under both the zero- and low-resource IR settings.
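A minimal sketch of how the ROP pretraining pairs could be constructed, under our reading: sample two word sets from a document's unigram language model and prefer the set with higher likelihood. The toy document and set size are assumptions.

```python
# Hedged sketch: build a representative-words-prediction (ROP) preference
# pair from a document's unigram language model.
import numpy as np
from collections import Counter

doc = ("deep retrieval models match queries to documents "
       "using learned representations").split()
counts = Counter(doc)
vocab = list(counts)
probs = np.array([counts[w] for w in vocab], dtype=float)
probs /= probs.sum()

rng = np.random.default_rng(0)
def sample_set(size=3):
    words = rng.choice(vocab, size=size, p=probs, replace=True)
    loglik = np.log([probs[vocab.index(w)] for w in words]).sum()
    return list(words), loglik

s1, ll1 = sample_set()
s2, ll2 = sample_set()
preferred, other = (s1, s2) if ll1 > ll2 else (s2, s1)
print("preferred word set:", preferred)  # pretraining target: prefer this set
```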
Preference data is a form of dyadic data, with measurements associated with pairs of elements arising from two discrete sets of objects. These are users and items, as well as their interactions, e.g., ratings. We are interested in learning representations for both sets of objects, i.e., users and items, to predict unknown pairwise interactions. Motivated by the recent successes of deep latent variable models, we propose Bilateral Variational Autoencoder (BiVAE), which arises from a combination of a generative model of dyadic data with two inference models, user- and item-based, parameterized by neural networks. Interestingly, our model can take the form of a Bayesian variational autoencoder either on the user or item side. As opposed to the vanilla VAE model, BiVAE is "bilateral", in that users and items are treated similarly, making it more apt for two-way or dyadic data. While theoretically sound, we formally show that, similarly to VAE, our model might suffer from an over-regularized latent space. This issue, known as posterior collapse in the VAE literature, may appear due to assuming an over-simplified prior (isotropic Gaussian) over the latent space. Hence, we further propose a mitigation of this issue by introducing constrained adaptive prior (CAP) for learning user- and item-dependent prior distributions. Empirical results on several real-world datasets show that the proposed model outperforms conventional VAE and other comparative collaborative filtering models in terms of item recommendation. Moreover, the proposed CAP further boosts the performance of BiVAE. An implementation of BiVAE is available in the Cornac recommender library.
Large generative language models such as GPT-2 are well-known for their ability to generate text as well as their utility in supervised downstream tasks via fine-tuning. Its prevalence on the web, however, is still not well understood - if we run GPT-2 detectors across the web, what will we find? Our work is twofold: firstly we demonstrate via human evaluation that classifiers trained to discriminate between human and machine-generated text emerge as unsupervised predictors of "page quality", able to detect low quality content without any training. This enables fast bootstrapping of quality indicators in a low-resource setting. Secondly, curious to understand the prevalence and nature of low quality pages in the wild, we conduct extensive qualitative and quantitative analysis over 500 million web articles, making this the largest-scale study ever conducted on the topic.
Product reviews play a key role in e-commerce platforms. Studies show that many users read product reviews before purchase and trust them as much as personal recommendations. However, in many cases, the number of reviews per product is large and finding useful information becomes a challenging task. A few websites have recently added an option to post tips - short, concise, practical, and self-contained pieces of advice about products. These tips are complementary to the reviews and usually add a new non-trivial insight about the product, beyond its title, attributes, and description. Yet, most if not all major e-commerce platforms lack the notion of a tip as a first class citizen and customers typically express their advice through other means, such as reviews. In this work, we propose an extractive method for tip generation from product reviews. We focus on five popular e-commerce domains whose reviews tend to contain useful non-trivial tips that are beneficial for potential customers. We formally define the task of tip extraction in e-commerce by providing the list of tip types, tip timing (before and/or after the purchase), and connection to the surrounding context sentences. To extract the tips, we propose a supervised approach and provide a labeled dataset, annotated by human editors, over 14,000 product reviews using a dedicated tool. To demonstrate the potential of our approach, we compare different tip generation methods and evaluate them both manually and over the labeled set. Our approach demonstrates especially high performance for popular products in the Baby, Home Improvement and Sports & Outdoors domains, with precision of over 95% for the top 3 tips per product.
The transparency issue of sponsorship disclosure in advertising posts has become a significant problem in influencer marketing. Although influencers are urged to comply with the regulations governing sponsorship disclosure, a considerable number of influencers fail to disclose sponsorship properly in paid advertisements. In this paper, we propose a learning-to-rank based model, Sponsored Post Detector (SPoD), to detect undisclosed sponsorship of social media posts by learning various aspects of the posts such as text, image, and the social relationship among influencers and brands. More precisely, we exploit image objects and contextualized information to obtain the representations of the posts, and also utilize Graph Convolutional Networks (GCNs) on a network which consists of influencers, brands, and posts with embedded social media attributes. We further optimize the model by conducting manifold regularization based on temporal information and mentioned brands in posts. Extensive studies and experiments are conducted on sampled real-world Instagram datasets containing 1,601,074 posts, which mention 26,910 brands, published over 6 years by 38,113 influencers. Our experimental results demonstrate that SPoD significantly outperforms the existing baseline methods in discovering sponsored posts on social media.
We present Quotebank, an open corpus of 178 million quotations attributed to the speakers who uttered them, extracted from 162 million English news articles published between 2008 and 2020. In order to produce this Web-scale corpus, while at the same time benefiting from the performance of modern neural models, we introduce Quobert, a minimally supervised framework for extracting and attributing quotations from massive corpora. Quobert avoids the necessity of manually labeled input and instead exploits the redundancy of the corpus by bootstrapping from a single seed pattern to extract training data for fine-tuning a BERT-based model. Quobert is language- and corpus-agnostic and correctly attributes 86.9% of quotations in our experiments. Quotebank and Quobert are publicly available at https://doi.org/10.5281/zenodo.4277311.
In e-commerce, opinion tags refer to a ranked list of tags provided by the e-commerce platform that reflect characteristics of reviews of an item. To assist consumers to quickly grasp a large number of reviews about an item, opinion tags are increasingly being applied by e-commerce platforms. Current mechanisms for generating opinion tags rely on either manual labelling or heuristic methods, which are time-consuming and ineffective, respectively. In this paper, we propose the abstractive opinion tagging task, where systems have to automatically generate a ranked list of opinion tags that are based on, but need not occur in, a given set of user-generated reviews. The abstractive opinion tagging task comes with three main challenges: the noisy nature of reviews; the formal nature of opinion tags vs. the colloquial language usage in reviews; and the need to distinguish between different items with very similar aspects. To address these challenges, we propose an abstractive opinion tagging framework, named AOT-Net, to generate a ranked list of opinion tags given a large number of reviews. First, a sentence-level salience estimation component estimates each review's salience score. Next, a review clustering and ranking component ranks reviews in two steps: first, reviews are grouped into clusters and ranked by cluster size; then, reviews within each cluster are ranked by their distance to the cluster center. Finally, given the ranked reviews, a rank-aware opinion tagging component incorporates an alignment feature and alignment loss to generate a ranked list of opinion tags. To facilitate the study of this task, we create and release a large-scale dataset, called eComTag, crawled from real-world e-commerce websites. Extensive experiments conducted on the eComTag dataset verify the effectiveness of the proposed AOT-Net in terms of various evaluation metrics.
Trends are those keywords, phrases, or names that are mentioned most often on social media or in news in a particular timeframe. They are an effective way for human news readers to both discover and stay focused on the most relevant information of the day. In this work, we consider trends that correspond to an entity in a knowledge graph and introduce the new and as-yet unexplored task of identifying other entities that may help explain why an entity is trending. We refer to these retrieved entities as contextual entities. Some of them are more important than others in the context of the trending entity, and we thus determine a ranking of entities according to how useful they are in contextualizing the trend. We propose two solutions for ranking contextual entities. The first one is fully unsupervised and based on Personalized PageRank, calculated over a trending-entity-specific graph of other entities where the edges encode a notion of directional similarity based on embedded background knowledge. Our second method is based on learning to rank and combines the intuitions behind the unsupervised model with signals derived from hand-crafted features in a supervised setting. We compare our models on this novel task by using a new, purpose-built test collection created using crowdsourcing. Our methods improve over the strongest baseline in terms of Precision at 1 by 7% (unsupervised) and 13% (supervised). We find that the salience of a contextual entity and how coherent it is with respect to the news story are strong indicators of relevance in both unsupervised and supervised settings.
Conversational question answering (QA) requires the ability to correctly interpret a question in the context of previous conversation turns. We address the conversational QA task by decomposing it into question rewriting and question answering subtasks. The question rewriting (QR) subtask is specifically designed to reformulate ambiguous questions, which depend on the conversational context, into unambiguous questions that can be correctly interpreted outside of the conversational context. We introduce a conversational QA architecture that sets the new state of the art on the TREC CAsT 2019 passage retrieval dataset. Moreover, we show that the same QR model improves QA performance on the QuAC dataset with respect to answer span extraction, which is the next step in QA after passage retrieval. Our evaluation results indicate that the QR model we proposed achieves near human-level performance on both datasets and the gap in performance on the end-to-end conversational QA task is attributed mostly to the errors in QA.
This paper concerns user preference estimation in multi-round conversational recommender systems (CRS), which interact with users by asking questions about attributes and recommending items multiple times in one conversation. Multi-round CRS such as EAR have been proposed in which the user's online feedback at both the attribute level and the item level can be utilized to estimate user preference and make recommendations. Though preliminary success has been shown, existing user preference models in CRS usually use the online feedback information as independent features or training instances, overlooking the relation between attribute-level and item-level feedback signals. The relation can be used to more precisely identify the reasons (e.g., certain attributes) that trigger the rejection of an item, leading to more fine-grained utilization of the feedback information. To address the aforementioned issue, this paper proposes a novel preference estimation model tailored for multi-round CRS, called Feedback-guided Preference Adaptation Network (FPAN). In FPAN, two gating modules are designed to respectively adapt the original user embedding and item-level feedback, both according to the online attribute-level feedback. The gating modules utilize the fine-grained attribute-level feedback to revise the user embedding and coarse-grained item-level feedback, achieving more accurate user preference estimation by considering the relation between feedback signals. Experimental results on two benchmarks showed that FPAN outperformed the state-of-the-art user preference models in CRS, and the multi-round CRS can also be enhanced by using FPAN as its recommender component.
The ubiquity of implicit feedback makes it the default choice for building online recommender systems. While the large volume of implicit feedback alleviates the data sparsity issue, the downside is that it is not as clean in reflecting the actual satisfaction of users. For example, in E-commerce, a large portion of clicks do not translate to purchases, and many purchases end up with negative reviews. As such, it is of critical importance to account for the inevitable noise in implicit feedback when training a recommender. However, little work on recommendation has taken the noisy nature of implicit feedback into consideration.
In this work, we explore the central theme of denoising implicit feedback for recommender training. We find serious negative impacts of noisy implicit feedback, i.e., fitting the noisy data hinders the recommender from learning the actual user preference. Our target is to identify and prune the noisy interactions, so as to improve the efficacy of recommender training. By observing the process of normal recommender training, we find that noisy feedback typically has large loss values in the early stages. Inspired by this observation, we propose a new training strategy named Adaptive Denoising Training (ADT), which adaptively prunes noisy interactions during training. Specifically, we devise two paradigms for adaptive loss formulation: Truncated Loss that discards the large-loss samples with a dynamic threshold in each iteration; and Reweighted Loss that adaptively lowers the weights of large-loss samples. We instantiate the two paradigms on the widely used binary cross-entropy loss and test the proposed ADT strategies on three representative recommenders. Extensive experiments on three benchmarks demonstrate that ADT significantly improves the quality of recommendation over normal training.
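The Truncated Loss paradigm can be sketched as follows: per-sample losses above a dynamically annealed threshold are dropped as likely noise. The annealing schedule and drop-rate cap below are illustrative choices, not the paper's exact hyperparameters.

```python
# Hedged sketch of a Truncated-Loss-style adaptive denoising step on the
# binary cross-entropy loss: keep only the smallest-loss samples, with the
# drop rate annealed over training.
import torch

def truncated_bce(logits, labels, step, total_steps, max_drop=0.2):
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, labels, reduction="none")
    drop_rate = max_drop * min(step / total_steps, 1.0)  # anneal the threshold
    k = int(len(loss) * (1 - drop_rate))                 # number of samples kept
    kept, _ = torch.topk(loss, k, largest=False)         # discard large losses
    return kept.mean()

logits = torch.randn(64, requires_grad=True)
labels = torch.randint(0, 2, (64,)).float()
loss = truncated_bce(logits, labels, step=500, total_steps=1000)
loss.backward()
print(loss.item())
```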
Next destination recommendation is an important task in the transportation domain of taxi and ride-hailing services, where users are recommended with personalized destinations given their current origin location. However, recent recommendation works do not satisfy this origin-awareness property, and only consider learning from historical destination locations, without origin information. Thus, the resulting approaches are unable to learn and predict origin-aware recommendations based on the user's current location, leading to sub-optimal performance and poor real-world practicality. Hence, in this work, we study the origin-aware next destination recommendation task. We propose the Spatial-Temporal Origin-Destination Personalized Preference Attention (STOD-PPA) encoder-decoder model to learn origin-origin (OO), destination-destination (DD), and origin-destination (OD) relationships by first encoding both origin and destination sequences with spatial and temporal factors in local and global views, then decoding them through personalized preference attention to predict the next destination. Experimental results on seven real-world user trajectory taxi datasets show that our model significantly outperforms baseline and state-of-the-art methods.
A significant number of event-related queries are issued in Web search. In this paper, we seek to improve retrieval performance by leveraging events and specifically target the classic task of query expansion. We propose a method to expand an event-related query by first detecting the events related to it. Then, we derive the candidates for expansion as terms semantically related to both the query and the events. To identify the candidates, we utilize a novel mechanism to simultaneously embed words and events in the same vector space. We show that our proposed method of leveraging events improves query expansion performance significantly compared with state-of-the-art methods on various newswire TREC datasets.
In many applications of session-based recommendation, social networks are usually available. Since users' interests are influenced by their friends, recommender systems can leverage social networks to better understand their users' preferences and thus provide more accurate recommendations. However, existing methods for session-based social recommendation are not efficient. To predict the next item of a user's ongoing session, the methods need to process many additional sessions of the user's friends to capture social influences, while non-social-aware methods (i.e., those without using social networks) only need to process one single session. To solve the efficiency issue, we propose an efficient framework for session-based social recommendation. In the framework, first, a heterogeneous graph neural network is used to learn user and item representations that integrate the knowledge from social networks. Then, to generate predictions, only the user and item representations relevant to the current session are passed to a non-social-aware model. During inference, since the user and item representations can be precomputed, the overall model runs as fast as the original non-social-aware model, while it can achieve better performance by leveraging the knowledge from social networks. Apart from being efficient, our framework has two additional advantages. First, the framework is flexible because it is compatible with any existing non-social-aware models and can easily incorporate more knowledge other than social networks. Second, our framework can capture cross-session item transitions while existing methods can only capture intra-session item transitions. Extensive experiments conducted on three public datasets demonstrate the effectiveness and efficiency of the proposed framework. Our code is available at https://github.com/twchen/SEFrame.
For many kinds of interventions, such as a new advertisement, marketing intervention, or feature recommendation, it is important to target a specific subset of people for maximizing its benefits at minimum cost or potential harm. However, a key challenge is that no data is available about the effect of such a prospective intervention since it has not been deployed yet. In this work, we propose a split-treatment analysis that ranks the individuals most likely to be positively affected by a prospective intervention using past observational data. Unlike standard causal inference methods, the split-treatment method does not need any observations of the target treatments themselves. Instead it relies on observations of a proxy treatment that is caused by the target treatment. Under reasonable assumptions, we show that the ranking of heterogeneous causal effect based on the proxy treatment is the same as the ranking based on the target treatment's effect. In the absence of any interventional data for cross-validation, Split-Treatment uses sensitivity analyses for unobserved confounding to eliminate unreliable models. We apply Split-Treatment to simulated data and a large-scale, real-world targeting task and validate our discovered rankings via a randomized experiment for the latter.
A desirable property of learning systems is to be both effective and interpretable. Towards this goal, recent models have been proposed that first generate an extractive explanation from the input text and then generate a prediction on just the explanation, called explain-then-predict models. These models primarily consider the task input as a supervision signal in learning an extractive explanation and do not effectively integrate rationale data as an additional inductive bias to improve task performance. We propose a novel yet simple approach, ExPred, which uses multi-task learning in the explanation generation phase, effectively trading off explanation and prediction losses. Next, we use another prediction network on just the extracted explanations for optimizing the task performance. We conduct an extensive evaluation of our approach on three diverse language datasets -- sentiment classification, fact-checking, and question answering -- and find that we substantially outperform existing approaches.
Recommendation datasets are prone to selection biases due to self-selection behavior of users and item selection process of systems. This makes explicitly combating selection biases an essential problem in training recommender systems. Most previous studies assume no unbiased data available for training. We relax this assumption and assume that a small subset of training data is unbiased. Then, we propose a novel objective that utilizes the unbiased data to adaptively assign propensity weights to biased training ratings. This objective, combined with unbiased performance estimators, alleviates the effects of selection biases on the training of recommender systems. To optimize the objective, we propose an efficient algorithm that minimizes the variance of propensity estimates for better generalized recommender systems. Extensive experiments on two real-world datasets confirm the advantages of our approach in significantly reducing both the error of rating prediction and the variance of propensity estimation.
How can we build recommender systems to take into account fairness? Real-world recommender systems are often composed of multiple models, built by multiple teams. However, most research on fairness focuses on improving fairness in a single model. Further, recent research on classification fairness has shown that combining multiple "fair" classifiers can still result in an "unfair" classification system. This presents a significant challenge: how do we understand and improve fairness in recommender systems composed of multiple components?
In this paper, we study the compositionality of recommender fairness. We consider two recently proposed fairness ranking metrics: equality of exposure and pairwise ranking accuracy. While we show that fairness in recommendation is not guaranteed to compose, we provide theory for a set of conditions under which fairness of individual models does compose. We then present an analytical framework for understanding whether a real system's signals can achieve compositional fairness, and for identifying which component's improvement would have the greatest impact on the fairness of the overall system. In addition to the theoretical results, we find on multiple datasets---including a large-scale real-world recommender system---that the overall system's end-to-end fairness is largely achievable by improving fairness in individual components.
As Recommender Systems (RS) influence more and more people in their daily life, the issue of fairness in recommendation is becoming more and more important. Most of the prior approaches to fairness-aware recommendation have been situated in a static or one-shot setting, where the protected groups of items are fixed, and the model provides a one-time fairness solution based on fairness-constrained optimization. This fails to consider the dynamic nature of the recommender systems, where attributes such as item popularity may change over time due to the recommendation policy and user engagement. For example, products that were once popular may become no longer popular, and vice versa. As a result, the system that aims to maintain long-term fairness on the item exposure in different popularity groups must accommodate this change in a timely fashion.
Novel to this work, we explore the problem of long-term fairness in recommendation and accomplish the problem through dynamic fairness learning. We focus on the fairness of exposure of items in different groups, while the division of the groups is based on item popularity, which dynamically changes over time in the recommendation process. We tackle this problem by proposing a fairness-constrained reinforcement learning algorithm for recommendation, which models the recommendation problem as a Constrained Markov Decision Process (CMDP), so that the model can dynamically adjust its recommendation policy to make sure the fairness requirement is always satisfied when the environment changes. Experiments on several real-world datasets verify our framework's superiority in terms of recommendation performance, short-term fairness, and long-term fairness.
We consider the problem of utility maximization in online ranking applications while also satisfying a pre-defined fairness constraint. We consider batches of items which arrive over time, already ranked using an existing ranking model. We propose online post-processing for re-ranking these batches to enforce adherence to the pre-defined fairness constraint, while maximizing a specific notion of utility. To achieve this goal, we propose two deterministic re-ranking policies. In addition, we learn a re-ranking policy based on a novel variation of learning to search. Extensive experiments on real world and synthetic datasets demonstrate the effectiveness of our proposed policies both in terms of adherence to the fairness constraint and utility maximization. Furthermore, our analysis shows that the performance of the proposed policies depends on the original data distribution w.r.t the fairness constraint and the notion of utility.
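For intuition, here is a deterministic greedy re-ranking sketch that enforces a minimum prefix share for a protected group while otherwise preferring higher-utility items. It is an illustrative policy in the spirit of the deterministic baselines, not the learned re-ranker; the quota and groups are assumptions.

```python
# Hedged sketch: greedy re-ranking under a prefix-share fairness constraint.
def fair_rerank(items, quota=0.4):
    # items: list of (score, group) with group in {"protected", "other"}
    pools = {g: sorted([s for s, grp in items if grp == g], reverse=True)
             for g in ("protected", "other")}
    ranked, n_prot = [], 0
    for pos in range(1, len(items) + 1):
        # If placing a non-protected item would violate the quota, force protected.
        must_protect = pools["protected"] and n_prot / pos < quota
        if must_protect or (pools["protected"] and
                            (not pools["other"]
                             or pools["protected"][0] >= pools["other"][0])):
            ranked.append(("protected", pools["protected"].pop(0)))
            n_prot += 1
        else:
            ranked.append(("other", pools["other"].pop(0)))
    return ranked

batch = [(0.9, "other"), (0.8, "other"), (0.7, "protected"),
         (0.6, "other"), (0.5, "protected")]
for pos, (grp, score) in enumerate(fair_rerank(batch), 1):
    print(pos, grp, score)  # every prefix keeps >= 40% protected items
```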
Optimizing ranking systems based on user interactions is a well-studied problem. State-of-the-art methods for optimizing ranking systems based on user interactions are divided into online approaches - that learn by directly interacting with users - and counterfactual approaches - that learn from historical interactions. Existing online methods are hindered without online interventions and thus should not be applied counterfactually. Conversely, counterfactual methods cannot directly benefit from online interventions.
We propose a novel intervention-aware estimator for both counterfactual and online Learning to Rank (LTR). With the introduction of the intervention-aware estimator, we aim to bridge the online/counterfactual LTR division as it is shown to be highly effective in both online and counterfactual scenarios. The estimator corrects for the effect of position bias, trust bias, and item-selection bias by using corrections based on the behavior of the logging policy and on online interventions: changes to the logging policy made during the gathering of click data. Our experimental results, conducted in a semi-synthetic experimental setup, show that, unlike existing counterfactual LTR methods, the intervention-aware estimator can greatly benefit from online interventions.
In machine learning and statistics, bias and variance of supervised learning models are well studied concepts. In this work, we try to better understand how these concepts apply in the context of ranking of items (e.g., for web-search or recommender systems). We define notions of bias and variance directly on pairwise ordering of items. We show that ranking disagreements between true orderings and a ranking function can be decomposed into bias and variance components akin to the similar decomposition for the squared loss and other losses that have been previously studied. The popular ranking metric, the area under the curve (AUC), is shown to admit a similar decomposition. We demonstrate the utility of the framework in understanding differences between models. Further, directly looking at the decomposition of the ranking loss rather than the loss used for model fitting can reveal surprising insights. Specifically, we show examples where model training achieves low variance in the traditional sense, yet results in high variance (and high error) on the ranking task.
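A concrete, if simplified, way to see such a decomposition is the classical 0/1-loss construction applied to pairs: the "main" prediction for a pair is the majority ordering across models, bias is the main prediction's disagreement with the true ordering, and variance is the average disagreement of individual models with the main prediction. The snippet below estimates these quantities on synthetic data; the paper's exact definitions may differ, and the noise model is invented.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n_items, n_models = 8, 200
true_scores = rng.normal(size=n_items)
# Each "model" = true scores + noise (a stand-in for training randomness).
models = true_scores + 0.8 * rng.normal(size=(n_models, n_items))

pairs = list(combinations(range(n_items), 2))
pred = np.array([[m[i] > m[j] for i, j in pairs] for m in models])
truth = np.array([true_scores[i] > true_scores[j] for i, j in pairs])
main = pred.mean(axis=0) > 0.5          # majority ordering per pair

bias = np.mean(main != truth)           # main ordering vs. true ordering
variance = np.mean(pred != main)        # spread of models around the main
error = np.mean(pred != truth)          # overall pairwise ranking error
print(f"bias={bias:.3f}  variance={variance:.3f}  error={error:.3f}")
```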
Recent advances in unbiased learning to rank (LTR) count on Inverse Propensity Scoring (IPS) to eliminate bias in implicit feedback. Though theoretically sound in correcting the bias introduced by treating clicked documents as relevant, IPS ignores the bias caused by (implicitly) treating non-clicked ones as irrelevant. In this work, we first rigorously prove that such use of click data leads to unnecessary pairwise comparisons between relevant documents, which prevent unbiased ranker optimization.
Based on the proof, we derive a simple yet well-justified new weighting scheme, called Propensity Ratio Scoring (PRS), which provides treatments on both clicks and non-clicks. Besides correcting the bias in clicks, PRS avoids relevant-relevant document comparisons in LTR training and enjoys lower variability. Our extensive empirical evaluations confirm that PRS ensures a more effective use of click data and improved performance, both on synthetic data from a set of LTR benchmarks and on real-world large-scale data from GMail search.
In feeds recommendation, users are able to constantly browse items generated by never-ending feeds using mobile phones. The implicit feedback from users is an important resource for learning to rank, however, building ranking functions from such observed data is recognized to be biased. The presentation of the items will influence the user's judgements and therefore introduces biases. Most previous works in the unbiased learning to rank literature focus on position bias (i.e., an item ranked higher has more chances of being examined and interacted with). By analyzing user behaviors in product feeds recommendation, in this paper, we identify and introduce context bias, which refers to the probability that a user interacting with an item is biased by its surroundings, to unbiased learning to rank. We propose an Unbiased Learning to Rank with Combinational Propensity (ULTR-CP) framework to remove the inherent biases jointly caused by multiple factors. Under this framework, a context-aware position bias model is instantiated to estimate the unified bias considering both position and context biases. In addition to evaluating propensity score estimation approaches by the ranking metrics, we also discuss the evaluation of the propensities directly by checking their balancing properties. Extensive experiments performed on a real e-commerce data set collected from JD.com verify the effectiveness of context bias and illustrate the superiority of ULTR-CP against the state-of-the-art methods.
Interpretability of ranking models is a crucial yet relatively under-examined research area. Recent progress on this area largely focuses on generating post-hoc explanations for existing black-box ranking models. Though promising, such post-hoc methods cannot provide sufficiently accurate explanations in general, which makes them infeasible in many high-stakes scenarios, especially the ones with legal or policy constraints. Thus, building an intrinsically interpretable ranking model with transparent, self-explainable structure becomes necessary, but this remains less explored in the learning-to-rank setting.
In this paper, we lay the groundwork for intrinsically interpretable learning-to-rank by introducing generalized additive models (GAMs) into ranking tasks. GAMs are intrinsically interpretable machine learning models and have been extensively studied on regression and classification tasks. We study how to extend GAMs into ranking models which can handle both item-level and list-level features and propose a novel formulation of ranking GAMs. To instantiate ranking GAMs, we employ neural networks instead of traditional splines or regression trees. We also show that our neural ranking GAMs can be distilled into a set of simple and compact piece-wise linear functions that are much more efficient to evaluate with little accuracy loss. We conduct experiments on three data sets and show that our proposed neural ranking GAMs can outperform other traditional GAM baselines while maintaining similar interpretability.
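The additive structure is what makes such a model inspectable: each feature owns its own subnetwork, and the item score is just the sum of the per-feature outputs, so any single contribution can be read off directly. The sketch below shows only that skeleton, with untrained random weights; the list-level context weighting, the training objective, and the distillation step are omitted, and all shapes are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_features, hidden = 4, 16

# One tiny MLP per feature: R -> R^hidden -> R.
W1 = rng.normal(size=(n_features, 1, hidden)) * 0.5
b1 = np.zeros((n_features, hidden))
W2 = rng.normal(size=(n_features, hidden, 1)) * 0.5

def f_k(k, x_k):
    """Contribution of feature k to the item's score."""
    h = np.tanh(x_k * W1[k] + b1[k])          # (1, hidden)
    return (h @ W2[k]).item()

def gam_score(x):
    """Additive score: sum_k f_k(x_k)."""
    return sum(f_k(k, x[k]) for k in range(n_features))

item = rng.random(n_features)
print("score:", round(gam_score(item), 4))
print("per-feature contributions:",
      [round(f_k(k, item[k]), 4) for k in range(n_features)])
```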
Cloud-based file storage platforms such as Google Drive are widely used as a means for storing, editing and sharing personal and organizational documents. In this paper, we improve search ranking quality for cloud storage platforms by utilizing user activity logs. Different from search logs, activity logs capture general document usage activity beyond search, such as opening, editing and sharing documents. We propose to automatically learn text embeddings that are effective for search ranking from activity logs. We develop a novel co-access signal, i.e., whether two documents were accessed by a user around the same time, to train deep semantic matching models that are useful for improving the search ranking quality. We confirm that activity-trained semantic matching models can improve ranking by conducting extensive offline experimentation using Google Drive search and activity logs. To the best of our knowledge, this is the first work to examine the benefits of leveraging document usage activity at large scale for cloud storage search; as such it can shed light on using such activity in scenarios where direct collection of search-specific interactions (e.g., query and click logs) may be expensive or infeasible.
Knowledge tracing (KT) aims to model students' knowledge level based on their historical performance, which plays an important role in computer-assisted education and adaptive learning. Recent studies try to take temporal effects of past interactions into consideration, such as the forgetting behavior. However, existing work mainly relies on time-related features or a global decay function to model the time-sensitive effects. Fine-grained temporal dynamics of different cross-skill impacts (termed temporal cross-effects here) have not been well studied. For example, cross-effects on some difficult skills may drop quickly, and the effects caused by distinct previous interactions may also have different temporal evolutions, which cannot be captured in a global way. In this work, we investigate fine-grained temporal cross-effects between different skills in KT. We first validate the existence of temporal cross-effects in real-world datasets through empirical studies. Then, a novel model, HawkesKT, is proposed to explicitly model the temporal cross-effects inspired by the point process, where each previous interaction will have different time-sensitive impacts on the mastery of the target skill. HawkesKT adopts two components to model temporal cross-effects: 1) mutual excitation represents the degree of cross-effects and 2) a kernel function controls the adaptive temporal evolution. To the best of our knowledge, we are the first to introduce the Hawkes process to model temporal cross-effects in KT. Extensive experiments on three benchmark datasets show that HawkesKT is superior to state-of-the-art KT methods. Remarkably, our method also exhibits excellent interpretability and shows significant advantages in training efficiency, which makes it more applicable in real-world large-scale educational settings.
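In Hawkes-process terms, the two components translate into an excitation magnitude and a decay kernel for each (past skill, target skill) pair. The sketch below computes a mastery estimate this way on toy data; the sign convention for incorrect answers, the parameter values, and the absence of any training loop are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

n_skills = 5
rng = np.random.default_rng(3)
base = rng.random(n_skills) * 0.2           # base mastery per skill
a = rng.random((n_skills, n_skills)) * 0.5  # mutual excitation (cross-effects)
b = rng.random((n_skills, n_skills)) + 0.1  # per-pair kernel decay rates

# One student's history: (timestamp, skill, correct) triples.
history = [(0.0, 2, 1), (1.5, 0, 1), (3.0, 2, 0)]

def mastery(target, t_now):
    m = base[target]
    for t, s, correct in history:
        if t >= t_now:
            continue
        sign = 1.0 if correct else -0.5     # assumed: wrong answers excite less
        m += sign * a[s, target] * np.exp(-b[s, target] * (t_now - t))
    return m

print("mastery of skill 2 at t=4.0:", round(mastery(2, 4.0), 3))
```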
In recent years, a plethora of fact checking and fact verification techniques have been developed to detect the veracity or factuality of online information text for various applications. However, limited efforts have been undertaken to understand the interpretability of such veracity detection, i.e. explaining why a particular piece of text is factually correct or incorrect. In this work, we seek to bridge this gap by proposing a technique, FACE-KEG, to automatically perform explainable fact checking. Given an input fact or claim, our proposed model constructs a relevant knowledge graph for it from a large-scale structured knowledge base. This graph is encoded via a novel graph transforming encoder. Our model also simultaneously retrieves and encodes relevant textual context about the input text from the knowledge base. FACE-KEG then jointly exploits both the concept-relationship structure of the knowledge graph as well as semantic contextual cues in order to (i) detect the veracity of an input fact, and (ii) generate a human-comprehensible natural language explanation justifying the fact's veracity. We conduct extensive experiments on three large-scale datasets, and demonstrate the effectiveness of FACE-KEG while performing fact checking. Automatic and human evaluations further show that FACE-KEG significantly outperforms competitive baselines in learning concise, coherent and informative explanations for the input facts.
Representation learning for temporal knowledge graphs has attracted increasing attention in recent years. In this paper, we study the problem of learning dynamic embeddings for temporal knowledge graphs. We address this problem by proposing a Dynamic Bayesian Knowledge Graphs Embedding model (DBKGE), which is able to dynamically track the semantic representations of entities over time in a joint metric space and make predictions for the future. Unlike other temporal knowledge graph embedding methods, DBKGE is a novel probabilistic representation learning method that aims at inferring dynamic embeddings of entities in a streaming scenario. To obtain high-quality embeddings and model their uncertainty, our DBKGE embeds entities with means and variances of Gaussian distributions. Based on amortized inference, an online inference algorithm is proposed to jointly learn the latent representations of entities and smooth their changes across time. Experiments on Yago and Wiki datasets demonstrate that our proposed algorithm outperforms the state-of-the-art static and temporal knowledge graph embedding models.
Food recommendation has become an important means to help guide users to adopt healthy dietary habits. Previous works on food recommendation either i) fail to consider users' explicit requirements, ii) ignore crucial health factors (e.g., allergies and nutrition needs), or iii) do not utilize the rich food knowledge for recommending healthy recipes. To address these limitations, we propose a novel problem formulation for food recommendation, modeling this task as constrained question answering over a large-scale food knowledge base/graph (KBQA). Besides the requirements from the user query, personalized requirements from the user's dietary preferences and health guidelines are handled in a unified way as additional constraints to the QA system. To validate this idea, we create a QA style dataset for personalized food recommendation based on a large-scale food knowledge graph and health guidelines. Furthermore, we propose a KBQA-based personalized food recommendation framework which is equipped with novel techniques for handling negations and numerical comparisons in the queries. Experimental results on the benchmark show that our approach significantly outperforms non-personalized counterparts (average 59.7% absolute improvement across various evaluation metrics), and is able to recommend more relevant and healthier recipes.
Multi-hop Knowledge Base Question Answering (KBQA) aims to find the answer entities that are multiple hops away in the Knowledge Base (KB) from the entities in the question. A major challenge is the lack of supervision signals at intermediate steps. Therefore, multi-hop KBQA algorithms can only receive the feedback from the final answer, which makes the learning unstable or ineffective. To address this challenge, we propose a novel teacher-student approach for the multi-hop KBQA task. In our approach, the student network aims to find the correct answer to the query, while the teacher network tries to learn intermediate supervision signals for improving the reasoning capacity of the student network. The major novelty lies in the design of the teacher network, where we utilize both forward and backward reasoning to enhance the learning of intermediate entity distributions. By considering bidirectional reasoning, the teacher network can produce more reliable intermediate supervision signals, which can alleviate the issue of spurious reasoning. Extensive experiments on three benchmark datasets have demonstrated the effectiveness of our approach on the KBQA task.
Community Question Answering (CQA) sites such as Yahoo! Answers and Baidu Knows have emerged as rich knowledge resources for information seekers. However, answers posted to CQA sites often vary a lot in their qualities. User votes from the community may partially reflect the overall quality of the answer, but they are often missing. Hence, automatic selection of "good'' answers becomes a practical research problem that will help us manage the quality of accumulated knowledge. Without loss of generality, a good answer should deliver not only relevant but also trustworthy information that can help resolve the information needs of the posted question, but the latter has received less investigation in the past. In this paper, we propose a novel matching-verification framework for automatic answer selection. The matching component assesses the relevance of a candidate answer to a given question as conventional QA methods do. The major enhancement is the verification component, which aims to leverage the wisdom of crowds, e.g., some big information repository, for trustworthiness measurement. Given a question, we take the top retrieved results from the information repository as the supporting evidence to distill the consensus representation. A major challenge is that there is no guarantee that one can always obtain reliable consensus from the wisdom of crowds for a question, due to the noisy nature and the limitation of existing search technology. Therefore, we decompose the trustworthiness measurement into two parts, i.e., a verification score which measures the consistency between a candidate answer and the consensus representation, and a confidence score which measures the reliability of the consensus itself. Empirical studies on three real-world CQA data collections, i.e., YahooQA, QuoraQA and AmazonQA, show that our approach can significantly outperform the state-of-the-art methods on the answer selection task.
In recent years, marked temporal point processes (MTPPs) have emerged as a powerful modeling machinery to characterize asynchronous events in a wide variety of applications. MTPPs have demonstrated significant potential in predicting event-timings, especially for events arriving in the near future. However, due to current design choices, MTPPs often show poor predictive performance at forecasting event arrivals in the distant future. To ameliorate this limitation, in this paper, we design DualTPP, which is specifically well-suited to long-horizon event forecasting. DualTPP has two components. The first component is an intensity-free MTPP model, which captures microscopic event dynamics by modeling the time of future events. The second component takes a different, dual perspective of modeling aggregated counts of events in a given time-window, thus encapsulating macroscopic event dynamics. Then we develop a novel inference framework jointly over the two models by solving a sequence of constrained quadratic optimization problems. Experiments with a diverse set of real datasets show that DualTPP outperforms existing MTPP methods on long-horizon forecasting by substantial margins, achieving almost an order of magnitude reduction in Wasserstein distance between actual events and forecasts. The code and the datasets can be found at the following URL: https://github.com/pratham16cse/DualTPP
The accurate and interpretable prediction of future events in time-series data often requires the capturing of representative patterns (also referred to as states) underpinning the observed data. To this end, most existing studies focus on the representation and recognition of states, but ignore the changing transitional relations among them. In this paper, we present the evolutionary state graph, a dynamic graph structure designed to systematically represent the evolving relations (edges) among states (nodes) along time. We conduct analysis on the dynamic graphs constructed from the time-series data and show that changes in the graph structures (e.g., edges connecting certain state nodes) can inform the occurrences of events (i.e., time-series fluctuation). Inspired by this, we propose a novel graph neural network model, Evolutionary State Graph Network (EvoNet), to encode the evolutionary state graph for accurate and interpretable time-series event prediction. Specifically, EvoNet models both the node-level (state-to-state) and graph-level (segment-to-segment) propagation, and captures the node-graph (state-to-segment) interactions over time. Experimental results based on five real-world datasets show that our approach not only achieves clear improvements compared with 11 baselines, but also provides more insights towards explaining the results of event predictions.
Edge streams are commonly used to capture interactions in dynamic networks, such as email, social, or computer networks. The problem of detecting anomalies or rare events in edge streams has a wide range of applications. However, it presents many challenges due to the lack of labels, the highly dynamic nature of interactions, and the entanglement of temporal and structural changes in the network. Current methods are limited in their ability to address the above challenges and to efficiently process a large number of interactions. Here, we propose F-FADE, a new approach for detection of anomalies in edge streams, which uses a novel frequency-factorization technique to efficiently model the time-evolving distributions of frequencies of interactions between node-pairs. The anomalies are then determined based on the likelihood of the observed frequency of each incoming interaction. F-FADE is able to handle, in an online streaming setting, a broad variety of anomalies with temporal and structural changes, while requiring only constant memory. Our experiments on one synthetic and six real-world dynamic networks show that F-FADE achieves state-of-the-art performance and may detect anomalies that previous methods are unable to find.
Recent methods in sequential recommendation focus on learning an overall embedding vector from a user's behavior sequence for the next-item recommendation. However, from empirical analysis, we discovered that a user's behavior sequence often contains multiple conceptually distinct items, while a unified embedding vector is primarily affected by one's most recent frequent actions. Thus, it may fail to infer the next preferred item if conceptually similar items are not dominant in recent interactions. To this end, an alternative solution is to represent each user with multiple embedding vectors encoding different aspects of the user's intentions. Nevertheless, recent work on multi-interest embedding usually considers a small number of concepts discovered via clustering, which may not be comparable to the large pool of item categories in real systems. It is a non-trivial task to effectively model a large number of diverse conceptual prototypes, as items are often not conceptually well clustered in fine granularity. Besides, an individual usually interacts with only a sparse set of concepts. In light of this, we propose a novel Sparse Interest NEtwork (SINE) for sequential recommendation. Our sparse-interest module can adaptively infer a sparse set of concepts for each user from the large concept pool and output multiple embeddings accordingly. Given multiple interest embeddings, we develop an interest aggregation module to actively predict the user's current intention and then use it to explicitly model multiple interests for next-item prediction. Empirical results on several public benchmark datasets and one large-scale industrial dataset demonstrate that SINE can achieve substantial improvement over state-of-the-art methods.
Many real-world applications, e.g., healthcare, present multi-variate time series prediction problems. In such settings, in addition to the predictive accuracy of the models, model transparency and explainability are paramount. We consider the problem of building explainable classifiers from multi-variate time series data. A key criterion to understand such predictive models involves elucidating and quantifying the contribution of time varying input variables to the classification. Hence, we introduce a novel, modular, convolution-based feature extraction and attention mechanism that simultaneously identifies the variables as well as time intervals which determine the classifier output. We present results of extensive experiments with several benchmark data sets that show that the proposed method outperforms the state-of-the-art baseline methods on multi-variate time series classification task. The results of our case studies demonstrate that the variables and time intervals identified by the proposed method make sense relative to available domain knowledge.
Air pollution is an important environmental issue of increasing concern, which impacts human health. Accurate air quality prediction is crucial for avoiding people suffering from serious air pollution. Most of the prior works focus on capturing the temporal trend of air quality for each monitoring station. Recent deep learning based methods also model spatial dependencies among neighboring stations. However, we observe that besides geospatially adjacent stations, the stations which share similar functionalities or consistent temporal patterns could also have strong dependencies. In this paper, we propose an Attentive Temporal Graph Convolutional Network (ATGCN) to model diverse inter-station relationships for air quality prediction of citywide stations. Specifically, we first encode three types of relationships among stations including spatial adjacency, functional similarity, and temporal pattern similarity into graphs. Then we design parallel encoding modules, which respectively incorporate attentive graph convolution operations into the Gated Recurrent Units (GRUs) to iteratively aggregate features from related stations with different graphs. Furthermore, augmented with an attention-based fusion unit, decoding modules with a similar structure to the encoding modules are designed to generate multi-step predictions for all stations. The experiments on two real-world datasets demonstrate the superior performance of our model beyond state-of-the-art methods.
Bipartite graph embedding has recently attracted much attention due to the fact that bipartite graphs are widely used in various application domains. Most previous methods, which adopt random walk-based or reconstruction-based objectives, are typically effective at learning local graph structures. However, the global properties of the bipartite graph, including community structures of homogeneous nodes and long-range dependencies of heterogeneous nodes, are not well preserved. In this paper, we propose a bipartite graph embedding method called BiGI to capture such global properties by introducing a novel local-global infomax objective. Specifically, BiGI first generates a global representation which is composed of two prototype representations. BiGI then encodes sampled edges as local representations via the proposed subgraph-level attention mechanism. Through maximizing the mutual information between local and global representations, BiGI enables nodes in the bipartite graph to be globally relevant. Our model is evaluated on various benchmark datasets for the tasks of top-K recommendation and link prediction. Extensive experiments demonstrate that BiGI achieves consistent and significant improvements over state-of-the-art baselines. Detailed analyses verify the high effectiveness of modeling the global properties of the bipartite graph.
Graph centrality measures use the structure of a network to quantify central or "important" nodes, with applications in web search, social media analysis, and graphical data mining generally. Traditional centrality measures such as the well known PageRank interpret a directed edge as a vote in favor of the importance of the linked node. We study the case where nodes may belong to diverse communities or interests and investigate centrality measures that can identify nodes that are simultaneously important to many such diverse communities. We propose a family of diverse centrality measures formed as fixed point solutions to a generalized nonlinear eigenvalue problem. Our measure can be efficiently computed on large graphs by iterated best response and we study its normative properties on both random graph models and real-world data. We find that we are consistently and efficiently able to identify the most important diverse nodes of a graph, that is, those that are simultaneously central to multiple communities.
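For intuition on the computation, a fixed point of a nonlinear map over node scores can be found by simply iterating the map with renormalization, in the style of power iteration. The sketch below does exactly that with a placeholder nonlinearity; the actual best-response map that defines diverse centrality is specified in the paper and is not reproduced here.

```python
import numpy as np

def fixed_point_centrality(A, f=np.sqrt, iters=200, tol=1e-10):
    """Iterate x <- normalize(f(A x)) until it stabilizes.
    f is a stand-in nonlinearity, not the paper's response map."""
    x = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        y = f(A @ x)                 # nonlinear response to neighbors' scores
        y /= np.linalg.norm(y, 1)    # renormalize each round
        if np.linalg.norm(y - x, 1) < tol:
            break
        x = y
    return x

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # tiny toy adjacency matrix
print(fixed_point_centrality(A).round(3))
```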
Accelerating Deep Convolutional Neural Networks (CNNs) has recently received ever-increasing research focus. Among various approaches proposed in the literature, filter pruning has been regarded as a promising solution, due to its advantage in significant speedup and memory reduction of both the network model and intermediate feature maps. Previous works utilized the "smaller-norm-less-important" criterion to prune filters with smaller lp-norm values by pruning and retraining alternately. However, they ignore the effects of feedback: most current approaches that prune filters only consider the statistics of the filters (e.g., pruning filters with small lp-norm values), without considering the performance of the pruned model as an important feedback signal in the next iteration of filter pruning. To address this lack of feedback, we propose a novel filter pruning method, namely Filter Pruning via Probabilistic Model-based Optimization (FPPMO). FPPMO solves the non-feedback problem by pruning filters in a probabilistic manner. We introduce a pruning probability for each filter, and pruning is guided by sampling from the pruning probability distribution. An optimization method is proposed to update the pruning probability based on the performance of the pruned model during the pruning process. When applied to two image classification benchmarks, the effectiveness of our FPPMO is validated. Notably, on CIFAR-10, our FPPMO reduces more than 57% of FLOPs on ResNet-110 with even a 0.08% relative accuracy improvement. Moreover, on ILSVRC-2012, our FPPMO reduces more than 50% of FLOPs on ResNet-101 without a top-5 accuracy drop, demonstrating that FPPMO outperforms state-of-the-art filter pruning methods.
Knowledge tracing is a fundamental task in intelligent education for tracking the knowledge states of students on necessary concepts. In recent years, Deep Knowledge Tracing (DKT) utilizes recurrent neural networks to model student learning sequences. This approach has achieved significant success and has been widely used in many educational applications. However, in practical scenarios, it tends to suffer from the following critical problems due to data isolation: 1) Data scarcity. Educational data, which is usually distributed across different silos (e.g., schools), is difficult to gather. 2) Different data quality. Students in different silos have different learning schedules, which results in unbalanced learning records, meaning that it is necessary to evaluate the learning data quality independently for different silos. 3) Data incomparability. It is difficult to compare the knowledge states of students with different learning processes from different silos. Inspired by federated learning, in this paper, we propose a novel Federated Deep Knowledge Tracing (FDKT) framework to collectively train high-quality DKT models for multiple silos. In this framework, each client takes charge of training a distributed DKT model and evaluating data quality by leveraging its own local data, while a center server is responsible for aggregating models and updating the parameters for all the clients. In particular, in the client part, we evaluate data quality incorporating different education measurement theories, and we construct two quality-oriented implementations based on FDKT, i.e., FDKT-CTT and FDKT-IRT, where the means of data quality evaluation follow Classical Test Theory and Item Response Theory, respectively. Moreover, in the server part, we adopt hierarchical model interpolation to uptake local effects for model personalization. Extensive experiments on real-world datasets demonstrate the effectiveness and superiority of the FDKT framework.
A lot of research has focused on the efficiency of search engine query processing, and in particular on disjunctive top-k queries that return the highest scoring k results that contain at least one of the query terms. Disjunctive top-k queries over simple ranking functions are commonly used to retrieve an initial set of candidate results that are then reranked by more complex, often machine-learned rankers. Many optimized top-k algorithms have been proposed, including MaxScore, WAND, BMW, and JASS. While the fastest methods achieve impressive results on top-10 and top-100 queries, they tend to become much slower for the larger k commonly used for candidate generation. In this paper, we focus on disjunctive top-k queries for larger k. We propose new algorithms that achieve much faster query processing for values of k up to thousands or tens of thousands. Our algorithms build on top of the live-block filtering approach of Dimopoulos et al., and exploit the SIMD capabilities of modern CPUs. We also perform a detailed experimental comparison of our methods with the fastest known approaches, and release a full implementation of our methods and of the underlying live-block mechanism, which will allow others to design and experiment with additional methods under the live-block approach.
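For readers new to the problem, the snippet below is the unoptimized baseline that all of these methods improve on: accumulate disjunctive scores term-at-a-time, then keep the k best with a min-heap. MaxScore/WAND-style pruning, live-block filtering, and SIMD traversal all exist to avoid the exhaustive work done here; the toy index is invented.

```python
import heapq
from collections import defaultdict

postings = {                       # term -> {doc_id: term score}
    "cat": {1: 0.4, 3: 1.1, 7: 0.2},
    "hat": {1: 0.9, 2: 0.3, 7: 1.5},
}

def topk(query_terms, k):
    scores = defaultdict(float)
    for term in query_terms:               # term-at-a-time accumulation
        for doc, s in postings.get(term, {}).items():
            scores[doc] += s
    heap = []                              # min-heap of (score, doc)
    for doc, s in scores.items():
        if len(heap) < k:
            heapq.heappush(heap, (s, doc))
        elif s > heap[0][0]:               # beats the current k-th best
            heapq.heapreplace(heap, (s, doc))
    return sorted(heap, reverse=True)

print(topk(["cat", "hat"], k=2))           # doc 7 (1.7), then doc 1 (1.3)
```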
Graph neural networks (GNNs) have shown great power in modeling graph structured data. However, similar to other machine learning models, GNNs may make predictions biased on protected sensitive attributes, e.g., skin color and gender, because machine learning algorithms, including GNNs, are trained to reflect the distribution of the training data, which often contains historical bias towards sensitive attributes. In addition, the discrimination in GNNs can be magnified by graph structures and the message-passing mechanism. As a result, the applications of GNNs in sensitive domains such as crime rate prediction would be largely limited. Though extensive studies of fair classification have been conducted on i.i.d. data, methods to address the problem of discrimination on non-i.i.d. data are rather limited. Furthermore, the practical scenario of sparse annotations in sensitive attributes is rarely considered in existing works. Therefore, we study the novel and important problem of learning fair GNNs with limited sensitive attribute information. We propose FairGNN to eliminate the bias of GNNs whilst maintaining high node classification accuracy by leveraging graph structures and limited sensitive information. Our theoretical analysis shows that FairGNN can ensure the fairness of GNNs under mild conditions given limited nodes with known sensitive attributes. Extensive experiments on real-world datasets also demonstrate the effectiveness of FairGNN in debiasing and keeping high accuracy.
Finding dense regions of graphs is fundamental in graph mining. We focus on the computation of dense hierarchies and regions with graph nuclei---a generalization of k-cores and trusses. Static computation of nuclei, namely through variants of 'peeling', is easy to understand and implement. However, many practically important graphs undergo continuous change. Dynamic algorithms, maintaining nucleus computations on dynamic graph streams, are nuanced and require significant effort to port between nuclei, e.g., from k-cores to trusses.
We propose a unifying framework to maintain nuclei in dynamic graph streams. First, we show that no dynamic algorithm can asymptotically beat re-computation, highlighting the need to experimentally understand variability. Next, we prove equivalence between k-cores on a special hypergraph and nuclei. Our algorithm splits the problem into maintaining the special hypergraph and maintaining k-cores on it. We implement our algorithm and experimentally demonstrate improvements of up to 108x over re-computation. We show that algorithmic improvements on k-cores apply to trusses and outperform truss-specific implementations.
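For reference, the static 'peeling' computation is short enough to state in full for plain k-cores: repeatedly remove a minimum-degree vertex, and record the largest degree threshold in force when each vertex leaves. A quadratic-time toy version follows (production implementations use bucket queues for linear time); maintaining the equivalent result under a stream of edge insertions and deletions is what the framework above addresses.

```python
from collections import defaultdict

def core_numbers(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    core, removed, k = {}, set(), 0
    while len(removed) < len(adj):
        v = min((x for x in adj if x not in removed), key=deg.get)
        k = max(k, deg[v])            # current peeling threshold
        core[v] = k
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
    return core

print(core_numbers([(1, 2), (2, 3), (1, 3), (3, 4)]))
# triangle {1, 2, 3} forms a 2-core; the pendant vertex 4 has core number 1
```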
Despite achieving strong performance in the semi-supervised node classification task, graph neural networks (GNNs) are vulnerable to adversarial attacks, similar to other deep learning models. Existing research focuses on developing either robust GNN models or attack detection methods against adversarial attacks on graphs. However, little research attention has been paid to the potential and practice of immunization against adversarial attacks on graphs. In this paper, we propose and formulate the graph adversarial immunization problem, i.e., vaccinating an affordable fraction of node pairs, connected or unconnected, to improve the certifiable robustness of the graph against any admissible adversarial attack. We further propose an effective algorithm, called AdvImmune, which optimizes with meta-gradient in a discrete way to circumvent the computationally expensive combinatorial optimization when solving the adversarial immunization problem. Experiments are conducted on two citation networks and one social network. Experimental results demonstrate that the proposed AdvImmune method remarkably improves the ratio of robust nodes by 12%, 42%, and 65%, with an affordable immune budget of only 5% of edges.
Sponsored search auction is a crucial component of modern search engines. It requires a set of candidate bidwords that advertisers can place bids on. Existing methods generate bidwords from search queries or advertisement content. However, they suffer from the data noise in (query, bidword) and (advertisement, bidword) pairs. In this paper, we propose a triangular bidword generation model (TRIDENT), which takes the high-quality data of paired (query, advertisement) as a supervision signal to indirectly guide the bidword generation process. Our proposed model is simple yet effective: by using bidword as the bridge between search query and advertisement, the generation of search query, advertisement and bidword can be jointly learned in the triangular training framework. This alleviates the problem that the training data of bidword may be noisy. Experimental results, including automatic and human evaluations, show that our proposed TRIDENT can generate relevant and diverse bidwords for both search queries and advertisements. Our evaluation on online real data validates the effectiveness of the TRIDENT's generated bidwords for product search.
Modeling user interests is crucial in real-world recommender systems. In this paper, we present a new user interest representation model for personalized recommendation. Specifically, the key novelty behind our model is that it explicitly models user interests as a hypercuboid instead of a point in the space. In our approach, the recommendation score is learned by calculating a compositional distance between the user hypercuboid and the item. This helps to alleviate the potential geometric inflexibility of existing collaborative filtering approaches, enabling a greater extent of modeling capability. Furthermore, we present two variants of hypercuboids to enhance the capability of capturing the diversities of user interests. A neural architecture is also proposed to facilitate user hypercuboid learning by capturing the activity sequences (e.g., buy and rate) of users. We demonstrate the effectiveness of our proposed model via extensive experiments on both public and commercial datasets. Empirical results show that our approach achieves very promising results, outperforming existing state-of-the-art methods.
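To illustrate the geometry, the sketch below scores an item (a point) against a user hypercuboid given by a center and per-dimension half-widths: dimensions where the item falls inside the box contribute nothing to the outside term, and an inside term is down-weighted. This is a generic box-distance written for illustration; the paper's exact compositional distance, its two hypercuboid variants, and the sequence encoder differ, and the weight alpha is an invented knob.

```python
import numpy as np

def box_distance(x, c, w, alpha=0.5):
    """Distance from item point x to the box (center c, half-widths w)."""
    outside = np.maximum(np.abs(x - c) - w, 0.0)  # zero inside the box
    inside = np.minimum(np.abs(x - c), w)         # capped distance to center
    return np.linalg.norm(outside, 1) + alpha * np.linalg.norm(inside, 1)

c = np.array([0.0, 0.0])
w = np.array([1.0, 0.5])
print(box_distance(np.array([0.2, 0.1]), c, w))   # inside  -> small distance
print(box_distance(np.array([3.0, 2.0]), c, w))   # outside -> large distance
```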
Recently, graph neural networks have been widely used for network embedding because of their prominent performance in pairwise relationship learning. In the real world, a more natural and common situation is the coexistence of pairwise relationships and complex non-pairwise relationships, which is, however, rarely studied. In light of this, we propose a graph neural network-based representation learning framework for heterogeneous hypergraphs, an extension of conventional graphs, which can well characterize multiple non-pairwise relations. Our framework first projects the heterogeneous hypergraph into a series of snapshots, and then we take the wavelet basis to perform localized hypergraph convolution. Since the wavelet basis is usually much sparser than the Fourier basis, we develop an efficient polynomial approximation to the basis to replace the time-consuming Laplacian decomposition. Extensive evaluations have been conducted and the experimental results show the superiority of our method. In addition to the standard tasks of network embedding evaluation such as node classification, we also apply our method to the task of spammer detection, and the superior performance of our framework shows that relationships beyond pairwise ones are also advantageous in spammer detection. To make our experiments repeatable, source code and related datasets are available at https://xiangguosun.mystrikingly.com
This work presents a generalized local factor model, namely Local Collaborative Autoencoders (LOCA). To our knowledge, it is the first generalized framework under the local low-rank assumption that builds on neural recommendation models. We explore a large number of local models by adopting a generalized framework with different weight schemes for training and aggregating them. In addition, we develop a novel method of discovering a sub-community to maximize the coverage of local models. Our experimental results demonstrate that LOCA is highly scalable, achieving state-of-the-art results by outperforming existing AE-based and local latent factor models on several large-scale public benchmarks.
Given an undirected graph, the Densest-k-Subgraph problem (DkS) seeks to find a subset of k vertices such that the sum of the edge weights in the corresponding subgraph is maximized. The problem is known to be NP-hard, and is also very difficult to approximate in the worst case. In this paper, we present a new convex relaxation for the problem. Our key idea is to reformulate DkS as minimizing a submodular function subject to a cardinality constraint. Exploiting the fact that submodular functions possess a convex, continuous extension (known as the Lovasz extension), we propose to minimize the Lovasz extension over the convex hull of the cardinality constraints. Although the Lovasz extension of a submodular function does not admit an analytical form in general, for DkS we show that it does. We leverage this result to develop a highly scalable algorithm based on the Alternating Direction Method of Multipliers (ADMM) for solving the relaxed problem. Coupled with a pair of fortuitously simple rounding schemes, we demonstrate that our approach outperforms existing baselines on real-world graphs and can yield high-quality sub-optimal solutions which typically are a posteriori no worse than 65-80% of the optimal density.
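The object doing the work here, the Lovasz extension, is straightforward to evaluate for any set function F: sort the coordinates of x in decreasing order and charge each element its marginal gain, weighted by its coordinate; when F is submodular the result is a convex function of x. The snippet shows this generic evaluation on a toy submodular function; the DkS-specific closed form and the ADMM solver from the paper are not reproduced.

```python
import numpy as np

def lovasz_extension(F, x):
    """Evaluate the Lovasz extension of set function F at x."""
    order = np.argsort(-x)              # coordinates in decreasing order
    total, prefix = 0.0, []
    prev = F(frozenset())
    for i in order:
        prefix.append(int(i))
        cur = F(frozenset(prefix))
        total += x[i] * (cur - prev)    # marginal gain of adding element i
        prev = cur
    return total

F = lambda S: len(S) ** 0.5             # sqrt(|S|) is submodular
x = np.array([0.9, 0.1, 0.5])
print(lovasz_extension(F, x))           # 0.9*1 + 0.5*(sqrt2-1) + 0.1*(sqrt3-sqrt2)
```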
In signed networks, each edge is labeled as either positive or negative. The edge sign captures the polarity of a relationship. Balance of signed networks is a well-studied property in graph theory. In a balanced (sub)graph, the vertices can be partitioned into two subsets with negative edges present only across the partitions. Balanced portions of a graph have been shown to increase coherence among its members and lead to better performance. While existing works have focused primarily on finding the largest balanced subgraph inside a graph, we study the network design problem of maximizing balance of a target community (subgraph). In particular, given a budget b and a community of interest within the signed network, we aim to make the community as close to being balanced as possible by deleting up to b edges. Besides establishing NP-hardness, we also show that the problem is non-monotone and non-submodular. To overcome these computational challenges, we propose heuristics based on the spectral relation of balance with the Laplacian spectrum of the network. Since the spectral approach lacks approximation guarantees, we further design a greedy algorithm, and its randomized version, with provable bounds on the approximation quality. The bounds are derived by exploiting pseudo-submodularity of the balance maximization function. Empirical evaluation on eight real-world signed networks establishes that the proposed algorithms are effective, efficient, and scalable to graphs with millions of edges.
Influence diffusion estimation is a crucial problem in social network analysis. Most prior works mainly focus on predicting the total influence spread, i.e., the expected number of influenced nodes given an initial set of active nodes (a.k.a. seeds). However, accurate estimation of susceptibility, i.e., the probability of being influenced for each individual, is more appealing and valuable in real-world applications. Previous methods generally adopt Monte Carlo simulation or heuristic rules to estimate the influence, resulting in high computational cost or unsatisfactory estimation error when these methods are used to estimate susceptibility. In this work, we propose to leverage graph neural networks (GNNs) for predicting susceptibility. As GNNs aggregate multi-hop neighbor information and could generate over-smoothed representations, the prediction quality for susceptibility is undesirable. To address the shortcomings of GNNs for susceptibility estimation, we propose a novel DeepIS model with a two-step approach: (1) a coarse-grained step where we estimate each node's susceptibility coarsely; (2) a fine-grained step where we aggregate neighbors' coarse-grained susceptibility estimations to compute the fine-grained estimate for each node. The two modules are trained in an end-to-end manner. We conduct extensive experiments and show that on average DeepIS achieves estimation error five times smaller than state-of-the-art GNN approaches and runs two orders of magnitude faster than Monte Carlo simulation.
Categorizing documents into a given label hierarchy is intuitively appealing due to the ubiquity of hierarchical topic structures in massive text corpora. Although related studies have achieved satisfying performance in fully supervised hierarchical document classification, they usually require massive human-annotated training data and only utilize text information. However, in many domains, (1) annotations are quite expensive where very few training samples can be acquired; (2) documents are accompanied by metadata information. Hence, this paper studies how to integrate the label hierarchy, metadata, and text signals for document categorization under weak supervision. We develop HiMeCat, an embedding-based generative framework for our task. Specifically, we propose a novel joint representation learning module that allows simultaneous modeling of category dependencies, metadata information and textual semantics, and we introduce a data augmentation module that hierarchically synthesizes training documents to complement the original, small-scale training set. Our experiments demonstrate a consistent improvement of HiMeCat over competitive baselines and validate the contribution of our representation learning and data augmentation modules.
Graph Neural Networks (GNNs) have shown to be powerful tools for graph analytics. The key idea is to recursively propagate and aggregate information along the edges of the given graph. Despite their success, however, the existing GNNs are usually sensitive to the quality of the input graph. Real-world graphs are often noisy and contain task-irrelevant edges, which may lead to suboptimal generalization performance in the learned GNN models. In this paper, we propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of GNNs by learning to drop task-irrelevant edges. PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks. To take into consideration the topology of the entire graph, the nuclear norm regularization is applied to impose the low-rank constraint on the resulting sparsified graph for better generalization. PTDNet can be used as a key component in GNN models to improve their performances on various tasks, such as node classification and link prediction. Experimental studies on both synthetic and benchmark datasets show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
Citing comprehensive and correct related work is crucial in academic writing. It can not only support the author's claims but also help readers trace other related research papers. Nowadays, with the rapid increase in the number of scientific publications, it has become increasingly challenging to search for high-quality citations and write the manuscript. In this paper, we present an automatic writing assistant model, AutoCite, which not only infers potentially related work but also automatically generates the citation context at the same time. Specifically, AutoCite involves a novel multi-modal encoder and a multi-task decoder architecture. Based on the multi-modal inputs, the encoder in AutoCite learns paper representations with both citation network structure and textual contexts. The multi-task decoder in AutoCite couples and jointly learns citation prediction and context generation in a unified manner. To effectively join the encoder and decoder, we introduce a novel representation fusion component, i.e., gated neural fusion, which feeds the multi-modal representation inputs from the encoder and creates outputs for the downstream multi-task decoder adaptively. Extensive experiments on five real-world citation network datasets validate the effectiveness of our model.
Attributed network embedding aims to learn low dimensional node representations by combining both the network's topological structure and node attributes. Most of the existing methods either propagate the attributes over the network structure or learn the node representations by an encoder-decoder framework. However, propagation based methods tend to prefer network structure to node attributes, whereas encoder-decoder methods tend to ignore the longer connections beyond the immediate neighbors. In order to address these limitations while enjoying the best of the two worlds, we design cross fusion layers for unsupervised attributed network embedding. Specifically, we first construct two separate views to handle network structure and node attributes, and then design cross fusion layers to allow flexible information exchange and integration between the two views. The key design goals of the cross fusion layers are three-fold: 1) allowing critical information to be propagated along the network structure, 2) encoding the heterogeneity in the local neighborhood of each node during propagation, and 3) incorporating an additional node attribute channel so that the attribute information will not be overshadowed by the structure view. Extensive experiments on three datasets and three downstream tasks demonstrate the effectiveness of the proposed method.
Predicting crowd flows is crucial for urban planning, traffic management and public safety. However, predicting crowd flows is not trivial because of three challenges: 1) highly heterogeneous mobility data collected by various services; 2) complex spatio-temporal correlations of crowd flows, including multi-scale spatial correlations along with non-linear temporal correlations; and 3) diversity in long-term temporal patterns. To tackle these challenges, we propose an end-to-end architecture, called pyramid dilated spatial-temporal network (PDSTN), to effectively learn spatial-temporal representations of crowd flows with a novel attention mechanism. Specifically, PDSTN employs the ConvLSTM structure to identify complex features that capture spatial-temporal correlations simultaneously, and then stacks multiple ConvLSTM units for deeper feature extraction. To further improve the spatial learning ability, a pyramid dilated residual network is introduced by adopting several dilated residual ConvLSTM networks to extract multi-scale spatial information. In addition, a novel attention mechanism, which considers both long-term periodicity and the shift in periodicity, is designed to study diverse temporal patterns. Extensive experiments were conducted on three highly heterogeneous real-world mobility datasets to illustrate the effectiveness of PDSTN beyond the state-of-the-art methods. Moreover, PDSTN provides intuitive interpretations of its predictions.
We generalize triadic closure, along with previous generalizations of triadic closure, under an intuitive umbrella generalization: the Subgraph-to-Subgraph Transition (SST). We present algorithms and code to model graph evolution in terms of collections of these SSTs. We then use the SST framework to create link prediction models for both static and temporal, directed and undirected graphs which produce highly interpretable results. Quantitatively, our models match the out-of-the-box performance of state-of-the-art graph neural network models, thereby validating the correctness and meaningfulness of our interpretable results.
Anomaly detection in time series is a research area of increasing importance. In order to safeguard the availability and stability of services, large companies need to monitor various time-series data to detect anomalies in real time for troubleshooting, thereby reducing potential economic losses. However, in many practical applications, time-series anomaly detection is still an intractable problem due to the huge amount of data, complex data patterns, and limited computational resources. SPOT is an efficient streaming algorithm for anomaly detection, but it is only sensitive to extreme values in the whole data distribution. In this paper, we propose FluxEV, a fast and effective unsupervised anomaly detection framework. By converting the non-extreme anomalies to extreme values, our framework addresses the limitation of SPOT and achieves a huge improvement in detection accuracy. Moreover, the Method of Moments is adopted to speed up the parameter estimation in the automatic thresholding. Extensive experiments show that FluxEV greatly outperforms the state-of-the-art baselines on two large public datasets while ensuring high efficiency.
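To see why moment matching is fast, here is the closed-form method-of-moments fit for the Generalized Pareto distribution that SPOT-style extreme-value thresholding fits to threshold excesses: with sample mean m and variance v of the excesses, xi = (1 - m^2/v) / 2 and sigma = m * (m^2/v + 1) / 2, so no iterative optimization is needed. The streaming pipeline and FluxEV's conversion of non-extreme anomalies to extreme values are not reproduced; the toy data below is invented.

```python
import numpy as np

def gpd_mom(excesses):
    """Method-of-moments estimates (sigma, xi) for a GPD, valid for xi < 1/2."""
    m, v = np.mean(excesses), np.var(excesses)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return sigma, xi

def gpd_quantile(sigma, xi, p):
    """Excess level exceeded with probability p (assumes xi != 0)."""
    return sigma / xi * (p ** (-xi) - 1.0)

rng = np.random.default_rng(4)
sample = rng.pareto(5.0, size=20_000)   # heavy-tailed toy excesses
sigma, xi = gpd_mom(sample)             # true values here: sigma=0.2, xi=0.2
print(f"sigma={sigma:.3f}  xi={xi:.3f}  q(1e-3)={gpd_quantile(sigma, xi, 1e-3):.3f}")
```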
Node classification is an important research topic in graph learning. Graph neural networks (GNNs) have achieved state-of-the-art performance in node classification. However, existing GNNs address the problem where node samples for different classes are balanced, while in many real-world scenarios some classes may have far fewer instances than others. Directly training a GNN classifier in this case would under-represent samples from those minority classes and result in sub-optimal performance. Therefore, it is very important to develop GNNs for imbalanced node classification, but work on this is rather limited. Hence, we seek to extend previous imbalanced learning techniques for i.i.d. data to the imbalanced node classification task to facilitate GNN classifiers. In particular, we choose to adopt synthetic minority over-sampling algorithms, as they are found to be the most effective and stable. This task is non-trivial, as previous synthetic minority over-sampling algorithms fail to provide relation information for newly synthesized samples, which is vital for learning on graphs. Moreover, node attributes are high-dimensional; directly over-sampling in the original input domain could generate out-of-domain samples, which may impair the accuracy of the classifier. We propose a novel framework, GraphSMOTE, in which an embedding space is constructed to encode the similarity among the nodes. New samples are synthesized in this space to assure genuineness. In addition, an edge generator is trained simultaneously to model the relation information and provide it for those new samples. This framework is general and can be easily extended into different variations. The proposed framework is evaluated using three different datasets, and it outperforms all baselines by a large margin.
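The over-sampling step itself is ordinary SMOTE interpolation, just applied to learned embeddings rather than raw attributes: pick a minority node, find its nearest minority neighbor in the embedding space, and sample a point on the segment between them. The sketch below shows only that step; the GNN encoder that produces the embeddings and the edge generator that wires synthetic nodes into the graph are the framework's other components and are not reproduced here.

```python
import numpy as np

def smote_embeddings(Z_min, n_new, rng):
    """Z_min: (n, d) embeddings of minority-class nodes."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(Z_min))
        d = np.linalg.norm(Z_min - Z_min[i], axis=1)
        d[i] = np.inf                          # exclude the node itself
        j = int(np.argmin(d))                  # nearest minority neighbor
        lam = rng.random()                     # interpolation coefficient
        out.append(Z_min[i] + lam * (Z_min[j] - Z_min[i]))
    return np.stack(out)

rng = np.random.default_rng(5)
Z_min = rng.normal(size=(6, 4))                # toy minority embeddings
print(smote_embeddings(Z_min, n_new=3, rng=rng).shape)   # (3, 4)
```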
Lack of training data in low-resource languages presents huge challenges to sequence labeling tasks such as named entity recognition (NER) and machine reading comprehension (MRC). One major obstacle is errors on the boundary of predicted answers. To tackle this problem, we propose CalibreNet, which predicts answers in two steps. In the first step, any existing sequence labeling method can be adopted as a base model to generate an initial answer. In the second step, CalibreNet refines the boundary of the initial answer. To address the challenge of scarce training data in low-resource languages, we develop a novel unsupervised phrase boundary recovery pre-training task to enhance the multilingual boundary detection capability of CalibreNet. Experiments on two cross-lingual benchmark datasets show that the proposed approach achieves SOTA results on zero-shot cross-lingual NER and MRC tasks.
Predicting pairwise relationships between nodes in graphs is a fundamental task in data mining with many real-world applications, such as link prediction on social networks, relation prediction on knowledge graphs, etc. A dominating methodology is to first use advanced graph representation methods to learn generic node representations and then build a pairwise prediction classifier with the target nodes' vectors concatenated as input. However, such methods suffer from low interpretability, as it is difficult to explain why certain relationships are predicted only based on their prediction scores. In this paper, we propose to model the pairwise interactions between neighboring nodes (i.e., contexts) of target pairs. The new formulation enables us to build more appropriate representations for node pairs and gain better model interpretability (by highlighting meaningful interactions). To this end, we introduce a unified framework with two general perspectives, node-centric and pair-centric, about how to model context pair interactions. We also propose a novel pair-centric context interaction model and a new pre-trained embedding, which represents the pair semantics and shows many attractive properties. We test our models on two common pairwise prediction tasks: link prediction task and relation prediction task, and compare them with graph feature-based, embedding-based, and Graph Neural Network (GNN)-based baselines. Our experimental results show the superior performance of the pre-trained pair embeddings and that the pair-centric interaction model outperforms all baselines by a large margin.
We consider the problem of learning efficient and inductive graph convolutional networks for text classification with a large number of examples and features. Existing state-of-the-art graph embedding based methods such as predictive text embedding (PTE) and TextGCN have shortcomings in terms of predictive performance, scalability and inductive capability. To address these limitations, we propose a heterogeneous graph convolutional network (HeteGCN) modeling approach that unites the best aspects of PTE and TextGCN together. The main idea is to learn feature embeddings and derive document embeddings using a HeteGCN architecture with different graphs used across layers. We simplify TextGCN by dissecting it into several HeteGCN models which (a) helps to study the usefulness of individual models and (b) offers flexibility in fusing learned embeddings from different models. In effect, the number of model parameters is reduced significantly, enabling faster training and improving performance in the small labeled training set scenario. Our detailed experimental studies demonstrate the efficacy of the proposed approach.
This paper proposes a scalable multilevel framework for the spectral embedding of large undirected graphs. The proposed method first computes much smaller yet sparse graphs while preserving the key spectral (structural) properties of the original graph, by exploiting a nearly-linear time spectral graph coarsening approach. Then, the resultant spectrally-coarsened graphs are leveraged for the development of much faster algorithms for multilevel spectral graph embedding (clustering) as well as visualization of large data sets. We conducted extensive experiments using a variety of large graphs and datasets and obtained very promising results. For instance, we are able to coarsen the "coPapersCiteseer" graph with 0.43 million nodes and 16 million edges into a much smaller graph with only 13K (32X fewer) nodes and 17K (950X fewer) edges in about 16 seconds; the spectrally-coarsened graphs allow us to achieve up to 1,100X speedup for multilevel spectral graph embedding (clustering) and up to 60X speedup for t-SNE visualization of large data sets.
We present I2PS, an online intelligent item-publishing system used on a large-scale consumer-to-consumer (C2C) transaction platform; it is designed to let individual sellers publish their items automatically from uploaded images. I2PS is deployed in the Xianyu mobile app, the largest second-hand item shopping platform in China. The proposed system not only guides sellers on how to photograph the items they publish in more detail, based on a category recognition module, but also intelligently tells the seller exactly what the product is and which attributes it has, based on various recognition methods. The seller does not need to input extra product information, so the item-publishing process on Xianyu can be completed without any difficulty. In this paper, we introduce several techniques we used to develop I2PS for product understanding, including category recognition, Standard Product Unit (SPU) recognition, multi-label attribute recognition, and their corresponding pre-processing technologies. Our system, deployed in Xianyu, helps tens of millions of individual sellers publish their items, improving the publishing success rate by more than 15% and reducing publishing duration by more than 20%. The demo video is available at https://youtu.be/3NRx2hECIHc.
This paper demonstrates FinSense, a system that improves the efficiency of financial information processing. Given the draft of a financial news story, FinSense extracts the explicitly mentioned stocks and further infers the implicit ones, providing insightful information for decision making. We propose a novel graph convolutional network model that performs implicit financial instrument inference on in-domain data. In addition, FinSense generates candidate headlines for the draft, saving a significant amount of time in journalism production. The proposed system also helps investors sort out the information in financial news articles.
In recent years, several works have noted that Twitter data are essential in diverse fields and support a wide range of applications. Nevertheless, the API provided by Twitter severely restricts access to the public data generated by users. These restrictions greatly slow down researchers' contributions and limit their scope. In this paper we introduce TwiScraper, a collaborative project to enhance Twitter data collection through scraping methods. We present a module allowing user-centered data collection: Twi-FFN.
In this technical demonstration, we showcase the world's first personality-driven marketing content generation platform, called SoMin.ai. The platform combines a deep multi-view personality profiling framework with style generative adversarial networks, facilitating the automatic creation of content that appeals to different human personality types. The platform can be used to enhance the social networking user experience as well as for content marketing routines. Guided by the MBTI personality type automatically derived from a user's social network content, SoMin.ai generates new social media content based on the preferences of other users with a similar personality type, aiming to enhance the user experience on social networking venues and to diversify the efforts of marketers when crafting new content for digital marketing campaigns. Real-time user feedback via the platform's GUI fine-tunes the content generation model, and the evaluation results demonstrate the promising performance of the proposed multi-view personality profiling framework when applied in the content generation scenario. By leveraging content generation at a large scale, marketers will be able to execute more effective digital marketing campaigns at a lower cost.
Recommendation systems help to predict user demand and improve the quality of services offered. While the performance of a recommendation system depends on the quality and quantity of feedback from users, the two major approaches to feedback sacrifice quality for quantity or vice versa: implicit feedback is more abundant but less reliable, while explicit feedback is more credible but harder to collect. Although a hybrid approach has the potential to combine the strengths of both kinds of feedback, the existing approaches using explicit feedback are not suitable for such a combination. In this study, we design a novel feedback mechanism, Colorful Feedback, suited to the hybrid approach, and use it to improve the performance of a recommendation system. The system enables us to collect more varied and less biased feedback from users. It improves performance without requiring major changes to the inference model. It also provides a unique and rich source of information about the model itself. We demonstrate an application of Colorful Feedback showing how it can improve an existing recommendation model.
With the outbreak of COVID-19, it is urgent and necessary to design a system that can access information from COVID-19-related documents. Current methods fail to do so because the knowledge about COVID-19, an emerging disease, keeps changing and growing. In this study, we design a dynamic document-based question answering system, Web Understanding and Learning with AI (WULAI-QA). WULAI-QA employs feature engineering and online learning to adapt to the non-stationary environment and maintains good, steady performance. We evaluated WULAI-QA on a public question answering competition (https://www.datafountain.cn/competitions/424) and ranked first. We demonstrate that WULAI-QA can learn from user feedback and is easy to use. We believe that WULAI-QA can help people understand COVID-19 and play an important role in fighting the pandemic.
E-commerce platforms often use a predefined structured hierarchy of product categories. Apart from helping buyers sort between different product types, listing categorization is also critical for multiple downstream tasks, including the platform's main listing search. Traditionally, when creating a new listing, sellers need to assign the product they sell to a single category. However, the high diversity of product types on the platform, along with the granularity of the hierarchy, results in tens of thousands of possible categories that sellers need to pick from. This, in turn, creates a unique classification challenge, especially for sellers with a large number of listings. Moreover, the expected cost of a category classification error is high, as it can reduce the likelihood that a listing will be discovered by relevant buyers, and eventually sold.
To help with the challenge of category recognition we present CatReComm, an interactive real-time system generalized to provide category recommendations in different e-commerce scenarios. We present results from using the system on two main sub-tasks, listing and search-query category recognition, and demonstrate an end-to-end scenario of the former. The system uses a convolutional sequence-to-sequence approach and, to the best of our knowledge, is the first to use this approach for category recognition. We define a new metric for evaluating this model that captures the hierarchical characteristics of the data and supports displaying multiple classification results. Finally, our experimental results show the system's effectiveness and efficiency on real-world data.
Modern search engines retrieve results mainly based on keyword-matching techniques, and thus fail to answer analytical queries like "apps with more than 1 billion monthly active users" or "population growth of the US from 2015 to 2019", which require numerical reasoning or aggregating results from multiple web pages. Such analytical queries are very common in data analysis, where the expected results are structured tables or charts. In most cases, these structured results are not available or accessible; they are scattered across various text sources. In this work, we build AnaSearch, a search system that supports analytical queries and returns structured results that can be visualized in the form of tables or charts. We collect and build structured quantitative data from unstructured text on the web automatically. With AnaSearch, data analysts can easily derive insights for decision making with keyword or natural language queries. Specifically, we build AnaSearch on COVID-19 news data, which makes it easy to compare with manually collected structured data.
Federated Learning (FL) allows different entities to collaboratively build machine learning models without sharing or gathering the data. In FL, there is typically a global server and a set of clients (stakeholders) that build shared machine learning models. In contrast to distributed machine learning, the controller of the training process (here, the global server) never sees the data of the stakeholders participating in FL. Every stakeholder owns its own data and does not share it. During training, only the model updates (e.g., gradients) are shared. To the best of our knowledge, no publicly available, practical federated learning framework for stakeholders exists. We have built a framework that enables FL for a small number of stakeholders. In the paper, we describe the framework architecture, communication protocol, and algorithms. Our framework is open-sourced; it is easy for stakeholders to set up and ensures that no private information is leaked during the training process.
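The abstract does not pin down the aggregation algorithm; as a hedged illustration of the described protocol (clients train locally, only model updates reach the server), a minimal FedAvg-style round in plain NumPy might look as follows. All names and the linear model are illustrative assumptions, not the framework's actual code.

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=1):
    """One stakeholder trains locally; only the resulting weights leave the client."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """FedAvg-style step: average client weights, weighted by local data size."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Illustrative run: three stakeholders with synthetic private datasets.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
weights = np.zeros(4)
for _ in range(20):
    weights = federated_round(weights, clients)
```

In a real deployment the averaging step would run on the global server and the local updates on each stakeholder's machine, connected by the framework's communication protocol.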
In this paper, we demonstrate our Global Personalized Recommender (GPR) system for restaurants. GPR does not use any explicit reviews, ratings, or domain-specific metadata, but rather leverages over 3 billion anonymized payment transactions to learn user and restaurant behavior patterns. The design and development of GPR have been challenging, primarily due to the scale and skew of the data: our system supports over 450M cardholders from over 200 countries and 2.5M restaurants in over 35K cities worldwide. Additionally, GPR, being a global recommender system, needs to account for regional variations in people's food choices and habits. We address these challenges by combining three different recommendation algorithms in the backend instead of relying on a single model. The individual recommendation models are scalable and adapt to varying data-skew challenges to ensure high-quality personalized recommendations for any user anywhere in the world.
The impact of online social media on societal events and institutions is profound, and with the rapid increases in user uptake, we are just starting to understand its ramifications. Social scientists and practitioners who model online discourse as a proxy for real-world behavior often curate large social media datasets. A lack of tooling aimed at non-data-science experts frequently leaves this data (and the insights it holds) underutilized. Here, we propose birdspotter, a tool to analyze and label Twitter users, and birdspotter.ml, an exploratory visualizer for the computed metrics. birdspotter provides an end-to-end analysis pipeline, from the processing of pre-collected Twitter data to general-purpose labeling of users and estimation of their social influence, within a few lines of code. The package features tutorials and detailed documentation. We also illustrate how to train birdspotter into a fully-fledged bot detector that achieves better-than-state-of-the-art performance without making Twitter API calls, and we showcase its usage in an exploratory analysis of a topical COVID-19 dataset.
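A usage sketch consistent with the described pipeline could look like the following; note that the class and method names here are assumptions for illustration, and the package's tutorials and documentation are the authoritative reference.

```python
# Hypothetical sketch: names below are assumptions, not birdspotter's
# documented API; see the package tutorials for the real interface.
from birdspotter import BirdSpotter

spotter = BirdSpotter("tweets.jsonl")   # pre-collected dump, no Twitter API calls
labeled = spotter.getLabeledUsers()     # per-user features, bot and influence scores
labeled.to_csv("labeled_users.csv")     # hand off to birdspotter.ml or other tools
```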
Click-through rate (CTR) prediction is a crucial task in recommender systems and online advertising. Embedding-based neural networks have been proposed to learn both explicit feature interactions through a shallow component and deep feature interactions through a deep neural network (DNN) component. These sophisticated models, however, slow down prediction inference by at least hundreds of times. To address the significantly increased serving latency and high memory usage of real-time serving in production, this paper presents DeepLight, a framework that accelerates CTR prediction in three respects: 1) accelerating model inference by explicitly searching for informative feature interactions in the shallow component; 2) pruning redundant parameters at the inter-layer level in the DNN component; 3) pruning the dense embedding vectors to make the embedding matrix sparse. Combining these efforts, the proposed approach accelerates model inference by 46X on the Criteo dataset and 27X on the Avazu dataset without any loss in prediction accuracy. This paves the way for successfully deploying complicated embedding-based neural networks in real-world serving systems.
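The paper details its own pruning schedules; as a hedged sketch of the third ingredient (sparsifying the embedding matrix), magnitude-based pruning in PyTorch can be written as:

```python
import torch

def magnitude_prune(embedding: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero the smallest-magnitude entries so roughly `sparsity` of them are zero.
    A sparse embedding matrix cuts memory and speeds up inference."""
    k = int(embedding.numel() * sparsity)
    if k == 0:
        return embedding.clone()
    threshold = embedding.abs().flatten().kthvalue(k).values
    return torch.where(embedding.abs() > threshold,
                       embedding, torch.zeros_like(embedding))

emb = torch.randn(10000, 16)             # field embeddings of a CTR model
sparse_emb = magnitude_prune(emb, 0.9)   # keep roughly 10% of the weights
```

In practice such pruning is usually applied gradually during training with retraining in between, rather than once post hoc as in this sketch.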
Solving cold-start problems is indispensable for providing meaningful recommendation results for new users and items. Under sparsely observed data, unobserved user-item pairs are also a vital source for distilling users' latent information needs. Most existing works leverage unobserved samples to extract negative signals. However, such an optimisation strategy can bias results toward already-popular items by frequently treating new items as negative instances. In this study, we tackle the cold-start problems for new users/items by appropriately leveraging unobserved samples. We propose a knowledge graph (KG)-aware recommender based on graph neural networks, which augments labelled samples through pseudo-labelling. Our approach aggressively employs unobserved samples as positive instances and brings new items into the spotlight. To avoid exhaustive label assignments to all possible pairs of users and items, we exploit a KG to select probably-positive items for each user. We also utilise an improved negative sampling strategy and thereby suppress the exacerbation of popularity biases. Through experiments, we demonstrate that our approach achieves improvements over state-of-the-art KG-aware recommenders in a variety of scenarios; in particular, our methodology successfully improves recommendation performance for cold-start users/items.
Modern machine learning applications should be able to address the intrinsic challenges arising over inference on massive real-world datasets, including scalability and robustness to outliers. Despite the multiple benefits of Bayesian methods (such as uncertainty-aware predictions, incorporation of expert knowledge, and hierarchical modeling), the quality of classic Bayesian inference depends critically on whether observations conform with the assumed data-generating model, which is impossible to guarantee in practice. In this work, we propose a variational inference method that, in a principled way, can simultaneously scale to large datasets and robustify the inferred posterior with respect to the existence of outliers in the observed data. Reformulating Bayes' theorem via the β-divergence, we posit a robustified generalized Bayesian posterior as the target of inference. Moreover, relying on recent formulations of Riemannian coresets for scalable Bayesian inference, we propose a sparse variational approximation of the robustified posterior and an efficient stochastic black-box algorithm to construct it. Overall, our method allows releasing cleansed data summaries that can be applied broadly in scenarios involving structured and unstructured data contamination. We illustrate the applicability of our approach on diverse simulated and real datasets and various statistical models, including Gaussian mean inference, logistic and neural linear regression, demonstrating its superiority over existing Bayesian summarization methods in the presence of outliers.
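For readers unfamiliar with the construction, a standard form of the β-divergence-based generalized posterior (our notation, following the density-power-divergence literature; the paper's exact formulation may differ) is

$$
\pi_\beta(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\,\exp\!\Big\{\sum_{i=1}^{n} \ell_\beta(x_i,\theta)\Big\},
\qquad
\ell_\beta(x,\theta) \;=\; \frac{1}{\beta}\, f(x \mid \theta)^{\beta} \;-\; \frac{1}{\beta+1} \int f(y \mid \theta)^{\beta+1}\, dy,
$$

which recovers the standard posterior as $\beta \to 0$ and, for $\beta > 0$, down-weights observations assigned low density by the model, i.e., outliers.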
Effectively measuring, understanding, and improving mobile app performance is of paramount importance for mobile app developers. Across the mobile Internet landscape, companies run online controlled experiments (A/B tests) with thousands of performance metrics in order to understand how app performance causally impacts user retention and to guard against service or app regressions that degrade user experiences. To capture characteristics particular to performance metrics, such as enormous observation volume and highly skewed distributions, an industry-standard practice is to compute a performance metric as a quantile over all performance events in the control or treatment buckets of an A/B test. In our experience with thousands of A/B tests at Snap, we have discovered pitfalls in this industry-standard way of calculating performance metrics that can lead to unexplained movements in performance metrics and unexpected misalignment with user engagement metrics. In this paper, we discuss two major pitfalls in this practice. One arises from strong heterogeneity in both mobile devices and user engagement; the other arises from self-selection bias caused by post-treatment changes in user engagement. To remedy these pitfalls, we introduce several scalable methods, including user-level performance metric calculation, and imputation and matching for missing metric values. We have extensively evaluated these methods on both simulated data and real A/B tests, and have deployed them into Snap's in-house experimentation platform.
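As a hedged illustration of the first remedy, the contrast between the industry-standard event-level quantile and a user-level calculation can be sketched as below; the data and column names are invented for the example.

```python
import numpy as np
import pandas as pd

# Invented example: per-event latencies, heavily skewed by one heavy user.
df = pd.DataFrame({
    "user": ["a"] * 98 + ["b", "c"],
    "latency_ms": list(np.full(98, 30.0)) + [300.0, 310.0],
})

# Event-level p50 (industry standard): dominated by user "a"'s 98 events.
event_p50 = df["latency_ms"].quantile(0.5)

# User-level p50: compute each user's quantile first, then aggregate, so
# every user contributes equally regardless of engagement volume.
user_p50 = df.groupby("user")["latency_ms"].quantile(0.5).mean()

print(event_p50)  # 30.0   -- hides the two users with ~10x worse latency
print(user_p50)   # ~213.3 -- reflects the per-user experience
```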
Representation learning is the keystone of collaborative filtering. The learned representations should reflect both explicit factors, revealed by extrinsic attributes such as a movie's genre or a book's author, and implicit factors embedded in the collaborative signal. Existing methods fail to decompose these two types of factors, making it difficult to infer the deep motivations behind user behaviors, and thus suffer from sub-optimal solutions. In this paper, we propose Decomposed Collaborative Filtering (DCF) to address these problems. For explicit representation learning, we devise a user-specific relation aggregator that aggregates the most important attributes. For the implicit part, we propose the Decomposed Graph Convolutional Network (DGCN), which decomposes users and items into multiple factor-level representations, then utilizes factor-level attention and attentive relation aggregation to model the implicit factors behind collaborative signals at a fine-grained level. Moreover, to reflect more diverse implicit factors, we augment the model with disagreement regularization. We conduct experiments on three publicly accessible datasets, and the results demonstrate the significant improvement of our method over several state-of-the-art baselines. Further studies verify the efficacy and interpretability benefits brought by the fine-grained implicit relation modeling. Our code is available at https://github.com/cmaxhao/DCF.
To aid users in choice-making, explainable recommendation models seek to provide not only accurate recommendations but also accompanying explanations that help to make sense of those recommendations. Most of the previous approaches rely on evaluative explanations, assessing the quality of an individual item along some aspects of interest to the user. In this work, we are interested in comparative explanations, the less studied problem of assessing a recommended item in comparison to another reference item.
In particular, we propose to anchor reference items on the previously adopted items in a user's history. Not only do we aim at providing comparative explanations involving such items, but we also formulate comparative constraints involving aspect-level comparisons between the target item and the reference items. The framework allows us to incorporate these constraints and integrate them with recommendation objectives involving both subjective and objective aspect-level quality assumptions. Experiments on public datasets of several product categories showcase the efficacy of our methodology compared to baselines in attaining better recommendation accuracy and intuitive explanations.
Online Travel Platforms are virtual two-sided marketplaces where guests search for accommodations and accommodation providers list their properties, such as hotels and vacation rentals. The large majority of hotels are rated by official institutions with a number of stars indicating the quality of service they provide. This simple and effective mechanism helps match supply with demand: guests can find options meeting their criteria, and accommodation suppliers can market their product to the right segment, directly impacting the number of transactions on the platform. Unfortunately, no similar rating system exists for the large majority of vacation rentals, making it difficult for guests to search for and compare options, and hard for vacation-rental suppliers to market their product effectively. In this work we describe a machine-learned quality rating system for vacation rentals. The problem is challenging, mainly due to explainability requirements and the lack of ground truth. We present techniques to address these challenges and empirical evidence of their efficacy. Our system was successfully deployed and validated through Online Controlled Experiments performed at Booking.com, a large Online Travel Platform; running for more than one year, it has impacted more than a million accommodations and millions of guests.
In the Click-Through Rate (CTR) prediction scenario, users' sequential behaviors are widely utilized to capture user interest in the recent literature. However, despite being extensively studied, these sequential methods still suffer from three limitations. First, existing methods mostly rely on attention over user behaviors, which is not always suitable for CTR prediction, because users often click on new products that are irrelevant to any historical behaviors. Second, in real scenarios, many users were active long ago but have become relatively inactive recently, so it is hard to precisely capture their current preferences from early behaviors. Third, multiple representations of users' historical behaviors in different feature subspaces are largely ignored. To remedy these issues, we propose a Multi-Interactive Attention Network (MIAN) to comprehensively extract the latent relationships among all kinds of fine-grained features (e.g., gender, age, and occupation in the user profile). Specifically, MIAN contains a Multi-Interactive Layer (MIL) that integrates three local interaction modules to capture multiple representations of user preference from sequential behaviors while simultaneously utilizing fine-grained user-specific and context information. In addition, we design a Global Interaction Module (GIM) to learn high-order interactions and balance the different impacts of multiple features. Finally, offline experimental results on three datasets, together with an online A/B test in a large-scale recommendation system, demonstrate the effectiveness of our proposed approach.
In e-commerce advertising, the ad platform usually relies on auction mechanisms to optimize different performance metrics, such as user experience, advertiser utility, and platform revenue. However, most state-of-the-art auction mechanisms focus on optimizing a single performance metric, e.g., either social welfare or revenue, and are not suitable for e-commerce advertising with multiple, dynamic, difficult-to-estimate, and even conflicting performance metrics. In this paper, we propose a new mechanism called the Deep GSP auction, which leverages deep learning to design new rank score functions within the celebrated GSP auction framework. These new rank score functions are implemented via deep neural network models under the constraints of monotone allocation and smooth transition. The monotone allocation requirement gives the Deep GSP auction nice game-theoretic properties, while the smooth transition requirement guarantees that advertiser utilities do not fluctuate too much when the auction mechanism switches among candidate mechanisms to achieve different optimization objectives. We deployed the proposed mechanism on a leading e-commerce ad platform and conducted comprehensive experimental evaluations with both offline simulations and online A/B tests. The results demonstrate the effectiveness of the Deep GSP auction compared to state-of-the-art auction mechanisms.
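In GSP-style notation (ours, not necessarily the paper's), the learned rank score and the monotone-allocation constraint can be written as

$$
r_i = \psi_\theta(b_i;\, x_i), \qquad \frac{\partial\, \psi_\theta(b;\, x)}{\partial b} \ge 0 \quad \text{for all } x,
$$

so ads are ranked by $r_i$ and a higher bid never lowers an ad's allocation, which is what keeps a GSP-style payment rule well defined.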
Recommender models trained on historical observational data alone can be brittle when domain experts subject them to counterfactual evaluation. In many domains, experts can articulate common, high-level mappings or rules between categories of inputs (a user's history) and categories of outputs (preferred recommendations). One challenge is to determine how to train recommender models to adhere to these rules. In this work, we introduce the goal of domain-specific concordance: the expectation that a recommender model follow a set of expert-defined categorical rules. We propose a regularization-based approach that optimizes for robustness to rule-based input perturbations. To test the effectiveness of this method, we apply it to a medication recommender model over diagnosis-medicine categories, and to movie and music recommender models with rules over categories based on movie tags and song genres. We demonstrate that we can increase the category-based robustness distance by up to 126% without degrading accuracy, in fact increasing it by up to 12%, compared to baseline models on the popular MIMIC-III, MovieLens-20M, and Last.fm Million Song datasets.
Query autocompletion is an essential feature in search engines: it predicts and suggests completions for a user's incomplete prefix input, which is critical to the user experience. While a generic lookup-based system can provide completions with great efficiency, it cannot handle prefixes not seen in the past. On the other hand, a generative system can complete unseen queries with superior accuracy but requires substantial computational overhead at runtime, making it costly for a large-scale system. Here, we present an efficient, fully generative query autocompletion framework. Our framework employs an n-gram language model at the subword level and exploits the n-gram model's inherent data structure to precompute completions prior to runtime. Evaluation results on a public dataset show that our framework is not only as effective as previous systems based on neural language models, but also reduces computational overhead at runtime, improving speed by more than two orders of magnitude. The goal of this work is to showcase a generative query completion system that is an attractive choice for large-scale deployments.
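A minimal sketch of the precomputation idea follows, assuming a whole-query frequency table; the data structures and names are illustrative and not the paper's subword n-gram implementation.

```python
from collections import defaultdict

def precompute_completions(queries, max_prefix_len=10, topk=3):
    """Map every prefix to its most frequent completions ahead of runtime,
    so that serving reduces to a single dictionary lookup."""
    counts = defaultdict(lambda: defaultdict(int))
    for q in queries:
        for i in range(1, min(len(q), max_prefix_len) + 1):
            counts[q[:i]][q] += 1
    return {p: [q for q, _ in sorted(c.items(), key=lambda kv: -kv[1])[:topk]]
            for p, c in counts.items()}

table = precompute_completions(["weather today", "weather radar", "web mail"])
print(table["we"])  # ['weather today', 'weather radar', 'web mail']
```

The paper's contribution is doing this with a generative subword n-gram model, so that unseen prefixes can also be completed rather than only those observed in the logs.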
Textual explanations have proved to help improve user satisfaction with machine-made recommendations. However, current mainstream solutions loosely connect the learning of explanations with the learning of recommendations: for example, the two are often modeled separately as rating prediction and content generation tasks. In this work, we propose to strengthen their connection by enforcing sentiment alignment between a recommendation and its corresponding explanation. At training time, the two learning tasks are joined by a latent sentiment vector, which is encoded by the recommendation module and used to make word choices during explanation generation. At both training and inference time, the explanation module is required to generate explanation text that matches the sentiment predicted by the recommendation module. Extensive experiments demonstrate that our solution outperforms a rich set of baselines on both recommendation and explanation tasks, especially in the quality of the generated explanations. More importantly, our user studies confirm that the generated explanations help users better recognize the differences between recommended items and understand why an item is recommended.
Sharing recommendation is becoming ubiquitous at almost every e-commerce website: a user is recommended a list of other users when they want to share something. With the tremendous growth of online shopping users, sharing recommendation confronts several distinct difficulties: 1) how to establish a unified recommender model for large numbers of sharing scenarios; 2) how to handle long-tail and even cold-start scenarios with limited training data; 3) how to incorporate social influence in order to make more accurate recommendations.
To tackle the above challenges, we first build multiple expert networks to integrate different scenarios. During model training, each scenario learns to differentiate the importance of each expert network automatically, based on the corresponding context information. With respect to the long-tail issue, we maintain a complete scenario tree so that each scenario can utilize context knowledge along the path from the root node to its leaf node when selecting expert networks; making use of this tree-based full-path information also helps alleviate training-data sparsity. Moreover, we construct a large-scale heterogeneous user-to-user graph derived from various social behaviors at e-commerce websites, and leverage a novel scenario-aware multi-view graph attention network to socially augment user representations. In addition, an auxiliary inconsistency loss is applied to balance the load of the expert networks; together with the main click-through rate (CTR) prediction loss, the whole framework is trained in an end-to-end fashion. Both offline experiments and online A/B test results demonstrate the superiority of the proposed approach over a number of state-of-the-art models.
Given a collection of items to display, such as news, videos, or products, how can we optimize their presentation order to maximize user engagement, such as click-through rate, viewing time, and the number of purchases? The problem becomes more complicated when the items are displayed in a grid-based, 2-dimensional presentation on a wide screen. For example, many e-commerce websites such as Amazon and Etsy display their products in a grid-like format, and so do streaming services like YouTube and Netflix. Unlike 1-dimensional space, where products can be naturally ranked in vertical order, presentation in 2-dimensional space poses a novel challenge: how do we find the best presentation order? Should we put the best listing in the top-left corner, or in the central position of the second row? Many traditional methods can be applied to this problem, such as conducting an attention-heatmap web test or a randomization experiment that shuffles the positions of listings. However, both tests are costly to perform and may degrade the quality of users' search experience. By contrast, we focus on utilizing existing search log data, which is readily available and abundant, to reveal the propensity of positions.
In a nutshell, this study presents how we find an optimal presentation in a grid-based environment: more relevant content should be placed in a more noticeable position. Position noticeability is further quantified to help ranking models better understand the relevance signal manifested in user feedback. Our investigation paves the way for an end-to-end item presentation framework that learns the optimal layout for maximizing user engagement. Experimental results based on real-world data show the superiority of the proposed approach over state-of-the-art methods.
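The examination hypothesis commonly behind such propensity estimates (notation ours) factorizes the click probability as

$$
P(\text{click} \mid d, p) \;=\; P(\text{examine} \mid p)\cdot P(\text{relevant} \mid d),
$$

so the position propensity $P(\text{examine} \mid p)$, estimated per 2-dimensional grid cell $p$ from search logs, can be used to debias click feedback, e.g., by weighting each click by $1 / P(\text{examine} \mid p)$ in the ranking loss.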
Recent advances in path-based explainable recommendation systems have attracted increasing attention thanks to the rich information provided by knowledge graphs. Most existing explainable recommenders, however, utilize only static knowledge graphs and ignore dynamic user-item evolution, leading to less convincing and less accurate explanations. Although some works realize that modelling users' temporal sequential behaviour could boost the performance and explainability of recommender systems, most of them either focus only on modelling users' sequential interactions within a path or model them independently and separately from the recommendation mechanism. In this paper, we propose a novel Temporal Meta-path Guided Explainable Recommendation (TMER), which utilizes well-designed item-item path modelling between consecutive items with attention mechanisms to sequentially model dynamic user-item evolution on a dynamic knowledge graph for explainable recommendation. Compared with existing works that use heavy recurrent neural networks to model temporal information, we propose simple but effective neural networks to capture users' historical item features and path-based context to characterise the next purchased item. Extensive evaluations of TMER on three real-world benchmark datasets show state-of-the-art performance against recent strong baselines.
Studying information propagation in social media is an important task with plenty of applications in business and science. Generating realistic synthetic information cascades can help the research community develop new methods and applications and test sociological hypotheses and what-if scenarios by simply changing a few parameters. We demonstrate womg, a synthetic data generator that combines topic modeling and a topic-aware propagation model to create realistic information-rich cascades, whose shape depends on many factors, including the topic of the item and its virality, the homophily of the social network, the interests of its users, and their social influence.
To address the increasingly significant issue of fake news, we develop a news reading platform in which we propose an implicit approach to reduce people's belief in fake news. Specifically, we leverage reinforcement learning to learn an intervention module on top of a recommender system (RS), such that the module is activated to replace the RS and recommend verification-oriented news once users engage with fake news. To examine the effect of the proposed method, we conduct a comprehensive evaluation with 89 human subjects and measure the effective rate of change in belief. Moreover, 84% of participants indicate that the proposed platform can help them defeat fake news. The demo video is available on YouTube: https://youtu.be/wKI6nuXu_SM.
We present Community Connect, a custom social media platform for conducting controlled experiments of human behavior. The key distinguishing factor of Community Connect is the ability to control the visibility of user posts based on the groups they belong to, allowing careful and controlled investigation into how information propagates through a social network. We release this platform as a resource to the broader community, to facilitate research on data collected through controlled experiments on social networks.
Incorporating users' personal facts enhances the quality of many downstream services. Automated extraction of such personal knowledge has recently received considerable attention. However, the operation of extraction models is often not exposed to the user, making their predictions inexplicable. In this work we present a web demonstration platform showcasing a recent personal knowledge extraction model, CHARM, which provides information on how a prediction was made and which data was decisive for it. Our demonstration explores two potential sources of input data: conversational transcripts and social media submissions.
Patients with progressive neurological disorders such as Parkinson's disease, Huntington's disease, and Amyotrophic Lateral Sclerosis (ALS) suffer both chronic and episodic difficulties with locomotion. Real-time assessment and visualization of sensor data can be valuable to physicians monitoring the progression of these conditions. We present a system that utilizes an attention-based bi-directional recurrent neural network (RNN), presented in prior work, to evaluate foot pressure sensor data streamed directly from a pair of sensors attached to a patient. The demonstration also supports indirect streaming from recorded sessions, such as those stored in a FHIR-enabled electronic medical records repository, for post-hoc evaluation and comparison of a patient's gait over time. The system evaluates and visualizes the streamed gait in a real-time web interface, providing a personalized normality rating that highlights the strengths and weaknesses of a patient's gait.
The collective attention on online items such as web pages, search terms, and videos reflects trends that are of social, cultural, and economic interest. Moreover, attention trends of different items exhibit mutual influence via mechanisms such as hyperlinks or recommendations. Many visualisation tools exist for time series, network evolution, or network influence; however, few systems connect all three. In this work, we present AttentionFlow, a new system to visualise networks of time series and the dynamic influence they have on one another. Centred around an ego node, our system simultaneously presents the time series on each node using two visual encodings: a tree ring for an overview and a line chart for details. AttentionFlow supports interactions such as overlaying time series of influence, and filtering neighbours by time or flux. We demonstrate AttentionFlow using two real-world datasets, VevoMusic and WikiTraffic. We show that attention spikes in songs can be explained by external events such as major awards, or changes in the network such as the release of a new song. Separate case studies also demonstrate how an artist's influence changes over their career, and that correlated Wikipedia traffic is driven by cultural interests. More broadly, AttentionFlow can be generalised to visualise networks of time series on physical infrastructures such as road networks, or natural phenomena such as weather and geological measurements.
Machine learning predictors have been increasingly applied in production settings, including in one of the world's largest hiring platforms, Hired, to provide a better candidate and recruiter experience. The ability to provide actionable feedback is desirable for candidates to improve their chances of achieving success in the marketplace. Until recently, however, methods aimed at providing actionable feedback have been limited in terms of realism and latency. In this work, we demonstrate how, by applying a newly introduced method based on Generative Adversarial Networks (GANs), we are able to overcome these limitations and provide actionable feedback in real-time to candidates in production settings. Our experimental results highlight the significant benefits of utilizing a GAN-based approach on our dataset relative to two other state-of-the-art approaches (including over 1000x latency gains). We also illustrate the potential impact of this approach in detail on two real candidate profile examples.
Leakage of personal information in conversations raises serious privacy concerns. Malicious people or bots could pry into the sensitive personal information of vulnerable people, such as juveniles, through conversations with them or their digital personal assistants. To address the problem, we present a privacy-leakage warning system that monitors conversations in social media and intercepts outgoing text messages from a user or a digital assistant if they pose potential privacy-leakage risks. Such messages are redirected to authorized users for approval before they are sent out. We demonstrate how our system is deployed and used on a social media conversation platform, e.g., Facebook Messenger.
Modeling online discourse dynamics is a core activity in understanding the spread of information, both offline and online, and emergent online behavior. There is currently a disconnect between the practitioners of online social media analysis, usually social, political, and communication scientists, and the accessibility of tools capable of examining users' online discussions. Here we present evently, a tool for modeling online reshare cascades, and particularly retweet cascades, using self-exciting processes. It provides a comprehensive set of functionalities for processing raw data from Twitter public APIs, modeling the temporal dynamics of processed retweet cascades, and characterizing online users with a wide range of diffusion measures. The tool is designed for researchers with a wide range of computer expertise, and it includes tutorials and detailed documentation. We illustrate the usage of evently with an end-to-end analysis of online user behavior on a topical dataset relating to COVID-19. We show that, by characterizing users solely based on how their content spreads online, we can disentangle influential users and online bots.
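For context, the self-exciting (Hawkes) intensity underlying such cascade models takes the general marked form (notation ours)

$$
\lambda(t) \;=\; \mu(t) + \sum_{t_i < t} \kappa(m_i)\,\phi(t - t_i),
$$

where each earlier reshare at time $t_i$ with user mark $m_i$ (e.g., follower count) raises the instantaneous rate of future reshares through a decaying kernel $\phi$; the fitted kernel and mark parameters are the kind of quantities from which per-user diffusion measures can be derived.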
Web archiving is the process of gathering data from the Web, storing it, and ensuring the data is preserved in an archive for future exploration. Despite the increasing number of web archives, the absence of meaningful exploration methods remains a major hurdle to turning them into a useful information source. By creating profiles that describe metadata about the archived documents, it is possible to offer a more exploitable environment that goes beyond simple keyword-based search. By exploiting the expressive power of the SPARQL language and providing a user-friendly web-based search interface, users can run sophisticated queries for documents that meet their information needs.
The need for improved segmentation, targeting, and personalization fuels the practice of data sharing among companies. Concurrently, data sharing faces the headwind of new laws emphasizing users' privacy. Under the premise that data sharing occurs from a provider to a recipient, we propose a practicable approach to generating representational data for sharing that adds value to the recipient's tasks while preserving users' privacy. Prior art shows that mechanisms improving value-addition inevitably weaken privacy in the generated data. In a first-of-its-kind contribution, our system offers tunable controls to adjust the extent of privacy desired by the provider and the extent of value-addition expected by the recipient. Our experiments on public data show that, under common organizational data-sharing practice, data generation for value-addition is achievable while preserving privacy. Our demonstration starkly shows the trade-off between privacy protection and value-addition through user-controlled knobs, and offers a prototype of a data-sharing platform that is mindful of this trade-off.
Automatic hate speech detection has become a crucial task nowadays, due to the increase of hate on the Internet and its negative consequences. Therefore, in our PhD we propose the design and implementation of methods for the automatic processing of hate messages, focusing on hate messages on Twitter. The hypothesis on which the research is based is that the prediction of hate speech from textual content can be improved by combining features such as the activity and communities of users, as well as the images shared with the tweets. In this way, we intend to develop strategies for the automatic detection of hate with multimodal and multilingual (English and Spanish) approaches. Furthermore, our research includes the study of counter-narrative as an alternative to mitigate the effects of hate speech. To address the problem, we employ deep learning techniques, deepening the study of approaches based on graph representations.
Recommendation Systems (RS) are designed to assist users in decision making by recommending the most appropriate information or products for them. Nonetheless, many RS suffer from limitations such as data sparsity and cold-start. Side information (SI) can be integrated into a recommender system to tackle these limitations. In my Ph.D. research, I seek to build on and extend the use of SI for RS. Specifically, I propose new types and representations of SI and develop new methods to integrate SI into RS to boost its performance. This paper presents the conceptual foundation and motivation of my Ph.D. research.
A diverse variety of demographic data can be analyzed with modern data mining methods to achieve better results. On the one hand, our main task is to compare different methods for next-event prediction and gender prediction; on the other hand, we pay special attention to interpretable patterns describing demographic behavior in the studied problems. We consider interpretable methods, such as decision trees and their ensembles, as well as semi- or non-interpretable methods, such as SVMs with customized kernels tailored to demographers' needs and neural networks. The best accuracy results were obtained with two-channel convolutional neural networks.
Multimodal machine learning deals with building models that can process information from multiple modalities (i.e., ways of doing or experiencing something). Experiments involving humans are used to guarantee drug safety in the complex task of drug development. Drug-related data is readily available and comes in various modalities. The proposed study aims to develop novel methods for multimodal machine learning that can be used to process the diverse multimodal data used in drug development and other challenging tasks that could benefit from the use of multimodal data. We present a series of drug-related tasks which are used to both evaluate the models proposed in this ongoing study and discover new drug knowledge. This research will make far-reaching contributions to the field of machine learning, as well as practical contributions in the medical domain.
Fairness is a critical system-level objective in recommender systems and has been the subject of extensive recent research. It is especially important in multi-sided recommendation platforms, where it may be important to optimize utilities not just for the end user, but also for other entities, such as item sellers or producers, who desire a fair representation of their items. Existing solutions either ignore the multi-sided nature of fairness in recommendations or do not properly address its various aspects. In this thesis, we first aim to investigate the impact of unfair recommendations on the system and how they can negatively affect its major entities. Then, we seek to propose a general graph-based solution that works as a post-processing step after recommendation generation to tackle the unfairness of recommendations. We plan to perform extensive experiments to evaluate the effectiveness of the proposed approach.
Graphs are ubiquitous data structures in various fields, such as social media, transportation, linguistics and chemistry. To solve downstream graph-related tasks, it is of great significance to learn effective representations for graphs. My research strives to help meet this demand; due to the huge success of deep learning methods, especially graph neural networks, in graph-related problems, my emphasis has primarily been on improving their power for graph representation learning. More specifically, my research spans across the following three main areas: (1) robustness of graph neural networks, where we seek to study the performance of them under random noise and carefully-crafted attacks; (2) self-supervised learning in graph neural networks, where we aim to alleviate their need for costly annotated data by constructing self-supervision to help them fully exploit unlabeled data; and (3) applications of graph neural networks, where my work is to apply graph neural networks in various applications such as traffic flow prediction. This research statement, 'Graph Mining with Graph Neural Networks', is focused on my research endeavors specifically related to the aforementioned three topics.
User intention is an important factor to be considered in recommender systems. Unlike the inherent user preference addressed in traditional recommendation algorithms, which is generally static and consistent, user intention changes dynamically across contexts. Recent studies (represented by sequential recommendation) have begun to focus on predicting what users want beyond what users like, which better captures dynamic user intention and has attracted a surge of interest. However, user intention modeling is non-trivial because intention is generally influenced by various factors, such as repeat consumption behavior, item relations, temporal dynamics, etc. To better capture dynamic user intention in sequential recommendation, we plan to investigate these influential factors and construct corresponding models to improve performance. We also want to develop an adaptive way to model the temporal evolution of the effects caused by different factors. Based on the above investigations, we further plan to integrate these factors to deal with extremely long history sequences, where long-term user preference and short-term user demand should be carefully balanced.
Personalization is one of the key applications of machine learning, with widespread usage across e-commerce, entertainment, production, healthcare, and many other industries. While various machine learning techniques deliver novel state-of-the-art advances and super-human performance year over year, personalization and recommender-system applications are often late adopters of novel solutions due to problem hardness and implementation complexity. This tutorial presents recent advances across the personalization industry and demonstrates their practical application in real case studies of world-leading online platforms. Key trends such as deep learning, causality, and active exploration with bandits are depicted with real examples and demonstrated alongside their business considerations and implementation challenges. Rising topics like explainability, fairness, natural interfaces, and content generation are covered, touching on aspects of both technology and user experience. Our tutorial relies on recent advances in the field and on work conducted at Booking.com, where we implement personalization models on one of the world's leading online travel platforms.
In this tutorial we aim to present a comprehensive survey of the advances in deep learning techniques specifically designed for anomaly detection (deep anomaly detection for short). Deep learning has gained tremendous success in transforming many data mining and machine learning tasks, but popular deep learning techniques are inapplicable to anomaly detection due to some unique characteristics of anomalies, e.g., rarity, heterogeneity, boundless nature, and the prohibitively high cost of collecting large-scale anomaly data. Through this tutorial, audiences will gain a systematic overview of this area; learn the key intuitions, objective functions, underlying assumptions, and advantages and disadvantages of different categories of state-of-the-art deep anomaly detection methods; and recognize its broad real-world applicability in diverse domains. We also discuss which challenges current deep anomaly detection methods can address, and envision this area from multiple perspectives. Anyone interested in deep learning, anomaly/outlier/novelty detection, out-of-distribution detection, representation learning with limited labeled data, or self-supervised representation learning would find this tutorial very helpful. Researchers and practitioners in finance, cybersecurity, and healthcare would also find the tutorial helpful in practice.
Peer review is the backbone of scientific research. Yet peer review is called "biased," "broken," and "unscientific" in many scientific disciplines. This problem is further compounded by the near-exponentially growing number of submissions to various computer science conferences. Due to the prevalence of the "Matthew effect" of the rich getting richer in academia, any source of unfairness in the peer review system, such as those discussed in this tutorial, can considerably affect the entire career trajectory of (young) researchers.
This tutorial will discuss a number of systemic challenges in peer review such as biases, subjectivity, miscalibration, dishonest behavior, and noise. For each issue, the tutorial will first present insightful experiments to understand the issue. Then the tutorial will present computational techniques designed to address these challenges. Many open problems will be highlighted which are envisaged to be exciting to the WSDM audience, and will lead to significant impact if solved.
Recent years have witnessed the emergence of conversational systems, including both physical devices and mobile-based applications. Both the research community and industry believe that conversational systems will have a major impact on human-computer interaction, and the IR/DM/RecSys communities specifically have begun to explore Conversational Recommendation Systems. Conversational recommendation aims at finding or recommending the most relevant information (e.g., web pages, answers, movies, products) for users based on textual or spoken dialogs, through which users can communicate with the system more efficiently using natural language conversations. Due to users' constant need to look for information to support both work and daily life, conversational recommendation systems will be one of the key techniques for an intelligent web. This tutorial focuses on the foundations and algorithms of conversational recommendation, as well as their applications in real-world systems such as search engines, e-commerce, and social networks. The tutorial aims at introducing and communicating conversational recommendation methods to the community, as well as gathering researchers and practitioners interested in this research direction for discussion, idea exchange, and research promotion.
The Probability Ranking Principle (PRP), which assumes that each document has a unique and independent probability of satisfying a particular information need, is one of the fundamental principles for ranking. Traditionally, heuristic ranking features and well-known learning-to-rank approaches have been designed following PRP. Recent neural IR models, which adopt deep learning to enhance ranking performance, also obey PRP. Though it has been widely used for nearly five decades, in-depth analysis shows that PRP is not an optimal principle for ranking, due to its assumption that each document is independent of the remaining candidates. Counterexamples include pseudo relevance feedback, interactive information retrieval, search result diversification, etc. To solve this problem, researchers have recently proposed modeling the dependencies among documents when designing ranking models. A number of such ranking models have been proposed, achieving state-of-the-art ranking performance. This tutorial gives a comprehensive survey of these recently developed ranking models that go beyond PRP. The tutorial categorizes these models by their intrinsic assumptions: that documents are independent, sequentially dependent, or globally dependent. In this way, we expect researchers focusing on ranking in search and recommendation to gain a novel perspective on the design of ranking models, which we hope will stimulate new ideas for developing novel ranking models. The material of this tutorial can be found at https://github.com/pl8787/wsdm2021-beyond-prp-tutorial.
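Concretely, PRP scores each candidate in isolation, whereas the dependent models surveyed condition on the documents already ranked (notation ours):

$$
\text{PRP:}\quad s(d_i) = P(r=1 \mid q, d_i), \qquad
\text{dependent ranking:}\quad s(d_i) = P(r=1 \mid q, d_i, \{d_1,\dots,d_{i-1}\}),
$$

which is what lets pseudo relevance feedback, diversification, and interactive retrieval be expressed inside the ranking model itself.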
Learning from graph and relational data plays a major role in many applications including social network analysis, marketing, e-commerce, information retrieval, knowledge modeling, medical and biological sciences, engineering, and others. Recently, Graph Neural Networks (GNNs) have emerged as a promising new learning framework capable of bringing the power of deep representation learning to graph and relational data. This ever-growing body of research has shown that GNNs achieve state-of-the-art performance for problems such as link prediction, fraud detection, target-ligand binding activity prediction, knowledge-graph completion, and product recommendations. In practice, many real-world graphs are very large, so scalable solutions for training GNNs on large graphs efficiently are urgently needed.
The objective of this tutorial is twofold. First, it will provide an overview of the theory behind GNNs, discuss the types of problems that GNNs are well suited for, and introduce some of the most widely used GNN model architectures and the problems/applications that they are designed to solve. Second, it will introduce the Deep Graph Library (DGL), a scalable GNN framework that simplifies the development of efficient GNN-based training and inference programs at a large scale. To make things concrete, the tutorial will cover state-of-the-art training methods to scale GNNs to large graphs and provide hands-on sessions to show how to use DGL to perform scalable training in different settings (multi-GPU training and distributed training). This hands-on part will start with basic graph applications (e.g., node classification and link prediction) to set up the context and move on to train GNNs on large graphs. It will provide tutorials to demonstrate how to apply the techniques in DGL to train GNNs for real-world applications.
Commonsense knowledge is a foundational cornerstone of artificial intelligence applications. Whereas information extraction and knowledge base construction for instance-oriented assertions, such as Brad Pitt's birth date, or Angelina Jolie's movie awards, has received much attention, commonsense knowledge on general concepts (politicians, bicycles, printers) and activities (eating pizza, fixing printers) has only been tackled recently. In this tutorial we present state-of-the-art methodologies towards the compilation and consolidation of such commonsense knowledge (CSK). We cover text-extraction-based, multi-modal and Transformer-based techniques, with special focus on the issues of web search and ranking, as of relevance to the WSDM community.
The goal of this tutorial is to provide the WSDM community with recent advances on the assessment and mitigation of data and algorithmic bias in recommender systems. We first introduce conceptual foundations, by presenting the state of the art and describing real-world examples of how bias can impact on recommendation algorithms from several perspectives (e.g., ethical and system objectives). The tutorial continues with a systematic showcase of algorithmic countermeasures to uncover, assess, and reduce bias along the recommendation design process. A practical part then provides attendees with implementations of pre-, in-, and post-processing bias mitigation algorithms, leveraging open-source tools and public datasets; in this part, tutorial participants are engaged in the design of bias countermeasures and in articulating impacts on stakeholders. We conclude the tutorial by analyzing emerging open issues and future directions in this rapidly evolving research area (Website: https://biasinrecsys.github.io/wsdm2021).
We present Neural Structured Learning (NSL) in TensorFlow , a new learning paradigm to train neural networks by leveraging structured signals in addition to feature inputs. Structure can be explicit as represented by a graph, or implicit, either induced by adversarial perturbation or inferred using techniques like embedding learning. NSL is open-sourced as part of the TensorFlow ecosystem and is widely used in Google across many products and services. In this tutorial, we provide an overview of the NSL framework including various libraries, tools, and APIs as well as demonstrate the practical use of NSL in different applications. The NSL website is hosted at www.tensorflow.org/neural_structured_learning, which includes details about the theoretical foundations of the technology, extensive API documentation, and hands-on tutorials.
The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing applications. This tutorial, based on a forthcoming book, provides an overview of text ranking with neural network architectures known as transformers, of which BERT is the best-known example. The combination of transformers and self-supervised pretraining has, without exaggeration, revolutionized the fields of natural language processing (NLP), information retrieval (IR), and beyond. We provide a synthesis of existing work as a single point of entry for both researchers and practitioners. Our coverage is grouped into two categories: transformer models that perform reranking in multi-stage ranking architectures and learned dense representations that perform ranking directly. Two themes pervade our treatment: techniques for handling long documents and techniques for addressing the tradeoff between effectiveness (result quality) and efficiency (query latency). Although transformer architectures and pretraining techniques are recent innovations, many aspects of their application are well understood. Nevertheless, there remain many open research questions, and thus in addition to laying out the foundations of pretrained transformers for text ranking, we also attempt to prognosticate the future.
Over the years, the Web has become a premier source of information in almost every area we can think about. When considering tourism, the Web became the primary source of information for travelers. When planning trips, people search for information about destinations, accommodations, attractions, means of transportation, in short, everything related to their future trip. Once done searching, they reserve almost everything online. The blessing of easily accessible information comes with the curse of information overload. This is where Web search techniques and recommendation systems come into play. This is especially true recently with the appearance of COVID-19 and the uncertainty and transformative power it brings to travelling. WebTour 2021 brings together researchers and practitioners working on developing and improving tools and techniques for improving users' ability to better find relevant information that matches their needs.
The second Workshop on Integrity in Social Networks and Media is held in conjunction with the 14th ACM Conference on Web Search and Data Mining (WSDM) in Jerusalem, Israel. The goal of the workshop is to bring together researchers and practitioners to discuss content and interaction integrity challenges in social networks and social media platforms.
Recent years have witnessed the success of machine learning and especially deep learning in many research areas such as Vision and Language Processing, Information Retrieval and Recommender Systems, Social Networks and Conversational Agents. Though various learning approaches have demonstrated satisfying performance in perceptual tasks such as associative learning and matching by extracting useful similarity patterns from data, the area still sees a large amount of research needed to advance the ability of reasoning towards cognitive intelligence in the coming years. This includes but is not limited to neural logical reasoning, neural-symbolic reasoning, causal reasoning, knowledge reasoning and commonsense reasoning. The workshop focuses on the research of machine reasoning techniques and their application in various intelligent tasks. It will gather researchers as well as practitioners in the field for discussions, idea communications, and research promotions. It will also bring insightful debates about the recent progress in machine intelligence to a broader community, including but not limited to CV, IR, NLP, ML, DM, AI and beyond.
The workshop on Supporting and Understanding of (multi-party) conversational Dialogues (SUD) seeks to encourage researchers to investigate automated methods to analyze and understand conversations, and also explore methodologies for proactively providing assistance to the communicating parties during conversations, ranging from summarizing the minutes of meetings to automatically keeping track of action items etc. The workshop will have (1) a regular research paper track, and a more focused (2) data challenge track, inviting papers on a specific task of contextualizing entities of interest from conversation dialogues.
By offering courses and resources, learning platforms on the Web have been attracting lots of participants, and the interactions with these systems have generated a vast amount of learning-related data. Their collection, processing and analysis have promoted a significant growth of learning analytics and have opened up new opportunities for supporting and assessing educational experiences. To provide all the stakeholders involved in the educational process with timely guidance, online platforms must be able to understand student behavior and enable models that support data-driven decisions pertaining to the learning domain, with the aim of maximizing learning outcomes. In this workshop, we focus on collecting new contributions in this emerging area and on providing a common ground for researchers and practitioners (Website: https://mirkomarras.github.io/l2d-wsdm2021).
Q:
A real subspace of a complex space
Let $\Gamma=\langle (1,0),(0,1),(i,i\alpha)\rangle_{\mathbb Z}\subset \mathbb C^2$ where $\alpha\in \mathbb R$. Let $V_\Gamma:=span_\mathbb R(\Gamma)$ and $W_\Gamma:=V_\Gamma\cap iV_\Gamma$.
Now, since $V_\Gamma$ is isomorphic as a vector space to $\mathbb R^3$ consider the map $\sigma:\mathbb C \rightarrow \mathbb R^3$ by $w\mapsto w\cdot(1,\alpha)$
I want to visualize and understand this situation as a subspace of $\mathbb R^3$ like in the image I attached.
My questions are: How can we see that $V_\Gamma/\Gamma=S^1\times S^1\times S^1$ and when $\alpha \in \mathbb R\setminus\mathbb Q$ then $\sigma(\mathbb C)$ is dense in $V_\Gamma/\Gamma$? What can we say about $W_\Gamma$ and its action?
A:
$\newcommand{\Reals}{\mathbf{R}}\newcommand{\Basis}{\mathbf{e}}$Let $B = (\Basis_{j})_{j=1}^{3}$ be an ordered basis in some real vector space $V$, and let $\Gamma$ be the integer lattice spanned by $B$. The parallelepiped
$$
K = \{t_{1} \Basis_{1} + t_{2} \Basis_{2} + t_{3} \Basis_{3} : 0 \leq t_{j} \leq 1\}
$$
is a fundamental domain for $\Gamma$ acting by translation (i.e., by addition) in $V$. The quotient $V/\Gamma$ is obtained from $K$ by "gluing opposite faces of $K$", namely by identifying
\begin{gather*}
t_{2} \Basis_{2} + t_{3} \Basis_{3} \sim \Basis_{1} + t_{2} \Basis_{2} + t_{3} \Basis_{3}, \\
t_{1} \Basis_{1} + t_{3} \Basis_{3} \sim \Basis_{2} + t_{1} \Basis_{1} + t_{3} \Basis_{3}, \\
t_{1} \Basis_{1} + t_{2} \Basis_{2} \sim \Basis_{3} + t_{1} \Basis_{1} + t_{2} \Basis_{2}
\end{gather*}
for all $t_{1}$, $t_{2}$, $t_{3}$ in $[0, 1]$. This is the direct three-dimensional analogue of the construction of a torus from a parallelogram by gluing opposite edges. (A bit more formally, let $(t_{1}, t_{2}, t_{3})$ be Cartesian coordinates in $\Reals^{3}$, and observe that $\Gamma \leftrightarrow \mathbf{Z}^{3}$, so $V/\Gamma \simeq \Reals^{3}/\mathbf{Z}^{3} \simeq S^{1} \times S^{1} \times S^{1}$.)
To phrase your second question in real terms, fix an irrational number $\alpha$ and consider the map $\sigma:\Reals^{2} \to V/\Gamma$ defined by
$$
\sigma(s, t) = s(\Basis_{1} + \alpha \Basis_{2}) + t \Basis_{3}.
$$
To see the image of $\sigma$ is dense, note that the curve $C$ where $t = 0$ is an irrational winding, hence is dense in the $2$-torus $T = \{t_{3} = 0\} \subset V/\Gamma$. The image of $\sigma$ is the "cylinder" $C \times S^{1} \subset T \times S^{1} = V/\Gamma$ over $C$.
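For completeness, here is a minimal sketch of why the winding is dense (the standard pigeonhole argument). Write $\{x\}$ for the fractional part of $x$. Among the $N+1$ points $\{0\}, \{\alpha\}, \dots, \{N\alpha\}$, two lie within $1/N$ of each other, so some $1 \leq m \leq N$ satisfies $0 < \{m\alpha\} < 1/N$; equality to $0$ is impossible because $\alpha$ is irrational. The multiples $\{m\alpha\}, \{2m\alpha\}, \dots$ then advance around $[0,1)$ in steps shorter than $1/N$, so every point of $[0,1)$ lies within $1/N$ of the orbit. Since $N$ was arbitrary, $\{n\alpha \bmod 1 : n \in \mathbf{Z}\}$ is dense in $[0,1)$; hence the line of slope $\alpha$ is dense in $T$, and the cylinder over it is dense in $V_{\Gamma}/\Gamma$.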
A dog bite victim is taking matters into her own hands in the Village of Forestdale in Massachusetts, reports Cape News.
Elizabeth Hiatt, a grandmother who bakes, crafts and weaves, suffered damage to her hands in a violent dog attack in late 2017. Her mission is now threefold: to work on improving her hands, change the way dogs are put up for adoption, and change how they are removed from circulation should they prove to be dangerous.
Hiatt has lived in a spacious neighborhood in the village for more than 30 years and has close bonds with her neighbors. A lover of dogs, the grandmother agreed to walk Bubba, a pit bull, for her elderly neighbors who live down the road, Edith and Cliff Gardener. She walked the dog without any incident for six weeks, but on the evening of September 5, that changed.
The start of the walk was typical. Bubba was waiting by his owners’ door, and Hiatt and two of her relatives took the dog by its leash down the road. Coming from the opposite direction was another dog, a Samoyed named Klondike, and his owner. Normally, the two dogs passed without acknowledging each other, but this time, Bubba went after Klondike with so much force that he broke his own collar. Klondike’s owner was able to separate the dogs, but Bubba went after Hiatt next, biting her face and right hand before clamping down on her left hand. At the time, Hiatt could feel her bones breaking and her tendons tearing. A passerby called 911, and Hiatt was taken to the hospital. She received a total of 100 stitches, 75 of which were just for her left hand.
After the attack on Hiatt, Bubba was not put down. In fact, after a hearing, he was allowed back into circulation. He would go on to attack a 22-year-old groomer at a pet store where he had been brought for grooming a few months later. The groomer suffered bites to her hand, wrist and upper body.
While Hiatt has decided not to sue the town, the dog’s owners or anyone else over her incident, she has spent months researching adoption policies, animal control laws and how dog bite cases are handled by local governments. Through her research, she has decided that several protocols need to be put into effect to prevent incidents like this from happening again. Her suggested changes include serious consideration of euthanasia when a dog causes a person serious injury and a dangerous dog hearing in all of those cases, background checks of animals that attack, the immediate impound of any attacking animal, and an evaluation of the dog’s home environment before it is returned to its owners.
Hiatt does not blame her neighbors, who did not know Bubba was dangerous. She just wants to stop other people from being bitten, and to this day, she’s still upset that what happened to the young groomer could have been prevented.
Dog bites can have serious, lasting consequences for victims. If you’ve been attacked by an animal, speak to an attorney, like a dog bite lawyer Denver CO trusts, today.
Thanks to our friends and contributors from Richard J. Banta, P.C. for their insight into dog bite cases. | https://www.lapil-law.com/2018/03/16/massachusetts-dog-bite-victim-calling-for-tougher-animal-control-laws/ |
Induction with abacavir/lamivudine/zidovudine plus efavirenz for 48 weeks followed by 48-week maintenance with abacavir/lamivudine/zidovudine alone in antiretroviral-naive HIV-1-infected patients.
The ESS40013 study tested 4-drug induction followed by 3-drug maintenance as initial antiretroviral therapy (ART) to reduce HIV RNA rapidly and then to simplify to an effective yet more convenient and tolerable regimen. Four hundred forty-eight antiretroviral-naive adults were treated with abacavir/lamivudine/zidovudine (ABC/3TC/ZDV) and efavirenz (EFV) for the 48-week induction phase. Two hundred eighty-two patients were randomized in a 1:1 ratio to continue ABC/3TC/ZDV+EFV or to simplify to ABC/3TC/ZDV for the 48-week maintenance phase. The baseline median HIV RNA level and CD4 cell count were 5.08 log10 copies/mL (56% ≥100,000 copies/mL) and 210 cells/mm³ (48% <200 cells/mm³), respectively. No significant differences were noted between ABC/3TC/ZDV+EFV and ABC/3TC/ZDV for an HIV RNA level <50 copies/mL (79% vs. 77% [intent to treat (ITT), missing=failure]; P=0.697) or time to treatment failure (P=0.75) at week 96. Drug-related adverse events were more commonly reported for ABC/3TC/ZDV+EFV than for ABC/3TC/ZDV (15% vs. 6%). Improvements in total cholesterol, low-density lipoprotein cholesterol, and triglycerides were observed in the ABC/3TC/ZDV group. Virologic failure occurred in 22 patients during induction and in 24 patients (16 in ABC/3TC/ZDV group and 8 in ABC/3TC/ZDV+EFV group; P=0.134) during maintenance. A greater proportion of patients receiving ABC/3TC/ZDV than ABC/3TC/ZDV+EFV reported perfect adherence at week 96 (88.8% vs. 79.6%; P=0.057). After induction with ABC/3TC/ZDV+EFV, simplification to ABC/3TC/ZDV alone maintained virologic control and immunologic response, reduced fasting lipids and ART-associated adverse events, and improved adherence.
Read facts you never knew about Greta Garbo:
Greta got her first job at the age of six selling newspapers
Her school records show that she was a ‘fair’ but lazy student, who was always daydreaming. The only subject that interested her was History.
Greta is known as a timeless beauty, but not everything came naturally. She once said that she had been on a diet since age fifteen. She also had a unique beauty routine, for instance she often brushed her teeth with salt before she went to bed.
Greta was prone to depressions and tried to feel better through Eastern philosophy and a health food regimen. However, she never gave up smoking and cocktails.
She reportedly requested Gary Cooper as her co-star for several of her films, but nothing ever materialized.
Greta received her biggest paycheck for Anna Karenina: 275,000 dollars.
She was Adolf Hitler’s favorite actress. The feeling was not mutual though: Greta helped British agents by identifying influential Nazi sympathizers in Stockholm and carrying messages.
Anne Frank had a picture of Greta on her wall in the ‘Secret Annex’ in Amsterdam where she and her family hid during WWII.
Greta refused to sign autographs, answer fan mail and rarely gave interviews. She never appeared at award ceremonies or premieres. Some thought this was done on purpose, to gain a mysterious image, but this was not the case. She was a shy introvert and shunned publicity all of her life. The studio eventually decided they could capitalize on this and subsequently created the image of ‘the woman of mystery’.
It was said that Marlene Dietrich was brought to the United States by Hollywood to compete with the popular Greta. People constantly compared the two during their respective careers. Greta did not mind, but it reportedly annoyed Marlene very much.
Greta was not a big spender and invested her money wisely. Although she had not worked for a long time, she was worth millions when she passed away.
Art collector Samuel Adams Green was a good friend of Greta and with her permission he taped many of his phone conversations with her. After his death in 2011, the tapes were finally made public when they were given to the film archives at the Wesleyan University.
The rumor that Greta was bisexual or lesbian is very persistent. A lot of biographers believe it to be true and silent film star Louise Brooks once stated that she had had an affair with Greta. Fuel is still added to the fire: in 2005 her friend Mimi Pollak released part of their correspondence and several of the letters indicate that Greta had romantic feelings towards Mimi.
Her grandniece Lian Gray Horan (granddaughter of Greta’s brother Sven) admitted that it took Greta a long time to warm up to the idea of new people. Lian had been with her husband for seven years and married for two years, before Greta wanted to meet him. Once she did, she immediately embraced him as part of the family, though.
Her nephew said in an interview that it is hard to take care of her legacy since ‘there is more myth than reality’ surrounding his aunt.
Read more about Greta Garbo’s life. | https://www.classichollywoodcentral.com/facts/facts-greta-garbo/ |
A stunning all day walk along the Roseland Peninsula following the coastal path from Portscatho to St Anthony’s Head.
This walk is a great day out in Cornwall and is easily accessible from Truro, Falmouth and St Mawes.
If travelling from Falmouth or Truro be sure to catch the King Harry Ferry to reduce your journey time and for a lovely trip across the Fal River.
Start the walk at Portscatho and take the coast path all the way to St Anthony’s Head where there are some of the most spectacular views Cornwall has to offer.
On your return, follow Military Road back to Porth Farm. From Porth Farm take the footpath through Rosteague before joining Treloan Lane which will take you back to Portscatho.
Starting Point: Portscatho
Distance: 9 miles
Duration: All day
Grading: Hard
End Point: Circular walk
On the way: Beautiful beaches are ever present on this walk. It's a long hike so make sure you take plenty of fluids, a packed lunch and high-energy snacks with you.
Walk Map
Detailed walk information
Starting from Portscatho with the sea on your left walk down past the view over the harbour (picture top left) to the cottages and pick up the path that will take you all the way to St Anthony’s head. Portscatho, meaning “harbour of boats” was an important fishing village with fish cellars around the walls of the harbour. It still has a small fishing fleet. The villages of Gerrans and Portscatho are interlinked with the church up at Gerrans the first to be mentioned in documentation as it goes back to Norman times. Dedicated to St Geraint who was a 9th or 10th Century Celtic saint, locally there are also sites in honour of King Geraint of Dumnonia who is associated with the legend of King Arthur and is supposedly buried around Veryan on Carne Beacon. The church high up on the hill is a valuable reference point for sailors.
It is a two-mile walk to Towan beach (picture top right). Once there the National Trust car park and toilets are 300 metres up the path to the right whilst the South West Coast Path continues ahead to St Anthony’s Head. This is a popular beach but never crowded, as it is a long stretch of coarse sand. The tall post encountered as one leaves the beach is a “wreck post” where a breeches buoy could be fired to the mast of a ship to begin rescuing people off stricken vessels. Seals can often be seen over the edge in the bays.
Walk around Killigerran Head and the approach to Porthbeor Beach. This is worth getting down the 100 or so steps, especially at low tide. There are caves to explore and although popular with the boating community in summer it remains a fairly quiet beach. If you want to shorten the walk, with your back to the sea and steps down to the beach, walk ahead up the path to the road which is 50 metres away and turn right heading back to Porth farm and take the signpost to Rosteague and Portscatho/Gerrans. Otherwise continue west along the coast path.
After 30 minutes you should be approaching St Anthony’s Head with the wonderful views over to Falmouth, down to the Lizard peninsula and up the Fal river to Truro. This is the gateway to the Carrick Roads (Fal River). Strategically St Anthony’s has been important in protecting the entrance to the Fal and various gun batteries were placed here which you can explore. There is also a bird hide lower down where one can see peregrine falcons nesting. There are toilets here and a seasonal café. Walk to the end of the car park and take the road downhill.
This takes you to the Military Road T-junction. To the left downhill is Place and the ferry. To the right is the road back to Portscatho and the NT car park and Porth Farm, toilets and beach. Just after the entrance to the Porth Farm buildings there is a footpath sign and gate that indicates Rosteague and Gerrans. Take this and not the road and it will take you through farmland to a lane, eventually coming out at Gerrans. Visit the church and there is also a tearoom and pub here.
Portscatho is a short walk down the hill where there are numerous galleries, a convenience store and restaurants and a pub.
Public transport information
Bus 550/551 from Truro (about 1 hour) to Portscatho, or ferry from St Mawes to Place; walk up to the road and follow the instructions from the Military Road T-junction.
Nearest Toilets and Nearest Disabled Toilets
Toilets in Portscatho and on St Anthony’s Head.
Nearest Car parks and Nearest Car Parks with disabled provision
In Portscatho there are public car parks.
Nearest refreshments
Pub and restaurants in Portscatho, Café on St Anthony’s Head –seasonal and pub and tea room in Gerrans. | https://www.falriver.co.uk/see-and-do/walks/roseland-beaches |
The king is the most important piece on the board. The entire game revolves around the king; more exactly, the purpose of the game is to checkmate the opponent's king.
The kings initial positions are as in the following picture.
The king can move one square, in any direction, as long as that square is not occupied by one of its own pieces and is not in the range of action of an enemy piece. The king can capture a piece that is on a square that it can move to. Because of the way the king moves - one square in any direction - the two kings must always be separated by a square, so that they don't enter each other's range of action.
You can see how the king can move in the picture below (the orange squares).
The position in the following image is illegal. The two kings must be separated by at least one square.
In the following example the white king can only move on certain squares. It is unable to move anywhere on the 6th rank because that rank is under the control of the black rook from e6. It can't move to a7 or c7 either, because of the black knight at b5. So, the only squares on which it can move are a8, b8 and c8.
Watch the following example to see how the king can capture an enemy piece. The white king is unable to capture the black knight at g4 because that square is under the control of the black king. It can't capture the black pawn at e3 either, because the e3 square is under the control of the black knight. It can, however, capture the bishop at g3 by taking its place.
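If you like to program, the movement and capture rules above translate into a short sketch. This is Python; the board coordinates and the pre-computed set of attacked squares are assumptions of this example, not part of the rules themselves.

```python
# Minimal sketch of legal king moves on an 8x8 board, using 0-indexed
# (file, rank) pairs. `own` is the set of squares occupied by the king's
# own pieces; `attacked` is the set of squares controlled by enemy pieces
# (computing that set is outside the scope of this lesson).

def king_moves(square, own, attacked):
    f, r = square
    moves = []
    for df in (-1, 0, 1):
        for dr in (-1, 0, 1):
            if df == 0 and dr == 0:
                continue                      # must actually move
            nf, nr = f + df, r + dr
            if not (0 <= nf < 8 and 0 <= nr < 8):
                continue                      # off the board
            if (nf, nr) in own:
                continue                      # own piece blocks the square
            if (nf, nr) in attacked:
                continue                      # cannot step into enemy range
            moves.append((nf, nr))            # empty square or enemy capture
    return moves

# A white king alone on e1 (file 4, rank 0) has five legal squares:
print(king_moves((4, 0), own=set(), attacked=set()))
# -> [(3, 0), (3, 1), (4, 1), (5, 0), (5, 1)]  i.e. d1, d2, e2, f1, f2
```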
There is also a special move the king can make, only once per game, together with the rook and under certain conditions: castling. But I'll talk about that in one of the following tutorials.
For now let's take a closer look at how the games are recorded. | http://www.chessguru.net/chess_rules/chess_piece/king/ |
When originally explaining the "NEW Generations" Splaine used the example of Joseph and his Brothers.
He backed up the time difference between the original Anointed, and the New ones by using what he said was a BIG age difference between the youngest "Joseph" and his Oldest brothers.
THEN more recently in a JW Broadcast - he said that there was NOT a big age difference between Joseph and his Oldest brother.
Can any one help me find both those, please? | https://www.jehovahs-witness.com/topic/6478895266136064/need-help-reference-dealing-generations-splaine |
CROSS-REFERENCE TO RELATED APPLICATIONS
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
REFERENCE TO A MICROFICHE APPENDIX
BACKGROUND
SUMMARY
DETAILED DESCRIPTION
This application claims priority to U.S. provisional patent application No. 62/527,888 filed on Jun. 30, 2017 by Futurewei Technologies, Inc. and titled “Identifier-Based Resolution of Identities,” which is incorporated by reference.
Not applicable.
Not applicable.
Connectivity among entities such as users and their devices is becoming ubiquitous. In traditional IP networks, it may be difficult to maintain connectivity between mobile entities while having optimal routing paths and low latencies. IP networks, IONs, IENs, ID-LOC networks, and other networks attempt to address that issue in various ways.
In one embodiment, the disclosure includes an apparatus in an IP network, the apparatus comprising: a receiver configured to: obtain an identity of a first entity, obtain a first identifier of the identity, and obtain a second identifier of the identity; and a processor coupled to the receiver and configured to: create an association of the first identifier and the second identifier with the identity, and instruct storage of the association in a database. In some embodiments, the identity is a unique identification of the first entity at a given time; the first identifier and the second identifier are identifications of the identity; the first identifier and the second identifier are any combination of IP addresses, cryptographic hashes of IP addresses, LISP EIDs, or HIP HITs; the first identifier is a publicly known designated identifier, and wherein the second identifier is an ephemeral identifier used for anonymity or obfuscation of the identity; the apparatus is a service node; the receiver is further configured to receive a first message from a second entity, and wherein the first message comprises the first identifier and requests data associated with the identity; in response to the first message, the processor is further configured to: access the database to determine that the identity is associated with the first identifier; retrieve the data when a policy permits provision of the data to the second entity; and generate a second message comprising the data; the apparatus further comprises a transmitter coupled to the processor and configured to transmit the second message to the second entity.
In another embodiment, the disclosure includes a method implemented in an IP network, the method comprising: obtaining an identity of a first entity; obtaining a first identifier of the identity; obtaining a second identifier of the identity; creating an association of the first identifier and the second identifier with the identity; and instructing storage of the association in a database. In some embodiments, the identity is a unique identification of the first entity at a given time; the first identifier and the second identifier are any combination of IP addresses, cryptographic hashes of IP addresses, LISP EIDs, or HIP HITs; the first identifier is a publicly known designated identifier, and wherein the second identifier is an ephemeral identifier used for anonymity or obfuscation of the identity; the method further comprises receiving a first message from a second entity, wherein the first message comprises the first identifier and requests data associated with the identity; the method further comprises accessing, in response to the first message, the database to determine that the identity is associated with the first identifier; retrieving the data when a policy permits provision of the data to the second entity; generating a second message comprising the data; and transmitting the second message to the second entity.
In yet another embodiment, the disclosure includes a first entity in an IP network, the first entity comprising: a processor configured to generate a first message comprising a first identifier and requesting data associated with an identity of the first identifier, the identity is a unique identification of a second entity at a given time, and the first identifier is an identification of the identity; a transmitter coupled to the processor and configured to transmit the first message to a service node; and a receiver coupled to the processor and configured to receive a second message from the service node in response to the first message, wherein the second message comprises the data when the data are associated with a second identifier that is also an identification of the identity. In some embodiments, the data are one of metadata, a policy, and a location; the data comprise all identifiers associated with the identity; the second message omits the data when a policy associated with the identity prohibits provision of the data to the first entity.
In yet another embodiment, the disclosure includes a method implemented by a first entity, the method comprising: registering a first identity with a service node; registering a first identifier with the service node; transmitting to the service node a first message requesting a location associated with a second entity, the second entity is associated with a policy; and receiving, when the first message comprises a second identifier of the first identity and when the policy indicates that the first identifier or the first identity may not receive the location, a third message indicating that the second identifier or the first identity may not receive the location. In some embodiments, the method further comprises receiving, when the first message comprises the first identifier and when the policy indicates that the first identifier may receive the location, a second message comprising the location.
Any of the above embodiments may be combined with any of the other above embodiments to create a new embodiment. These and other features will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
It should be understood at the outset that, although an illustrative implementation of one or more embodiments are provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
The following abbreviations and initialisms apply:
ASIC: application-specific integrated circuit
CPU: central processing unit
DSP: digital signal processor
EID: endpoint identifier
EO: electrical-to-optical
FPGA: field-programmable gate array
GPS: Global Positioning System
GQE: GRIDS quantum entanglement
GRIDS: generic identity and mapping services
HIP: Host Identity Protocol
HIT: host identity tag
ID: identifier
IEN: identity-enabled network
ION: identity-oriented network
IP: Internet Protocol
IPv4: Internet Protocol version 4
IPv6: Internet Protocol version 6
LISP: Locator/Identifier Separation Protocol
LOC: locator
OE: optical-to-electrical
OSS: operations support system
PKIX: Public-Key Infrastructure
RAM: random-access memory
RF: radio frequency
RFC: request for comments
ROM: read-only memory
RX: receiver unit
SIM: subscriber identity module
SRAM: static RAM
TCAM: ternary content-addressable memory
TX: transmitter unit.
FIG. 1 is a schematic diagram of a network 100. The network 100 is an IP network, ION, IEN, ID-LOC network, or other network. ION networks are described in U. Chunduri, et al., Identity Use Cases in IDEAS, Jul. 3, 2017, which is incorporated by reference. The network 100 comprises an OSS 105; a GRIDS system 110; entities 145, 160; and gateways 150, 155. The OSS 105 and the GRIDS system 110 reside in a control plane, and the entities 145, 160 and the gateways 150, 155 reside in a data plane. The components of the network 100 may be referred to as nodes. Though the network 100 shows specific numbers of nodes, the network 100 may have any number of such nodes. Each of the nodes is a hardware computer or server, a group of hardware computers or servers, or a software function of a hardware computer or server.
The OSS 105 is a computer system or set of programs that an enterprise, a network provider, or a service provider such as Verizon, Orange, AT&T, or Sprint operates in order to provide network services such as traffic balancing and fault management. The GRIDS system 110 comprises a vault 115; a service node 120; a database 125; and access points 135, 140. The GRIDS system 110 implements GRIDS, which moves identification of the entities 145, 160 from IP, which uses addresses, and from LISP and HIP, which use identifiers and locators, to use of identities, identifiers, and locators. IP is described in Jon Postel, "Internet Protocol," RFC 791, September 1981, which is incorporated by reference; LISP is described in D. Farinacci, et al., "The Locator/ID Separation Protocol (LISP)," RFC 6830, January 2013, which is incorporated by reference; and HIP is described in R. Moskowitz, et al., "Host Identity Protocol Version 2 (HIPv2)," RFC 7401, April 2015, which is incorporated by reference. Alternatively, the GRIDS system 110 is another identity-identifier-locator mapping system. The vault 115 stores secure data such as sensitive information that the entities 145, 160 in particular and the nodes in the data plane in general may not access. The vault 115 may store its data in an encrypted manner. The service node 120 provides identity-identifier look-up, metadata, and other services to the entities 145, 160 using the database 125. The database 125 comprises locators, identifiers, and other data, as well as relationships among that data, as described further below. The database 125 may also store its data in an encrypted manner. The GQE substrate 130 provides control and interconnection of the service node 120; database 125; and access points 135, 140. The access points 135, 140 are communication interfaces for the entities 145, 160 and the gateways 150, 155 to access the GRIDS system 110. The GRIDS system 110 may comprise an access point for each entity. The service node 120 and the access points 135, 140 may or may not be co-located.
The entities 145, 160 are mobile phones, tablet computers, connected vehicles, traffic cameras, or other endpoint devices. The entities 145, 160 may be associated with proxies that act on behalf of the entities 145, 160. The gateways 150, 155 provide direct communication between the entities 145, 160. The gateways 150, 155 may also provide identity-identifier, look-up, metadata, caching, and other services to the entities 145, 160 using the service node 120 or the database 125. The network 100 may comprise a gateway for each entity.
The network 100 uses identities for the entities 145, 160. The identities are unique identifications of the entities 145, 160 at given times and do not change when the entities 145, 160 change locations. The nodes in the data plane may not communicate identities among themselves. For instance, the entity 145 may not know the identity of the entity 160, and the entity 160 may not know the identity of the entity 145. A node in the data plane and a node in the control plane may communicate an identity for authentication or other purposes. For instance, the service node 120 assigns identities to the entities 145, 160, then uses those identities to identify, authenticate, and authorize the entities 145, 160. Alternatively, the OSS 105 assigns the identities and provides those identities to the service node 120 or the entities 145, 160 register the identities. The identities may be PKIX certificates or IPv6 addresses.
The network 100 also uses identifiers for the entities 145, 160. The identifiers are identifications of the identities and use address namespaces different from the identities. The network 100 maintains connectivity among its nodes by disassociating the identifiers from IP addresses and by making forwarding decisions based on the identifiers. When the identifiers change, the identities may not change. There may be at least two categories of identifiers. A first category of identifier is publicly known, is used for location resolution, typically has a longer life, and may be referred to as a designated identifier or a long-lived identifier. Designated identifiers may include LISP EIDs and HIP HITs. A second category of identifier is used for anonymity or obfuscation of identities to nodes that should not know those identities, is used in packet headers, typically has a shorter life such as for a single session, is not publicly known, and may be referred to as an ephemeral identifier. The entities 145, 160 register identifiers with the service node 120, and the service node 120 makes associations of the identifiers with corresponding entities 145, 160 and instructs storage of the associations in the database 125. Alternatively, the service node 120 assigns identifiers to the entities 145, 160; the entities 145, 160 or the gateways 150, 155 request such assignment; or the gateways 150, 155 register identifiers with the service node 120 on behalf of the entities 145, 160. The identifiers may be IP addresses such as IPv4 addresses or IPv6 addresses, or the identifiers may be cryptographic hashes of those IP addresses. Both designated identifiers and ephemeral identifiers may share a same format. The nodes in the data plane may use the identifiers to designate senders and receivers of packets in packet headers.
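Since the paragraph above notes that identifiers may be cryptographic hashes of IP addresses, the following minimal sketch illustrates one way such identifiers could be derived. The details (SHA-256 truncated to 128 bits, a fresh per-session salt for the ephemeral case) are assumptions for illustration, not the disclosed embodiment.

```python
import hashlib
import os

def designated_identifier(ip_address: str) -> str:
    """Long-lived identifier: a stable hash of the IP address."""
    return hashlib.sha256(ip_address.encode()).hexdigest()[:32]

def ephemeral_identifier(ip_address: str) -> str:
    """Short-lived identifier: salted per session for anonymity."""
    salt = os.urandom(16)  # fresh salt, so the identity is obfuscated
    return hashlib.sha256(salt + ip_address.encode()).hexdigest()[:32]

ip = "192.0.2.10"
print(designated_identifier(ip))   # same output every call
print(ephemeral_identifier(ip))    # different output every call
```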
When the entity 145 desires to communicate with the entity 160, first, the entities 145, 160 authenticate with the service node 120 using their identities, are assigned identifiers by or register identifiers with the service node 120, and provide current locations to the service node 120. Alternatively, authentication and identifier assignment occur beforehand. The service node 120 may publish the identifiers to other nodes in the network 100. Second, the entity 145 transmits to the access point 135 a first message comprising a first identifier of the entity 145, a second identifier of the entity 160, and a request for a location of the entity 160. The message may also be referred to as a plane packet, and the location may also be referred to as a locator. The access point 135 forwards the first message to the service node 120. The service node 120 uses the database 125 to authenticate the entity 145 using the first identifier and to determine the location of the entity 160 using the second identifier. The service node 120 transmits to the access point 135 a second message comprising the location of the entity 160. The access point 135 forwards the second message to the entity 145. Finally, using the location of the entity 160 from the second message, the entity 145 communicates with the entity 160 through the gateways 150, 155 and any other nodes between the gateways 150, 155. Though the service node 120 is described as authenticating, determining a location, and forwarding, other nodes may also do so.
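The exchange just described can be summarized in a short sketch. The message fields and the in-memory `locations` map are illustrative assumptions for this sketch, not the literal protocol of the disclosure.

```python
# Illustrative location-resolution exchange between entity 145 and the
# service node 120; names and fields are assumptions for this sketch.

locations = {}  # identifier -> current location, held by the service node

def register(identifier, location):
    locations[identifier] = location  # done during authentication/registration

def resolve_location(first_message):
    # The first message carries the requester's identifier and the target's
    # identifier; the service node answers with the target's location.
    target = first_message["second_identifier"]
    return {"location": locations.get(target)}

register("id-145", "gateway-150")
register("id-160", "gateway-155")
reply = resolve_location({"first_identifier": "id-145",
                          "second_identifier": "id-160"})
print(reply)  # {'location': 'gateway-155'} -> entity 145 can now reach 160
```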
The service node 120 assesses an identity by a given identifier, assesses whether two identifiers belong to the same identity, and resolves metadata or other information associated with an identity using an identifier. In addition, the service node 120 applies policies to target entities 145, 160 by identifying the target entities 145, 160. Thus, policy expressions refer to identifiers when the identifiers identify the target entities 145, 160. It is desirable for the service node 120 specifically and the network 100 generally to resolve identities of the entities 145, 160 using identifiers. Such resolution protects identities from revelation in the data plane and ensures provision of data when the entities 145, 160 change locations in the network 100.
Disclosed herein are embodiments for identifier-based resolution of identities. In this context, the word “resolution” and its derivatives mean obtaining identifiers, determining identities associated with the identifiers, and retrieving data associated with the identities. Determining identities comprises a look-up procedure or other procedure. The data are metadata, policies, locations, other identifiers, or other data. Resolution may also comprise obtaining identifiers and retrieving data associated with the identifiers. The resolution occurs even when entities change locations, addresses, or other identifying information, thus ensuring connectivity among entities at all desired times. In addition, the embodiments provide a foundation for identity-based firewalls, network restraint systems, or other applications. Furthermore, the embodiments provide for specification of all identifiers associated with an identity for management, lawful intercept, regulatory, or other purposes. Such identifiers may be in use or previously used.
FIG. 2 is a schematic diagram of the database 125 in FIG. 1 according to an embodiment of the disclosure. The database 125 comprises identities 200, identifiers 210, metadata 220, policies 230, and locations 240, as well as associations among those data. The database may comprise additional data that are not shown. FIG. 2 illustrates the database 125 in an abstracted manner to separate the identities 200, the identifiers 210, the metadata 220, the policies 230, and the locations 240.
Each of the identities 200 may be associated with multiple identifiers 210. However, each of the identifiers 210 is associated with only one of the identities 200 at a time. The metadata 220 represent data describing the entities 145, 160. For instance, the metadata 220 describe an access point 135, 140 of last registration; types of the entities 145, 160 such as mobile phones, tablet computers, connected vehicles, or traffic cameras; indications of whether the entities 145, 160 are or were policy offenders; lists of the identifiers 210 that the entities 145, 160 are interested in for purposes such as location updates; a subscription level or pay level of customers associated with the entities 145, 160; billing information of those customers; GPS data; device management information; authentication keys or certificates; tags; capabilities associated with the identities 200 or the identifiers 210; or other information. The policies 230 represent data describing how the entities 145, 160 may communicate with each other, what information about each other that the entities 145, 160 may access, or other information. For instance, the policies 230 describe access restrictions to the identifiers 210, the metadata 220, and the locations 240. The locations 240 represent data describing where the entities 145, 160 are located. For instance, the locations 240 describe networks or nodes within the networks. Various schemata for implementing the database 125 are described below, but any suitable schema may implement the database 125.
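One way to picture these associations is a small in-memory model in which each identity owns several identifiers while each identifier points back to exactly one identity. This is a sketch only; the field names are assumptions, not the patented structure.

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str
    identifiers: set = field(default_factory=set)  # many per identity
    metadata: dict = field(default_factory=dict)
    policies: list = field(default_factory=list)

owner_of = {}  # identifier -> Identity: one identity per identifier at a time

alice = Identity("identity-A")
for ident in ("id-1", "id-2"):
    alice.identifiers.add(ident)
    owner_of[ident] = alice

alice.metadata["device_type"] = "mobile phone"
alice.policies.append("no-location-to-strangers")

# Any of the identity's identifiers resolves to the same metadata/policies:
print(owner_of["id-2"].metadata["device_type"])  # -> mobile phone
```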
FIG. 3 is a schematic diagram of a coarse-grained schema 300 according to an embodiment of the disclosure. The service node 120 maps and stores data in the database 125 according to the coarse-grained schema 300. The coarse-grained schema 300 implements storage of data in the database 125. The coarse-grained schema 300 comprises table 1 305, table 2 325, and table 3 350.
Table 1 305 comprises an identity 310, policies 315_1-315_a, and metadata 320_1-320_b. Table 2 325 comprises an identifier 330, policies 335_1-335_c, metadata 340_1-340_d, and locations 345_1-345_e. Table 3 350 comprises an identity 355 and identifiers 360_1-360_f. A, b, c, d, e, and f are any suitable integers and may be the same. The identities 310, 355 are from the identities 200; the policies 315, 335 are from the policies 230; the metadata 320, 340 are from the metadata 220; and the locations 345 are from the locations 240.
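A rough rendering of these set-valued tables follows; it is a sketch under assumed field names, mirroring tables 1-3 above rather than reproducing them.

```python
# Coarse-grained schema sketch: set-valued entries, one row per identity
# or identifier.

table1 = {  # identity 310 -> sets of policies 315 and metadata 320
    "identity-A": {"policies": {"p1", "p2"}, "metadata": {"m1"}},
}
table2 = {  # identifier 330 -> policies 335, metadata 340, locations 345
    "id-1": {"policies": {"p3"}, "metadata": {"m2"}, "locations": {"loc-1"}},
}
table3 = {  # identity 355 -> set of identifiers 360_1-360_f
    "identity-A": {"id-1", "id-2"},
}

# All identifiers of an identity are available from a single row:
print(table3["identity-A"])  # -> {'id-1', 'id-2'}
```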
As can be seen, table 1 305 associates the policies 315 and the metadata 320 with a single identity 310; table 2 325 associates the policies 335, the metadata 340, and the locations 345 with a single identifier 330; and table 3 350 associates the identifiers 360 with a single identity 355. The coarse-grained schema 300 uses set-valued entries, meaning entries with multiple values or sets of values. For instance, there are identifiers 360_1-360_f in table 3 350, not just one identifier. Though the coarse-grained schema 300 shows specific numbers of data instances, the coarse-grained schema 300 may have any number of such data instances.
FIG. 4 is a schematic diagram of a fine-grained schema 400 according to an embodiment of the disclosure. The service node 120 maps and stores data in the database 125 according to the fine-grained schema 400. The fine-grained schema 400 implements storage of data in the database 125 as an alternative to the coarse-grained schema 300 in FIG. 3. Unlike the coarse-grained schema 300, which associates multiple data instances to single identities or identifiers, the fine-grained schema 400 associates single data instances to single identities or identifiers. The fine-grained schema 400 comprises table 1 405, table 2 420, table 3 435, table 4 450, table 5 465, and table 6 480.
Table 1 405 comprises an identity 410 and a policy 415. Table 2 420 comprises an identity 425 and a metadatum 430. Table 3 435 comprises an identity 440 and an identifier 445. Table 4 450 comprises an identifier 455 and a location 460. Table 5 465 comprises an identifier 470 and a metadatum 475. Table 6 480 comprises an identifier 485 and a policy 490. The identities 410, 425, 440 are from the identities 200; the policies 415, 490 are from the policies 230; the metadata 430, 475 are from the metadata 220; the identifiers 445, 455, 470, 485 are from the identifiers 210; and the location 460 is from the locations 240.
As can be seen, table 1 405 associates the policy 415 with a single identity 410, table 2 420 associates the metadatum 430 with a single identity 425, table 3 435 associates the identifier 445 with a single identity 440, table 4 450 associates the location 460 with a single identifier 455, table 5 465 associates the metadatum 475 with a single identifier 470, and table 6 480 associates the policy 490 with a single identifier 485. The fine-grained schema 400 does not use set-valued entries. Though the fine-grained schema 400 shows specific numbers of data instances, the fine-grained schema 400 may have any number of such data instances.
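In relational terms, the fine-grained alternative stores one association per row. The following sketch uses Python's bundled sqlite3 module; the table and column names are assumptions chosen to mirror tables 3 and 4 above.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# One row per association: no set-valued columns.
cur.execute("CREATE TABLE identity_identifier (identity TEXT, identifier TEXT)")
cur.execute("CREATE TABLE identifier_location (identifier TEXT, location TEXT)")
cur.executemany("INSERT INTO identity_identifier VALUES (?, ?)",
                [("identity-A", "id-1"), ("identity-A", "id-2")])
cur.execute("INSERT INTO identifier_location VALUES (?, ?)", ("id-1", "loc-1"))

# Resolving all identifiers of an identity is a plain selection:
rows = cur.execute("SELECT identifier FROM identity_identifier "
                   "WHERE identity = ?", ("identity-A",)).fetchall()
print([r[0] for r in rows])  # -> ['id-1', 'id-2']
```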
The coarse-grained schema 300 and the fine-grained schema 400 may be implemented in any way that permits resolution of metadata, policies, and locations of an identity using any identifier associated with that identity. Similarly, the coarse-grained schema 300 and the fine-grained schema 400 may be implemented in any way that permits the metadata, policies, and locations to be applied to all identifiers associated with an identity. Thus, the coarse-grained schema 300 and the fine-grained schema 400 contrast with other approaches that have only one-to-one relationships between identifiers on one hand and metadata, policies, and locations on the other hand.
FIG. 5 is a relationship diagram 500 according to an embodiment of the disclosure. While the coarse-grained schema 300 in FIG. 3 and the fine-grained schema 400 in FIG. 4 demonstrate specific examples of how the service node 120 maps and stores data in the database 125, the relationship diagram 500 generalizes relationships among those data. Thus, the service node 120 may map and store data in the database 125 in any manner consistent with the relationship diagram 500.
The relationship diagram 500 comprises an identity 510, which is from the identities 200 and corresponds to one of the entities 145, 160. The identity 510 is associated with policies 520_1-520_g, which are from the policies 230; identifiers 530_1-530_h, which are from the identifiers 210; and metadata 540_1-540_i, which are from the metadata 220. The identifiers 530 are associated with policies 550_1-550_j, which are from the policies 230; locations 560_1-560_k, which are from the locations 240; and metadata 570_1-570_l, which are from the metadata 220. G, h, i, j, k, and l are any suitable integers and may be the same.
As can be seen, multiple policies 520, identifiers 530, and metadata 540 may be associated with a single identity 510 and thus a single entity 145 or 160. Similarly, multiple policies 550, locations 560, and metadata 570 may be associated with a single identifier. The locations 560 may be associated only with the identifiers 530 and not directly with the identity 510. However, a single policy may be associated with multiple identifiers of an identity; an identifier of an identity may be associated with only a subset of policies for the identity; a single policy may be associated with multiple identities and thus multiple entities 145, 160; a location may be the same for all identifiers of an identity; multiple locations may be associated with an identifier; or a single locator may be associated with multiple identifiers.
The relationship diagram 500 demonstrates an improved manageability of the network 100. For instance, the service node 120 need not update the policies 520 when it assigns a new identifier to the identity 510. In addition, the relationship diagram 500 supports the queries and operations described below, as well as additional queries and operations.
FIG. 6 is a message sequence diagram 600 demonstrating queries and operations in the network 100 in FIG. 1. The message sequence diagram 600 implements at least the five examples of queries and operations given below. At step 610, the first entity 145 transmits a first message through the access point 135 and to the service node 120. The first message is a message to authenticate an identity or an identifier of the first entity 145; a message to update metadata, a policy, or a location associated with an identity or an identifier; a message to resolve metadata, a policy, or a location of an identity or an identifier; or another suitable message. At step 620, the service node 120 accesses the database 125. To do so, the service node 120 may authorize the entity 145 to confirm that the service node 120 may access the database 125 on behalf of the entity 145. For instance, if the first message requests data associated with the entity 160, then the service node 120 may authorize the entity 145 to confirm that the service node 120 may provide the data to the entity 145. Such authorization may depend on a policy associated with the entity 160. Finally, at step 630, the service node 120 transmits a second message through the access point 135 and to the entity 145. The service node 120 does so in response to the first message.
As a first example, at step 610, the first message comprises an identifier and requests policies and metadata associated with the identifier. At step 620, the service node 120 accesses the database 125 to select the identifier from the identifiers 530, determine the policies 550 and the metadata 570 associated with the identifier, and retrieve the policies 550 and the metadata 570. At step 630, the second message comprises the policies 550 and the metadata 570. The second message may not reveal the identity 510. Looking at the relationship diagram 500 in FIG. 5, it can be seen that the first example demonstrates a direct query because there are arrows directly connecting the requested policies 550 and metadata 570 to the provided identifier 530. The relationship diagram 500 supports other direct queries such as for requested locations 560 of provided identifiers 530 and for requested policies 520, identifiers 530, and metadata 540 of a provided identity 510.
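In the terms of the earlier in-memory sketches, the direct query amounts to a single lookup keyed by the identifier. Structures and names below are assumptions for illustration.

```python
# Direct query sketch: policies/metadata hang directly off the identifier.
identifier_data = {
    "id-1": {"policies": ["p3"], "metadata": {"last_ap": "ap-135"}},
}

def direct_query(identifier):
    row = identifier_data[identifier]
    return {"policies": row["policies"], "metadata": row["metadata"]}

print(direct_query("id-1"))  # the identity is never revealed in the reply
```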
As a second example, at step 610, the first message comprises an identifier and requests policies and metadata associated with the identifier's identity. At step 620, the service node 120 accesses the database 125 to select the identifier from the identifiers 530, determine the identity 510 associated with the identifier, determine the policies 520 and the metadata 540 associated with the identity 510, and retrieve the policies 520 and the metadata 540. At step 630, the second message comprises the policies 520 and the metadata 540. The second message may not reveal the identity 510. Looking at the relationship diagram 500 in FIG. 5, it can be seen that the second example demonstrates an indirect query because there are no arrows directly connecting the requested policies 520 and metadata 540 to the provided identifier 530. Rather, there are arrows indirectly connecting the requested policies 520 and metadata 540 to the provided identifier 530 through the identity 510. The relationship diagram 500 supports other indirect queries such as for requested identifiers 530 and metadata 540 of provided policies 520 and requested policies 520 and identifiers 530 of provided metadata 540.
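The indirect query adds one hop through the identity; a minimal sketch under the same assumed structures:

```python
# Indirect query sketch: identifier -> identity -> identity-level data.
owner_of = {"id-1": "identity-A"}
identity_data = {
    "identity-A": {"policies": ["p1"], "metadata": {"type": "phone"}},
}

def indirect_query(identifier):
    identity = owner_of[identifier]   # resolve the identity first
    row = identity_data[identity]     # then read its policies/metadata
    return {"policies": row["policies"], "metadata": row["metadata"]}

print(indirect_query("id-1"))  # again, the identity itself is not returned
```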
As a third example, at step 610, the first message comprises a first identifier and requests a second identifier associated with the first identifier. The first identifier may be an ephemeral identifier, and the second identifier may be a designated identifier or may be another identifier that a policy in the policies 520 permits the entity 145 to receive. At step 620, the service node 120 accesses the database 125 to select the first identifier from the identifiers 530, determine that the identity 510 is associated with the identifier, and retrieve from the identifiers 530 the second identifier associated with the identity 510. The identity 510 is associated with the entity 160. At step 630, the second message comprises the second identifier.
As a fourth example, at step 610, the first message comprises an identifier associated with the entity 160 and requests metadata associated with the identifier and the identifier's identity. At step 620, the service node 120 accesses the database 125 to select the identifier from the identifiers 530, determine the metadata 570 associated with the identifier, and retrieve the metadata 570. In addition, the service node 120 accesses the database 125 to determine the identity 510 associated with the identifier, determine the metadata 540 associated with the identity 510, and retrieve the metadata 540. The metadata 570 associated with the identifier may be an access point where the identifier was last registered. The metadata 540 associated with the identity 510, and thus the entity 160, may be a type of the entity 160 such as a mobile phone or an indication of whether the entity 160 has been a policy offender. The service node 120 then merges the metadata 570 and the metadata 540 to create merged metadata. At step 630, the second message comprises the merged metadata. The fourth example demonstrates a merge operation. The relationship diagram 500 supports other merge operations such as merging of the policies 520 with the policies 550.
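A minimal sketch of the merge operation under the same hypothetical data model; the dictionary union mirrors the merging of identifier-level and identity-level metadata:

```python
# Merge operation: combine identifier-level metadata (e.g., last access
# point) with identity-level metadata (e.g., entity type).
identities = {"identity-A": {"metadata": {"type": "mobile phone",
                                          "policy_offender": False}}}
identifiers = {"eph-42": {"identity": "identity-A",
                          "metadata": {"last_access_point": "ap-7"}}}

def merged_metadata(identifier):
    identifier_meta = identifiers[identifier]["metadata"]
    identity_meta = identities[identifiers[identifier]["identity"]]["metadata"]
    return {**identifier_meta, **identity_meta}      # merged view of both levels

print(merged_metadata("eph-42"))
```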
As a fifth example, at step 610, the first message comprises an identifier and requests data associated with the identifier. The data may be policies, locations, or metadata. At step 620, the service node 120 accesses the database 125 to select the identifier from the identifiers 530, determine that the identity 510 is associated with the identifier, and retrieve from the policies 520 a policy associated with the identity 510 and indicating that the entity 145 may not receive data associated with the identity 510. The identity 510 is associated with the entity 160. At step 630, the second message indicates that the entity 145 may not receive data associated with the identifier.
As a sixth example, the entities 145, 160 register with the service node 120. As part of their respective registration processes, as shown in Table 1, the entity 145 registers a first identity with a first identifier and a second identifier, and the entity 160 registers a second identity, a third identifier, a policy, and a location.
TABLE 1
Registered Data

  Entity 145:         Entity 160:
  first identity      second identity
  first identifier    third identifier
  second identifier   policy
                      location
The policy indicates that the first identifier may receive data associated with the second identity, but that the second identifier may not receive data associated with the second identity. The data may be the location. In that case, when using the second identifier, the entity 145 may not obtain the location of the entity 160.
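For illustration, the Table 1 policy can be sketched as data, with the allow/deny decision made per requesting identifier (all names hypothetical, not part of the disclosure):

```python
# The sixth example's policy: the first identifier may receive the location
# of the second identity; the second identifier may not.
policy = {"allow": {"first-identifier"}, "deny": {"second-identifier"}}
location_of_second_identity = "ap-9"

def request_location(requesting_identifier):
    if requesting_identifier in policy["deny"]:
        return None                        # location withheld per policy
    if requesting_identifier in policy["allow"]:
        return location_of_second_identity
    return None                            # default: withhold

print(request_location("first-identifier"))    # 'ap-9'
print(request_location("second-identifier"))   # None
```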
As a first embodiment of the sixth example, at step 610, the first message comprises the first identifier and requests the location associated with the third identifier. At step 620, the service node 120 accesses the database 125 to select the third identifier from the identifiers 530; determine that the second identity, the identity 510, is associated with the third identifier; retrieve from the policies 520 the policy indicating that the first identifier may receive the location associated with the third identifier; and retrieve the location from the locations 560. At step 630, the second message comprises the location.
As a second embodiment of the sixth example, at step 610, the first message comprises the second identifier and requests the location associated with the third identifier. At step 620, the service node 120 accesses the database 125 to select the third identifier from the identifiers 530; determine that the second identity, the identity 510, is associated with the third identifier; and retrieve from the policies 520 the policy indicating that the second identifier may not receive the location associated with the third identifier. Because the policy indicates that the second identifier may not receive the location associated with the third identifier, the service node 120 does not retrieve the location from the locations 560. At step 630, the second message indicates that the second identifier may not receive the location associated with the third identifier or the second identity.
As a seventh example, the entities 145, 160 register with the service node 120 as shown in Table 1. The policy indicates that the first identifier or any identifier associated with an identity of the first identifier, in this case the first identity, may not receive data associated with the second identity. The policy is silent with respect to the second identifier and may be so because the first identifier is the only published identifier associated with the first identity. The data may be the location. At step 610, the first message comprises the second identifier and requests the location associated with the third identifier. At step 620, the service node 120 accesses the database 125 to select the third identifier from the identifiers 530; determine that the second identity, the identity 510, is associated with the third identifier; retrieve from the policies 520 the policy indicating that any identifier associated with an identity of the first identifier may not receive data associated with the second identity; determine that the second identifier is associated with the first identity; and determine that the second identifier therefore may not receive the location associated with the third identifier. Because the policy may not be able to explicitly identify the first identity, which is not known to the entity 160 when the entity 160 registers the policy, the ability of the service node 120 to associate the second identifier with the first identity permits honoring of the policy. At step 630, the second message indicates that the second identifier or the first identity may not receive the location associated with the third identifier or the second identity.
As an eighth example, the entities 145, 160 register with the service node 120 as shown in Table 1. The policy indicates that the first identifier may not receive data associated with the second identity. The policy is silent with respect to the second identifier and may be so because the first identifier is the only published identifier associated with the first identity. The data may be the location. At step 610, the first message comprises the second identifier and requests the location associated with the third identifier. At step 620, the service node 120 accesses the database 125 to select the third identifier from the identifiers 530; determine that the second identity, the identity 510, is associated with the third identifier; retrieve from the policies 520 the policy indicating that the first identifier may not receive data associated with the second identity; determine that the second identifier is associated with the first identity and thus the first identifier; extend the policy to the second identifier; and determine that the second identifier therefore may not receive the location associated with the third identifier. Because the policy may not be able to explicitly identify the first identity, which is not known to the entity 160 when the entity 160 registers the policy, the ability of the service node 120 to associate the second identifier with the first identifier permits honoring of the policy. At step 630, the second message indicates that the second identifier or the first identity may not receive the location associated with the third identifier or the second identity.
As a ninth example, the service node 120 determines whether multiple identifiers 530 belong to the same identity 510. For instance, the service node 120 receives a first identifier 530 from a first message, receives a second identifier 530 from a second message, and accesses the database 125 to determine whether both the first identifier 530 and the second identifier 530 belong to the same identity 510. If so, then the service node 120 performs a first action based on, for instance, a policy in the policies 520. If not, then the service node 120 performs a second action.
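A minimal sketch of this same-identity check, again with a dictionary standing in for the database lookup (names hypothetical):

```python
# Ninth example: decide whether two identifiers resolve to the same identity.
identifier_to_identity = {"eph-42": "identity-A",
                          "des-7": "identity-A",
                          "eph-99": "identity-B"}

def same_identity(first, second):
    return identifier_to_identity[first] == identifier_to_identity[second]

print(same_identity("eph-42", "des-7"))    # True  -> perform the first action
print(same_identity("eph-42", "eph-99"))   # False -> perform the second action
```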
FIG. 7 is a flowchart illustrating a method 700 of associating identifiers with an identity according to an embodiment of the disclosure. The service node 120 may implement the method 700. At step 710, an identity of a first entity is obtained. For instance, the service node 120 obtains the identity of the entity 145 when the entity 145 registers or authenticates with the service node 120. At step 720, a first identifier of the identity is obtained. At step 730, a second identifier of the identity is obtained. For instance, the service node 120 assigns the first identifier and the second identifier. At step 740, an association of the first identifier and the second identifier with the identity is created. For instance, the service node 120 associates the first identifier and the second identifier with the identity as shown by table 3 350 in the coarse-grained schema 300 in FIG. 3 or as shown by two instances of table 3 435 in the fine-grained schema 400 in FIG. 4. Finally, at step 750, storage of the association in a database is instructed. For instance, the service node 120 instructs storage of the association in the database 125.
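For illustration only, the method 700 can be sketched in a few lines of Python; the dictionary plays the role of the database 125, and every name is hypothetical:

```python
# Steps 710-750: obtain an identity and two identifiers, create the
# association, and instruct its storage.
database = {}

def associate(identity, first_identifier, second_identifier):
    association = {"identity": identity,
                   "identifiers": [first_identifier, second_identifier]}
    database[identity] = association       # step 750: store the association

associate("identity-A", "eph-42", "des-7")
print(database)
```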
FIG. 8 is a schematic diagram of an apparatus 800 according to an embodiment of the disclosure. The apparatus 800 may implement the disclosed embodiments. The apparatus 800 comprises ingress ports 810 and an RX 820 for receiving data; a processor, logic unit, baseband unit, or CPU 830 to process the data; a TX 840 and egress ports 850 for transmitting the data; and a memory 860 for storing the data. The apparatus 800 may also comprise OE components, EO components, or RF components coupled to the ingress ports 810, the RX 820, the TX 840, and the egress ports 850 for ingress or egress of optical signals, electrical signals, or RF signals.
The processor 830 is any combination of hardware, middleware, firmware, or software. The processor 830 comprises any combination of one or more CPU chips, cores, FPGAs, ASICs, or DSPs. The processor 830 communicates with the ingress ports 810, RX 820, TX 840, egress ports 850, and memory 860. The processor 830 comprises an identity-identifier resolution component 870, which implements the disclosed embodiments. The inclusion of the identity-identifier resolution component 870 therefore provides a substantial improvement to the functionality of the apparatus 800 and effects a transformation of the apparatus 800 to a different state. Alternatively, the memory 860 stores the identity-identifier resolution component 870 as instructions, and the processor 830 executes those instructions.
The memory 860 comprises any combination of disks, tape drives, or solid-state drives. The apparatus 800 may use the memory 860 as an over-flow data storage device to store programs when the apparatus 800 selects those programs for execution and to store instructions and data that the apparatus 800 reads during execution of those programs. The memory 860 may be volatile or non-volatile and may be any combination of ROM, RAM, TCAM, or SRAM.
An apparatus in an IP network, the apparatus comprising: a receiving element configured to: obtain an identity of a first entity, obtain a first identifier of the identity, and obtain a second identifier of the identity; and a processing element coupled to the receiving element and configured to: create an association of the first identifier and the second identifier with the identity, and instruct storage of the association in a database.
While several embodiments have been provided in the present disclosure, it may be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, components, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and may be made without departing from the spirit and scope disclosed herein.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
FIG. 1 is a schematic diagram of a network.

FIG. 2 is a schematic diagram of the database in FIG. 1 according to an embodiment of the disclosure.

FIG. 3 is a schematic diagram of a coarse-grained schema according to an embodiment of the disclosure.

FIG. 4 is a schematic diagram of a fine-grained schema according to an embodiment of the disclosure.

FIG. 5 is a relationship diagram according to an embodiment of the disclosure.

FIG. 6 is a message sequence diagram demonstrating queries and operations in the network in FIG. 1.

FIG. 7 is a flowchart illustrating a method of associating identifiers with an identity according to an embodiment of the disclosure.

FIG. 8 is a schematic diagram of an apparatus according to an embodiment of the disclosure.
The IIT JEE Examination is held to grant admission to the IITs. This examination is held at two levels: JEE Main and JEE Advanced.
About Paper
The JEE Advanced Solved Mathematics Paper-1 in this article consists of 18 questions divided into three sections. The questions in this paper have been drawn from the complete syllabus. Each question is very important from the examination point of view and has been developed very carefully by Subject Experts at Jagranjosh.
Importance of Practice Papers
Practicing sample papers, practice papers and previous years’ question papers helps in assessing your preparation and time management. It will also help in increasing your marks in the examination.
A few questions from the practice paper are given below:
Q. A committee of 6 is chosen from 10 men and 7 women so as to contain at least 3 men and 2 women. The number of ways this can be done, if two particular women refuse to serve on the same committee, is
(a) 8000
(b) 7900
(c) 7800
(d) 7700
Ans. (c):
Sol.
Let the men be M1, M2, …, M10 and the women be W1, W2, …, W7. Six members are chosen for the committee so that it contains at least 3 men and 2 women.
So, the six-member committee can contain 4 men and 2 women or 3 men and 3 women.
Suppose W1 and W2 do not want to be on the same committee.
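Completing the count: without the restriction, the number of committees is C(10, 4) × C(7, 2) + C(10, 3) × C(7, 3) = 210 × 21 + 120 × 35 = 4410 + 4200 = 8610. Committees containing both W1 and W2 number C(10, 4) × 1 + C(10, 3) × C(5, 1) = 210 + 600 = 810. Hence the required number of ways is 8610 − 810 = 7800, i.e. option (c).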
Q. A rectangular sheet of fixed perimeter with sides having their length in the ratio 8 : 15 is converted into an open rectangular box by folding after removing squares of equal area from all four corners. If the total area of removed squares is 100, the resulting box has maximum volume. The lengths of the sides of the rectangular sheet are
(a) 24
(b) 32
(c) 45
(d) 60
Sol.
Correct Answer: (a, c)
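A sketch of the working: let the sides be 8k and 15k and let each removed square have side x. Since 4x² = 100, x = 5. The volume of the box is V(x) = x(8k − 2x)(15k − 2x), so dV/dx = 120k² − 92kx + 12x². Requiring the maximum at x = 5 gives 120k² − 460k + 300 = 0, i.e. 6k² − 23k + 15 = 0, so k = 3 or k = 5/6; only k = 3 keeps both sides longer than 2x = 10. The sides are therefore 24 and 45, i.e. options (a) and (c).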
Q. Let y = f(x) be a curve in the first quadrant such that the triangle formed by the co-ordinate axes and the tangent at any point on the curve has area 2. If y(1) = 1, then y(2) =
(a) 0
(b) 1
(c) 2
(d) 1/2
Ans. (a, d):
Sol.
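A sketch of the working: the tangent at (x, y) meets the axes at (x − y/y′, 0) and (0, y − xy′), so the area condition reads (1/2)|x − y/y′| · |y − xy′| = 2. Two curves satisfy this together with y(1) = 1: the line x + y = 2, whose tangent is the line itself with both intercepts equal to 2, giving y(2) = 0; and the hyperbola xy = 1, whose tangent at (x0, 1/x0) has intercepts 2x0 and 2/x0, giving y(2) = 1/2. Hence options (a) and (d).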
Download Complete Solved Practice Paper
---
abstract: 'We compare relativistic approximation methods, which describe gravitational instability in the expanding universe, in a spherically symmetric model. Linear perturbation theory, second-order perturbation theory, relativistic Zel’dovich approximation, and relativistic post-Zel’dovich approximation are considered and compared with the Lemaître-Tolman-Bondi solution in order to examine the accuracy of these approximations. We consider some cases of inhomogeneous matter distribution while the homogeneous top-hat model has been usually taken in the previous Newtonian works. It is found that the Zel’dovich-type approximations are generally more accurate than the conventional perturbation theories in the weakly nonlinear regime. Applicable range of the Zel’dovich-type approximations is also discussed.'
address: |
$^1$ Department of Physics, Tokyo Institute of Technology, Oh-Okayama, Meguro-ku,\
Tokyo 152, Japan\
$^2$ Advanced Science Research Center, Japan Atomic Energy Research Institute,\
Tokai, Naka, Ibaraki 319-11, Japan\
$^3$ Department of Physics, Hirosaki University, Bunkyo-cho, Hirosaki 036, Japan
author:
- 'Masaaki Morita $^1$, Kouji Nakamura $^2$ and Masumi Kasai $^3$'
title: 'Relativistic Zel’dovich approximation in spherically symmetric model'
---
Introduction
============
Structure formation in the universe is an important subject of research in cosmology. A standard view of the structure formation is that density fluctuations with small amplitudes in the early universe have grown to be a variety of cosmic structures due to gravitational instability. The growth of the density fluctuations has been thoroughly investigated by linear perturbation theory of the Friedmann-Lemaître-Robertson-Walker (FLRW) universe within both the Newtonian theory and general relativity [@peebles80]. Relativistic linear perturbation theory was first derived by Lifshitz [@lifshitz]. Such relativistic treatments are indispensable when we consider large-scale fluctuations. In his theory, however, there remains a gauge problem: unphysical perturbations are included in the solutions. This problem was carefully studied later by Press and Vishniac [@press]. Also developed was the gauge-invariant formulation [@bardeen], which gives a conceptually straightforward way of dealing with cosmological perturbations.
It is true that the linear theories play an important role in the study of gravitational instability, but they are valid only in the region where the density contrast $\delta \equiv (\rho - \rho_b)/\rho_b$ is much smaller than unity. ($\rho$ is energy density of the perturbed FLRW universe and $\rho_b$ is that of the background FLRW universe.) As $\delta$ grows to be comparable to unity, nonlinear effects become essential and we need some kind of nonlinear approximation. Tomita [@tomita67] developed second-order perturbation theory by extending Lifshitz’s work to study nonlinear effects of gravitational instability for the matter-dominated universe. His approach, however, still depends on the assumption of $\delta$ being small. An approximation scheme without this assumption was proposed by Zel’dovich [@zel70] within the Newtonian framework. This scheme is known as the Zel’dovich approximation, which is now widely applied to problems of large-scale structure formation. It has been shown that the Zel’dovich approximation can be regarded as a subclass of the first-order solutions in the Lagrangian perturbation theory [@buchert92]. Then the higher-order extension of the Zel’dovich approximation, say, the post-Zel’dovich approximation (and the post-post-Zel’dovich approximation and so on), is straightforwardly derived via the higher-order Lagrangian approach [@bueh; @buchert94; @bouchet]. Relativistic versions of the Zel’dovich approximation have also been studied in the last few years by several authors [@cpss; @kasai95; @rmkb; @mate]. Here we will focus on our tetrad-based approach, whose correspondence to the original Zel’dovich approximation is made clear in Ref. [@kasai95] and whose extension to second order is presented in Ref. [@rmkb].
One of the remarkable advantages of the Zel’dovich-type approximations, both the original Newtonian one and the relativistic version, is that they include exact solutions when the deviation from the background FLRW universe is locally one-dimensional. These exact solutions are known as Zel’dovich solutions [@zel70] in the Newtonian case and (some class of) Szekeres solutions [@szekeres; @krasin] in the general relativistic case, respectively. For this reason, the Zel’dovich-type approximations are presumably accurate in describing nearly one-dimensional collapse. It is not clear, however, whether they also give high accuracy in the case of non-one-dimensional collapse. In the Newtonian framework, this has been investigated using spherical models: the so-called top-hat collapse model [@munshi], the top-hat void model [@sasha], and some more general cases [@bouchet]. (See also Ref. [@saco] for review.) In addition, there is also a recent work [@ayako] in which homogeneous spheroidal models are considered. An interesting implication is obtained there: as the deviation of the models from spherical symmetry becomes larger, the accuracy of the Zel’dovich-type approximations increases, while the conventional (Eulerian) approximations have the opposite tendency. This indicates that the Zel’dovich-type approximations may be the least accurate in the exactly spherical case. Then, considering the spherical case may tell us the lowest accuracy of the Zel’dovich-type approximations.
In general relativity, an exact solution of the spherically symmetric dust model is known as the Lemaître-Tolman-Bondi (LTB) solution [@landau]. It is, therefore, of interest to test the Zel’dovich-type approximations with the exact solutions to examine accuracy of the approximations. It is also instructive to clarify the differences between the conventional perturbation theories and the Zel’dovich-type approximations by imposing spherical symmetry and comparing them with the exact solutions.
In this paper, we compare the relativistic approximations such as the linear perturbation theory by Lifshitz [@lifshitz], the second-order perturbation theory by Tomita [@tomita67], the relativistic Zel’dovich approximation by Kasai [@kasai95], and the relativistic post-Zel’dovich approximation by Russ [*et al.*]{} [@rmkb] with the LTB solution. We consider some initial conditions which have inhomogeneous matter distribution, not homogeneous one like the top-hat model adopted in the Newtonian works. It will be shown that the Zel’dovich-type approximations are more useful than the conventional ones in the quasi-nonlinear regime in each case.
The plan of this paper is as follows. In the next section, we summarize the relativistic perturbation theories mentioned above. In section \[spher\], the LTB solution is introduced and the relation to the relativistic perturbation theories is considered. The main results of this paper are shown in section \[comparison\]. Section \[summary\] contains a summary of our results and discussions.
Throughout this paper, units are chosen so that $c=1$. Indices $\mu, \nu, \cdots$ run from $0$ to $3$ and $i, j, \cdots$ run from $1$ to $3$.
Relativistic perturbation theories {#perturb}
==================================
In this section, we summarize general relativistic perturbation theories which describe gravitational instability in the matter-dominated FLRW universe. We assume that the background is the Einstein-de Sitter spacetime, whose line element is $$\label{FLRW}
ds^2 = -dt^2 + a^2(t)\,(dR^2 + R^2 d\theta^2 + R^2 \sin^2\theta d\phi^2)
\equiv -dt^2 + a^2(t)\,k_{ij}\,dx^i dx^j \;,$$ where $a(t)=t^{2/3}$ is the scale factor. The background four-velocity and energy density of the matter are $u_b^{\mu}=(1,0,0,0)$ and $\rho_b = 1/(6\pi G\, t^2)$, respectively. Density contrast $\delta$ shown later is defined as $\delta \equiv (\rho - \rho_b)/\rho_b$, where $\rho$ is energy density of the perturbed FLRW universe.
Conventional linear and second-order theories
---------------------------------------------
Lifshitz [@lifshitz] pioneered linear perturbation of the FLRW universe in the synchronous gauge $$ds^2 = -dt^2 + g_{ij} \,dx^i dx^j \;.$$ In his theory, the scalar-mode solutions for pressureless matter (dust) are $$\label{lif}
\left\{ \begin{array}{l}
\gamma_{ij} \equiv a^{-2} g_{ij}
= \left(1+\frac{20}{9} \Psi \right) k_{ij}
+ 2 t^{\frac23} \Psi_{|ij} + 2 t^{-1} \Phi_{|ij} \;, \\ \\
u^i_{(1)} = 0 \;, \\ \\
\delta_{(1)} = -t^{\frac23} \Psi^{|k}_{\ |k}
-t^{-1} \Phi^{|k}_{\ |k} \;,
\end{array}\right.$$ where $\Psi=\Psi(\mbox{{\boldmath}$x$})$ and $\Phi=\Phi(\mbox{{\boldmath}$x$})$ are spatial arbitrary functions of first-order smallness, $|$ denotes the covariant derivative associated with the background three-metric $k_{ij}$, and $u^i$ represents spatial component of the four-velocity of the matter. Subscript $(1)$ denotes first-order perturbation quantity. We will not consider contribution of the decaying mode, which is proportional to $t^{-1}$, later on.
Tomita [@tomita67] developed second-order perturbation theory by extending Lifshitz’s work. He obtained the following second-order perturbative solutions from the first-order scalar mode solutions: $$\label{tomi}
\left\{ \begin{array}{l}
\gamma_{ij} =
\left(1 + \frac{20}{9}\Psi + \frac{100}{81}\Psi^2 \right)k_{ij}
+ t^{\frac23} \left(2\Psi_{|ij} - \frac{40}{9}\Psi_{|i}\Psi_{|j}
-\frac{20}{9}\Psi\Psi_{|ij}+\frac{10}{9}\Psi^{|k}\Psi_{|k}\,k_{ij}
\right) \\
\hspace*{10mm}
+ \frac17 t^{\frac43} \left[ 19 \Psi_{|ik}\Psi^{|k}_{\ |j}
- 12 \Psi^{|k}_{\ |k}\Psi_{|ij}
+ 3 \left( \left(\Psi^{|k}_{\ |k}\right)^2
- \Psi^{|k}_{\ |\ell}\Psi^{|\ell}_{\ |k} \right) k_{ij}
\right] \;, \\ \\
u^i_{(1)} = 0 \;, \quad u^i_{(2)} = 0 \;, \\ \\
\delta_{(1)} + \delta_{(2)} = -t^{\frac23} \Psi^{|k}_{\ |k}
+\frac59 t^{\frac23} \left(
\Psi^{|k} \Psi_{|k} + 6\Psi \Psi^{|k}_{\ |k}
\right)
+\frac17 t^{\frac43} \left(
5(\Psi^{|k}_{\ |k})^2 + 2\Psi^{|k}_{\ |\ell}\Psi^{|\ell}_{\ |k}
\right) \;.
\end{array}\right.$$ Subscript $(2)$ represents second-order perturbation. Here we neglected the second-order tensor mode, which is induced by the first-order scalar mode and does not appear in the spherical case.
Zel’dovich-type approximations in general relativity
----------------------------------------------------
In this subsection, we review a relativistic version of the Zel’dovich approximation developed by us [@kasai95; @rmkb]. The irrotational dust model is assumed and then we can take the comoving synchronous coordinate $$ds^2 = -dt^2 + g_{ij} \,dx^i dx^j$$ with the four-velocity $u^{\mu}=(1,0,0,0)$. Thanks to this choice of the gauge, the energy equation $u_{\mu}T^{\mu\nu}_{\ \ ;\nu}=0$ with $T^{\mu\nu} = \mbox{diag}[\rho,0,0,0]$, which becomes $$\dot\rho + \rho K^i_{\ i} = 0 \;,$$ is formally solved in the form $$\label{rho-g}
\rho = \rho(t_{in},\mbox{{\boldmath}$x$})
\frac{\sqrt{\det \bigr[ g_{ij}(t_{in},\mbox{{\boldmath}$x$}) \bigl] }}
{\sqrt{\det \bigr[ g_{ij}(t, \mbox{{\boldmath}$x$}) \bigl] }}\; .$$ Here an overdot ($\dot{\ }$) denotes $\partial/\partial t$, and $K^i_{\ j}$ is the extrinsic curvature, whose expression in the present gauge is $K^i_{\ j} = \frac12 g^{ik} \dot{g_{jk}}$. Introducing the triad $$g_{ij} = a^2(t) \,\delta_{(k)(\ell)} \,e^{(k)}_{\ i} e^{(\ell)}_{\ j} \;,$$ Eq. (\[rho-g\]) is rewritten as $$\label{rho-e}
\rho = \rho_b
\frac{\det \bigr[ e^{(\ell)}_{\ i}(t_{in},\mbox{{\boldmath}$x$}) \bigl] }
{\det \bigr[ e^{(\ell)}_{\ i}(t, \mbox{{\boldmath}$x$}) \bigl] } \;.$$
We obtain perturbative solutions for the triad $e^{(\ell)}_{\ i}$ regardless of the energy density $\rho$ up to second order in the following form [@rmkb] $$\label{second}
e^{(\ell)}_{\ i} = k^{(\ell)}_{\ i} + E^{(\ell)}_{\ i}
+ \varepsilon^{(\ell)}_{\ i} \;,$$ where $k^{(\ell)}_{\ i}$ is the background triad defined by $k_{ij}=\delta_{(k)(\ell)}\,k^{(k)}_{\ i} k^{(\ell)}_{\ j}$, and $E^{(\ell)}_{\ i}$ and $\varepsilon^{(\ell)}_{\ i}$ are the first-order and the second-order solutions given by $$E^{(\ell)}_{\ i} = k^{(\ell)}_{\ j}
\left(\frac{10}{9} \Psi \,\delta^j_{\ i}
+ t^{\frac23}\,\Psi^{|j}_{\ |i}
\right) \;, \quad
\varepsilon^{(\ell)}_{\ i} = k^{(\ell)}_{\ j}
\left(t^{\frac23}\, \psi^j_{\ i}
+ t^{\frac43}\, \varphi^j_{\ i}
\right) \;.$$ Here $\Psi=\Psi(\mbox{{\boldmath}$x$})$ is the same function as the one used in Eqs. (\[lif\]) and (\[tomi\]), and $\psi^i_{\ j}=\psi^i_{\ j}(\mbox{{\boldmath}$x$})$ and $\varphi^i_{\ j}=\varphi^i_{\ j}(\mbox{{\boldmath}$x$})$ are quadratic quantities of $\Psi$, written by $$\psi^i_{\ j} = \frac59 \Psi^{|k} \Psi_{|k} \,\delta^i_{\ j}
- \frac{20}{9} \left(\Psi\Psi^{|i}_{\ |j}
+ \Psi^{|i}\Psi_{|j}\right) \;,$$ $$\varphi^i_{\ j} = \frac{3}{14}\left(
(\Psi_{\ |k}^{|k})^2-\Psi_{\ |\ell}^{|k}\Psi_{\ |k}^{|\ell}
\right)\delta_{\ j}^i
-\frac67 \left(
\Psi_{\ |k}^{|k}\Psi_{\ |j}^{|i}-\Psi_{\ |k}^{|i}\Psi_{\ |j}^{|k}
\right) \;.$$ Note that we removed a remaining gauge freedom in the linear level to derive the above solution. (See Appendix A of Ref. [@rmkb].) And we again neglected contributions of the decaying scalar mode and the tensor mode. We find that the solution (\[second\]) is consistent in the metric level with Eqs. (\[lif\]) and (\[tomi\]) [@rmkb].
Relativistic Zel’dovich and post-Zel’dovich approximations are obtained by substituting Eq. (\[second\]) into Eq. (\[rho-e\]). $$\begin{aligned}
\label{del-za}
\delta_{ZA} &=& \left(
\det \!\biggm[ \delta^i_{\ j} + \frac{t^{\frac23} \,\Psi^{|i}_{\ |j}}
{1 + \frac{10}{9} \Psi}
\biggm] \right)^{-1} -1 \;, \\
\label{del-pza}
\delta_{PZA} &=& \left(
\det \!\biggm[ \delta^i_{\ j} +
\frac{t^{\frac23} \,\Psi^{|i}_{\ |j} + t^{\frac23} \,\psi^i_{\ j}
+ t^{\frac43} \,\varphi^i_{\ j}}
{1 + \frac{10}{9} \Psi}
\biggm] \right)^{-1} -1 \;.\end{aligned}$$ Abbreviations ZA and PZA denote the Zel’dovich and the post-Zel’dovich approximations.
As was written previously, the results of the conventional linear and second-order theories are $$\begin{aligned}
\label{del-lin}
\delta_{LIN} &=& -t^{\frac23} \Psi^{|k}_{\ |k} \;, \\
\label{del-sec}
\delta_{SEC} &=& -t^{\frac23} \Psi^{|k}_{\ |k}
+\frac59 t^{\frac23} \left(
\Psi^{|k} \Psi_{|k} + 6\Psi \Psi^{|k}_{\ |k}
\right)
+\frac17 t^{\frac43} \left(
5(\Psi^{|k}_{\ |k})^2 + 2\Psi^{|k}_{\ |\ell}\Psi^{|\ell}_{\ |k}
\right) \;.\end{aligned}$$ Abbreviations LIN and SEC denote the linear and the second-order perturbation theories. Expanding Eqs. (\[del-za\]) and (\[del-pza\]) under the condition $||\Psi|| \ll 1$ (where $||\Psi||$ denotes an appropriate norm of a function $\Psi$), Eqs. (\[del-lin\]) and (\[del-sec\]) can also be obtained, respectively. In this sense, ZA and PZA are extensions of LIN and SEC to $|\delta| \sim 1$.
Spherically symmetric model {#spher}
===========================
In this section, we consider the spherically symmetric model of gravitational instability in the FLRW universe. There exists an exact solution known as the LTB solution, which includes three arbitrary functions, in the spherically symmetric case [@landau]. Here we make clear the relations between the arbitrary functions included in the LTB solution and the ones which appears in the approximation methods mentioned in the section \[perturb\]. The line element of the LTB solution is $$\label{LTB_metric}
ds^2 = -dt^2 + \frac{r'^2}{1 + f} \;dR^2
+ r^2 (d\theta^2 + \sin^2 \theta \,d\phi^2) \;,$$ where $(\ ') \equiv \partial/\partial R$ and $f = f(R)$ is an arbitrary function which is related to initial velocity of dust. $r = r(t,R)$ satisfies the following differential equation $$\label{dotr2}
{\dot r}^2 = \frac{F(R)}{r} + f(R)$$ with an arbitrary function $F(R)$, which represents initial distribution of matter. Eq. (\[dotr2\]) can be integrated as follows
\(i) $f > 0$: $$\label{f>0}
r = \frac{F}{2f} (\cosh \eta - 1)\, , \quad
t - t_0(R) = \frac{F}{2f^{3/2}} (\sinh \eta - \eta) \;,$$
\(ii) $f < 0$: $$\label{f<0}
r = \frac{F}{-2f} (1 - \cos \eta)\, , \quad
t - t_0(R) = \frac{F}{2(-f)^{3/2}} (\eta - \sin \eta) \;,$$
\(iii) $f = 0$: $$\label{f=0}
r = \left( \frac{9F}{4} \right)^{\frac13} (t - t_0(R))^{\frac23} \;,$$ where $t_0(R)$ is an integration constant. The above cases (i), (ii) and (iii) may be called “open,” “closed,” and “flat,” respectively, as in the FLRW universe. In these three cases, the density reads $$\label{rholtb}
8\pi G\rho = \frac{F'}{r' r^2} \;.$$ Apparently, the LTB solution includes three arbitrary functions, $f(R)$, $F(R)$, and $t_0(R)$. But the dynamical degrees of freedom are actually two because there remains freedom in the choice of the radial coordinate $R$.
The LTB solution is an extension of the FLRW solution and is often used as a model of an inhomogeneous universe (see Ref. [@krasin] for review), and this solution can be reduced to the FLRW solution. By choosing $f=0$ and $t_0=0$ (and $F=\frac49 R^3$ for convenience), we easily find that $r=a(t)R$ and the LTB solution is reduced to the spatially flat FLRW solution (\[FLRW\]). Furthermore, we can represent this solution in the form of the flat FLRW solution with its perturbations and can see that the arbitrary functions in the LTB solution correspond to those in the linear perturbation theory. To see this, we consider spherical linear perturbation of the spatially flat FLRW solution. Substituting $$f = f_{(1)} \;, \quad t_0 = t_{0\,(1)} \;, \quad
F = \frac49 R^3 + F_{(1)} \;, \quad r = aR + r_{(1)} \;,$$ into Eq. (\[dotr2\]), the linearized equation for $r_{(1)}$ can be obtained, where the quantities with subscript $(1)$ are treated as linear perturbation. Solving this equation, we obtain $$r_{(1)} = aR \left( \frac34 F_{(1)} R^{-3}
+\frac{9}{20} t^{\frac23} f_{(1)} R^{-2} +t^{-1} B
\right) \;,$$ where $B=B(R)$ is an integration constant. Actually $B$ is related to $t_{0\,(1)}$ by $B=-\frac23 t_{0\,(1)}$. It is easily seen by choosing $f=0$ and using Eq. (\[f=0\]), which reads $$r_{(1)} = aR \left( \frac34 F_{(1)} R^{-3} - \frac23 t^{-1} t_{0\,(1)}
\right) \;.$$ Then we can write “linearized LTB metric” in terms of $f_{(1)}$, $t_{0\,(1)}$, and $F_{(1)}$ as follows $$\begin{aligned}
\gamma_{RR} &=& 1+\frac32 (F_{(1)} R^{-2})'-f_{(1)}
+\frac{9}{10} t^{\frac23} (f_{(1)} R^{-1})'
-\frac43 t^{-1} (t_{0\,(1)} R)' \;, \nonumber \\
\label{ltb1st}
\gamma_{\theta\theta} &=& R^2 \left(
1+\frac32 F_{(1)} R^{-3}+\frac{9}{10} t^{\frac23} f_{(1)} R^{-2}
-\frac43 t^{-1} t_{0\,(1)} \right) \;.\end{aligned}$$ On the other hand, the solution of the linear theory (\[lif\]) gives $$\begin{aligned}
\gamma_{RR} &=& 1+\frac{20}{9}\Psi +2t^{\frac23}\Psi''
+2t^{-1}\Phi'' \;, \nonumber \\
\label{lif-spher}
\gamma_{\theta\theta} &=& R^2 \left(
1+\frac{20}{9}\Psi +2t^{\frac23}\Psi' R^{-1}
+2t^{-1}\Phi' R^{-1} \right) \;.\end{aligned}$$ Comparing Eqs. (\[ltb1st\]) and (\[lif-spher\]), we find the following relations $$F_{(1)} = \frac{40}{27} \Psi R^3 \;, \quad
f_{(1)} = \frac{20}{9} \Psi' R \;, \quad
t_{0\,(1)} = -\frac32 \Phi' R^{-1} \;.$$ This tells us that the arbitrary functions $f(R)$ and $t_0(R)$ correspond to the growing and the decaying modes, respectively, in the linear level. We choose $t_0(R) =0$ hereafter because contributions of the decaying mode are not taken into account throughout. Moreover, we know that $F_{(1)}$ and $f_{(1)}$ are related to each other by the function $\Psi$. This is because we eliminated a residual gauge freedom and thus fixed the gauge condition completely in the previous section. This fixing corresponds to determination of the choice of the radial coordinate $R$ in the spherical case.
To see the relation between the LTB solution and the second-order perturbation, we introduce $f_{(2)}$, $F_{(2)}$, and $r_{(2)}$ so that $$f = \frac{20}{9} \Psi' R + f_{(2)} \;, \quad
F = \frac49 R^3 + \frac{40}{27} \Psi R^3 + F_{(2)} \;, \quad
r = aR + r_{(1)} + r_{(2)} \;,$$ and make calculations in the same way. Here $f_{(2)}$, $F_{(2)}$, and $r_{(2)}$ should be regarded as $O(||\Psi||^{2})$ and $r_{(2)}$ is obtained by solving Eq. (\[dotr2\]) perturbatively. Then we obtain the part of $O(||\Psi||^{2})$ in $\gamma_{RR}$ and $\gamma_{\theta\theta}$. Comparing these $\gamma_{RR}$ and $\gamma_{\theta\theta}$ with the solution of the second-order theory (\[tomi\]), we find $$F_{(2)} = \frac{400}{243} \Psi^2 R^3 \;, \quad
f_{(2)} = \frac{100}{81} (\Psi'^2 R^2 -2\Psi \Psi' R) \;.$$ On the other hand, the LTB solution as an exact model is obtained by choosing $$\begin{aligned}
F &=& \frac49 R^3 + \frac{40}{27} \Psi R^3
+ \frac{400}{243} \Psi^2 R^3 \;, \\
f &=& \frac{20}{9}\Psi' R + \frac{100}{81}(\Psi'^2 R^2-2\Psi \Psi' R) \;.\end{aligned}$$ From Eq. (\[rholtb\]), the density contrast of the LTB solution reads $$\delta_{LTB} = \frac34 \,\frac{t^2 F'}{r' r^2} -1 \;.$$
Let us turn our attention to the approximation methods. If we impose $\Psi = \Psi(R)$, Eqs. (\[del-lin\]), (\[del-sec\]), (\[del-za\]) and (\[del-pza\]) become $$\begin{aligned}
\delta_{LIN} &=& -t^{\frac23} (\Psi'' + 2 \Psi' R^{-1}) \;, \\
\delta_{SEC} &=& -t^{\frac23} (\Psi'' + 2 \Psi' R^{-1})
+ \frac59 t^{\frac23} \left(\Psi'^2 + 6\Psi (\Psi''+2\Psi' R^{-1})
\right)
+ t^{\frac43} \left(
\Psi''^2 + \frac{20}{7}\Psi''\Psi'R^{-1} + \frac{24}{7}\Psi'^2 R^{-2}
\right) ,\end{aligned}$$ $$\begin{aligned}
\delta_{ZA} &=& \left(1 + \frac{t^{\frac23} \Psi'R^{-1}}
{1 + \frac{10}{9}\Psi} \right)^{-2}
\left(1 + \frac{t^{\frac23} \Psi''}
{1 + \frac{10}{9}\Psi} \right)^{-1}
- 1 \;, \hspace{40mm} \\
&& \nonumber \\
\delta_{PZA} &=& \left(1 + \frac{t^{\frac23} \Psi'R^{-1}
+ \frac59 t^{\frac23} \left(\Psi'^2 - 4 \Psi\Psi'R^{-1}\right)
- \frac37 t^{\frac43} \Psi'^2 R^{-2}}
{1 + \frac{10}{9}\Psi} \right)^{-2}
\nonumber\end{aligned}$$ $$\hspace*{15mm}
\times \left(1 + \frac{t^{\frac23} \Psi''
- \frac59 t^{\frac23} \left(3\Psi'^2 + 4 \Psi\Psi''\right)
- \frac37 t^{\frac43} \Psi'R^{-1} (2 \Psi'' - \Psi'R^{-1})}
{1 + \frac{10}{9}\Psi} \right)^{-1}
- 1 \;.$$ Peculiar velocity, which represents deviation of motion of dust shell from the Hubble expansion and is defined by $v \equiv {\dot r}-Hr$, (where $H \equiv {\dot a}/a$ is the Hubble parameter) is written as follows $$\begin{aligned}
v_{LIN} = v_{ZA} &=& \frac23 t^{\frac13} \Psi' \;, \\
v_{SEC} = v_{PZA} &=& \frac23 t^{\frac13} \Psi'
+ \frac{10}{27} t^{\frac13}(\Psi'^2 R - 4\Psi\Psi')
- \frac47 t \Psi'^2 R^{-1} \;.\end{aligned}$$ Note that $v_{LIN} = v_{ZA}$ and $v_{SEC} = v_{PZA}$ because, in the metric level, the Zel’dovich-type approximations coincide with the conventional ones.
Now the density contrast and the peculiar velocity of the LTB solution and the approximations are written in terms of only one function, $\Psi$. The function $\Psi$ should be determined from initial conditions so that the regularity conditions at $R=0$, i.e., $\Psi(R=0)=0$ and $\Psi'(R=0)=0$, are satisfied. Then the peculiar velocity at $R=0$ is always zero both in the LTB solution and in the approximations. Moreover, if $\Psi$ is taken so that $\Psi \propto R^2$ near $R=0$, the peculiar velocity near $R=0$ is proportional to $R$, $v \propto R$.
Comparison of LTB solution and approximations {#comparison}
=============================================
Let us proceed to comparison of the LTB solution and the approximations. As mentioned in the section \[perturb\], $\delta_{ZA}$ and $\delta_{PZA}$ include $\delta_{LIN}$ and $\delta_{SEC}$, respectively, when the density contrast is small. As for the peculiar velocity, ZA and PZA are coincident with LIN and SEC, respectively. Moreover, expanding the density contrast of ZA in the following form $$\label{expandza}
\delta_{ZA} \simeq -t^{\frac23} (\Psi'' + 2 \Psi'R^{-1})
+ \frac{10}{9} t^{\frac23} \Psi (\Psi'' + 2 \Psi'R^{-1})
+ t^{\frac43} (\Psi''^2 + 2\Psi''\Psi'R^{-1} + 3\Psi'^2 R^{-2})
+ O(||\Psi||^3) \;,$$ it is found that ZA includes second-order (and higher) terms partially in the expression of the density contrast. Thus we can expect that $\delta_{ZA}$ is as accurate as $\delta_{SEC}$ at late time.
In order to investigate relative accuracy of the approximations quantitatively, we compare the LTB solution and the approximations by using some specific initial conditions. Here initial conditions can be completely fixed by giving an initial density profile $\delta_{in}(R)$. For simplicity, we assume that $\delta_{in}(R)$ is a first-order quantity. Then the arbitrary function $\Psi$ is determined by the relation $$\label{deltain}
\delta_{LIN} \biggm|_{t=t_{in}} = -(\Psi''+ 2\Psi' R^{-1})
= \delta_{in}(R)$$ with normalization $t_{in}=1$. ($t_{in}$ may be regarded as the decoupling time in the history of the expanding universe.) Eq. (\[deltain\]) is solved to determine the function $\Psi$ with the boundary conditions $\Psi(R=0)=0$ and $\Psi'(R=0)=0$.
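For readers who wish to reproduce this step numerically, the following is a minimal sketch (ours, with an illustrative profile; not part of the original analysis) that integrates $(R^2\Psi')' = -R^2\,\delta_{in}$ by quadrature:

\begin{verbatim}
# Minimal sketch: solve Psi'' + 2 Psi'/R = -delta_in(R) with
# Psi(0) = Psi'(0) = 0, via (R^2 Psi')' = -R^2 delta_in.
import numpy as np

def psi_from_delta(delta_in, R_max, n=100000):
    r = np.linspace(1e-8, R_max, n)   # avoid division by zero at R = 0
    dr = r[1] - r[0]
    psi_prime = -np.cumsum(r**2 * delta_in(r)) * dr / r**2
    psi = np.cumsum(psi_prime) * dr   # crude left Riemann sums suffice here
    return r, psi, psi_prime

# Illustrative profile of the type considered below:
eps, R0 = 1.0e-3, 1.0
r, psi, dpsi = psi_from_delta(lambda x: eps*(1 + x/R0)*np.exp(-x/R0), 10.0)
\end{verbatim}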
Here we consider the following two cases: $$\label{former_case}
\delta_{in}(R) = \epsilon \left( 1+\frac{R}{R_0} \right)
\exp \left( -\frac{R}{R_0} \right)$$ and $$\label{later_case}
\delta_{in}(R) = \epsilon \left[
1+\frac{R}{R_0}-\left(\frac{R}{R_0}\right)^2
\right]
\exp \left( -\frac{R}{R_0} \right) \;,$$ where $\epsilon$ is a small constant $(|\epsilon| \ll 1)$ which represents the amplitude of an initial density perturbation, and $R_0$ is a comoving scale of the fluctuations. The former case can be regarded as smoothing out the top-hat model, while in the latter case, if $\epsilon <0$, the neighborhood of $R=0$ is an underdense region (void) and the outside is overdense, and then the shell-crossing will occur. (If $\epsilon >0$, the latter case shows behavior similar to the top-hat model, like the former one.) For both of the cases, we can evaluate $\delta$ at the center of the fluctuations $(R=0)$ from the LTB solution and the approximation methods in the following form $$\label{delltbR=0}
\delta_{LTB} (R=0) = \left\{ \begin{array}{ll}
\frac{\displaystyle 9}{\displaystyle 2} \;\frac{\displaystyle (\eta -
\sin \eta)^2}{\displaystyle (1 - \cos \eta)^3} - 1 \;,
& \quad \mbox{for} \quad \epsilon > 0 \;, \\
\frac{\displaystyle 9}{\displaystyle 2} \;\frac{\displaystyle (\eta -
\sinh \eta)^2}{\displaystyle (\cosh \eta - 1)^3} - 1 \;,
& \quad \mbox{for} \quad \epsilon < 0 \;,
\end{array} \right.$$ with $$t = \left\{ \begin{array}{ll}
\frac{\displaystyle 9}{\displaystyle 20} \sqrt{\frac{\displaystyle
3}{\displaystyle 5}} \,\epsilon^{-\frac32} \,(\eta - \sin \eta)
\;,
& \quad \mbox{for} \quad \epsilon > 0 \;, \\
\frac{\displaystyle 9}{\displaystyle 20} \sqrt{\frac{\displaystyle
3}{\displaystyle 5}} \,(-\epsilon)^{-\frac32}
\,(\sinh \eta - \eta) \;,
& \quad \mbox{for} \quad \epsilon < 0 \;,
\end{array} \right.$$ and $$\begin{aligned}
\label{dellinR=0}
\delta_{LIN} (R=0) &=& \epsilon \, t^{\frac23} \;, \\
\label{delsecR=0}
\delta_{SEC} (R=0) &=& \epsilon \, t^{\frac23}
+ \frac{17}{21} \,\epsilon^2 \,t^{\frac43} \;, \\
\label{delzaR=0}
\delta_{ZA} (R=0) &=& \left(\, 1 - \frac{\epsilon}{3} \,t^{\frac23}\,
\right)^{-3} -1 \;,\\
\label{delpzaR=0}
\delta_{PZA} (R=0) &=& \left(\, 1 - \frac{\epsilon}{3} \,t^{\frac23}
- \frac{\epsilon^2}{21} \,t^{\frac43}\,
\right)^{-3} -1 \;.\end{aligned}$$ These expressions do not depend on the details of initial density profiles. It is of essence in the calculation that $\delta_{in}\simeq \epsilon$ near $R=0$, and $\epsilon >0$ and $\epsilon <0$ correspond to $f<0$ and $f>0$ at $R=0$, respectively. The above results are described in Figures 1 and 2, where Eqs. (\[delltbR=0\])-(\[delpzaR=0\]) are plotted as functions of $\delta_{LTB}(R=0)$. Figure 1, which represents the collapse case, tells us that ZA is more accurate than LIN and PZA is more accurate than SEC in the whole region $\delta_{LTB}(R=0) >0$. (Figure 1 shows only the region $0< \delta_{LTB}(R=0) <1$, but the tendency shown in Figure 1 does not change in the denser region $\delta_{LTB}(R=0) >1$.) We also find that ZA becomes more accurate than SEC at late time when $\delta_{LTB}$ is larger than about $0.5$. This is understood by comparing Eq. (\[delsecR=0\]) and the expanded form of Eq. (\[delzaR=0\]), $$\label{expandza_origin}
\delta_{ZA}(R=0) \simeq \epsilon \, t^{\frac23}
+ \frac23\,\epsilon^2 \, t^{\frac43} + O(\epsilon^3) \;.$$ (This form is also obtained from Eq. (\[expandza\]).) We see from Eqs. (\[delsecR=0\]) and (\[expandza\_origin\]) that $\delta_{ZA}$ is smaller than $\delta_{SEC}$ at early time ($\epsilon t^{2/3} \ll 1$) due to the lack of the terms in $O(\epsilon^{2})$. In this sense, $\delta_{ZA}$ is less accurate than $\delta_{SEC}$ when $\epsilon t^{2/3} \ll 1$. However, due to the existence of the singularity at $\epsilon t^{2/3} = 3$ in $\delta_{ZA}$, $\delta_{ZA}$ becomes more accurate than $\delta_{SEC}$ at late time. The existence of this singularity in $\delta_{ZA}$ essentially determines the asymptotic behavior of $\delta_{ZA}$. Indeed, the exact solution $\delta_{LTB}$ for $\epsilon > 0$ in Eq. (\[delltbR=0\]) has a pole of order three at $\epsilon t^{2/3} = 3 (3\pi/2)^{2/3}/5 \sim 1.7$ ($\eta = 2\pi$). This pole corresponds to the crunching time at $R=0$, and the singularity occurs at this time.
From Eqs. (\[delltbR=0\])-(\[delpzaR=0\]), we can also evaluate accuracy of the approximations quantitatively at the turnaround time $\eta = \pi$ ($\epsilon t^{2/3} = 3 (9\pi^{2}/2)^{1/3}/10 \sim 1.1$), though it is not drawn in Figure 1. Here the turnaround time is characterized by ${\dot r}=0$, i.e., the maximum expansion. Physically speaking, the density fluctuation begins to collapse due to gravitational instability, overcoming the cosmic expansion at the turnaround time. At the turnaround time, $\delta_{LTB}$ becomes $4.6$. Relative to this $\delta_{LTB}$, $\delta_{LIN}$, $\delta_{SEC}$, $\delta_{ZA}$ and $\delta_{PZA}$ reach about $23\%$, $43\%$, $60\%$ and $84\%$ of it, respectively. It is natural that $\delta_{PZA}$ is more accurate than $\delta_{ZA}$ from the viewpoint of the singularity at the crunching time. $\delta_{PZA}$ also has a pole of order $3$ at $\epsilon t^{2/3} = (\sqrt{133} -7)/2 \sim 2.3$. This crunching time is nearer to the real crunching time $\sim 1.7$ than that of $\delta_{ZA}$.
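These percentages are easy to check numerically from Eqs. (\[delltbR=0\])-(\[delpzaR=0\]); a short script (ours, not part of the original analysis) reproduces them:

\begin{verbatim}
# Check of the turnaround figures: at eta = pi,
# eps*t^(2/3) = (3/10)*(9*pi**2/2)**(1/3) and delta_LTB = 9*pi**2/16 - 1.
import math

x = 0.3 * (9 * math.pi**2 / 2) ** (1 / 3)    # eps * t^(2/3) at turnaround
d_ltb = 9 * math.pi**2 / 16 - 1              # ~ 4.6
d_lin = x
d_sec = x + (17 / 21) * x**2
d_za  = (1 - x / 3) ** (-3) - 1
d_pza = (1 - x / 3 - x**2 / 21) ** (-3) - 1

for name, d in [("LIN", d_lin), ("SEC", d_sec), ("ZA", d_za), ("PZA", d_pza)]:
    print(name, round(100 * d / d_ltb))      # prints 23, 43, 60, 84
\end{verbatim}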
From Figure 2, which shows the void case, we find that PZA gives the best fit at early time before $\delta_{LTB}\simeq -0.7$, while ZA works best at late time when $\delta_{LTB}$ is smaller than about $-0.7$. At late time, PZA gives bad results. This is due to the difference in sign between the first-order (-$\epsilon t^{2/3} /3 >0$) and the second-order (-$\epsilon^2 t^{4/3} /21 <0$) terms when $\epsilon <0$. The same feature also appears in SEC, but at earlier time. However, in PZA, this difference is more serious than in SEC. In SEC, $\delta_{SEC}$ eventually grows as $\frac{17}{21}\epsilon^2 t^{4/3} >0$, while in PZA, the sign difference makes $\delta_{PZA}$ diverge at a finite time $-\epsilon t^{2/3} = (7 + \sqrt{133})/2$. This divergence is an apparent one which is caused by the formalization of PZA. Indeed, we can easily see from Eq. (\[delltbR=0\]) that the exact solution $\delta_{LTB}$ has no pole in $t>0$ (i.e. there is no singularity) and approaches $-1$ as $\sim t^{-1}$. Though there is no singular point in $\delta_{LIN}$ except $t = \infty$, $\delta_{LIN}$ decreases without limit as time increases because there is no physics to stop this decrease of the energy density at this order. Then, only $\delta_{ZA}$ predicts the true asymptotic value of $\delta$, without spurious growth or apparent singularities. But $\delta_{ZA}$ approaches $-1$ as $t^{-2}$. This difference is also seen in Figure 2. In the Newtonian case, detailed discussions on the void can be seen in Ref. [@sasha]. Our relativistic results up to now are quite similar to Newtonian ones, which are given in Refs. [@bouchet; @munshi; @sasha].
Figures 3 (a) and 4 (a) give the density contrast as a function of $R/R_0$ when $\epsilon = 1.0\times 10^{-3}$ and $t=2.0\times 10^4$, and $\epsilon = -1.0\times 10^{-3}$ and $t=3.7\times 10^4$ with the initial density profile (\[former\_case\]), respectively. These two figures show that the difference between the LTB solution and the approximations is largest at $R=0$ in the former case. Hence, it is sufficient to consider the difference at $R=0$ when we examine the accuracy of the approximations in the former case (\[former\_case\]).
We also see the evolution of the peculiar velocity with the initial density profile (\[former\_case\]) in Figures 3 (b) and 4 (b). Here we consider the peculiar velocity normalized by the Hubble flow $Hr$. It will be convenient to use it to see the deviation of the model from the FLRW universe at the metric level. For example, the normalized peculiar velocity $v/Hr =-1$ at the turnaround time. Figures 3 (b) and 4 (b) show the normalized peculiar velocity corresponding to Figures 3 (a) and 4 (a). Although these figures show that the normalized peculiar velocity $v/Hr$ is not zero at $R=0$, we must note that this is due to our normalization. As mentioned in the last section, the peculiar velocity must behave $\sim R$ near $R=0$. On the other hand, $Hr$ also behaves $\sim R$ near $R=0$. Hence our normalized peculiar velocity does not vanish at $R=0$ due to the normalization. It is also noted that the peculiar velocity obtained from the Zel’dovich-type approximations is the same as the one obtained from the conventional approximations, as mentioned in the previous section. Furthermore, we find from Figures 3 (b) and 4 (b) that the deviation from the FLRW universe is maximum near $R=0$ for the initial profile (\[former\_case\]). This also shows that it is sufficient to consider the difference at $R=0$ when we examine the accuracy of the approximations in the former case (\[former\_case\]), as mentioned above.
On the other hand, for the initial density profile given by Eq. (\[later\_case\]), it cannot be said that the largest deviation of the approximations from the LTB solution occurs at $R=0$. Figures 5 (a) and 6 (a) are for this initial profile when $\epsilon = -1.0\times 10^{-3}$ and $t=2.0\times 10^5$, and $\epsilon = -1.0\times 10^{-3}$ and $t=3.0\times 10^5$, respectively. For this initial density profile, the shell-crossing singularity will occur. The tendency toward shell-crossing can be seen from the peculiar velocity in Figures 5 (b) and 6 (b), where the profiles of the normalized peculiar velocity for this case are drawn. In Figures 5 (b) and 6 (b), there exists a $v = 0$ point at $R/R_0 \sim 2.5$. This point, which is denoted by $R = R_{c} \ne 0$ hereafter, is a boundary where the universe is locally "open" ($f>0$) and "closed" ($f<0$), and then $f=0$ at the point. The peculiar velocity in the void region $R<R_{c}$ is positive, while that in the closed region $R>R_{c}$ is negative. Then we can expect that the shell-crossing of the dust matter will form at $R=R_{c}$ within a finite time.
Indeed, one can see from Eqs. (\[dotr2\]) and (\[f=0\]) that the shell-crossing, which is characterized by a finite radius at which $r'$ vanishes [@Lake], will occur at the radius $R=R_{c}$. From Eq. (\[f=0\]), which is the solution when $f=0$, we know $$\label{r_c}
r_c = \left(\frac{9 F_c}{4}\right)^{\frac13} t^{\frac23} \;.$$ (Subscript $c$ denotes value at $R=R_c$.) Differentiating Eq. (\[dotr2\]) with respect to $R$ and using Eq. (\[r\_c\]), we obtain the equation for $r'_c$. Integrating this equation, one finds $$\label{rcdash}
r_{c}' = \frac{3F_{c}'}{4}\left(\frac{4}{9F_{c}}\right)^{\frac23}
t^{\frac23} + \frac{3}{5} \left(\frac{4}{9F_{c}}\right)^{\frac13}
f_{c}' \, t^{\frac43} + C t^{-\frac13} \;,$$ where $C$ is an integration constant and is not essential in our argument. Here, $r'$ must be positive initially if $r$ is a monotonically increasing function of $R$ which is in our case. This means that the first term (cooperate with the third term) in Eq. (\[rcdash\]) must dominate on the initial surface. However, since $f_{c}'<0$ at $R = R_{c}$, $r'_c$ must vanish within a finite time. Hence, at $R = R_{c}$, the shell-crossing singularity will occur. (More generic arguments about the occurrence of shell-crossing singularity can be seen in Ref. [@Lake] and the shell-crossing may occur in the region $f<0$ at first.)
It should be noted that, in Figures 5 (a) and 6 (a), $\delta_{ZA}$ and $\delta_{PZA}$ take the same value as that of the LTB solution at $R=R_{c}$ where the shell-crossing will occur. Then we must say that these figures show that the Zel’dovich-type approximations are not necessarily inaccurate even when the shell-crossing is occurring. Let us consider the reason here. In fact, the deviation from the background Hubble expansion is locally one-dimensional at the point. The definition of “locally one-dimensional deviation” we adopt here is that two of the eigenvalues of the peculiar deformation tensor $V^i_{\ j} \equiv K^i_{\ j} - H \delta^i_{\ j} =
\frac12 \gamma^{ik} \dot{\gamma_{jk}}$ are zero [@kasai93]. According to the definition, let us show the local one-dimensionality at $R=R_c \ne 0$. From Eqs. (\[LTB\_metric\]) and (\[r\_c\]), $$V^R_{\ R} \biggm|_{R=R_c} =
\frac{\dot r'_{\ c}}{r'_{\ c}} - \frac{\dot a}{a} \ne 0 \;, \quad
V^{\theta}_{\ \theta} \biggm|_{R=R_c} =
V^{\phi}_{\ \phi} \biggm|_{R=R_c} =
\frac{\dot r_c}{r_c} - \frac{\dot a}{a} = 0 \;;
\quad v_{LTB} \biggm|_{R=R_c} =0 \;.$$ Since $V^i_{\ j}$ is diagonal in the spherical case, this means that the deviation at $R=R_c$ is locally one-dimensional. It is known that the Zel’dovich-type approximations become exact when the deviation is locally one-dimensional [@kasai95; @kasai93]. In this argument, the origin $R=0$ must be excluded because all components of the peculiar velocity vanish at the origin. Thus we are led to a significant consequence: at the points where $f=0$, the Zel’dovich-type approximations coincide with the exact LTB solution, i.e., $$\delta_{LTB} = \delta_{ZA} = \delta_{PZA} \quad \mbox{and} \quad
v_{LTB} = v_{ZA} = v_{PZA} = 0 \;.$$ Note that this consequence is not limited to the specific initial density profile (\[former\_case\]) nor (\[later\_case\]).
Turning to Figures 5 (a) and 6 (a), we see that the coincidence of the density contrast at $R/R_0 \sim 2.5$ ($R = R_{c}$) contributes to the accuracy of the Zel’dovich-type approximations, and $\delta_{ZA}$ and $\delta_{PZA}$ give a good fit around $R/R_0 \sim 2.5$. This is the reason the Zel’dovich-type approximations do not necessarily give bad results even when the shell-crossing is occurring.
Summary and Discussions {#summary}
=======================
We have tested the relativistic perturbative approximations to gravitational instability with the LTB solution. It has been shown that the Zel’dovich-type approximations give higher accuracy than the conventional ones in the quasi-nonlinear regime $|\delta| \sim 1$ within the general relativistic framework. Our results are partly similar to the Newtonian ones, but our consideration is more generic. Especially, we considered some cases in which matter distribution is inhomogeneous, and found that the Zel’dovich-type approximations are not necessarily inaccurate even when the shell-crossing is occurring. Of course, the occurrence of the shell-crossing shows the breakdown of our treatment. However, this is due to the failure of our description of the matter as dust rather than the failure of the Zel’dovich-type approximations.
Indeed, one of the cases considered in the previous section includes the $f=0$ point, where the universe is locally "open" inside and locally "closed" outside, and the shell-crossing will occur at this radius. We have seen that, in general, at the $f=0$ points (except the origin $R=0$), the deviation from the FLRW model is locally one-dimensional and the Zel’dovich-type approximations become exact. And in the neighborhood of such points, we can expect that the Zel’dovich-type approximations are particularly accurate. The case considered here is exactly such an example. It should also be noted that the "one-dimensionality" which makes the Zel’dovich-type approximations exact means not only globally plane-symmetric but also locally one-dimensional deviations: such situations appear even in the spherically symmetric model.
To discuss the applicable range of the Zel’dovich-type approximations, we reconsider the density contrast at $R=0$ in the collapse case. At the turnaround time, the accuracy of the Zel’dovich-type approximations already begins to degrade, i.e., $\delta_{PZA}$ is about $84\%$ of $\delta_{LTB}$ and $\delta_{ZA}$ is about $60\%$. The inaccuracy grows rapidly beyond the turnaround time. In this sense, the turnaround epoch, when the peculiar velocity is as large as the Hubble expansion, is one criterion for the applicable range of the Zel’dovich-type approximations. However, this criterion might not be practical, because one cannot know the correct turnaround time in general, while we have been able to read it off from the exact solution in our case. Instead of the turnaround time, the Zel’dovich-type approximations approximately indicate the crunching time through the singularity in the density contrast. Furthermore, we have also seen that PZA tells us this crunching time more accurately than ZA in our spherical model. Then we may be able to estimate the approximate turnaround time as half of the crunching time predicted by PZA.
It is said that the Zel’dovich approximation predicts pancake formation in the gravitational collapse of dust [@zel70]. But the pancake forms beyond the turnaround time, and thus the accuracy of the Zel’dovich approximation is not ensured at that epoch. If we try to examine the final stage of the collapse quantitatively, we will need to develop a new approximation scheme which remains accurate even beyond the turnaround epoch.
M. M. would like to thank A. Hosoya for continuous encouragement, H. Ishihara for helpful discussions, and M. Morikawa and A. Yoshisato for valuable remarks.
[99]{}
P. J. E. Peebles, [*The Large-Scale Structure of the Universe*]{} (Princeton University Press, Princeton, 1980).
E. M. Lifshitz, J. Phys. (Moscow) [**10**]{}, 116 (1946).
W. H. Press and E. T. Vishniac, Astrophys. J. [**239**]{}, 1 (1980).
J. M. Bardeen, Phys. Rev. D [**22**]{}, 1882 (1980); H. Kodama and M. Sasaki, Prog. Theor. Phys. Suppl. [**78**]{}, 1 (1984).
K. Tomita, Prog. Theor. Phys. [**37**]{}, 831 (1967).
Ya. B. Zel’dovich, Astron. Astrophys. [**5**]{}, 84 (1970).
T. Buchert, Mon. Not. R. Astron. Soc. [**254**]{}, 729 (1992).
T. Buchert and J. Ehlers, Mon. Not. R. Astron. Soc. [**264**]{}, 375 (1993).
T. Buchert, Mon. Not. R. Astron. Soc. [**267**]{}, 811 (1994).
F. R. Bouchet, S. Colombi, E. Hivon, and R. Juszkiewicz, Astron. Astrophys. [**296**]{}, 575 (1995).
K. M. Croudace, J. Parry, D. S. Salopek, and J. M. Stewart, Astrophys. J. [**423**]{}, 22 (1994); D. S. Salopek, J. M. Stewart, and K. M. Croudace, Mon. Not. R. Astron. Soc. [**271**]{}, 1005 (1994).
M. Kasai, Phys. Rev. D [**52**]{}, 5605 (1995).
H. Russ, M. Morita, M. Kasai, and G. Börner, Phys. Rev. D [**53**]{}, 6881 (1996).
S. Matarrese and D. Terranova, Mon. Not. R. Astron. Soc. [**283**]{}, 400 (1996).
P. Szekeres, Commun. Math. Phys. [**41**]{}, 55 (1975).
A. Krasiński, [*Inhomogeneous Cosmological Models*]{} (Cambridge University Press, Cambridge, 1997).
D. Munshi, V. Sahni, and A. A. Starobinsky, Astrophys. J. [**436**]{}, 517 (1994).
V. Sahni and S. Shandarin, Mon. Not. R. Astron. Soc. [**282**]{}, 641 (1996).
V. Sahni and P. Coles, Phys. Rep. [**262**]{}, 1 (1995).
A. Yoshisato, T. Matsubara, and M. Morikawa, Astrophys. J., submitted. (preprint astro-ph/9707296)
L. D. Landau and E. M. Lifshitz, [*The Classical Theory of Fields*]{} (Pergamon Press, Oxford, 1975).
C. Hellaby and K. Lake, Astrophys. J. [**290**]{}, 381 (1985).
M. Kasai, Phys. Rev. D [**47**]{}, 3214 (1993).
| |
- China has the biggest population, over 1,300,000,000. India is the next one.
- If there were 100 people in a village, one person would represent 70,500,000.
- There would be 60 people from Asia, 15 from Africa, 11 from Europe, 8 from South and Central America (including Mexico), 5 from Canada and the United States, and 1 from Oceania.
- There are almost 6,000 languages.
- About half of the population speaks one of the 8 most common languages; the other half speaks one of the thousands of other languages.
- In the global village 16 speak Mandarin, 9 Hindi, 9 English, 7 Spanish, 4 Bengali, 4 Arabic, 3 Portuguese, and 3 speak Russian.
- In the global village 9 are under 5, 10 between 5 and 9, 18 between 10 and 19, 17 are between 20 and 29, 15 are between 30 and 39, 12 are between 40 and 49, 9 are between 50 and 59, 6 are between 60 and 69, 3 are between 70 and 79, and 1 is over 79.
- The population grows by about 12% every decade.
- The populations of many countries are shrinking, and those countries are getting older.
- In the global village 33 are Christians, 22 are Muslims, 15 are non-religious or atheist, 14 are Hindus, 9 practice Shamanism, animism, and other folk religions, 5 are Buddhists, 2 belong to other global religions such as Judaism, Confucianism, Shintoism, Sikhism, Jainism or the Baha'i faith.
- In the village there would be 31 sheep and goats, 23 cows, bulls and oxen, 15 pigs, 3 camels, 2 horses, 700 chickens (wow).
- 30 people in the village do not have a reliable source of food and are hungry some or all of the time. 17 people are severely undernourished and are always hungry. So 47 people in the village do not have food security, while 53 people are food secure.
- 87 villagers have a safe source of water near their home or from their tap, though 13 do not and must spend a large part of each day collecting it. Most of the water collecting is done by women and girls.
- 62 of the villagers have access to adequate sanitation, while 38 do not and are at risk of disease. Almost half of the world does not have adequate sanitation. 68 people breathe clean air, while 32 breathe air that is unhealthy because of pollution.
- 36 villagers are between the ages of 5 and 24, but only 30 of them go to school, with one teacher for those 30. The other 6 kids have to go to a job or stay at home and work. Some children might even serve in the military.
- 14 people cannot read or write. Literacy is still more common among males than among females.
- 63 adults can work, but 6 people can't work or can't find work. There are also 6 people who are retired.
- If all the money were split evenly, each person would get $10,300 USD per year.
- In the village, 10 people have 85% of the world's wealth; each of these 10 has more than $87,500 per year. The poorest 10 have an average of about $6 USD per day. The other 80 people are somewhere in the middle.
- 50% of the people in the world make about $6 USD per day.
- Among the village's possessions there are 50 radios, 45 televisions, 103 telephones (more than 86 of them cell phones), and 28 computers. There are also 2 trucks, 10 automobiles, and 20 bicycles.
- 76 people have electricity and 24 do not. Of those 76 who have electricity, most use it only for light at night. But some of the villagers have other electricity needs: they need energy to manufacture and transport goods.
- In 1850 the average life expectancy was 38. By 1900 it was 47. Today a newborn can expect to live to 68. Some of the villagers will live longer lives and some will live shorter ones. Malaria is a widespread and serious disease; about 6% of the world gets malaria every year. 80 people in the village have received vaccinations.
- Around the year 2150, there will be 250 people in the village. Many experts think that 250 is the maximum number of people the village can sustain. There might be shortages of food, shelter, and other resources.
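A minimal sketch of the scaling arithmetic behind these village numbers, assuming the world population of roughly 7.05 billion implied by the "1 villager = 70,500,000 people" figure:

```python
WORLD_POPULATION = 7_050_000_000  # assumed: matches "1 villager = 70,500,000 people"
VILLAGE_SIZE = 100

people_per_villager = WORLD_POPULATION / VILLAGE_SIZE

def villagers(group_population: int) -> int:
    """Scale a real-world group down to the 100-person village."""
    return round(group_population / people_per_villager)

print(people_per_villager)       # 70,500,000.0 people per villager
print(villagers(1_300_000_000))  # China's ~1.3 billion -> about 18 villagers
```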
Reflection Paragraph
After seeing how badly some people live and how spoiled others are, I was very surprised. The US is a spoiled country. Almost every person in America has somewhere to live, clean water, and somewhere to get a meal every day. In some countries you have to live in an apartment that you share with 6-10 people. Imagine living there! A lot of people who don't have a good and safe water and food supply often get malaria. Most people in the world (outside America) usually own possessions that are cheap. Another thing is that a lot of kids and adults don't get the right education, don't get enough education, or don't get any education at all and have to work for their family. I also learned that there are over 6,000 languages on Earth. That is a lot of languages. Another thing that I learned is that most people on Earth are Christians, though Islam is still a very big religion. | https://paulsolarz.weebly.com/martin/if-the-world-were-a-village |
The Three Little Kittens LOST their ALPHABET MITTENS: Virtual Book Club for Kids featuring Paul Galdone
This month the Virtual Book Club for Kids is featuring books by Paul Galdone. He is known for creatively illustrating many classic children’s tales and rhymes such as The Three Billy Goats Gruff, The Three Little Pigs, Jack and the Beanstalk, The Little Red Hen, etc.
The book we chose to create some activities for was The Three Little Kittens. This has always been one of my favorite nursery rhymes … probably because I really enjoyed the way my mom read it to me when I was a child. I loved the way she used different inflections in her voice to show the emotion and remember it fondly to this day. Therefore, I gravitated toward that book for this month’s Virtual Book Club so I could share my love for the rhyme with my own daughter.
For these activities I cut out 26 pairs of mittens (52 total) out of card stock paper and on each pair I wrote out the uppercase letter on one mitten and the lowercase letter on other … creating 26 pairs of Alphabet Mittens!
After reading the story with my four-year old daughter, I showed her the pairs of mittens and then we had fun with several different ways to play and learn with the Alphabet Mittens:
The Three Little Kittens LOST their Alphabet Mittens in the Sensory Bin
I hid them in a sensory bin I had created and asked my daughter to identify the letters as she found them in the bin. (This particular bin has thistle seed and some gems, but any dry sensory bin would work great)
Hide and Go Seek Alphabet Mittens
My daughter and I took turns hiding the alphabet mittens around the house and the other would go and find the missing mittens. After we believed we found them all, we would come back to the table and line them up in alphabetical order to see if any were still missing!
Hanging them Out to Dry
Just as the kittens do in the story, we hung up our mittens. With each Alphabet Mitten my daughter hung up on our makeshift “clothesline” with a clothespin I would have her identify the letter. This was also a great fine-motor skill strengthening exercise. (and could also be a great way to work on spelling or sight words!)
Other ways you could use the Alphabet Mittens:
- Practice spelling words or one’s name
- Play a game of Memory to find the lower and upper case matches
- Play Go Fish! using the Upper and lower case pairs
- Take them to a nature center or park and have the child find them in alphabetical order along a trail like a scavenger hunt.
Instead of ALPHABET MITTENS, you could also make:
- Color mittens
- Number mittens (to identify numbers or to mix and match for addition, subtraction, multiplication, & division facts)
- Shape mittens
- Spelling, Sight, or Vocabulary Word mittens
Now let’s check out what other great activities the Virtual Book Club for Kids came up with to go along with books by Paul Galdone!
Laura Hutchison
Love the mitten match!

This is such a lovely activity! Thanks for the great idea: we might try the shape mittens. I love the picture of her with the sensory bin! <3 (Laura)

What a great activity to go with a classic story! Thank you for sharing at Sharing Saturday! (Carrie)

What a wonderful activity!! Thanks for the idea! | http://blog.playdrhutch.com/2014/02/13/three-little-kittens-lost-alphabet-mittens-virtual-book-club-kids-featuring-paul-galdone/ |
Minnetrista is a museum and cultural center that serves the people of East Central Indiana. Minnetrista offers exhibits and programs for children, families, adults, scouts, teachers and students that focus on nature, history, gardens, and art.
Minnetrista's 40-acre campus includes beautiful gardens, a modern museum facility, an historic home, Nature Area, numerous sculptures, and a portion of the White River Greenway. More than 10,000 square feet of behind-the-scenes space at Minnetrista is devoted to the preservation of the artifacts and archival material that document the history of East Central Indiana.
Mission:
Minnetrista's mission is to provide lifelong learning experiences in art, history and science. It features nationally touring science exhibits in addition to a number of exhibitions showcasing the art and history of East Central Indiana. | https://bestthingsin.com/place/minnetrista-muncie-in.html |
With a focus on languages, semiotics and linguistic structures, LINGUA is a series of zines that visualize and write about linguistic data, namely, phonology, morphology, semantics, syntax and pragmatics.
The Phonology zine aims to create a sign system that visualizes some examples of contrastive sounds that make minimal pairs in English. The sign system is then applied to another language I am learning – Japanese. By examining minimal pairs in English, an English speaker may be able to use this system to decode the pronunciations of some Japanese words.
The Morphology zine uses the example of receipts to visually define “morpheme”, which is the smallest linguistic unit that carries meaning. One or more morphemes can form a word. Similar to the structure of a word, a receipt is a collection of one or multiple smallest actions or objects, and combining several small transactions together amounts to a social scenario that tells a narrative.
The Semantics zine addresses the lexical gap between English and Chinese. A lexical gap is when there isn't a word for something in a language. Sometimes words don't exist in a language because they are specific to another culture, and there isn't as much need for them elsewhere. A native speaker of one language may not find this troublesome, as s/he rarely attempts to address things that are not in the lexicon. A multilingual speaker, however, may find that the absence of certain words results in some sort of confusion. We encounter lexical gaps whenever there is a need to translate one language into another. When we can't find a word in the target language, we might use a longer phrase to describe it. In this publication, I try to address the lexical gap between English and Chinese. While the lexical gap might be a purely linguistic concept, it's undoubtedly influenced by culture.
On the personal level, I see myself residing in this lexical gap. The moments I blank out, I stumble over words, I mispronounce something, I make syntactic errors, I get lost in translation, I feel culturally disconnected...I am very much living in the gap in between cultures and languages.
And the Syntax zine uses a diagram to define grammatical nonsense. Syntax is the component of grammar that deals with how words and phrases are combined into larger phrases. Although it's important to be grammatically correct to convey meanings precisely, we are able to understand many ungrammatical sentences as well. Meanwhile, it's also possible for a sentence to be completely grammatical while being meaningless.
The page provides the exchange rate of 400000 Colombian Peso (COP) to Venezuelan bolívar soberano (VES), sale and conversion rate. Moreover, we added the list of the most popular conversions for visualization and the history table with exchange rate diagram for 400000 Colombian Peso (COP) to Venezuelan bolívar soberano (VES) from Sunday, 21/04/2019 till Sunday, 14/04/2019. Latest update of 400000 Colombian Peso (COP) exchange rate.
Convert currency 400000 COP to VES. How much is 400000 Colombian Peso to Venezuelan bolívar soberano? — 519590.71 Venezuelan bolívar soberano.
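As a rough sketch of the arithmetic: the rate below is simply the one implied by this page's snapshot (519,590.71 / 400,000 ≈ 1.2990 VES per COP), not a live quote.

```python
COP_TO_VES = 519_590.71 / 400_000  # rate implied by this snapshot, ~1.2990

def cop_to_ves(amount_cop: float) -> float:
    """Convert Colombian Pesos to Venezuelan bolívares soberanos at a fixed rate."""
    return amount_cop * COP_TO_VES

print(f"{cop_to_ves(400_000):,.2f} VES")  # 519,590.71 VES
```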
400000 Colombian Peso to Venezuelan bolívar soberano, convert 400000 COP in VES. | https://cop.currencyrate.today/ves/400000 |
The center of a circle has coordinates (6, -5). The circle is reflected about the line y = x. What are the x, y coordinates of the center of the image circle? State the x coordinate first. | https://web2.0calc.com/questions/help-sorry-for-so-many-questions |
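For the question above: reflection across the line $y = x$ swaps the coordinates of a point, $(x, y) \mapsto (y, x)$, so $(6, -5) \mapsto (-5, 6)$. The center of the image circle is therefore $(-5, 6)$: x coordinate $-5$, y coordinate $6$. The radius is unchanged by the reflection.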
4003.26 miles roughly
The total distance from England to US is 4,333 miles. This is equivalent to 6,974 kilometers or 3,766 nautical miles.
The distance between New York, US and London, England is 3470 miles (5585 km)
It is 17000 from England
About 7000
How many miles is it from England to Manila?
8,765.54 kilometres (5,446.67 miles)
The distance between Solihull, England, and Iowa, US, is 4,013 miles. (6,459 km).
3000
The air distance from Manchester, England, to Houston, Texas, is 4,724 miles. That equals 7,603 kilometers or 4,105 nautical miles.
123 miles
England - 50,346 square miles.
about 102 miles
86 miles
210 miles
48 miles
Manchester, England is about 200 miles from London, England.
London, England and Hereford, England are about 135 miles apart.
Depends from where in England, but it is approximately 3500 miles
It is 680.98 air miles from England to Spain.
129 miles.
About 140 miles on the M3.
175 miles
About 210 miles
The flight distance from Sugar Creek, OH to London is about 3,855 miles. | https://www.answers.com/Q/How_many_miles_is_it_from_the_U.S._to_England |
The purpose of this setup is to show how a curved bar deflects. The device uses a dial gauge fixed to the frame to measure the deflection. Examining how curved bars deflect is thus essential for calculating the overall mechanical displacement of constructions that use curved sections. Castigliano's theorem and the unit-load method are two of the more useful techniques for estimating deflections in curved bars. In construction engineering, a distinction is made between beams and arches. A structure with a curved axis and two fixed support bearings or clamp fastenings is referred to as an arch. The support bearings of a two-hinged arch can resist both vertical and horizontal forces. The ends of the arch do not move at the hinges, which produces the static arching effect of the system. In mechanical engineering, crane hooks and chain links are common examples of curved beams. The SM-1407 comprises five separate beams, all mounted on statically fixed supports: a circular beam, a semi-circular beam, a quadrant beam, a right-angled beam, and a curved davit. A set of weights is loaded onto the beam being tested, and dial gauges record both vertical and horizontal deformations. All five beams have the same cross-section, and therefore the same second moment of area, which makes it possible to compare test results side by side. The semi-circular and circular beams are fastened to a bearing on the pillar, while the quadrant beam is fixed with a bearing block. All components of the experiment are neatly organised and stored safely in a box.
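To make the calculation side concrete, here is a minimal sketch, independent of the SM-1407 itself and with assumed values for the load, radius and flexural rigidity. For a quarter-circle (quadrant) cantilever carrying a vertical tip load P, one common convention gives the bending moment M(θ) = P·R·sin θ; Castigliano's theorem then yields the tip deflection by differentiating the strain energy U = ∫ M²/(2EI) ds with respect to a (possibly dummy) load. The quadrature below can be checked against the classical closed-form results δ_v = πPR³/(4EI) and δ_h = PR³/(2EI):

```python
import math

# Assumed example values; these are not SM-1407 specifications.
P = 10.0      # vertical tip load, N
R = 0.15      # radius of the quadrant beam, m
E = 200e9     # Young's modulus, Pa (steel)
I = 2.0e-11   # second moment of area, m^4

def tip_deflection(dM_dQ, n=10_000):
    """delta = (1/EI) * integral of M(theta) * dM/dQ(theta) * R dtheta
    over 0..pi/2, with M(theta) = P*R*sin(theta) from the tip load."""
    dtheta = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dtheta        # midpoint rule
        M = P * R * math.sin(theta)
        total += M * dM_dQ(theta) * R * dtheta
    return total / (E * I)

# Vertical deflection: differentiate w.r.t. the real load P, dM/dP = R*sin(theta).
dv = tip_deflection(lambda th: R * math.sin(th))
# Horizontal deflection: dummy horizontal load Q, dM/dQ = R*(1 - cos(theta)).
dh = tip_deflection(lambda th: R * (1.0 - math.cos(th)))

print(f"vertical:   numeric {dv:.4e} m, closed form {math.pi * P * R**3 / (4 * E * I):.4e} m")
print(f"horizontal: numeric {dh:.4e} m, closed form {P * R**3 / (2 * E * I):.4e} m")
```

Under these assumed numbers the vertical tip deflection comes out at a few millimetres, the kind of magnitude the bench's dial gauges are meant to resolve.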
Experiments:
- Bending behaviour of a curved-axis beam.
* circular beam
* semi-circular beam
* quadrant beam
* curved davit beam
* right angled beam
- Application of the principle of virtual forces (the force method) to calculate deformation.
- Second moment of area.
- Comparison of calculated and measured Deformations.
Specifications:
- Elastic deformation of curved-axis beams under load.
- 5 different beams with the same cross-section: circular beam, semi-circular beam, quadrant beam, curved davit beam and right angled beam.
- Bearing block to fix the beams.
- Pillar with bearing to support the circular beam.
- 1 set of weights to place the beam under load.
- 3 dial gauges to record the horizontal and vertical deformation. | https://infinit-technologies.com/product/sm-1407-deformation-of-curved-beam-2/ |
The Good Guy (Shortlisted, 2016 Costa First Novel Award)
It is human nature to be attracted to that which we feel we can relate to, things that resonate within us. And perhaps that is why as a debut novel, The Good Guy, is such a memorable first impression. Inspired by her own experience, Susan Beale offers a compelling insight into the classic tale: the darker side of the suburban white picket fence.
Although the subject matter and plot line are not in themselves novel, following the tropes of the destructive influence of adultery on a superficially happy family, what makes Beale’s writing so poignant is her ability to paint her characters with such acute attention to detail. Her intimate understanding of the way people rationalise their behaviour under the stresses of married (and dating) life highlights how easy it is to slip into miscommunication, particularly in regard to self-reflection and honesty, and equally as frighteningly, the capacity of two people living intimately together to be almost completely unaware of each other.
With each chapter covering the perspective of one of the three central characters, the reader is allowed to piece together their various experiences to create the story that exists somewhere outside of all of their individual minds. Indeed, one of the main points of this novel is to emphasise the importance of perspective in life and how its skewing can lead to social deterioration. One of the book’s central themes then: the dangers of self-delusion.
Ted, the main instigator of movement through the story, is “the good guy”: he embodies the typical soul lost to the American Dream as his initially innocent ambitions surpass the potential of his current circumstances, and extend into a fantasy that leads him astray. Through him, Beale explores how affairs are justified and rationalised with ease even though they can cause heartbreak on two fronts. Meanwhile his wife, Abigail, has her own take on the struggles of the ambitious mother who doesn’t fit traditional motherhood ideals; between active suppression and passive submission, she lets the things she loves pass her by in hope of achieving a more socially acceptable image as a mother and wife.
The frankness with which Beale covers these issues is grounding; perhaps it is her intimacy with the subject that creates such an empathetic and honest relationship with the reader. And although The Good Guy seems an awfully sarcastic title as regards the events that unfold in the novel, in the end, we are reminded of our own humanity. We are all equally subject to the struggle of doing our best where ends don’t always meet, finding that what we want and what we have is not always the same thing, or even capable of co-existing. Ted’s dreams may eventually destroy him and taint the lives around him, yet consolation comes as his wife and lover both earn a cathartic redemption despite their own shortcomings. It is the novel’s development of these themes – the resilience of individuals, a recognition of how vulnerable we are in our own lives – that rejuvenates the classic suburban tale. | https://dura-dundee.org.uk/2017/01/02/the-good-guy/ |
I recently took some time to ask Jesus a personal question about myself. Cognizant of the damage that fear can do, its paralyzing effects and its controlling nature, I took some time to see myself in the presence of my Lord Jesus. Knowing that my Lord knows me better than I know myself, I asked Him, “What are some of the areas of fear that you see in my heart?” Along with that question I also asked, “And then Lord, what will you do with the fear? How will you help me to overcome the fear that might be present?”
Inviting the Lord to expose areas of your heart can be challenging as He might show some areas that you don’t want to see. However, I waited for an answer, expecting that Jesus would list at least one or two key areas of concern that gripped my heart or that could negatively influence me. To my surprise there was no comment provided about fear in my heart, just one word seemed to stick out and that was just the word “FEAR” itself. It seemed like the Lord wanted to talk to me about the general concept of fear.
I re-asked the second question, “Lord, what will you do with the fear, how will you help me to overcome the fear that is present?” I waited and then I felt the Lord say, “I will put you through troubles and challenges.” This shocked me, I was asking God to deal with possible fears in my heart and He was saying that I’ll have to go through “troubles and challenges.” This seemed to be one of the very things that would bring more fear into my life, not relieve the fears.
A little puzzled, I asked, “Troubles and challenges? How will that help overcome fear?” Immediately I thought of David standing before King Saul. In 1 Samuel 17:37 David said, “The LORD, who delivered me from the paw of the lion and from the paw of the bear, He will deliver me from the hand of this Philistine.” The struggles with the lion and the bear were actually tools used by God to prevent David from being fearful when he faced Goliath, the giant.
God was encouraging me to see the troubles and challenges that I have gone through, and those that are presently around me, as preparation for greater moments in the days ahead. These were not meant to destroy me but to develop me. These apparent adversities and struggles were to become marks of victory, insignias which cause me to rally forward when fear could otherwise overwhelm me.
We fail to understand what our God is doing; we complain because of the struggles and miss the long-term growth that is necessary for us. Take a moment to thank God for His loving desire to train and develop us so that we too can become champions for Him. Allow the fears of the future to be overcome by responding properly to the challenges of the present.
While I’m weeding in my garden these days the sound of little wings is so loud! (That and the sound of my sweat dripping in this heat wave…)
I have heard there are thousands of native bees and other pollinators in New England. I bet many are tiny. As I look around for pollinators to photograph, I see many are really small. I notice bees, wasps, flies and butterflies. I know the flies have big eyes, and bees have smaller eyes, and wasps have waists (in general – except for the exceptions).
Sometimes it's hard to get a photo of just one pollinator!
Most of my pollinator photos ended up being on the little yellow dill flowers. I've read that these are great to have in the garden as they attract beneficial insects. I let the dill come up where it self-seeds from the previous year - all over the garden. Some pollinators were also on borage, echinacea, daisies and Johnny-jump-ups. If you want a plant that attracts pollinators, walk through a local nursery and select the plant with the most bees on it!
I need to ask our local beekeepers about their honey bees this year. We have four sets of boxes near our community gardens where I took these photos. In previous years, it seems I have seen more honey bees in my garden. On the day I took these pictures, I only saw one honey bee (though a really nice one who got the number one photo spot!) among at least ten or twenty other types of bees.
Here’s a question for you: In the U.S., what is as large as the United Kingdom (the countries of England, Scotland, Wales, and Northern Ireland) and was visited by more than 300 million people last year?
The answer: our 59 national parks.
A few days ago, we celebrated the 100th birthday of the National Park Service (NPS), which is responsible for taking care of our national parks and other national sites (important places). One 20th-century historian (someone who studies history) called our national parks “the best idea we’ve ever had.” Many would agree.
Our national parks are as diverse (very different from each other) as the people who live in the U.S. You’ll find rocky cliffs (side of a mountain that drops straight down) and waterfalls in Yosemite. Seven small islands surrounded (to be all around) by clear blue water in Dry Tortugas. Wooded (full of trees) hills in Smoky Mountains and Shenandoah. The lowest, hottest, and driest place in the U.S. in Death Valley. Rain forests in Olympic, one of the wettest places in the U.S. Some of the world’s largest trees in Sequoia and King’s Canyon. Wetlands, crocodiles, and Florida panthers in Everglades. Glaciers (slow-moving sheets of ice) with deep crevasses (cuts in the ice) in Kenai Fjords. The deep, colorful walls of the Grand Canyon. A palace (home for an important person) built by early American Indians in the side of a cliff in Mesa Verde. A quiet path along a slow-moving river in Cuyahoga. Fiery volcanoes in Hawai’i Volcanoes. Trees that are different than any trees you’ve ever seen in Joshua Tree. Trees that have been dead for hundreds, maybe thousands, of years in Petrified Forest. And that’s just a sample (part of the whole group).
There’s no better way to explore the parks than to visit them. But if you haven’t, or can’t, the Internet is a good place to get a taste of them. I invite you to take some time, look around, and discover them for yourself. Here’s how you can do it.
For an overview (quick look) of the national parks, watch See all U.S. National Parks in One Minute (Note: there’s a short advertisement at the beginning.). You may want to watch it more than once! And to whet your appetite (make you hungry) for more, look at Mark Burns’ beautiful new black and white photographs.
Next, look at the short National Geographic Best of… videos from the five most popular parks – the links are below. You’ll enjoy the scenery, see some unusual animals, and pick up (learn) some new “park” vocabulary (Note: there’s a short advertisement at the beginning of each one.).
For a closer look, the NPS website is the best place to go. Here are the NPS home pages for the 10 most popular national parks. When you get to the home page, click on Plan Your Visit > Places to Go to explore the park.
If you want to visit other parks, you can use the NPS Find a Park page to find their home pages.
My favorites from this list are Great Smoky Mountains, Yosemite, Rocky Mountain (I used to live next to it), and Acadia. Which do you like?
~ Warren Ediger – ESL tutor and coach. My website is Successful English.
Photo from Hawai’i Volcanoes National Park courtesy of Wikipedia Commons.
Note: For a list of all the national parks we’ve discussed on our English Cafe, see here.
My wife and I have always enjoyed documentaries about nature and national parks all over the world.
We have watched those films again and again, but we never get tired of watching such records.
They show the lives of animals in Africa, Asia or America.
They are absolutely well made, with nice and beautiful sights or wild adventures.
Yes, Cuca and I have enjoyed them lots of times, as other television channels usually broadcast rubbish and more rubbish.
By the way, here in Spain we have 15 national parks, but I think it is not enough; we need more, though our country is not as big as other countries around the world.
There is a man, David Frederick Attenborough, whom I admire absolutely.
He has given his voice and his efforts to produce nine incredible video series about life and nature on Earth.
Every one of his videos is a model of how we have to treat our planet, admiring its beauty.
Congratulations, Mr. Attenborough, I really admire you and your work.
His brother Richard directed "Gandhi", "A Chorus Line" and the anti-apartheid drama "Cry Freedom".
I don't want to be the only one who writes here, but if it is necessary I will be.
She will not be home.
The house will not be a home when you are not in it.
Sooner or later Cuca will not be here at home with me. I try not to think too much about it, but that fact lies ahead.
What will I do? I have no idea, but it is certain that the house will not be home any more.
Multiple sclerosis is doing its damage inside her, and after resisting the idea for a long time, it finally became necessary to ask for a nursing home where she could receive good care.
The doctor told me that again and again, so finally, after talking with the social worker and listening to her, I gave in and did what they told me.
It may be a year of waiting, or six months, I don't really know, but little by little my mind is accepting the idea of letting her leave home.
In some ways she may be better off there than here, but not me; I start to miss her before it even happens.
Of course, this was written thinking of Cuca, my wife.
Talking about our world, our Spanish world, what can I say now?
It is easy: sooner rather than later we will be fighting against adversity again, that is for sure.
Nothing good is coming in 2016, or even in 2017.
The wealthy countries of Europe are going to be poor once more.
Refugees keep trying to get away from the wars in their countries.
The building bubble, the banks, the politicians, we the people spending more money than we could afford? It could be a little of everything. We thought the times of being not so rich were over, but reality is coming back faster every day.
Here in my country, politicians continuously say that it is necessary to maintain the "welfare state"... "el estado del bienestar"... but what is that?
What is well-being? Having money to spend, more money than we have? The idea that we have only rights and no obligations? The rule of the least effort? That everything should be free?
It seems to me that all these last years we have been living above our real situation. Listening to so many lies, we thought we were rich and spent many more euros than we had.
But now we have to pay them back.
Paying back such an amount is impossible, and we can pay only the interest on the loans. What can we do? Just ask for more loans to pay the interest... and so on, year after year... until when?
Who knows; the debts keep increasing and the savings decrease much faster.
The years 2012 to 2015 were very bad, but I think the year we are living now could be even worse, to the point that nobody knows what will happen. We have no government, we know nothing about the future, and the only news coming from Europe or the world is really tragic and very painful.
I cannot understand how so many young people have to leave here. Is what happened in South America some years ago happening now in Spain?
More than 45 percent of young people have no work; that is impossible to sustain in a country with so many retired people.
They have to leave, year after year, the same as I would do if I were in the same situation.
In my family, the troubles stop with my daughters, and fortunately, up to now, they have been able to work.
Well, that is too much for today. I am going to bed, and I prefer not to think about the future.
Tomorrow will be another day to live, and that is enough for the moment... just being alive and carrying on, even though everything looks not so good.
Now here in my country, Spain, the question is: will we have a government this year?
It is good to be in Spain, and what is even better, to be in Europe.
Maybe I have to learn what the difference is. Regards again.
Which parks would I like to know from the list you have given us?
One place not mentioned by you is not a national park, but I want to see Niagara Falls by all means.
Places in the USA (EE.UU.) I would like to visit and get to know.
It would take too much money and, what is even worse, too many years already.
I have travelled across nearly all of my dear Spain.
It has so many places which make the country so interesting for millions and millions of visitors.
Spain is the second or third country in the world in the number of tourists it receives.
My favourite cities are Madrid, London and, close behind them, New York.
New York, my second city after Madrid.
Hello, Warren. Nice topic. I love it. Talking about nature is always a good thing. Unfortunately men are destroying their home by looking for wealth, and the worst is to wonder why, since everything you have, you leave here. Nothing can be carried into the coffin but your body. Madness? Maybe. Who knows? | https://www.eslpod.com/eslpod_blog/2016/08/30/our-best-idea/ |
The Winthrop University Foundation, Inc. is designated a charitable organization under Section 501(c)(3) of the Internal Revenue Code. The Foundation was created in 1973, but expanded in 1993 for the purpose of managing assets and maintaining gift records for the benefit of Winthrop University.
The current mission of the Foundation, which is governed by a volunteer board, is to support and enhance Winthrop University by encouraging alumni and friends to provide private funds and other resources for the university's benefit, to manage those assets, and to provide volunteer leadership in support of the university's objectives.
The Foundation is the principal recipient of private gifts from alumni and friends of the university. It also manages and distributes restricted and unrestricted endowment gifts, deferred gifts, and special gifts to build and maintain programs at Winthrop University. Gifts of cash, negotiable securities, and deferred gifts support endowed chairs, endowed professorships, scholarships, awards, faculty enrichment programs, and other educational services of the university.
An honorary degree was conferred posthumously to Mary Spann Richardson LeNoir, a member of the Class of 1942, during Winthrop University’s Commencement on Saturday, May 4, 2019. Pictured are her daughter, Vickie LeNoir Saunders ’79 ‘81 holding the honorary degree, her son Rowland Alston to Vickie’s left, her son Bill LeNoir to Vickie’s right along with other family members including three grandchildren. Vickie’s husband, Frasier Saunders, III and her son, Frasier Saunders, IV are also Winthrop alumni.
Mary’s family has made sure her legacy remains intact at Winthrop. At the annual Faculty, Staff and Retirees Awards Ceremony each spring, the Mary Spann Richardson Award is given to a faculty or staff member who has provided exceptional service to Winthrop. The award was established by siblings Rowland Alston, Bill LeNoir and Vickie LeNoir Saunders ’79 ’81 in memory of their mother. | https://www.winthrop.edu/foundation/index.aspx?id=39905 |
Try and leave this world a little better than you found it, and when your turn comes to die, you can die happy in feeling that at any rate, you have not wasted your time but have done your best – Robert Baden-Powell
Life is a gift too precious to waste; naturally, we all want to enjoy it to the fullest. In our own little way, we all strive to gain experiences, explore opportunities, and achieve success. However, if you truly wish to make your life worthwhile, it is important to be aware of the responsibility to leave a good legacy. But how do you start that journey? You can do it by making an effort to create a positive impact with every project you work on, every place you go to, and every person you encounter. It also means choosing to avoid negative attitudes and bad habits. The following are three mindsets that will make it impossible for you to leave a wonderful legacy:
Refusing to hone talent and pursue excellence.
A good legacy should inspire people: something you cannot do if you refuse to believe in your own capabilities. Each of us has a talent to share, and it is our duty to hone it. Beyond simply pursuing your craft or passion, refuse to settle for mediocrity by taking steps to continually improve at what you do. This makes it possible to produce outstanding work and accomplish exemplary tasks. Your adherence to excellence will likely empower people and encourage them to do the same. Both your work and your attitude will lead to a lasting legacy you can be proud of.
Refusing to act with kindness.
It may not seem apparent, but your behavior affects the people around you. Your words and actions can make or break someone's day. Thus, if your goal is to lead someone in the right direction, you may want to always consider kindness. Practice empathy, patience, and understanding in the way you react to things and interact with others. The decision to be kind can be a bit challenging at times, but when you come to think of it, resorting to rude actions and gestures rarely solves anything. On the contrary, choosing kindness can help you make a positive and lasting difference to others.
Refusing to share.
Legacy is sharing, so it involves being generous with your time, talent, and resources. If you can paint, sing, or cook, look for ways to use your skills to help society. Instead of stubbornly holding on to too much stuff, consider donating your extra resources to groups that need them. You can also spend valuable time volunteering to support causes that you believe in. To successfully create a good legacy, practice generosity and throw selfishness out the window.
Leaving a legacy may sound like a big task, but it is nevertheless worth all the effort. After all, our time in this world may be limited, but the legacy you leave behind can last for generations to come. Moreover, it is clearly a way of giving back to the world, one that will likewise give you a sense of purpose and fulfillment.
This article is the second part of the previous one. Please see the beginning.
Algorithm of actions for linking and referencing
The next step is to sort the sources. To do this, edit the reference list so that each source starts on a new line and there are no extraneous characters before the author's surname. After that, just press the "sort" button. Please note that the foreign sources will then appear first and the domestic ones after them, whereas, according to the guidelines, domestic sources must come first. Moving the Russian-language sources up has to be done manually.
Go through the entire text of the review and place the initials before the authors' surnames.
If there are several references in parentheses, a certain order must be observed: first come references to domestic scientists, from the earliest works to the most recent, and then, in the same way, references to foreign scientists. If references to different scientists are dated to the same year, they are listed in alphabetical order; if the first author of several works is the same, the references are ordered alphabetically by the subsequent authors. In practice, however, these rules are seldom followed.
Completing the process of forming the literature list
Three final steps remain when finishing the list of literature:
While editing the review, you may delete sources and their links or add new ones. Because of this, the correspondence between the sources in the list and the links in the text may be broken. Practice shows that, after editing the review, it is important to compare the references in the review text with the list of references used. The easiest way to do this is to copy the review text into a separate file, delete all of the text except the links (so that they start with surnames), sort them with the "sort" command and, after deleting duplicate links, compare them with the list of references. It is convenient to do this with a printout of the links. For each link in the text, mark whether there is a corresponding source in the reference list. For each source in the reference list, mark whether you found a link to it in the review text.
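Purely as an illustration (the article describes doing this by hand in a word processor), a hypothetical script along the following lines could automate the cross-check; the citation pattern, function name and input format are assumptions, not part of the described workflow:

```python
import re

def crosscheck(review_text: str, references: list[str]) -> None:
    """Compare surnames cited in the text against the reference list."""
    # Assumed citation style: "(Smith, 2020)" or "(Smith et al., 2020)".
    cited = set(re.findall(r"\(([A-Z][A-Za-z-]+)(?: et al\.)?,\s*\d{4}\)", review_text))
    # Assumed list format: each entry starts with the first author's surname.
    listed = {entry.split(",")[0].split()[0] for entry in references if entry.strip()}

    for name in sorted(cited - listed):
        print(f"cited in the text but missing from the list: {name}")
    for name in sorted(listed - cited):
        print(f"in the list but never cited in the text: {name}")
```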
If bibliographic descriptions could not be found for some of the linked sources, works by domestic authors can be searched for in the CNMB catalogue and on the Internet, and foreign ones in the PubMed database.
If, in the final version of the review, links should take the form of numbers, you need to replace the surnames in the text with numbers. To do this, print the final version of the bibliography. Then, going through the review, change the surnames to the numbers of the corresponding sources in the list. When multiple links are given together, they are listed in numerical order. | http://weddings.mihamatei.com/essay-writers-rank/establishing-links-and-forming-a-list-of-2/ |
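In the same spirit, a hypothetical sketch of the renumbering step just described, assuming surname-and-year citations and a reference list already in its final order:

```python
import re

CITATION = re.compile(r"\(([A-Z][A-Za-z-]+)(?: et al\.)?,\s*\d{4}\)")

def number_citations(text: str, ordered_surnames: list[str]) -> str:
    """Replace surname citations like '(Ivanov, 2015)' with numeric ones
    like '[7]', using the source's 1-based position in the final list."""
    index = {s: n for n, s in enumerate(ordered_surnames, start=1)}

    def repl(match: re.Match) -> str:
        surname = match.group(1)
        return f"[{index[surname]}]" if surname in index else match.group(0)

    return CITATION.sub(repl, text)
```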
Thank you for posting on the SORE League.
The Comrades Ultra Marathon takes place on Sunday. This year it is the "down run", where they run 89km (56 miles) from Pietermaritzburg through to Durban. I should imagine the down run is just as torturous as the "up run", although the up run is slightly shorter at 87km! So good luck to those runners!
Happy running everyone!
NB: I am having difficulty at the moment uploading the latest SORE League Table. Hopefully this will appear soon when the problem is solved.
You can access the latest SORE League by clicking here: Latest SORE League
Points are awarded for every run of 10 miles or more as follows:
10 up to 13 miles: 0.5 point
13 up to 16 miles: 1 point
16 up to 20 miles: 1.5 points
20 up to 26 miles: 2 points
26 up to 30 miles: 3 points
30 miles and over: 4 points
Maximum points score for a single run is 4.
Providing home-made cake after a social run or other valuable services to running: 0.5 point
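For anyone curious, the table above reduces to a simple step function; a minimal sketch of the scoring rule (the home-made cake bonus is left out):

```python
def sore_points(miles: float) -> float:
    """Points for a single run under the SORE League table above."""
    if miles >= 30:
        return 4.0
    if miles >= 26:
        return 3.0
    if miles >= 20:
        return 2.0
    if miles >= 16:
        return 1.5
    if miles >= 13:
        return 1.0
    if miles >= 10:
        return 0.5
    return 0.0

print(sore_points(13.1))  # half marathon -> 1.0
print(sore_points(56))    # Comrades -> 4.0 (the cap)
```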
Forgot to post 13.2 up to Preston last Tuesday🙃A bit of hill practice !
Hi,
I did 11 miles yesterday.
Thanks!
Half Marathon for me please...
Hi,
Tuesday - 10 miles
Thursday - 10 miles including MWL
Sunday - 16 miles on the Ashwell & Mordens loop with 4 other squirrels for company
Hi Sue,
12 miles on GSR leg 1 recce out & back, would have been 15m if I hadn’t bumped into a helpful guide runner!
Well done to all the Squirrels who raced this weekend 🙌
Sarah
13.5 for me today. Quickswood route again.
13.1 miles at Saint Albans half marathon for me today!
I did:
10 miles on Saturday to a derelict church on the Greensands Ridge.
20 miles with Lucy on a magical mystery tour of Bedfordshire featuring Greensands, forests, woods, fun buildings and mini bananas!
Naomi
20 miles for me yesterday around some beautiful Beds villages with Naomi :)
Hi Sue
Saturday : 10 miles on leg 4 recce of the GSRR Woburn To Millbrook and a little extra
Sunday ; 17.1 on the Ashwell Loop from home with Adam, Ed, John R and Matt. 👍🏻
Many thanks
Stewart
1/2 point please for 10.2 miles on those hills in Lilley.
Kat B too.
12 for me on Sunday morning. To Pegsdon and a lap of the hills 😊
I did just over 15 miles reccie-ing (how would you spell that??) the relay route for this weekend coming. 1 point please :) I got stung scratched and lost (only a little bit lost)
10.4 miles locally this evening for me 😊
Just sneaking in at the last minute - 10.1 miles round Letchworth and Baldock after work this afternoon for me.
10 miles for me, Sarah and Babs last night - checking out leg two of GSRR. Needless to say our mileage shows we didn’t quite get the 3.9 mile out and back route correct!! | http://pub48.bravenet.com/forum/static/show.php?usernum=4103351043&frmid=21&msgid=1152076&cmd=show |
Klamath Falls church delivers $1 million in food boxes each week

Kenneth and Krista Prescott, who own a construction company in the Klamath Basin, have been helping load pallets of food boxes at the Klamath Falls Gospel Mission, which has been distributing $500,000 worth of fresh food twice a week for about a month as part of a federal program aimed at filling gaps in the country’s food supply chain created by the COVID-19 pandemic.
KLAMATH FALLS — Early mornings in October are chilly, but the parking lot of the Klamath Falls Gospel Mission somehow still felt warm on Thursday, Oct. 23, as volunteers helped distribute boxes of food throughout Oregon and Northern California.
Forklifts transferred pallets of produce, dairy and meat boxes from vast refrigerated cargo trailers and onto a line of cars and trucks. Mostly wearing bright orange, the volunteers jovially greeted the visitors, making small talk as they filled their vehicles to the brim with fresh, free food. The cars drove off to deliver their bounty across the county and even the state.
Ammond Crawford, the mission’s executive director, said this is the only major distribution point for hundreds of miles. People have come from as far away as Lakeview, Medford and even Salem to pick up food and distribute it further within their own communities.
“It’s going everywhere,” he said.
Six months ago, farmers across the nation were dumping millions of gallons of milk, smashing hundreds of thousands of eggs and leaving onions and beans to decompose in fields. With pandemic stay-at-home orders leading to zero food demand from schools, hotels and restaurants and food banks overwhelmed with donations, growers had no other way to get rid of much of their crops.
In mid-May, the U.S. Department of Agriculture began the Farmers to Families Food Box Program to make up for losses in the country’s agricultural supply chain. The federal government purchased food from farmers and ranchers and entered into contracts with food distributors, who would package and deliver boxes of fresh food to community food banks and anti-hunger organizations.
Crawford said few of those distributions initially occurred in the West, prompting him to reach out to USDA asking to set up food deliveries to the mission. Sysco eventually handled the packaging and trucking of the food from farms throughout Oregon (the program requires it to originate within 400 miles of its end point) to Klamath Falls twice a week. The mission acts as a drive-through bulk grocery store for churches and community organizations to feed people directly.
The mission’s efforts were relatively modest in June, receiving and distributing about 770 units for each of those deliveries. Each unit includes a 10-pound box of produce, a 5-pound box of meat, a 5-pound box of dairy and eggs and a gallon of milk.
For about four weeks now, the mission has been receiving and distributing around 5,400 units with each shipment. That’s more than $500,000 worth of food — twice a week.
“It’s Christmas in October,” Crawford said.
Crawford said since most families have taken some kind of a hit during the pandemic, not having to take a trip to the grocery store can go a long way. And every box delivered means a farmer gets paid for food they would’ve otherwise had to throw away. Between 60 and 80 different “vendors,” as he called them, show up at each distribution event to take the boxes.
Crawford first asked Pete Bradley, a mission volunteer chaplain, to help haul boxes. Now, Bradley is the operation’s de-facto coordinator, managing a team of around 40 volunteers, directing traffic and coordinating each delivery to the mission.
Bradley said his favorite part of the job is seeing people’s faces when they receive the food.
“I’ve taken a box up to a lady and she just started crying her eyes out,” he said.
Volunteers head to the mission at 6 a.m. every Monday and Thursday and begin unloading pallets, breaking some down into smaller units for organizations that don’t need quite that much food.
“It’s awesome to watch,” Crawford said. “I had no idea we would get this much help.”
Krista and Kenneth Prescott, who own the Malin-based construction company KP Builders LLC, have been lending themselves and their equipment to the distribution events since Bradley asked them if they could step in for a forklift operator who couldn’t come in one day. They agreed — and brought their own forklift with them to boot.
When Bradley asked the Prescotts how long they could help out for, Krista said they’d stay “’till the program ends or the wheels fall off.”
Volunteers at the distributions said it feels good to get out and help, especially when a lot of neighborly activities can’t happen during a pandemic.
Patrick Neal from Calvary Temple of Klamath Falls has been battling lung cancer but said he was all in when he first heard about this program. People he’s distributed food to have been very appreciative.
“They’re grateful,” he said, “and I’m just grateful to be able to do it.”
| |
If you get housing benefits like Section 8 or MSA Housing Assistance or you live in public housing or in a project-based unit, you may think that you’ll have to move out if you start working or if something changes in your situation.
The amount of rent you have to pay each month may depend on your income. For example, if you get Section 8 benefits and have $500 in income each month, you might have to pay $150 per month in rent, and Section 8 would pay the rest. If your income goes up to $1,000 per month, your rent payment might go up to $300 per month. Even though you’d be paying more, you might still qualify for Section 8.
Special rules may let you start working without worrying that your earnings will impact your housing. For example, if you have a disability, the Earned Income Disregard (EID) means that you can start working and your rent won’t go up at all for an entire year due to your earned income.
You always have to report changes in your income or living situation. If your income goes up or down or if somebody moves in or out of your place, you have to tell the office of the program that gives you benefits. If you don’t, you might stop getting benefits.
The program may only agree to help you if you live in an apartment that meets certain conditions. For example, if you live alone, the program might say that you could only stay in a one-bedroom apartment.
A program may only help people who are in certain situations. For example, HOPWA only helps people with HIV/AIDS, while other programs only help people with disabilities or seniors. If you get HOPWA benefits because a family member is HIV-positive and that family member moves out of your apartment, HOPWA will no longer help pay for your apartment.
You may have to live in certain locations to get help from a program. For example, if you qualify for project-based housing or public housing, you may not get to choose what unit you want to live in and if you move out of your unit, you might stop getting benefits.
There may be a time limit for staying on the program. Some programs have time limits, but often you can stay on them longer if you apply for other benefits when required.
Learn more about the exact rules for the benefits you get in HB101’s Programs section.
You might be worried that if you start working, you’ll lose the benefits that help you pay for your housing. The good news is that most housing benefits programs are designed so that as your income goes up, your situation will usually get better. For example, many programs, including Section 8 and public housing, require you to pay about 30% of your income as rent. That means that if you start working and your income goes up by $100 per month, you’ll only have to pay $30 more in rent and will get to keep the other $70.
Sheila and her two children are on Section 8. She currently works and makes $700 per month. Her rent is 30% of $700, which is $210 per month. Section 8 pays the rest of her rent.
At her job, Sheila’s going to work more hours and make $1,200 per month. Her family will keep getting Section 8 benefits, but now will have to pay 30% of $1,200 in rent, which is $360 per month. Section 8 will still pay the rest.
Their income will go up $500 per month, while their rent will only go up $150 per month, so they’ll be better off.
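Sheila's numbers come from the 30%-of-income rule described above; here is a tiny sketch of that arithmetic (30% is the typical share this article cites, and actual program rules may differ):

```python
def tenant_rent(monthly_income: float, share: float = 0.30) -> float:
    """Tenant's portion of the rent under an income-based program;
    the program pays the rest of the unit's actual rent."""
    return monthly_income * share

print(tenant_rent(700))   # 210.0 -> Sheila's rent now
print(tenant_rent(1200))  # 360.0 -> her rent after working more hours
```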
Read the articles in HB101’s Programs section to see how your income might impact your benefits.
Some housing programs, including Section 8, public housing, and HOPWA, have a rule called the Earned Income Disregard (EID) that helps people with disabilities on housing benefits start working.
With the EID, if you have a disability and start working, the money you earn won’t be counted for the first year after you start working. That means your rent won’t go up. During the second year after you start working, only half of your work income will be counted, so your rent won’t go up as much as it otherwise would. After the second year, your entire income will be counted by the program.
The exact rules for the EID can vary depending on the benefits you get and your family situation, so ask the office that runs your program about it.
Learn more about the Earned Income Disregard on Disability Benefits 101.
Beverly has an apartment that costs $800 a month in rent. However, she only pays $100 a month, while Section 8 pays the rest. She’s worried that she’s going to have to start paying more rent, because she just got a new job where she makes $1,000 each month.
Beverly talks to her caseworker at her local public housing authority (PHA). The worker tells her that because she has a disability, she qualifies for the Earned Income Disregard (EID). This means that the PHA won’t count her income and Beverly will keep paying the same amount for her apartment, $100, even though her income has gone up.
A year passes, and Beverly is doing well at her job. The PHA reviews Beverly’s situation and starts counting $500 of her earnings (not all $1,000). Beverly has to pay 30% of that as rent on her apartment, which equals $150 ($500 x 30%). This is in addition to the original $100 she was paying before she got her job. So her new rent is $250 per month.
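Here is a rough Python sketch of the two-phase EID schedule from Beverly's example. The helper function and the phase boundaries follow the simplified description above; actual EID rules vary by program and household, so check with your PHA before relying on any of these numbers.

```python
def counted_earnings(earnings, months_since_work_start):
    """Work income a housing program counts under the simplified EID schedule."""
    if months_since_work_start < 12:
        return 0.0               # year 1: earnings fully disregarded
    elif months_since_work_start < 24:
        return earnings / 2      # year 2: half of earnings counted
    return earnings              # afterward: everything counts

# Beverly: $1,000/month in new earnings, $100 base rent from before she worked.
for month in (0, 12, 24):
    rent = 100 + 0.30 * counted_earnings(1000, month)
    print(f"Month {month}: rent = ${rent:.0f}")
# Prints $100 (year 1), $250 (year 2), and $400 once the EID ends.
# The last figure is an extrapolation, not stated in the example above.
```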
If you live in public housing, you can stay in your place even if you stop qualifying to get help paying for your rent. What this means is that you will have to pay the full rent for your apartment on your own. This can be good for you, since full rent in public housing units is often lower than it is in privately owned housing.
If you are in a program that lets you choose a privately owned unit, like the Section 8 housing choice voucher program, you also can stay in your privately owned apartment even if you don’t qualify for Section 8 anymore. As with public housing, you will have to pay your full rent. The difference is that the full rent in your privately owned apartment will likely be at market rates, which means it won’t be cheaper than other, similar apartments. | https://mn.hb101.org/a/40/a3.htm |
I'm not a big tea drinker. Blasphemous, I know. So many of my good friends are such big tea aficionados, downing 2-3 mugs of tea per day. One friend of mine even brought her own tea to a weekend wedding. But me, personally? Nah. My boyfriend has been trying to convince me to start drinking green tea, citing this article and arguing that it was one of the healthiest foods in the world. I laughed. I run a dessert blog. Healthy food is one of the last things on my mind.
So I had mixed feelings about this particular Hummingbird Bakery recipe. The last green tea recipe I made -- Green Tea Ice Cream -- turned out too tannic and had a bit of a bitter aftertaste. I had to tone it down with some Dark Chocolate Earl Grey Fudge. Tough life I lead, I know.
A close look at the Hummingbird Bakery Cookbook's recipe for Green Tea Cupcakes, however, revealed that Tarek Malouf and I were on the same page. First of all, the cookbook notes that green tea flavor works wonderfully with either chocolate or vanilla. In fact, the cupcake's cake base is actually a chocolate cake base... almost the same exact recipe as the cookbook's recipe for chocolate cupcakes. The only difference was that this cake used milk that had been cold-infused with green tea.
One of my readers commented on my Green Tea Ice Cream recipe that tea, when heated, can become bitter because of the heat's reaction with the tea's tannins. A way around this bitterness is to extract the tea's flavor by cold infusion: simply leave the tea bags in the milk overnight to infuse it. Which is exactly what this recipe tells you to do. It's the same method that was used in the Hummingbird Bakery Cookbook's recipe for Lavender Cupcakes. You get the delicate, floral taste of tea without any of the bitterness, and the chocolate cake of the cupcakes picks up a faint scent of green tea.
I had no trouble adapting this recipe for high-altitude. Like I said before, the recipe is almost identical to the Hummingbird Bakery Cookbook's recipe for chocolate cupcakes, with the exception of the use of green tea-infused milk. The adjustments that I made for the chocolate cupcakes worked perfectly with these guys.
I'd actually even made this recipe at high-altitude before. Kailin, one of my former coworkers, is a big green tea fanatic. For her birthday, she requested that I make her a batch of cupcakes. What better than Green Tea Cupcakes for a green tea fanatic?
Isn't that adorable? I found these on sale at Sur La Table. Once the cupcakes are inside (sorry, I didn't have a photo of that either. I fail), you can see them through the little plastic window. Too cute! Kailin shrieked when she saw the box. She also said that the cupcakes were delicious. | https://www.hummingbirdhigh.com/2012/05/hummingbird-bakery-green-tea-cupcakes.html |
ACS is pleased to have a number of leading speakers joining the program for the 2015 Community Forum - Securing Our Place in the Future.
David is a Director in the Aged Care Reform Taskforce of the Commonwealth Department of Health. David has worked on aged care policy and programmes for the last five years, including managing the Aged Care Approvals Round, the introduction of the home care packages programme, and the early development of the Commonwealth Home Support Programme. He is currently managing the reforms to introduce greater choice and flexibility in home care. Previously, David has worked across a number of health-related reforms in primary care and Medicare policy.
Jane Mussared
CEO COTA SA, Chair of NACA HCP/CDC Advisory Group
Jane Mussared is the Chief Executive of the Council on the Ageing in South Australia (COTA SA), the peak body promoting the rights, needs and interests of older South Australians.
Jane came to COTA SA in 2015 from ACH Group where she had been part of their Executive team since 2001, initially managing the Health and Community Services Division and then heading up People and Innovation. Prior to that Jane was the Manager of the State Government Office for the Ageing.
Jane chairs the National Aged Care Alliance Home Care Committee and is on the Federation of Ethnic Communities Council (FECCA) Healthy Ageing Reference Committee and the SA Economic Development Board Healthy Ageing Group. Jane is also the Deputy Chair of Cirkidz.
In 2008 Jane was the SA Winner of the Innovation Award in the Telstra Business Women’s Awards and in 2007 she won a Sanicare Scholarship to visit aged care in Malta and the Netherlands. Jane has a Master’s degree in Social Work (majoring in social policy and research) from the University of Michigan.
David Kay
KPMG
David Kay is a Director in KPMG’s Health, Ageing and Human Services team. He has worked with government agencies and not-for-profits in the disability and community care sectors for more than 10 years, advising them on strategy, policy and program reform, review and evaluation. His recent work has focussed on the development and implementation of the National Disability Insurance Scheme and provider readiness for the Scheme, as well as community care projects including the Evaluation of the Commonwealth Home Care Packages program.
Alison Choy-Flannigan
Holman Webb Lawyers
Alison has over 25 years of corporate, commercial and regulatory experience, specialising in advising health, aged care and life science clients.
Alison was previously General Counsel of Ramsay Health Care Limited (one of Australia's largest private hospital operators, which previously operated an aged care division) and was a partner of a major Australian National law firm.
Her advice for aged care and retirement living clients includes corporate advisory, mergers and acquisitions, infrastructure projects, regulatory requirements including the Aged Care Act, reviewing funding agreements, drafting agreements between aged care providers and their clients, and advising on issues such as duty of care, consent, security of tenure, guardianship, elder abuse and advance care directives.
In each year since 2008, Alison has been nominated by her peers in Best Lawyers International: Australia and in the Australian Financial Review as one of Australia's 'best lawyers' in the areas of health and aged care and has been named in Australasian Legal Business for mergers and acquisitions.
She was listed as one of the “50 Women to Watch – Specialist in their Field in Australia and New Zealand” – Australasian Lawyer 2015.
Sarah Brisbane
Community Care Northern Beaches
Sarah joined CCNB in June 2014 and leads CCNB’s Executive and Leadership Teams. She brings 25 years’ experience in strategy, change management, stakeholder relations, marketing/communication, professional services and risk management. Sarah leads CCNB’s transformation agenda – remaining strongly connected to the communities in which we operate, focused on our vision and purpose, but being more consumer-centric, contemporary, competitive, sustainable and ‘future proofed’.
She has a Bachelor of Arts in Organisational Communication, Master of Arts in Communication Management and a diploma from the Australian Institute of Company Directors (AICD). She is a fellow of the AICD.
Michelle Newman
Aged & Community Services NSW & ACT
Michelle has worked in the human services sector for the past 25 years in a range of positions in local, state and federal government. Michelle has managed Home and Community Care (HACC) services in local government and has participated in many community Boards of Management.
Michelle has been working with ACS NSW & ACT for the past four years, managing a number of projects aimed at improving the capacity of aged care service providers to prepare and position themselves for the broader reforms in both the aged and disability programs.
Michelle has represented ACSA on a number of national projects and committees including the NACA Commonwealth Home Support Program Advisory Group, the Service Group 5 sub group and the CHSP Fees sub group. Michelle has recently been appointed to the new Home Care Reform Advisory Group.
Michelle works closely with the sector support workers and peaks across NSW and chairs the NSW Community Care Forum. Michelle is passionate about the provision of high quality services across the sector and the balance of regulation and compliance in a service system that is accessible, responsive and innovative.
Jenni Allan
Adssi Home Living
Jenni has over thirty years’ experience in the commercial and financial sectors, with twenty of those years in the NFP sector, including roles as Finance Operations Manager and Chief Executive Officer of Adssi HomeLiving Australia (AHLA). She holds a Master of Business Administration and a Master of Commerce (Professional Accounting), both from the University of New England. Jenni is focused on the future of AHLA and on maximising opportunities provided by the Government’s reform agenda to ensure the business’ ongoing viability and expansion, and is driven by her passion to make a difference to the daily lives of AHLA’s clients.
Michael Fine: BA (Hons), PhD (Sydney), FAAG.
Macquarie University
Michael Fine is Adjunct Professor in the Department of Sociology at Macquarie University, Sydney and was Head of the Department from 2008 to 2013. He is currently a member of the NSW Ministerial Advisory Committee on Ageing, and of the NSW Carers Advisory Council and a Board Member of Northside Community Forum; Editorial Advisor and Board member for the Australasian Journal of Ageing and a number of other international journals.
He is also Principal Investigator for an Australian Research Council (ARC) project, undertaken in collaboration with Macquarie University, University of Wollongong, ACSA (NSW) and a number of leading community care services, on the development of an Australian Community Care Outcomes Measure (ACCOM). In September 2015 he was an invited senior scholar for the University of Vienna’s research laboratory on Practices of Care. | https://na.eventscloud.com/ehome/138831/323461/ |
Christmas Carols Jigsaw is a free online puzzle and jigsaw game. You can select one of the 12 images and then choose one of three modes: easy with 25 pieces, medium with 49 pieces, or hard with 100 pieces. Have fun and enjoy!
How to Play?
Follow these instructions to play this game. Also, you can find other instructions in-game.
Use your mouse to play the game or tap on the screen
Christmas Carols Jigsaw : If you like this game you might like these too
christmas fun html5 jigsaw kids mobile puzzle
Christmas Carols Jigsaw : Watch Walkthrough
Is it hard to complete the levels of this game? Let's watch the how-to-play video.
This event has taken place
Congratulations to everyone who took part. We're waiting on information about the next edition of this event.
Did you race? Let the community know how you got on and leave a review.
Want to see all previous and future editions of this event, find them below.
The Castle Swim Series - Hever Castle
Hever, Kent. Hever Rd, Hever, Edenbridge TN8 7NH, UK
Sat 26th September 2020
3 races
£26.00 - £41.00
COVID-19 INFORMATION
In light of the recent government announcement (Monday 22 Feb), we are very encouraged to see a return to organised outdoor sport from March 29. All our 2021 events will fall in line with National Governing Body guidance and we are therefore very confident that we will be able to deliver a safe and successful season of mass participation events. We look forward to kicking off our season with the Cholmondeley Castle Triathlon and Multi-Sport Festival on June 19th/20th.
Organiser's Description
Races
- 1 Mile Swim – £26.00 CLOSED (Age 13 and Over)
- 2.5 Km Swim – £34.00 CLOSED (Age 16 and Over)
- 5 Km Swim – £41.00 CLOSED (Age 18 and Over)
How To Get There
Hever Castle is situated three miles South East of Edenbridge off the B2026 between Sevenoaks and East Grinstead in the village of Hever. Situated 30 miles from London in West Kent, exit the M25 at junctions 5 or 6 and follow the Brown tourist signs. Also well signed from the Hildenborough exit of the A21.
With such a busy event, we urge you to check your journey time and allow plenty of time to access the car parks. All car parks are in fields adjoining the event. Please follow the instructions below to avoid delays:
On approaching the venue, be sure to follow the yellow triathlon car park signs. DO NOT FOLLOW YOUR SAT NAV.
Please follow directions from marshals.
Many of you will be arriving while competitors are out on the cycle course, so please drive with caution.
It is roughly a 10–15 minute walk to the event village.
Camping
The Castle Triathlon Series is offering camping at all UK and Irish castles (but we don’t supply the tent). If you are a camper with a tent or have a Campervan or Caravan take this fantastic opportunity to stay within the spectacular grounds of the Castle at which you are competing.
There will be toilet and showering facilities available at all sites. There will also be an electric hook-up at Castle Howard and Hever Castle which can be added to your booking on the online shop. There is no human waste disposal for caravans or camper vans on any of the Castle camping sites.
Prices for camping at the Castles are as follows:
Tent (2 people) – £25
Extra Person – £8.00 per night
No charge for under 5’s
Cut Off Times
For health and safety reasons for both the competitors and swim safety team we have to have cut-off times in the water. These are as follows:
1 Mile – 90 mins
2.5K – 2 hours
5K – 3 hours
What's Included
All competitors will receive a medal
Festival theme with live music, a climbing wall, children’s inflatables, yoga and an array of retail outlets and concessions.
A variety of locally-sourced foods, with succulent hog roasts and wood fired pizza just two of the highlights, with a huge variety of refreshments for competitors and spectators alike. | https://findarace.com/events/the-castle-swim-series-hever-castle |
This hotel is located in Paris' 18th district, near the Sacré Coeur. Place du Tertre and the northern Paris flea market are just a few minutes away, and the Sacred Heart Basilica is just 10 minutes' walk away. The nearest metro station is Marcadet-Poissonniers. The hotel has a 6-storey main building with a total of 47 rooms, of which 8 are singles and 39 are doubles. There is a 24-hour reception and a TV room. Wireless Internet connection is available throughout the hotel. The en suite rooms are functionally furnished and fully equipped as standard. Guests may enjoy a full continental breakfast every morning served at the hotel. In addition, the Marcadet-Poissonniers and Château Rouge metro stations offer direct access to the Les Halles shopping district, the Latin Quarter, Montparnasse and Saint-Lazare Stations, and the Orsay Museum.
Key points
- •Internet access
- •Wi-fi
- •Wired Internet (*)
- •Check-in hour 14:00 - 00:00
- •Check-out hour 01:00 - 12:00
Payment methods:
Description Amarys Simart
- • Single rooms: 8
- • Number of floors (main building): 6
- • Total number of rooms: 47
- • Double rooms: 39
Hotel type
- • hotel
Room facilities (Standard room)
- • Bathroom
- • Central heating
- • Direct dial telephone
- • Hairdryer
- • Shower
- • Soundproof room
- • TV
- • Cot on demand
- • Disability-friendly bathroom
- • Extra beds on demand
- • Internet access
- • Living room
- • Smoking rooms
- • Wheelchair-accessible
- • Wi-fi
Facilities
- • Hotel safe
- • Luggage room
- • Mobile phone coverage
- • Transfer service (*)
- • Check-in hour: 14:00 - 0:00
- • Check-out hour: 1:00 - 12:00
- • 24-hour reception
- • Car park
- • Garage
- • Large pets allowed (over 5 kg)
- • Small pets allowed (under 5 kg)
- • Wheelchair-accessible
- • Wi-fi
- • Wired Internet (*)
Restaurant
- • Snacks
Business
- • Fax (*)
- • Photocopier (*)
Distances (in miles)
- • Nearest Bus / Metro Stop: 0.16 mi
Interest points
- • Moulin Rouge: 1.24 mi
- • Basilica of the Sacred Heart of Paris: 0.56 mi
- • Paris Opera: 1.74 mi
- • Galeries Lafayette: 1.55 mi
(*) Additional services not included in the price.
Hotels nearby Amarys Simart
- At 0.16 miles
Montmartre
Located just a 10-minute walk from Montmartre, this splendid hotel is right in the middle of a bustling city. Bakeries, bars and restaurants are within walking distance and Château Rouge Metro Station is only 400 metres away, offering direct access to Notre-Dame. Guests can take a 20-minute walk to visit the Moulin Rouge. All rooms and studios feature a flat-screen TV, heating and a private bathroom. There is also an equipped kitchenette with a microwave, refrigerator and stove for those who prefer to cook their own meals. Guests will be sure to have a delightful stay at this establishment.
From$97.00
See more
- At 0.22 miles
Montclair by River Hotels
This charming hotel is situated in Arr18:Montmartre-Sacré Coeur. The total number of guest rooms is 40. Wi-Fi internet connection is available for further comfort and convenience. The reception is open 24/7. Cots are not available at Montclair. Pets are not allowed at Montclair. Some services of Montclair may be payable. Groups of more than 8 people are not allowed; please contact our desk for group requests. A valid credit card in your name, together with photo ID, will be requested at check-in. The minimum age to check in is 18. In case of a no-show or an arrival outside the scheduled time, the total amount charged by the hostel is not refundable and cannot be transferred. Additional charges may be requested for arrivals outside the scheduled time. Any cancellation or delay must be notified by email.
From$88.00
See more
- At 0.25 miles
Du Globe 18
This cosy hotel is situated in Arr18:Montmartre-Sacré Coeur. A total of 25 guest rooms are available for guests' convenience. Guests can stay connected with internet and Wi-Fi access in public areas; Wi-Fi is also available in the bedrooms. Customers will appreciate the 24-hour reception. This property offers a specially designed family room, including a cot for children. Common areas are accessible to wheelchair users. Pets are not allowed at this hotel.
Hubble Space Telescope captures stunning close-up of Orion Nebula
- Herbig-Haro object HH 505 is around 1000 light-years from the Earth.
- HH objects are bright patches of nebulosity associated with newborn stars.
- The photograph was created with 520 ACS images in five different colors to get the sharpest view ever.
The Hubble telescope has taken a new magical image of the Orion Nebula.
One of the most beautiful and spectacular parts of the night sky is the Orion constellation. The Orion Nebula is one of the Milky Way's most studied and photographed objects and a nest of material where young stars are being formed. Alnitak, Saif, and Rigel are floating in a large, dense cloud of interstellar dust and gas between the stars.
“This celestial cloudscape captured the colorful region surrounding the Herbig-Haro object HH 505,” read the European Space Agency's (ESA) official website.
Herbig–Haro (HH) objects are bright patches of nebulosity associated with newborn stars.
The magnificent cloud is a valuable laboratory for studying star formation because of its proximity, 1,344 light-years from the Sun. Spanning 24 light-years, it is large and close enough to be visible to the naked eye, ScienceAlert reported on Sunday.
“This observation was also part of a spellbinding Hubble mosaic of the Orion Nebula, which combined 520 ACS images in five different colors to create the sharpest view ever taken of the region,” ESA documented.
“The Orion Nebula is awash in intense ultraviolet radiation from bright young stars.”
Hubble can clearly see the shockwaves created by the outflows, and this radiation also highlights the slower-moving currents of stellar material. This enables astronomers to observe jets and outflows up close and understand their structures.
What are Herbig–Haro (HH) objects?
HH objects are created when narrow jets of partly ionized gas, expelled by stars at speeds of several hundred kilometers per second, collide with nearby, slower-moving clouds of gas and dust.
The objects are frequently observed around a single star, aligned with its rotating axis, in star-forming regions. Although some have been seen several parsecs away, the majority of them are located within around one parsec of the source.
Parsec is a unit of distance used in astronomy, equal to about 3.26 light-years.
These objects, which are bright regions surrounding young stars, are created when stellar winds or jets of gas emitted by these stars collide violently with neighboring gas and dust.
In the case of HH 505, these outflows originate from the star IX Ori, which lies on the outskirts of the Orion Nebula around 1000 light-years from Earth.
The outflows themselves are visible as gracefully curving structures at the top and bottom of this image and are distorted into sinuous curves by their interaction with the large-scale flow of gas and dust from the core of the Orion Nebula.
Some interesting Hubble facts
Hubble has covered a distance equivalent to a trip to Neptune, the farthest planet in our solar system.
The telescope has peered back into the distant past to more than 13.4 billion light-years from Earth. Since its mission began in 1990, Hubble has made more than 1.3 million observations.
The new wallpaper size image can be downloaded from the Hubble website.
3rd Step of Camp NaNo Prep
"If you fail to plan, you plan to fail" - that quote has been attributed to Benjamin Franklin, but I couldn't confirm that. No matter who said it, it is a fairly solid, and I believe, mostly true statement. Especially when it comes to writing your first book.
If you haven't put a plan in place (in other words, scheduled your writing times), then you are less likely to get any writing done because let's face it - Life's gonna life. In other words, distractions abound! Excuses will fall from the sky like life-giving rain. And no, a plan won't magically make those distractions or excuses go away, but it will reduce their ability to stop you from getting your word count done for the week.
So Step 3 is this: Schedule Your Writing Sessions for the month! Grab a calendar for April (paper, digital...doesn't matter, just have a calendar you can write on or edit), and let's determine when and how many words you need to write during Camp.
Note - The following tips are based on taking 30 days to draft a 50K word novel per the NaNoWriMo guidelines. Also, also...and this is important. Self Care (ie. REST, FUN, RECREATION) should factor into this process.
First, block off days/times for activities that foster your mental and emotional health. Also known as, SELF CARE! Things like rest, medical/dental appointments, nail appointments, massages, time binge-watching your favorite show, whatever it is that soothes your soul and refills your cup; put all of that on the calendar first (include times).
Next? Block time commitments that "can't" be broken, i.e. work hours, family functions, the kid's games, etc. Again, be sure to include the time for each activity, including travel times. I had an hour and a half commute on workdays so my time block in the morning for work went from 6:30 am to 11:00 am. That was a time commitment I had very little control over changing.
After that, take a fun color and draw blocks to encompass the minutes or hours you have that are free for you to squeeze in some writing time. If you have five minutes, draw a block. If you have a full four hours, block it off. You don't have to use it all, but it's good to know it's there.
Put that calendar somewhere you can get to it. Then, when the month starts, you can adjust the schedule on the daily. Life interrupts with an unexpected whatever? No stress, you can easily scan the calendar and see when your next writing session is and act accordingly.
Oh, my bad, I forgot to tell you how to figure out your word count for your sessions. *Warning* Here comes some math. This is based on the traditional NaNoWriMo word goal of 50,000 words for the month. Your word count will vary depending on what you're writing, but that's a whole other blog post.
Final Word Count Goal divided by 30 = number of words you need to write each day.
Ex. 50,000 ÷ 30 = 1,667 (rounded up) each day.
1,923 words a day if you take one full rest / self-care day each week (50,000 ÷ 26 writing days).
But since you've already identified how many minutes/hours you have to write and when, you need to do some additional number crunching to make things work. *ahem*
Final Word Count Goal divided by the total number of hours you have to write = the number of words you need to write each hour you sit down to write. The following example is based on my schedule back when I worked for "the man"; my free hours for the month totaled 48 (I took all of Sunday off no matter what).
Ex. 50,000 ÷ 48 = 1,042 words an hour
Divide that number by 60 and you have the number of words you need to write each minute. Then...no matter how many minutes you have to write that day, you know how many words need to be on the page at the end of your writing session.
Ex. 1,042 ÷ 60 = 17.37 words per minute.
Sooo, if I had a five-minute writing session, I knew I needed to have 86.83 words on the page.
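If you'd rather let a few lines of Python do the number crunching, here's the whole chain of formulas above in one place (the 48 free hours are just my example figure; plug in your own):

```python
GOAL = 50_000          # NaNoWriMo-style monthly word goal
FREE_HOURS = 48        # total free hours you blocked off on your calendar

words_per_day = GOAL / 30               # ~1,667 over a straight 30 days
words_per_hour = GOAL / FREE_HOURS      # ~1,042 with 48 free hours
words_per_minute = words_per_hour / 60  # ~17.4

session_minutes = 5
session_target = words_per_minute * session_minutes  # ~87 words
print(round(words_per_day), round(words_per_hour), round(session_target))
```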
If it's still confusing, feel free to drop a question in the comments below. I'll do what I can to clarify.
Alright, that's going to do it for this one. Come back next Friday for the last step to prep. And we'll be ready to go to Camp.
As always, I send you light and inspiration. | https://www.nowatapressllc.com/post/3rd-step-of-camp-nano-prep |
In the United States and Canada, the terms learning disability, learning disabilities, and learning disorders (LD) refer to a group of disorders that affect a broad range of academic and functional skills including the ability to speak, listen, read, write, spell, reason and organize information.
A learning disability is not indicative of low intelligence. Indeed, research indicates that some people with learning disabilities may have average or above-average intelligence. Causes of learning disabilities include a deficit in the brain that affects the processing of information.
Types of learning disabilities
Learning disabilities can be categorized either by the type of information processing that is affected or by the specific difficulties caused by a processing deficit.
Information processing deficits
Learning disabilities fall into broad categories based on the four stages of information processing used in learning: input, integration, storage, and output.
Input
This is the information perceived through the senses, such as visual and auditory perception. Difficulties with visual perception can cause problems with recognizing the shape, position and size of items seen. There can be problems with sequencing, which can relate to deficits with processing time intervals or temporal perception. Difficulties with auditory perception can make it difficult to screen out competing sounds in order to focus on one of them, such as the sound of the teacher’s voice. Some children appear to be unable to process tactile input. For example, they may seem insensitive to pain or dislike being touched.
Integration
This is the stage during which perceived input is interpreted, categorized, placed in a sequence, or related to previous learning. Students with problems in these areas may be unable to tell a story in the correct sequence, unable to memorize sequences of information such as the days of the week, able to understand a new concept but be unable to generalize it to other areas of learning, or able to learn facts but be unable to put the facts together to see the “big picture.” A poor vocabulary may contribute to problems with comprehension.
Storage
Problems with memory can occur with short-term or working memory, or with long-term memory. Most memory difficulties occur in the area of short-term memory, which can make it difficult to learn new material without many more repetitions than is usual. Difficulties with visual memory can impede learning to spell.
Output
Information comes out of the brain either through words, that is, language output, or through muscle activity, such as gesturing, writing or drawing. Difficulties with language output can create problems with spoken language, for example, answering a question on demand, in which one must retrieve information from storage, organize our thoughts, and put the thoughts into words before we speak. It can also cause trouble with written language for the same reasons. Difficulties with motor abilities can cause problems with gross and fine motor skills. People with gross motor difficulties may be clumsy, that is, they may be prone to stumbling, falling, or bumping into things. They may also have trouble running, climbing, or learning to ride a bicycle. People with fine motor difficulties may have trouble buttoning shirts, tying shoelaces, or with handwriting.
Specific learning disabilities
Deficits in any area of information processing can manifest in a variety of specific learning disabilities. It is possible for an individual to have more than one of these difficulties. This is referred to as comorbidity or co-occurrence of learning disabilities.
Reading disability
This is the most common learning disability. Of all students with specific learning disabilities, 70%-80% have deficits in reading. The term "dyslexia" is often used as a synonym for reading disability; however, many researchers assert that there are different types of reading disabilities, of which dyslexia is one. A reading disability can affect any part of the reading process, including difficulty with accurate and/or fluent word recognition, word decoding, reading rate, prosody (oral reading with expression), and reading comprehension. Before the term "dyslexia" came to prominence, this learning disability used to be known as "word blindness."
Common indicators of reading disability include difficulty with phonemic awareness — the ability to break up words into their component sounds, and difficulty with matching letter combinations to specific sounds (sound-symbol correspondence).
Writing disability
Impaired written language ability may include impairments in handwriting, spelling, organization of ideas, and composition. The term “dysgraphia” is often used as an overarching term for all disorders of written expression. Others, such as the International Dyslexia Association, use the term “dysgraphia” exclusively to refer to difficulties with handwriting.
Math disability
Sometimes called dyscalculia, a math disability can cause such difficulties as learning math concepts (such as quantity, place value, and time), difficulty memorizing math facts, difficulty organizing numbers, and understanding how problems are organized on the page. Dyscalculics are often referred to as having poor “number sense”.
Nonverbal learning disability
Nonverbal learning disabilities often manifest in motor clumsiness, poor visual-spatial skills, problematic social relationships, difficulty with math, and poor organizational skills. These individuals often have specific strengths in the verbal domains, including early speech, large vocabulary, early reading and spelling skills, excellent rote-memory and auditory retention, and eloquent self-expression. | https://paknarf.com/index.php/learning-disorders/ |