# $n$th Roots
A square root of a number $b$, written $\sqrt{b}$, is a solution of the equation ${x}^{2}=b$.
Example: $\sqrt{49}=7$, because ${7}^{2}=49$.
Similarly, the cube root of a number $b$, written $\sqrt[3]{b}$, is a solution to the equation ${x}^{3}=b$.
Example: $\sqrt[3]{64}=4$, because ${4}^{3}=64$.
More generally, the ${n}^{\text{th}}$ root of $b$, written $\sqrt[n]{b}$, is a number $x$ which satisfies ${x}^{n}=b$.
The ${n}^{\text{th}}$ root can also be written as a fractional exponent:
$\sqrt[n]{b}={b}^{\frac{1}{n}}$
## When does the ${n}^{\text{th}}$ root exist, and how many are there?
If you are working in the real number system only, then
• If $n$ is an even whole number, the ${n}^{\text{th}}$ root of $b$ exists whenever $b$ is positive or zero.
• If $n$ is an odd whole number, the ${n}^{\text{th}}$ root of $b$ exists for all $b$.
Examples:
$\sqrt[4]{-81}$ is not a real number.
$\sqrt[5]{-32}=-2$
If you are working in the complex number system, then things get more, well, complex.
Here every number has $2$ square roots, $3$ cube roots, $4$ fourth roots, $5$ fifth roots, etc.
For example, the $4$ fourth roots of the number $81$ are $3, -3, 3i$, and $-3i$, because:
$\begin{array}{l}{3}^{4}=81\\ {\left(-3\right)}^{4}=81\\ {\left(3i\right)}^{4}={3}^{4}{i}^{4}=81\\ {\left(-3i\right)}^{4}={\left(-3\right)}^{4}{i}^{4}=81\end{array}$
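As a quick check of the formula $\sqrt[n]{b}={b}^{\frac{1}{n}}$ and of the claim that every nonzero number has $n$ complex ${n}^{\text{th}}$ roots, here is a minimal Python sketch (the function name `nth_roots` is ours, not part of the text above):

```python
import cmath

def nth_roots(b, n):
    """Return all n complex n-th roots of b, using the polar form b = r * e^(i*phi)."""
    r, phi = cmath.polar(complex(b))
    return [cmath.rect(r ** (1.0 / n), (phi + 2 * cmath.pi * k) / n) for k in range(n)]

print(81 ** 0.25)          # the real 4th root: 3.0
print(nth_roots(81, 4))    # approximately 3, 3i, -3, -3i
print(nth_roots(-32, 5))   # includes -2, the real 5th root of -32
```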
# Planar Projection
### Projection Parameters
Each of the parameters uses either the world-coordinate (WC) or viewing reference-coordinate (VRC) system. The WC system uses the standard x, y, and z axes, while the VRC system uses the u, v, and n axes.
• The View Reference Point (VRP) is the point (WC) from which the camera is viewing the 3D geometry.
• The View Plane Normal (VPN) is the normal (WC) which, once projected, defines the n axis.
• The View Up Vector (VUP) is the vector (WC) that defines the orientation of the camera (i.e. which way is up) and, once projected, defines the v axis.
• The Projection Reference Point (PRP) is the point (VRC) ...
• The Viewing Window is the rectangle (VRC) that defines the size of the 2D window upon which the 3D geometry will be projected. It is defined by umin, umax, vmin, and vmax.
• The projection type can be either parallel or perspective.
## Parallel Projection
### Orthographic Projection
The first step is to translate the VRP to the origin, which is achieved by applying the following matrix T (the negated VRP components appear in its last column):
$T = \left[ \begin{array}{cccc} 1 & 0 & 0 & -vrp_{x} \\ 0 & 1 & 0 & -vrp_{y} \\ 0 & 0 & 1 & -vrp_{z} \\ 0 & 0 & 0 & 1 \end{array} \right]$
The second step is to rotate the VPN onto the z axis and the VUP onto the y axis. To do this we calculate the following vectors:
• $R_{z} = \frac{VPN}{|VPN|}$
• $R_{x} = \frac{VUP \times R_{z}}{|VUP \times R_{z}|}$
• $R_{y} = R_{z} \times R_{x}$
The components of these vectors (e.g. Rx = < rx1, rx2, rx3 >) then form the rotation matrix R:
$R = \left[ \begin{array}{cccc} r_{x_{1}} & r_{x_{2}} & r_{x_{3}} & 0 \\ r_{y_{1}} & r_{y_{2}} & r_{y_{3}} & 0 \\ r_{z_{1}} & r_{z_{2}} & r_{z_{3}} & 0 \\ 0 & 0 & 0 & 1 \end{array} \right]$
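As an illustration of steps one and two, here is a minimal numpy sketch (the function and variable names are ours, not from the article):

```python
import numpy as np

def translation_T(vrp):
    """Step 1: build T, which translates the VRP to the origin."""
    T = np.eye(4)
    T[:3, 3] = -np.asarray(vrp, dtype=float)
    return T

def rotation_R(vpn, vup):
    """Step 2: build R from R_z = VPN/|VPN|, R_x = (VUP x R_z)/|VUP x R_z|, R_y = R_z x R_x."""
    rz = np.asarray(vpn, dtype=float)
    rz = rz / np.linalg.norm(rz)
    rx = np.cross(np.asarray(vup, dtype=float), rz)
    rx = rx / np.linalg.norm(rx)
    ry = np.cross(rz, rx)
    R = np.eye(4)
    R[0, :3], R[1, :3], R[2, :3] = rx, ry, rz   # rows are the components of R_x, R_y, R_z
    return R
```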
The third step is to shear the geometry so that the direction of projection (DOP) is parallel to the VPN (now aligned with the z axis). The DOP is defined as follows:
$DOP = \left[ \begin{array}{c} \frac{u_{max} + u_{min}}{2} - prp_{u} \\ \frac{v_{max} + v_{min}}{2} - prp_{v} \\ -prp_{n} \\ 1 \end{array} \right]$
To align the DOP with the z axis, the shear must cancel its x and y components. Since $SH_{par} \cdot DOP = \left[ \begin{array}{c} dop_{x} + sh_{x}\,dop_{z} \\ dop_{y} + sh_{y}\,dop_{z} \\ dop_{z} \\ 1 \end{array} \right]$, the shear factors are $sh_{x} = -\frac{dop_{x}}{dop_{z}}$ and $sh_{y} = -\frac{dop_{y}}{dop_{z}}$.
$SH_{par} = \left[ \begin{array}{cccc} 1 & 0 & sh_{x} & 0 \\ 0 & 1 & sh_{y} & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right]$
TODO: add the translation (Tpar) and scale (Spar) matrices that map the sheared view volume onto the canonical view volume.
The final transformation matrix for orthographic projection is then the result of the following multiplication, where T is the translation by −VRP defined above:
$N_{par} = S_{par} \cdot T_{par} \cdot SH_{par} \cdot R \cdot T$
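Continuing the sketch above (still hypothetical code, not from the article), the shear and the overall composition might look as follows; T_par and S_par are left as identity placeholders because their exact forms are still marked as TODO:

```python
def shear_SH_par(dop):
    """Step 3: shear aligning the DOP with the z axis (sh_x = -dop_x/dop_z, sh_y = -dop_y/dop_z)."""
    SH = np.eye(4)
    SH[0, 2] = -dop[0] / dop[2]
    SH[1, 2] = -dop[1] / dop[2]
    return SH

def N_par(vrp, vpn, vup, dop, T_par=np.eye(4), S_par=np.eye(4)):
    """N_par = S_par . T_par . SH_par . R . T, with the right-most matrix applied first."""
    return S_par @ T_par @ shear_SH_par(dop) @ rotation_R(vpn, vup) @ translation_T(vrp)
```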
# Tag Info
Accepted
### How "hard" is it to maximize a polynomial function subject to linear constraints?
Your problem is NP-hard, even for polynomials of degree 2. The crucial reference is Theodore Motzkin and Ernst Straus (1965), "Maxima for graphs and a new proof of a theorem of Turán" ...
Accepted
### Is there a counterexample to this work?
Predecessor versions of this paper have been around for more than 15 years. I remember that there were counter-examples to the first versions, then first revisions, counter-examples to the first ...
Accepted
### Intuitively, why is the complementary slackness condition true?
As you have noted, complementary slackness follows immediately from strong duality, i.e., equality of the primal and dual objective functions at an optimum. Complementary slackness can be thought of ...
Accepted
### Why is complementary slackness important?
Complementary slackness is key in designing primal-dual algorithms. The basic idea is: Start with a feasible dual solution $y$. Attempt to find primal feasible $x$ such that $(x, y)$ satisfy ...
### Finding the sparsest solution to a system of linear equations
Consider the problem $\text{MAX-LIN}(R)$ of maximizing the number of satisfied linear equations over some ring $R$, which is often NP-hard, for example in the case $R=\mathbb{Z}$. Take an instance of ...
Accepted
### Checking equivalence of two polytopes
I cannot say for sure if you will consider the following approach as better, but from a complexity-theoretic point of view there is a more efficient solution. The idea is to rephrase your question in ...
I find the geometric interpretation useful. Say we have the primal as $\max c x$ subject to $Ax \le b$ and $x \ge 0$. We know that optimum solutions are vertices of the polytope defined by the ...
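As a concrete illustration of complementary slackness (an editorial addition, not part of any of the original answers), here is a minimal Python sketch that solves a small primal/dual LP pair with scipy.optimize.linprog and checks that the slackness products vanish at the optimum:

```python
import numpy as np
from scipy.optimize import linprog

# Primal: max c^T x  s.t.  A x <= b, x >= 0   (linprog minimizes, so negate c)
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# Dual: min b^T y  s.t.  A^T y >= c, y >= 0   (rewrite >= constraints as <= by negating)
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3, method="highs")

x, y = primal.x, dual.x
print("dual var * primal slack:", y * (b - A @ x))      # ~0 componentwise
print("primal var * dual slack:", x * (A.T @ y - c))    # ~0 componentwise
```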
## Abstract
We identify a novel contextual variable that alters the evaluation of delayed rewards in healthy participants and those diagnosed with attention deficit/hyperactivity disorder (ADHD). When intertemporal choices are constructed of monetary outcomes with rounded values (e.g., $25.00), discount rates are greater than when the rewards have nonzero decimal values (e.g., $25.12). This finding is well explained within a dual system framework for temporal discounting in which preferences are constructed from separate affective and deliberative processes. Specifically, we find that round dollar values produce greater positive affect than do nonzero decimal values. This suggests that relative involvement of affective processes may underlie our observed difference in intertemporal preferences. Furthermore, we demonstrate that intertemporal choices with rounded values recruit greater brain responses in the nucleus accumbens to a degree that correlates with the size of the behavioral effect across participants. Our demonstration that a simple contextual manipulation can alter self-control in ADHD has implications for treatment of individuals with disorders of impulsivity. Overall, the decimal effect highlights mechanisms by which the properties of a reward bias perceived value and consequent preferences.
## INTRODUCTION
Problems with self-control are some of the most detrimental for individuals as well as society, with obesity, excessive debt, and substance abuse representing major health and economic concerns (Madden & Bickel, 2009; Reynolds, Leraas, Collins, & Melanko, 2009; Madden, Petry, Badger, & Bickel, 1997). These issues all have one feature in common: People opt for more immediately rewarding options and undervalue future benefits to their overall detriment. To understand such phenomena, research has posited that future outcomes are evaluated using hyperbolic or quasihyperbolic discount functions, which effectively describe the tendency to overvalue immediate rewards (Frederick, Loewenstein, & O'Donoghue, 2002). In these functions, value rapidly decreases as rewards are delayed from the present and decreases more slowly as rewards are delayed from future times.
The discount rate expressed in hyperbolic discounting is the critical factor determining relative preferences for immediate rewards. Discount rates depend on a wide variety of contextual and personal variables, such as the nature of the reward, its modality (McClure, Ericson, Laibson, Loewenstein, & Cohen, 2007; Bickel & Marsch, 2001), its magnitude (Green, Myerson, & McFadden, 1997; Thaler, 1981), and even the scent in the experimental room (Li, 2008). Individual factors that predict differences in delay discounting include age (Steinberg, 2010; Sozou & Seymour, 2003; Green, Fry, & Myerson, 1994), health (Chao, Szrek, Pereira, & Pauly, 2009), intelligence (Shamosh et al., 2008), and some psychiatric disorders (Ahn et al., 2011; Heerey, Robinson, McMahon, & Gold, 2007). Peters and Büchel (2011) refer to these dependencies as trait (immutable, e.g., person-related) and state (mutable, framing/context) factors that affect discounting rates. The prototypical disorder associated with greater discounting and poor self-control is attention deficit/hyperactivity disorder (ADHD; Marco et al., 2009; Paloyelis, Asherson, & Kuntsi, 2009; Tripp & Alsop, 1999; Schweitzer & Sulzer-Azaroff, 1988, 1995; Rapport, Tucker, DuPaul, Merlo, & Stoner, 1986).
Process theories of temporal discounting propose a dual system model of decision-making to begin to capture the many influences on relative preferences for immediate reward (van den Bos & McClure, 2013). The first system is posited to be myopic in nature and is linked to positive emotional reactions to rewards. We use the term “affective” to represent this system (Loewenstein, 1996), which is thought to be subserved by brain areas including the nucleus accumbens (NAcc) in the ventral striatum, the ventromedial pFC (vmPFC), and other areas involved in evaluating rewards (Kable & Glimcher, 2007; McClure et al., 2007; McClure, Laibson, Loewenstein, & Cohen, 2004). These brain reward regions have been linked to affective responses (Knutson & Greer, 2008; Panksepp, 2004) and are thought to signal reward value in a stereotyped manner acquired through associative learning (Daw, Niv, & Dayan, 2005; Schultz, Dayan, & Montague, 1997). The second process is hypothesized to be far sighted in nature, slow and rule-based in response, but flexible enough to adaptively control behavior. We refer to this as the “deliberative” system. It is thought to be subserved by the dorsolateral pFC (dlPFC) and posterior parietal cortex (pPC; McClure et al., 2004, 2007).
Here we explore a novel effect on temporal discounting that appears to arise from differences in affective responses to reward prospects. The effect results from changing a seemingly innocuous feature of offered monetary rewards. Specifically, within-subject discount rates differ when choices are constructed from monetary rewards with rounded decimal values (e.g., $25.00) or numbers with nonzero decimal value (e.g., $25.12). Individuals tend to choose more impulsively when the choice is constituted of monetary rewards that are rounded numbers. We refer to this as the decimal effect. As rounded decimal amounts ($25.00) are more common in daily experience than are nonzero decimal values ($25.12; with .99 a possible exception), we speculate that this effect may result from greater familiarity and hence perceptual fluency with rounded dollar values (cf. Oppenheimer & Frank, 2008; Alter & Oppenheimer, 2006). Our primary aim is to provide a process account of the decimal effect. On the basis of data from several experiments, we will argue that nonzero decimal values in monetary rewards influence affective responses to the rewards and consequently influence how individuals trade off present for delayed rewards.
Our first study, Experiment 1, demonstrates the decimal effect. In Experiment 2, we show behavioral evidence that the decimal effect is related to increased positive affect to rounded monetary rewards. In Experiment 3, we provide fMRI evidence to support our main conclusions. In Experiment 4, we provide an extension of the decimal effect, testing whether rounded values have the ability to increase the value of delayed rewards. Our final study, Experiment 5, examines the decimal effect across a wide developmental period between typically developing controls and participants with ADHD.
## EXPERIMENT 1
Affective processes may signal value in an automatic, stereotyped manner that is slowly acquired through experience. We hypothesized that differential experience with monetary rewards with rounded values relative to nonzero decimal values may bias how the rewards are processed by facilitating automatic responses and consequently influencing intertemporal preferences (Butterworth, 1999). We tested this prediction in our first experiment.
### Methods
#### Participants
We recruited 28 participants; 12 at Stanford University (eight men, mean age = 20.26 years, range = 18–22 years) and 16 from Baylor College of Medicine and the greater Houston area community (10 men, mean age = 26.38 years, range = 20–36 years). (See Table 1 for inclusion/exclusion criteria for all studies and Table 2 for demographic data for Experiments 1–4.) We excluded one participant from each site because they failed to submit choices on all trials. Participants from Baylor College of Medicine completed the task while undergoing fMRI scanning (see Experiment 3).
Table 1.
Inclusion/Exclusion Criteria for All Experiments
Experiments 1–4
Inclusion criteria:
• HC: Ages 18–50
Exclusion criteria:
• HC: Clinical history of neurological, major medical, or psychiatric disorder
• HC: fMRI contraindications^a
Experiment 5
Inclusion criteria:
• IQ over 80 as per the WASI
• HC: t score of 60 or lower on the total DSM ADHD score
• HC: 3 or more inattentive and 3 or more hyperactive/impulsive DSM symptoms
• ADHD: t score of 65 or higher on the total DSM ADHD score
• ADHD: 6 or more inattentive and 6 or more hyperactive/impulsive DSM symptoms
• ADHD: Significant symptoms before age 7 and across at least two domains (e.g., home and school/work)
Exclusion criteria:
• HC and ADHD: Clinical history of neurological, major medical, or psychiatric disorder
• HC: History of treatment with psychoactive medication
• HC: fMRI contraindications^a
HC = healthy control.
^a Exclusion for Experiment 3.
Table 2.
Demographic and Clinical Characteristics for Participants in Experiments 1–4
| Experiment | Group | Age | Age Range | Gender (male) | n |
|---|---|---|---|---|---|
| 1 | HC | 22.4 | 18–36 | 17 | 42 |
| 2 | HC | 29.5 | 19–50 | 19 | 40 |
| 3 | HC | 26.1 | 20–36 |  | 16 |
| 4 | HC | 35.5 | 19–45 | 92 | 183 |

Data are summarized as means for the continuous variables.
#### Materials
Each participant was presented with 62 intertemporal choices offering an immediate reward and a larger but delayed reward. For half of the choice trials, rewards had rounded decimal values (e.g., $11.00 today or $21.00 in 6 weeks; rounded condition). The other half had only nonzero decimal values (e.g., $10.87 today or $20.74 in 6 weeks; decimal condition). We omitted decimal values of .25, .50, .75, and .99, as these are common numbers and may have intermediate effects between our rounded and nonzero decimal values. Trials were presented in random order.
The choice trials were derived from the hyperbolic discounting function (Mazur, 1987) that models subjective value as a function of delay according to the function (Equation 1)

$V = \frac{r}{1 + kd}$
where r is the magnitude of the reward, d is the delay until receipt, and V is the discounted value. For each trial, a unique discount rate, k_eq, implies indifference between the immediate reward and the discounted, delayed reward. Choices were constructed so that each trial in the rounded condition matched a trial in the decimal condition with an equal discount rate (k_eq) and delay. For the rounded value rewards, magnitudes spanned a range of $2 to $33; nonzero decimal values ranged from $2.14 to $32.90. Delayed rewards were available between 7 and 56 days in the future (in 7-day increments). Reward magnitudes could not be exactly equated; thus, half of the decimal values were slightly larger and the other half slightly smaller than their rounded pairs. As it was not possible to make the average magnitudes exactly the same, decimal values were on average 18¢ (±$1.33) smaller than rounded values. This design ensured that both conditions spanned the same range of intertemporal trade-offs, while controlling for any bias because of differences in reward magnitude (Thaler, 1981).

#### Procedure

Participants had unlimited time on each trial to make their choice. A 2000 msec blank intertrial interval was used (see Figure 1A). The 62 trials were split into four blocks of either 15 or 16 trials, with one 15-trial and one 16-trial block for both the rounded and decimal conditions. Block order was counterbalanced according to condition, with half of participants beginning with rounded and ending with decimal trials. Trial order within each block was randomly generated.

Figure 1. (A) Intertemporal choices for monetary outcomes with nonzero and rounded decimal values elicit different temporal discount rates. (B) Discount rates are consistently higher for rounded dollar values across participants, producing a robust mean decimal effect.

We used a lottery system in which one of the participant's choices was randomly selected and paid to the participant according to the amount and delay of the selected choice. Participants were instructed to consider each choice seriously as any one could potentially be paid according to their selection. This encouraged participants to remain focused throughout the experiment and to treat all trials as equally determinant of their overall earnings.

#### Estimation of Discount Rates

For each participant and condition, discount rates were estimated by maximum likelihood. Participants' binary choices between the immediate and delayed rewards were modeled with the exponential version of the Luce choice model (Luce, 2005). If we summarize the subjective value of the two alternatives as V_1 and V_2 for the immediate and delayed rewards, respectively, then the probability of choosing the immediate outcome for an arbitrary k is given by (Equation 2)

$P(\text{Choose } V_1) = \frac{1}{1 + e^{-m\,V_\Delta(k)}}$

where V_Δ(k) is the difference V_1 − V_2 for some value of k. Likewise, the probability of choosing the delayed outcome is equal to 1 − P(Choose V_1). The parameter m captures how consistent choices are with the fitted discount function. The likelihood of any set of choices per participant is the product of the probability for each observed choice. For each condition (c), we form the likelihood function (Equation 3)

$L_c(k, m) = \prod_{\text{trials}} P(\text{Choose } V_1)^{J}\,\bigl(1 - P(\text{Choose } V_1)\bigr)^{1-J}$

where J = 1 if the immediate reward is chosen and zero otherwise.
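A minimal sketch of this estimation in Python is shown below. It uses scipy's general-purpose minimizer rather than the simulated annealing mentioned in the next paragraph, and the variable names and data are hypothetical; it is not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, amt_now, amt_later, delay, chose_now):
    """Negative log-likelihood under hyperbolic discounting plus the logistic (Luce) choice rule."""
    log_k, m = params
    k = np.exp(log_k)                      # fit log(k) so that k stays positive
    v_now = amt_now                        # the immediate reward is undiscounted
    v_later = amt_later / (1.0 + k * delay)
    p_now = 1.0 / (1.0 + np.exp(-m * (v_now - v_later)))
    p_now = np.clip(p_now, 1e-9, 1 - 1e-9)
    return -np.sum(chose_now * np.log(p_now) + (1 - chose_now) * np.log(1 - p_now))

# Hypothetical choices for one participant/condition (arrays of equal length).
amt_now = np.array([11.00, 10.87, 15.00])
amt_later = np.array([21.00, 20.74, 25.00])
delay = np.array([42.0, 42.0, 14.0])       # days
chose_now = np.array([1, 0, 1])            # J = 1 if the immediate reward was chosen

fit = minimize(neg_log_likelihood, x0=[np.log(0.01), 0.5],
               args=(amt_now, amt_later, delay, chose_now), method="Nelder-Mead")
log_k_hat, m_hat = fit.x
print("estimated k:", np.exp(log_k_hat), "choice consistency m:", m_hat)
```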
We maximized Equation 3 with respect to k and m using a simulated annealing optimization algorithm. This yields condition-specific estimates for k and m. The standard errors of the estimates were obtained by invoking the asymptotic normality of the maximum likelihood estimators.

### Results

Choices revealed the decimal effect: Participants made more impulsive decisions in the rounded relative to the decimal condition. We performed analyses on log-transformed discount rates using nonparametric tests because the distributions of log(k) were nonnormal (Kolmogorov–Smirnov tests, p < .001 for both decimal and rounded conditions). The decimal effect held among 22 of our 26 participants (see Figure 1B). Moreover, discount rates in both the decimal and rounded conditions were not significantly different across participants recruited from Stanford University and Baylor College of Medicine (Wilcoxon rank sum test; p > .24 comparing discount rates in rounded and decimal conditions). We therefore analyzed data collectively across these two groups. Comparing the estimated discount rates across conditions within participants, the mean of the differences between the log-discount rates in the rounded versus decimal conditions is positive (0.27) and significantly different from zero (sign test, p < .001).

We ruled out two potential confounds associated with the decimal effect. First, we found no difference in RT between the two conditions (mean RT rounded = 3273.04 msec; mean RT decimal = 3088.59 msec; mean rounded − decimal = 184.45 msec, SE = 138.59, t(25) = 1.28, p > .20). Second, choice consistency was not influenced by task condition. Comparing m values indicated no significant difference (Wilcoxon signed rank test, p = .67). Likewise, fitted k values predicted an average of 90.12% and 88.34% of choices in the decimal and rounded conditions, respectively (Wilcoxon signed rank test, p = .17).

Reward magnitude is also known to influence discount rates (e.g., Thaler, 1981). To rule out an influence of magnitude on our results, we split choices (by median) into low- and high-magnitude trials, collapsing across decimal conditions. We then estimated k separately for low- and high-magnitude choices per participant. We performed a sign test on the difference in log(k) values across magnitudes and found no significant difference (p = .33).

### Discussion

Consistent with our hypothesis, we found that the nature of the decimal values in monetary rewards influenced intertemporal preferences. We suggested that monetary rewards containing rounded values would be more perceptually fluent and therefore trigger affective valuation processes to a greater degree than would nonzero decimal values. As affective processes are thought to be myopic in nature (Loewenstein, 1996), this would account for our observed differences in discount rates.

## EXPERIMENT 2

Experiment 2 tested the hypothesis that rounded dollar values differ from nonzero decimal values on the basis of affective response. We primed affective processes by asking participants to rate their emotional reaction (Hsee & Rottenstreich, 2004) to the prospect of winning different amounts of money to determine how rounded and nonrounded monetary rewards are evaluated using emotionally based valuation. We manipulated decimal values while holding magnitude comparable. We hypothesized that if valuation of round numbers involves more affective processing, round numbers would generate greater positive affect than comparable nonzero decimal numbers.
The alternative hypothesis is that affective processes are unaffected by decimal value, in which case affect ratings between rounded and nonzero decimal values should not differ.

### Methods

#### Participants

A total of 54 volunteers were recruited (25 men; mean age = 28.8 years) from the Stanford community and gave written informed consent to participate. Because of a technical error in conducting the experiment, 14 participants did not complete all of the ratings and thus were excluded, leaving 40 participants for analyses.

#### Materials and Procedure

In accordance with the two-dimensional affective circumplex model of emotion (Watson, Wiese, Vaidy, & Tellegen, 1999; Watson & Tellegen, 1985), we separately assessed valence and arousal to measure the subjective emotional impact of rounded versus decimal monetary rewards. Participants received an online questionnaire asking them to make subjective assessments of 10 monetary rewards, five rounded and five with nonzero decimal values. Each rounded reward was matched to a decimal reward; in each pair, the rounded number had a smaller objective value. Each of the 10 numbers was presented in a random order, and participants were asked the following questions: Imagine you have the chance to win $25.00. How Positive or Negative would you feel? How Activated/Aroused would you feel?
Participants answered the questions using sliding scales numbered from 0 to 100 and anchored to 50 on presentation of the question.
### Results
As valence (v) and arousal (a) ratings were significantly correlated in our data ($r^2 = .54$, p < .0001), we combined these measures into a single dimension of positive arousal (PA) as our primary variable of interest (based on Knutson & Greer, 2008). A two-way, within-subject ANOVA was conducted to compare the main effects of (1) condition (rounded vs. decimal values) and (2) reward magnitude for participants' affect ratings for rewards. We found greater PA for rounded values with a significant main effect of condition, F(1, 38) = 5.48, p = .03. We also found a significant main effect of reward magnitude on PA ratings, F(4, 38) = 29.82, p < .001, with larger values eliciting more positive ratings. These results are shown in Figure 2, where we have plotted normalized ratings (z score corrected within participants across conditions) as a function of reward amount. The interaction between condition and reward magnitude was not significant (p = .18).
Figure 2.
Positive arousal reported for the prospect of earning a rounded dollar amount was larger than that reported for nonzero decimal values of marginally greater objective value. Data have been normalized within participants (z score transformed); error bars are standard errors of the mean.
Similar results held when valence or arousal were analyzed using similar ANOVAs. For valence, there was a main effect of amount, F(4, 38) = 30.50, p < .001, and condition, F(1, 38) = 4.98, p = .03, but no significant interaction (p = .38). For arousal, there was a main effect of amount, F(4, 38) = 23.96, p < .001, and a trend for condition, F(1, 38) = 3.43, p = .07, with no significant interaction (p = .23).
Because of the large age range in our participants, we conducted additional ANOVA analyses looking for a main effect of age (split into quartiles) or an Age × Reward magnitude interaction. We found no significant differences on the basis of participants' age (p > .46 for both analyses).
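For readers who want to run this kind of analysis themselves, here is a minimal sketch of a two-way within-subject ANOVA in Python using statsmodels' AnovaRM. The data frame below is made-up illustration with an assumed effect structure, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in range(1, 11):                      # 10 hypothetical participants
    for cond in ("rounded", "decimal"):
        for mag in (1, 2, 3, 4, 5):            # five reward-magnitude levels
            pa = 50 + 5 * mag + (4 if cond == "rounded" else 0) + rng.normal(0, 3)
            rows.append({"subject": subj, "condition": cond, "magnitude": mag, "pa": pa})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA: condition and magnitude as within-subject factors on PA.
print(AnovaRM(df, depvar="pa", subject="subject", within=["condition", "magnitude"]).fit())
```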
### Discussion
These results suggest that participants feel more positive arousal for monetary rewards with rounded compared with those with nonzero decimal values. Not surprisingly, they also reported feeling more positive arousal for greater magnitudes of monetary rewards. Importantly, this differential affective response overcomes the fact that rounded values were smaller in objective value.
## EXPERIMENT 3
Properties linked to affective and deliberative processes distinguish the functions of the NAcc and dlPFC in intertemporal choice (Peters & Büchel, 2011; McClure et al., 2004). Affective responses to rewards and related NAcc activity predict individual discount rates (Hariri et al., 2006). Cognitive ability correlates with dlPFC activity and lower discount rates (Shamosh & Gray, 2008; Shamosh et al., 2008). Furthermore, manipulating these systems either pharmacologically (Pine, Shiner, Seymour, & Dolan, 2010) or by direct stimulation (Figner et al., 2010) alters discount rates in the expected directions. In this study, we measure correlates of affective and deliberative processing while participants make intertemporal choices containing rounded or nonzero decimal values. Given the results from Experiment 2, we conjectured that rounded values would more effectively recruit the NAcc than would nonzero decimal values. fMRI also allows us to test whether rounded and decimal values differentially recruit deliberative processes by measuring activity in the dlPFC and pPC.
### Methods
#### Participants
Out of 28 participants in Experiment 1, the 16 participants from Baylor College of Medicine performed the task while undergoing fMRI scanning. The two participants excluded from the analysis in Experiment 1 were from this group of 16.
### Discussion
These results replicate our main finding that decimal values influence discount rates—even in those with elevated levels of impulsivity, such as ADHD. The tendency to favor immediately available rewards plays a central role in the delay aversion theory (Sonuga-Barke, Taylor, Sembi, & Smith, 1992) and the steeper and shorter delay-of-gratification gradient theory of ADHD (Sagvolden, Aase, Zeiner, & Berger, 1998). Our replication of the decimal effect in impulsive individuals is particularly significant for populations who display a greater tendency to select immediate rewards, such as adolescents and individuals with substance dependence (Madden & Bickel, 2009). Increased discounting is linked to poor health outcomes and reduced academic achievement and occupational success (Golsteyn, Gronqvist, & Lindahl, 2013). Attempting to improve self-control in individuals with heightened impulsivity by altering reward perception would be a novel approach for reducing the negative outcomes associated with impulsivity. Treatment of ADHD and substance use disorders currently involves contingency management in which rewards are given for appropriate behavior (e.g., Bickel et al., 2010; Barkley, 2006). Although the size and delay of the rewards are typically considered in developing a behavior plan, it has not been considered how to best frame or present rewards in these plans. Our findings suggest that future research should assess how framing effects could enhance the value of delayed rewards to increase self-control across conditions associated with impulsivity.
We also replicate the finding that younger individuals have higher discount rates than do older people, independent of the presence or absence of ADHD (Steinberg et al., 2009). Casey and colleagues (Casey, Duhoux, & Malter Cohen, 2010; Casey, Jones, & Hare, 2008) propose that an increase in risky behavior during adolescence is because of an imbalance between relatively more mature, subcortical brain systems versus less mature functioning in cortical regions linked to cognitive control. Studies suggest impaired modulation of hyperactive reward-related striatal regions by cognitive control regions (i.e., dlPFC) in adolescence (Christakou et al., 2011; Van Leijenhorst et al., 2010; Berns, Moore, & Capra, 2009; Galvan et al., 2006). Brain regions linked to self-control and evaluation of future outcomes (Galvan et al., 2006) mature later in development (e.g., Christakou et al., 2011; Cohen et al., 2010; Olson et al., 2009). Optimal connectivity between dlPFC and other regions (pPC, vmPFC) to support more self-controlled behavior putatively occurs in adulthood (Luna, 2009). Regions such as the NAcc, which have been associated with more impulsive choices in Experiment 3, have also been consistently implicated in ADHD impairments (Hart, Radua, Nakao, Mataix-Cols, & Rubia, 2013; Scheres et al., 2007).
## GENERAL DISCUSSION
Emotional responses have long been hypothesized to underlie the short-sighted behavior evident in choices involving tempting immediate rewards (Loewenstein, 1996; Mischel, 1974). We identify a novel effect on delay discounting consistent with this assertion: subtle features of prospective rewards can change affective responses and impatience.
A large number of effects influence how intertemporal preferences are formed (van den Bos & McClure, 2013). One potential unifying framework for understanding these diverse influences may come from positing independent neurocognitive systems that underlie the evaluation of rewards. We refer to one common dichotomy of such systems herein as affective and deliberative. We have shown that such a framework can explain how a relatively innocuous feature of an intertemporal choice, the numbers following the decimal point, comes to influence discounting. We combined behavioral and neural measures to test how decimal values alter the affective responses that distinguish these two modes of valuation. Overall, we have established a pathway whereby properties of a reward influence consequent discount rates. Although it is possible that the decimal effect is better explained by other effects such as subtle differences in sensory processing or calculation of numerical differences between the rounded and decimal conditions, we believe this is less likely. We found no evidence to support differences between the rounded and decimal conditions in visual or sensory brain regions, nor in decision-related RTs.
It remains to be seen whether the dual system framework will be sufficient to account for the number of factors known to influence intertemporal preferences. For example, people are more patient when the time of reward outcomes is expressed as an exact date as opposed to the duration of time from the present (Read, Frederick, Orsel, & Rahman, 2005). A recent fMRI study has shown that a similar manipulation, switching from delays to dates, modulates dlPFC activity, consistent with dual system theory (Peters & Büchel, 2010). Perhaps as interestingly, the dual system framework suggests novel effects. The idea for the decimal effect arose from considering ways in which we might modulate NAcc activity.
Positing two neurocognitive systems is almost certainly an oversimplification of how intertemporal preferences are actually constructed. The validity of dual system models of discounting is a source of much debate in the neuroscience literature (e.g., Hare et al., 2009; Kable & Glimcher, 2007). Nonetheless, such models have distinct advantages in accounting for numerous phenomena in delay discounting (van den Bos & McClure, 2013). One important future direction will be to relate dual system models to construal level theory (Trope & Liberman, 2003). Recent work by Fujita and colleagues has shown that priming people to think in broader, more abstract terms (high-level construal) increases self-control (Fujita & Han, 2009). It is intriguing to hypothesize that thinking more abstractly depends on the dlPFC and priming this neural system increases that self-control, but this is pure speculation at this point. We also acknowledge that there may be other plausible mechanisms than the dual processing account or the familiarity of rounded numbers that may explain the downstream effect of an increased affective response to the rounded stimuli studied herein. However, our primary goal for this project was to document the outcome of altered affective responses. Future studies will attempt to determine the mechanism underlying the outcome.
The decimal effect also suggests one avenue for interventions aiming to ameliorate the effects of impulsivity. Our approach represents a novel attempt to shift impulsive behavior in populations associated with poor self-control by manipulating the choice context. ADHD is associated with problematic functioning in brain networks implicated in both cognitive (dlPFC/pPC) and affective/reward (vmPFC/NAcc) processes (Fassbender & Schweitzer, 2006). Despite this, attempts to modify self-control in ADHD and adolescents tend to focus on teaching deliberative strategies (Dawson & Guare, 2010). It should be possible to design choice environments in ways that decrease affective responses, reduce NAcc activity, and lead to more far-sighted choices. This suggestion is very similar to Mischel and colleagues' demonstration that thinking of the abstract, physical qualities of a marshmallow increases one's ability to delay gratification and ultimately obtain more marshmallows (Mischel & Baker, 1975). The findings here suggest the neurobiological basis by which these framing effects may function. It may also be that differential neural activity relates to distinct symptom profiles in individuals with ADHD. For example, steeper discounting may be because of some combination of heightened sensitivity to immediate rewards, problems with response inhibition, or an ineffectiveness of future outcomes to influence current behavior.
Reprint requests should be sent to Samuel M. McClure, Department of Psychology, Stanford University, 450 Serra Mall, Building 420, Stanford, CA 94305, or via e-mail: [email protected].
## REFERENCES
Ahn, W. Y., Rass, O., Fridberg, D. J., Bishara, A. J., Forsyth, J. K., Breier, A., et al. (2011). Temporal discounting of rewards in patients with bipolar disorder and schizophrenia. Journal of Abnormal Psychology, 120, 911–921.

Alter, A. L., & Oppenheimer, D. M. (2006). Predicting short-term stock fluctuations by using processing fluency. Proceedings of the National Academy of Sciences, U.S.A., 103, 9369–9372.

Barkley, R. A. (2006). Attention-deficit hyperactivity disorder: A handbook for diagnosis and treatment (3rd ed.). New York: The Guilford Press.

Berns, G. S., Moore, S., & Capra, C. M. (2009). Adolescent engagement in dangerous behaviors is associated with increased white matter maturity of frontal cortex. PLoS One, 4, e6773.

Bickel, W. K., Jones, B. A., Landes, R. D., Christensen, D. R., Jackson, L., & Mancino, M. (2010). Hypothetical intertemporal choice and real economic behavior: Delay discounting predicts voucher redemptions during contingency-management procedures. Experimental and Clinical Psychopharmacology, 18, 546–552.

Bickel, W. K., & Marsch, L. A. (2001). Toward a behavioral economic understanding of drug dependence: Delay discounting processes. Addiction, 96, 73–86.

Butterworth, B. (1999). Science, 284, 928–929.

Casey, B. J., Duhoux, S., & Malter Cohen, M. (2010). Adolescence: What do transmission, transition, and translation have to do with it? Neuron, 67, 749–760.

Casey, B. J., Jones, R. M., & Hare, T. A. (2008). The adolescent brain. Annals of the New York Academy of Sciences, 1124, 111–126.

Chao, L. W., Szrek, H., Pereira, N. S., & Pauly, M. V. (2009). Time preference and its relationship with age, health, and survival probability. Judgment and Decision Making, 4, 1–19.

Christakou, A., Brammer, M., & Rubia, K. (2011). Maturation of limbic corticostriatal activation and connectivity associated with developmental changes in temporal discounting. Neuroimage, 54, 1344–1354.

Cohen, J. R., Asarnow, R. F., Sabb, F. W., Bilder, R. M., Bookheimer, S. Y., Knowlton, B. J., et al. (2010). A unique adolescent response to reward prediction errors. Nature Neuroscience, 13, 669–671.

Costa Dias, T. G., Wilson, V. B., Bathula, D. R., Iyer, S. P., Mills, K. L., Thurlow, B. L., et al. (2013). Reward circuit connectivity relates to delay discounting in children with attention-deficit/hyperactivity disorder. European Neuropsychopharmacology, 23, 33–45.

Daw, N. D., Niv, Y., & Dayan, P. (2005). Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8, 1704–1711.

Dawson, P., & Guare, R. (2010). Executive skills in children and adolescents: A practical guide to assessment and intervention (2nd ed.). New York: Guilford Press.

Dickstein, S. G., Bannon, K., Castellanos, F. X., & Milham, M. P. (2006). The neural correlates of attention deficit hyperactivity disorder: An ALE meta-analysis. Journal of Child Psychology and Psychiatry, 47, 1051–1062.

Fassbender, C., & Schweitzer, J. B. (2006). Is there evidence for neural compensation in attention deficit hyperactivity disorder? A review of the functional neuroimaging literature. Clinical Psychology Review, 26, 445–465.

Figner, B., Knoch, D., Johnson, E. J., Krosch, A. R., Lisanby, S. H., Fehr, E., et al. (2010). Lateral prefrontal cortex and self-control in intertemporal choice. Nature Neuroscience, 13, 538–539.

Frederick, S., Loewenstein, G., & O'Donoghue, T. (2002). Time discounting and time preference: A critical review. Journal of Economic Literature, 40, 351–401.

Fujita, K., & Han, H. A. (2009). Moving beyond deliberative control of impulses: The effect of construal levels on evaluative associations in self-control conflicts. Psychological Science, 20, 799–804.

Galvan, A., Hare, T. A., Parra, C. E., Penn, J., Voss, H., Glover, G., et al. (2006). Earlier development of the accumbens relative to orbitofrontal cortex might underlie risk-taking behavior in adolescents. Journal of Neuroscience, 26, 6885–6892.

Golsteyn, B. H. H., Gronqvist, H., & Lindahl, L. (2013). Time preferences and lifetime outcomes (No. 7165). Discussion Paper Series. Forschungsinstitut zur Zukunft der Arbeit.

Green, L., Fry, A., & Myerson, J. (1994). Discounting of delayed rewards: A life-span comparison. Psychological Science, 5, 33–36.

Green, L., Myerson, J., & McFadden, E. (1997). Rate of temporal discounting decreases with amount of reward. Memory & Cognition, 25, 715–723.

Hare, T. A., Camerer, C. F., & Rangel, A. (2009). Self-control in decision-making involves modulation of the vmPFC valuation system. Science, 324, 646–648.

Hariri, A. R., Brown, S. M., Williamson, D. E., Flory, J. D., de Wit, H., & Manuck, S. B. (2006). Preference for immediate over delayed rewards is associated with magnitude of ventral striatal activity. Journal of Neuroscience, 26, 13213–13217.

Hart, H., Radua, J., Nakao, T., Mataix-Cols, D., & Rubia, K. (2013). Meta-analysis of functional magnetic resonance imaging studies of inhibition and attention in attention-deficit/hyperactivity disorder: Exploring task-specific, stimulant medication, and age effects. JAMA Psychiatry, 70, 185–198.

Heerey, E. A., Robinson, B. M., McMahon, R. P., & Gold, J. M. (2007). Delay discounting in schizophrenia. Cognitive Neuropsychiatry, 12, 213–221.

Hsee, C. K., & Rottenstreich, Y. (2004). Music, pandas, and muggers: On the affective psychology of value. Journal of Experimental Psychology: General, 133, 23–30.

Kable, J. W., & Glimcher, P. W. (2007). The neural correlates of subjective value during intertemporal choice. Nature Neuroscience, 10, 1625–1633.

Knutson, B., Adams, C. M., Fong, G. W., & Hommer, D. (2001). Anticipation of increasing monetary reward selectively recruits nucleus accumbens. Journal of Neuroscience, 21, RC159.

Knutson, B., & Greer, S. M. (2008). Anticipatory affect: Neural correlates and consequences for choice. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 363, 3771–3786.

Li, X. (2008). The effects of appetitive stimuli on out-of-domain consumption impatience. Journal of Consumer Research, 34, 649–656.

Loewenstein, G. (1996). Out of control: Visceral influences on behavior. Organizational Behavior and Human Decision Processes, 65, 272–292.

Luce, R. D. (2005). Individual choice behavior: A theoretical analysis (Dover ed.). New York: John Wiley & Sons.

Luna, B. (2009). Developmental changes in cognitive control through adolescence. Advances in Child Development and Behavior, 37, 233–278.

Madden, G. J., & Bickel, W. K. (2009). Impulsivity: The behavioral and neurological science of discounting. Washington, DC: APA Books.

Madden, G. J., Petry, N. M., Badger, G. J., & Bickel, W. K. (1997). Impulsive and self-control choices in opioid-dependent patients and non-drug-using control participants: Drug and monetary rewards. Experimental and Clinical Psychopharmacology, 5, 256–262.

Marco, R., Miranda, A., Schlotz, W., Melia, A., Mulligan, A., Muller, U., et al. (2009). Delay and reward choice in ADHD: An experimental test of the role of delay aversion. Neuropsychology, 23, 367–380.

Mazur, J. E. (1987). An adjusting procedure for studying delayed reinforcement. In M. L. Commons, J. E. Mazur, J. A. Nevin, & H. Rachlin (Eds.), Quantitative analysis of behavior: Vol. 5. The effect of delay and intervening events on reinforcement value (pp. 55–73). Hillsdale, NJ: Erlbaum.

McClure, S. M., Ericson, K. M., Laibson, D. I., Loewenstein, G., & Cohen, J. D. (2007). Time discounting for primary rewards. Journal of Neuroscience, 27, 5796–5804.

McClure, S. M., Laibson, D. I., Loewenstein, G., & Cohen, J. D. (2004). Separate neural systems value immediate and delayed monetary rewards. Science, 306, 503–507.

Mischel, W. (1974). Processes in delay of gratification. Advances in Experimental Social Psychology, 7, 249–292.

Mischel, W., & Baker, N. (1975). Cognitive appraisals and transformations in delay behavior. Journal of Personality and Social Psychology, 31, 254–261.

Olson, E. A., Collins, P. F., Hooper, C. J., Muetzel, R., Lim, K. O., & Luciana, M. (2009). White matter integrity predicts delay discounting behavior in 9- to 23-year-olds: A diffusion tensor imaging study. Journal of Cognitive Neuroscience, 21, 1406–1421.

Oppenheimer, D. M., & Frank, M. C. (2008). A rose in any other font would not smell as sweet: Effects of perceptual fluency on categorization. Cognition, 106, 1178–1194.

Paloyelis, Y., Asherson, P., & Kuntsi, J. (2009). Are ADHD symptoms associated with delay aversion or choice impulsivity? A general population study. Journal of the American Academy of Child & Adolescent Psychiatry, 48, 837–846.

Panksepp, J. (2004). Affective neuroscience: The foundations of human and animal emotions. New York: Oxford University Press.

Peters, J., & Büchel, C. (2010). Episodic future thinking reduces reward delay discounting through an enhancement of prefrontal-mediotemporal interactions. Neuron, 66, 138–148.

Peters, J., & Büchel, C. (2011). The neural mechanisms of inter-temporal decision-making: Understanding variability. Trends in Cognitive Sciences, 15, 227–239.

Pine, A., Shiner, T., Seymour, B., & Dolan, R. J. (2010). Dopamine, time, and impulsivity in humans. Journal of Neuroscience, 30, 8888–8896.

Prencipe, A., Kesek, A., Cohen, J., Lamm, C., Lewis, M. D., & Zelazo, P. D. (2010). Development of hot and cool executive function during the transition to adolescence. Journal of Experimental Child Psychology, 108, 621–637.

Rangel, A., & Hare, T. (2010). Neural computations associated with goal-directed choice. Current Opinion in Neurobiology, 20, 262–270.

Rapport, M. D., Tucker, S. B., DuPaul, G. J., Merlo, M., & Stoner, G. (1986). Hyperactivity and frustration: The influence of control over and size of rewards in delaying gratification. Journal of Abnormal Child Psychology, 14, 191–204.

Read, J. P., Frederick, S., Orsel, B., & Rahman, J. (2005). Four score and seven years from now: The date/delay effect in temporal discounting. Management Science, 41, 1326–1335.

Reynolds, B., Leraas, K., Collins, C., & Melanko, S. (2009). Delay discounting by the children of smokers and nonsmokers. Drug and Alcohol Dependence, 99, 350–353.

Sagvolden, T., Aase, H., Zeiner, P., & Berger, D. (1998). Altered reinforcement mechanisms in attention-deficit/hyperactivity disorder. Behavioural Brain Research, 94, 61–71.

Scheres, A., & Hamaker, E. L. (2010). What we can and cannot conclude about the relationship between steep temporal reward discounting and hyperactivity-impulsivity symptoms in attention-deficit/hyperactivity disorder. Biological Psychiatry, 68, e17–e18.

Scheres, A., Lee, A., & Sumiya, M. (2008). Journal of Neural Transmission, 115, 221–226.

Scheres, A., Milham, M. P., Knutson, B., & Castellanos, F. X. (2007). Ventral striatal hyporesponsiveness during reward anticipation in attention-deficit/hyperactivity disorder. Biological Psychiatry, 61, 720–724.

Scheres, A., Tontsch, C., Thoeny, A. L., & Kaczkurkin, A. (2010). Temporal reward discounting in attention-deficit/hyperactivity disorder: The contribution of symptom domains, reward magnitude, and session length. Biological Psychiatry, 67, 641–648.

Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275, 1593–1598.

Schweitzer, J. B., & Sulzer-Azaroff, B. (1988). Self-control: Teaching tolerance for delay in impulsive children. Journal of the Experimental Analysis of Behavior, 50, 173–186.

Schweitzer, J. B., & Sulzer-Azaroff, B. (1995). Self-control in boys with attention deficit hyperactivity disorder: Effects of added stimulation and time. Journal of Child Psychology and Psychiatry, 36, 671–686.

Shamosh, N. A., Deyoung, C. G., Green, A. E., Reis, D. L., Johnson, M. R., Conway, A. R., et al. (2008). Individual differences in delay discounting: Relation to intelligence, working memory, and anterior prefrontal cortex. Psychological Science, 19, 904–911.

Shamosh, N. A., & Gray, J. R. (2008). Delay discounting and intelligence: A meta-analysis. Intelligence, 36, 289–305.

Sonuga-Barke, E. J., Taylor, E., Sembi, S., & Smith, J. (1992). Hyperactivity and delay aversion-I. The effect of delay on choice. Journal of Child Psychology and Psychiatry, 33, 387–398.

Sozou, P. D., & Seymour, R. M. (2003). Augmented discounting: Interaction between ageing and time-preference behaviour. Proceedings of the Royal Society B: Biological Sciences, 270, 1047–1053.

Steinberg, L. (2010). A dual systems model of adolescent risk-taking. Developmental Psychobiology, 52, 216–224.

Steinberg, L., Graham, S., O'Brien, L., Woolard, J., Cauffman, E., & Banich, M. (2009). Age differences in future orientation and delay discounting. Child Development, 80, 28–44.

Thaler, R. H. (1981). Some empirical evidence on dynamic inconsistency. Economics Letters, 8, 201–207.

Thorell, L. B. (2007). Do delay aversion and executive function deficits make distinct contributions to the functional impact of ADHD symptoms? A study of early academic skill deficits. Journal of Child Psychology and Psychiatry, 48, 1061–1070.

Tripp, G., & Alsop, B. (1999). Sensitivity to reward frequency in boys with attention deficit hyperactivity disorder. Journal of Clinical Child & Adolescent Psychology, 28, 366–375.

Trope, Y., & Liberman, N. (2003). Temporal construal. Psychological Review, 110, 403–421.

van den Bos, W., & McClure, S. M. (2013). Towards a general model of temporal discounting. Journal of the Experimental Analysis of Behavior, 99, 58–73.

Van Leijenhorst, L., Zanolie, K., Van Meel, C. S., Westenberg, P. M., Rombouts, S. A., & Crone, E. A. (2010). Cerebral Cortex, 20, 61–69.

Watson, D., & Tellegen, A. (1985). Toward a consensual structure of mood. Psychological Bulletin, 98, 219–235.

Watson, D., Wiese, D., Vaidy, J., & Tellegen, A. (1999). The two general activation systems of affect: Structural findings, evolutionary considerations, and psychobiological evidence. Journal of Personality and Social Psychology, 76, 821–838.

Wimmer, G. E., & Shohamy, D. (2012). Preference by association: How memory mechanisms in the hippocampus bias decisions. Science, 338, 270–273.
Find The Slope Of The Curve At The Given Point
The slope of a curve at a point is defined to be the slope of the tangent line at that point, and the derivative of the function gives that slope at any point. If the point P(x0, y0) is on the curve f, the tangent line at P has slope M_tan = lim h→0 [f(x0 + h) − f(x0)] / h. Equivalently, the tangent slope is the limit of the slopes of secant lines through P as the second point approaches P. Other names for f′(x) are slope, instantaneous rate of change, and (for position functions) velocity.

To find the slope of a curve y = f(x) at a given point:

Step 1: Calculate the first derivative f′(x).
Step 2: Substitute the x-value of the given point into f′(x) to get the slope m of the tangent line.
Step 3: If the equation of the tangent line is needed, use the point-slope form y − y0 = m(x − x0). The slope of the normal line at the same point is the negative reciprocal, −1/m.

Example: find the slope of the tangent to the curve y = x³ − 3x + 2 at the point whose x-coordinate is 3. The derivative is y′ = 3x² − 3, so at x = 3 the slope is 3(9) − 3 = 24. Since y(3) = 20, the tangent line is y = 24x − 52, and the normal line has slope −1/24.

Example (implicit differentiation): for the parabola x² + x − 2y + 2 = 0, differentiating gives 2x + 1 − 2(dy/dx) = 0, so dy/dx = (2x + 1)/2. At the point (1, 2) the slope of the tangent is 3/2.

Example: for y = √(1 + 4 sin x) at the point (0, 1), dy/dx = (4 cos x) / (2√(1 + 4 sin x)), which at x = 0 equals 4/2 = 2, so the slope of the curve at (0, 1) is 2.

A curve y = f(x) has a horizontal tangent where dy/dx = 0 and a vertical tangent where f′(x) has a vertical asymptote. Sometimes the problem is reversed: find all points on the curve that have a given slope, or find where the tangent is parallel to another given line. In either case, set f′(x) equal to the required slope and solve for x. For example, to find the point on a curve at which the tangent is parallel to the chord joining the points (2, 0) and (4, 4), set f′(x) equal to the chord's slope, (4 − 0)/(4 − 2) = 2.

For a polar curve r = f(θ), the slope at the point with polar coordinates (r, θ) is dy/dx = [f′(θ) sin θ + f(θ) cos θ] / [f′(θ) cos θ − f(θ) sin θ]. For example, for the cardioid r = 1 − sin θ this formula gives the slope at θ = 0, and setting its numerator to zero locates the points where the tangent line is horizontal.

If only a graph or a set of data points is available rather than a formula, draw the tangent to the curve at the point where the gradient is needed and measure its slope as rise over run, Δy/Δx; with data points, a curve fitted through nearby points can be differentiated, or the slope can be estimated from two points close to the point of interest.
To find the tangent line to the curve y = f(x) at the point, we need to determine the slope of the curve. Favorite Answer Use implicit differentiation to find y' (which is the same as dy/dx). Find the slope of the tangent line to the given polar curve at the point specified by the value of θ. That should produce a curve that passes through all the points and pretty much match the slope in the green. By applying the value of x in the given curve, we get. Horizontal and Vertical Lines. - 10) by finding the limiting value of the slope of the secants through P. I assume that I have to square x where sec is pi/4. Hence the slope of the tangent line at the given point is 1. At the point where you need to know the gradient, draw a tangent to the curve. Example question: Find the slope of the tangent line to the curve f(x) = 2x 2 + 3x – 4 passing through the point P(-1, 5). Find the equation of the tangent line to the graph of f at the point (1,1). Finding the slope and Y-intercept of a line; Deleting a line : 5. @farzad: I can not define a straight line between two points on the curve to find slope as the slope changes at every point. Find the slope of the curve x^2+y^2–6x+10y+5+0 at point (1, 0). The Point-Slope equation is specifically designed to handle the trickiest type of questions, namely, how do you write an equation given two points? First, we take our two points and find the slope. When you don't have a graph to look at the best way to find where the slope is zero is to set the derivative equal to zero. Find the slope of the curve at the point indicated. ] Delta Notation. 5k points) integral calculus. By applying this formula, it can be said that, when at the fall of price by Re. The difference quotient should have a cape and boots because it has such a useful super-power: it gives you the slope of a curve at a single point. Note the X and Y value for each of the points. Benigno Instructional Designer: Celia T. Given two points (x 1,y 1) and (x 2,y 2) on a line, the slope m of the line is Through differential calculus, one can calculate the slope of the line to a curve at a point. Then, you want to find the slope at x = 9, so you would substitute that in to your derivative. You can use any two points on a line. For example, the slopes around element #2: leftSlope = (B (2)-B (1)) / (A (2)-A (1)). f(x) = 9x − 4x^2 at (−2, −34) m = Determine an equation of the tangent line. Slope of the Tangent Line to a Curve to a Given Point. Ocampo Evelyn L. At the given point, find the slope of the curve or the line that is tangent to the curve, as requested. For the following exercises, write the equation of the tangent line in Cartesian coordinates for the given parameter t. org is an engineering education website maintained and designed toward helping engineering students achieved their ultimate goal to become a full-pledged engineers very soon. Want to know how? First, look at this figure. Find the equation of a curve passing through the point (0, 2), given that the sum of the coordinates of any point on the curve exceeds the slope of the tangent to the curve at that point by 5. Given any point (x,y) we can use this to find the slope of our solution at that point. Solution: The slope of normal to a curve is given as, m = −1 / [dy/ dx] Here, the equation of the curve is, y = 2x^2 + 3 sinx ⇒ dy/ dx = 4x + 3 cosx. We'll show that the tangent lines to the curve y = x 3 – 3 x that are parallel the x-axis are at the points (1, –2. 
The equation of a line is typically written as y=mx+b where m is the slope and b is the y-intercept. Students are given four graphs and are required to find the gradient using a tangent at various points. I assume that I have to square x where sec is pi/4. For instance, you might need to find a line that bisects (divides into two equal halves) a given line segment. Finding the Slope of a Line (Given Two Points-No Graph)Worksheet 1 - Here is a ten problem worksheet where you will be asked to calculate the slope of a line. Find the slope of the line in the graph below. y' = - (2x + 3y) / (3x + 2y) so at the point (x,y) = (1,2), the slope of the tangent line is - (2 + 6) / (3 + 4) = -8/7. Use either definition of the derivative to determine the slope of the curve y=f(x) at the given point P. Find an equation of the curve whose tangent line has a slope of f'(x)=2x^(-14/15) given the point (-1,-9) The function - Answered by a verified Tutor We use cookies to give you the best possible experience on our website. Then, you want to find the slope at x = 9, so you would substitute that in to your derivative. Solution: The slope of normal to a curve is given as, m = −1 / [dy/ dx] Here, the equation of the curve is, y = 2x^2 + 3 sinx ⇒ dy/ dx = 4x + 3 cosx. By using this website, you agree to our Cookie Policy. In this video, I discuss one of the first few concepts that are learned in any Calculus course: the slope of a curve at a point. 8$$Note that this is an estimate of the slope at t=½h and we use it to find another estimate of y. Find the slope of the tangent line to the given polar curve at the point specified by the value of θ. See full list on mathsisfun. There are several methods for calculating the derivative, but the power rule is the simplest method and can be used for most basic polynomial. Parallel Lines. (a) The slope of the curve at P is (Simplify your answer. (C) Find the particular solution yfx= ( ) to the differential equation with the initial condition f ()01=. That should produce a curve that passes through all the points and pretty much match the slope in the green. ) an equation of a tangent line at P. 13 (a) Find the slope of the curve y=x°-11x at the given point P(1. In this case, we can take the derivative of y with respect to x, and plug in the desired value for x. condition. Example 2 : Find the equation of the tangent to the parabola x 2 + x − 2y + 2 = 0 at (1, 2) Solution : Equation of the given curve x 2 + x − 2y + 2 = 0. As you slide the point Q along the curve, towards the point P, the slope of the secant line will become closer to the slope of the tangent line. What is the curve equation to find the tangent line at a point (25,5) on it? EDIT: [As per the additional details, which I could relook only after about 9 hours of earlier presentation] 1) Differentiating the given one, dy/dx = 1/(2√x) 2) At x = 25, dy/dx = 1/(2√25) = 1/10; this is the slope of the tangent line at the given point; this is by the geometrical definition of differentiation] 3. : Slope of secant line: : Using the slope formula and simplification. Suppose that r = f(q) is a polar curve. 1 - Enter the x and y coordinates of two points A and B and press "enter". At the point of maximum total revenue m the slope of the total revenue curve is zero and the marginal revenue is therefore also zero. Step 2 Calculate the rise and run (You can draw it on the graph if it helps). 
To find the slope of the curve at a given point: (1) Identify two points on the line, (2) Select one to be (x1 , y1) and the other to be (x2, y2), then (3) use the equation: The tangent line contains the points B (1, 1) and C (3, 2). This time we know nothing special about the P 1 P 2 P 3 (1, 1) geometry of the curve, so we adopt a different procedure. Given the curve x + xy+2y —6. The slope of f at x = a is the same as the slope of the tangent line to f at x = a, so it is: Return To Top Of Page. Slope of tangent at (3, 6) is m = 6/6 m = 1. For parametric curves, we also can identify. Find the Equation of a Line Given That You Know Two Points it Passes Through. This middle point is called the "midpoint". Finding the Slope of a Line from a Graph. Plot and label 2 points on the line, anywhere on the line. \left(x^{2}+y^{2}\right… 03:37. At any given point on the budget line, For example, at point E, the slope of budget line = intercept on y-axis / intercept on x-axis or, slope of budget line at point E = 3/6 = 1/2. Horizontal and Vertical Lines. How To: Find the equation of a tangent line ; How To: Solve for the area under a curve in calculus ; How To: Connect slopes and derivatives, For Dummies ; How To: Find the equation of a circle given: center & tangent ; How To: Find the slope of a line given 2 points ; How To: Find the equation of a line in point-slope form. At this point, you can find the slope of the tangent line at point (2,-4) by inserting 2 into the above equation, which would be 4-6*(2)=-8 You know that the slope of tangent line is -8, but you should also find the value of y for that tangent line. Find the equation of the tangent line to the graph of f at the point (1,1). In this case, we can take the derivative of y with respect to x, and plug in the desired value for x. From the point (3,2), we can draw a small line segment with slope 2. The slope physically represents how fast the graph is going up. We could plot the points on grid paper, then count out the rise and the run, but there is a way to find the slope without graphing. Point Elasticity. Give the slope of the curve at the point (1, 1): y=(x^3/4)-2x+1. In an earlier answered question, I had asked how to find the intersection between a line segment defined by (x1,y1),(x2,y2) and an infinite line for which I had a single point on the line and its slope or angle in degrees. Computation of the slope of the tangent line to a curve at a point. This case involves the use of the point-slope formula. - 10) is Find an equation of the tangent to the curve at the point corresponding to the given. Find the equation of the tangent line Answered by Penny Nom. find the equation of another line f in slope intercept form, parallel to g passes through (5,3). Note the X and Y value for each of the points. See full list on tutorial. We will begin our study of calculus by looking at limits. Have a play with it first (move the point, try different slopes):. This worksheet has been made for the new GCSE specification. The marginal revenue curve thus crosses the horizontal axis at the quantity at which the total revenue is maximum. It is the limit of the curve's equation as it approaches the indicated point. This is not as good as the slope because the slope essentially uses all the data points at once. 1 : solution curve through 0, 2 2 : 1 : solution curve through 1, 0 Curves must go through the indicated points, follow the given slope lines, and extend to the boundary of the slope field. Find the slope of the curve at the given points. 
Slope of tangent at (3, 6) is m = 6/6 m = 1. Example question: Find the slope of the tangent line to the curve f(x) = 2x 2 + 3x – 4 passing through the point P(-1, 5). To find the tangent line at the point p= (a, f(a)), consider another nearby point q= (a+ h, f(a+ h)) on the curve. Its a nonlinear curve. You have connected the points in your mind, so you "see" a curve. This time we know nothing special about the P 1 P 2 P 3 (1, 1) geometry of the curve, so we adopt a different procedure. You can see that the slope of the parabola at (7, 9) equals 3, the slope of the […]. By taking the derivative,. Step 2: Click the blue arrow to submit. Finding the gradient of a curve To find the gradient of a curve, you must draw an accurate sketch of the curve. Benigno Instructional Designer: Celia T. Average velocity is given by , which is the slope of a secant line through the points (a, f(a)) and (a+h, f(a+h)). What the slope of the tangent line is at times before and after this point is not known yet and has no bearing on the slope at this particular time, $$t$$. 1 : solution curve through 0, 2 2 : 1 : solution curve through 1, 0 Curves must go through the indicated points, follow the given slope lines, and extend to the boundary of the slope field. Tangent at a particular point on the curve is unique and hence its slope. (b) Find an equation of the tangent line to the curve at P(1. The curve y = x/(1 + x^2) is called a serpentine. Example question: Find the slope of the tangent line to the curve f(x) = 2x 2 + 3x – 4 passing through the point P(-1, 5). Finding the Slope of a Tangent Line: A Review. Find the slope of the tangent line to the given polar curve at the point specified by the value of θ. Now drag the points "A" and "B" to the function line. A) FInd the slope of the curve y=x^3-14x at the given point P(2,-20) by finding the limiting value of the slope of the secants through P. This line is called a tangent line, or sometimes simply a tangent. For example, the linear calibration example just given in the previous section, where the "true" value of the slope was 10 and the intercept was zero, this spreadsheet (whose screen shot shown on the right) predicts that the slope is 9. Equation of a curve with given equation of slope and passing through a point Problem What is the equation of the curve passing through the point (3, -2) and having a slope at any point (x, y) equal to (x 2 + y 2 ) / (y 3 - 2xy)?. Given: = 360 ft2, =6ft, , -2deg, , , ,,, , , , ,and Rectangular wing implies , and for all n To find the neutral point we need to find the lift-curve slopes of the wing and tail, and the change in downwash with respect to angle of attack (it was given, but it could have been estimated which will be done here, ie verify the value given!). This worksheet is designed to allow students to practise the skill before moving onto application. In this case, we can take the derivative of y with respect to x, and plug in the desired value for x. Find an equation of the tangent line to the curve at the given point. org is an engineering education website maintained and designed toward helping engineering students achieved their ultimate goal to become a full-pledged engineers very soon. Given y = f(x) = x 3 - 12x + 1 f '(x) = 3x 2 - 12 The derivative of f(x) at x = x 1 (and y = y 1) is equal to the slope of the tangent line to the curve f(x), at (x 1, y 1). Posted one month ago number 90 please Show transcribed image text Multiple descriptions Which of the following parametric. 
When you are given a slope-intercept form equation, then finding the slope is simple: since the slope-intercept form is y = mx + b, and m = slope, it would simply be the coefficient of x. Consider the curve given by y2 = 2+xy. To solve the problems in this lesson, students use the slope formula, which states that m = (y2 - y1) / (x2 - x1). Eventually, the point Q will be so close to P, that the slopes of the tangent and secant lines will be approximately. Benigno Instructional Designer: Celia T. Find the slope of the tangent to the curve five 𝑥 over two 𝑦 minus two 𝑦 over 𝑥 equals negative four at the point two, five. The individual is consuming more of both goods at point B than at point C. The slope at point A is 1/2, or. Find the gradient of the curve y = x² at the point (3, 9). ) Deleting a line Click and drag a point from the line off of the graph. Plot and label 2 points on the line, anywhere on the line. To find the tangent line at the point p = (a, f(a)), consider another nearby point q = (a + h, f(a + h)) on the curve. One of the major purposes of Calculus is to find the slope of a curvy function at a specific point. The formula to find the Slope when radius is given is Slope = (y2-y1)/(x2-x1) where x1,y1 and x2,y2 are the two given points. Power rule says that we take the exponent of the "x" value and bring it to the front. If y = f(x) is the eqaution of the curve the f"(x) represents the gradient of the curve and f'(a) is the slope of the tangent to the curve at the point where x = a. To find the tangent line at the point p= (a, f(a)), consider another nearby point q= (a+ h, f(a+ h)) on the curve. Carter, Suppose that a tangent to the curve y = -x 2 + 1 at the point P on the curve with coordinates (a, b) passes through (2, 0). 2%) and that the intercept is 0. We can determine the station and elevation of points A and B by reducing this unequal tangent problem to two equal tangent problems. Two parallel lines have the same slope, so from the given line, we can obtain the slope. Find the slope of the tangent line to the given polar curve at the point specified by the value of θ. f(x) = 9x − 4x^2 at (−2, −34) m = Determine an equation of the tangent line. Hence the slope of the tangent line at the given point is 1. ΔY / ΔX = slope of the curve. The equation of a line is typically written as y=mx+b where m is the slope and b is the y-intercept. With first and or second derivative selected, you will see curves and values of these derivatives of your function, along with the curve defined by your function itself. We can now use point-slope form in order to find the equation of our tangent line. is the lowest point of the curve. For this equation y = 3x + 2 is in slope intercept form. Therefore to find the slope at the given point, we need to find the derivative of the function using power rule. Correct! To find the slope of two given points, you can use the point-slope formula of (y2 - y1) / (x2 - x1). Recent surveys of professional economists also point to a lower probability of a recession in the next year than the model based on the unadjusted slope of the yield curve. How can you find the slope of a curve at a given point if 2 points are needed to make a line? some curves are easier than others. Consider the differential equation given by dy x dx y =. The smallest slope of a curve means the point at which the derivative (the slope) is minimal. 
This abstract concept has a variety of concrete realizations, like finding the velocity of a particle given its position and finding the rate of a reaction given the concentration as a function of time. The formula: m= lim(h approa. 13 (a) Find the slope of the curve y=x°-11x at the given point P(1. We have now found the tangent line to the curve at the point (1,2) without using any Calculus!. In this case, we can take the derivative of y with respect to x, and plug in the desired value for x. When the demand curve is a straight line, this occurs at the middle point of the curve, at a. From the point-slope form of the equation of a line, we see the equation of the tangent line of the curve at this point is given by y 0 = ˇ 2 x ˇ 2 : 2 We know that a curve de ned by the equation y= f(x) has a horizontal tangent if dy=dx= 0, and a vertical tangent if f0(x) has a vertical asymptote. Slope of. The slope of a demand curve, whether it is flat or steep, is based on absolute changes in price and quantity, that is, Slope of demand curve = ∆p/∆q = 1/ ∆q/∆p On the other hand, the price elasticity of demand is concerned with relative changes in price and quantity, that is, E p = ∆ q/q / ∆ p/p. PLEASE HELP!! i dont know where to start thank you!!!!. This worksheet has been made for the new GCSE specification. The numpy calculation is the correct one to use, but may be a bit tricky to understand how it is calculated. Suppose that a curve is given as the graph of a function, y = f(x). 2x + 1 - 2 (dy/dx) + 0 = 0. ) an equation of a tangent line at P. The formal definition of the limit can be used to find the slope of the tangent line: If the point P(x 0,y 0) is on the curve f, then the tangent line at the point P has a slope given by the formula: M tan = lim h→0 f(x 0 + h) - f(x 0)/h. Find the slope of the curve at the point indicated. The other way to find the slope, is a very down and dirty way, which preceded the method that uses limits. In Blender drivers however, I think that would be the wrong approach. (b) Find an equation of the tangent line to the curve at P(1. Hence the slope of the line perpendicular (or orthogonal) to this tangent is which happens to be the slope of the tangent line to the orthogonal curve passing by the point (x,y). The slope is basically the amount of slant a line has, and can have a positive, negative, zero or undefined value. Read on for another quiz question. The vertical curve equation can be expressed as: y = e pvc + g 1 x + [(g 2 − g 1) × x² / 2L] Where, y represents the vertical elevation of point, e pvc refers to initial elevation, g 1 refers to initial grade, g 2 is the final grade, and. Find the slope of the curve x^2+y^2-6x+10y+5+0 at point (1, 0). Find the slope of the curve at the given point P and an equation of the tangent line at P. find the slope. Tan Language Editor : Amihan B. By applying this formula, it can be said that, when at the fall of price by Re. The Slope of a Tangent Line to a Curve. y= -x-1 9 O B. f(x)=$$4 x ^ { 2 }$$-7x+5; P(2, 7) - Slader. Determine the equation of the tangent by substituting the gradient of the tangent and the coordinates of the given point into the gradient-point form of the straight line equation, where Equation of normal is Y - y = ( dy. Find the Equation of a Line Given That You Know Two Points it Passes Through. Suppose that a curve is given as the graph of a function, y = f(x). Find the slope of the curve at the given point P and an equation of the tangent line at P. 
Introduction Consider a function f(x) such as that shown in Figure 1. Finding the Tangent Line Equation with Implicit Differentiation. Program to check if three points are collinear; Program to find slope of a line. 7 Slope of Curve 2 EX 1 Find the slope of the curve at (2,-6) hint: Calculate the slope between (2,-6) and (2+h, f(2+h)) Definition: The slope of a function, f, at a point x = (x, f(x)) is given by m = f '(x) = f '(x) is called the derivative of f with respect to x. Attachments 5 REASONS to buy your textbooks and course materials at SAVINGS: Prices up to 75% off, daily coupons, and free shipping on orders over 25 CHOICE: Multiple format options including textbook, eBook and eChapter rentals CONVENIENCE: Anytime, anywhere access of eBooks or eChapters via mobile devices SERVICE: Free eBook access while your text ships,. (See the figure. Since this demand curve is a straight line, the slope of the curve is. Find the slope of the line passing through the points (–3, 5) and (4, –1). dy/dx = f'(x) = sec 2 x (Slope of tangent). y-y,=m(x-x,) y-(-3)=-4(x-1) y=-4x+1 which is the equation of the tangent line. Serial number gun historyNow we will explain how we found the slope and intercept of our function The image below points to the Slope - which indicates how steep the line is, and the Intercept - which is the value of y, when x = 0 (the point where the diagonal line crosses the vertical axis). For example, if the slope = (3 - 5) ÷ (2 - 3), then slope = -2 ÷ -1 = 2. How To: Find the equation of a tangent line ; How To: Solve for the area under a curve in calculus ; How To: Connect slopes and derivatives, For Dummies ; How To: Find the equation of a circle given: center & tangent ; How To: Find the slope of a line given 2 points ; How To: Find the equation of a line in point-slope form. In this case, I don't have to find the points, because they've already given them to me. The graph is sketched by first locating the y-axis intercept or crossing. Use either definition of the derivative to determine the slope of the curve y=f(x) at the given point P. Find the slope of the curve at the given point. Example 2 : Find the equation of the tangent to the parabola x 2 + x − 2y + 2 = 0 at (1, 2) Solution : Equation of the given curve x 2 + x − 2y + 2 = 0. 7 Slope of Curve 2 EX 1 Find the slope of the curve at (2,-6) hint: Calculate the slope between (2,-6) and (2+h, f(2+h)) Definition: The slope of a function, f, at a point x = (x, f(x)) is given by m = f '(x) = f '(x) is called the derivative of f with respect to x. y-y,=m(x-x,) y-(-3)=-4(x-1) y=-4x+1 which is the equation of the tangent line. (P can be at any point along the curve. Find slope of tangent line to r(q) = 2 + 3 cos(8q) at q = 3 p/4. The curve y = x/(1 + x^2) is called a serpentine. 1) y = x 2 + 11x - 15, P(1, - 3) Use the graph to evaluate the limit. Draw the "max" line -- the one with as large a slope as you think reasonable (taking into account error bars), while still doing a fair job of representing all the data. or The slope at a Point. 0 (1 ratings) Download App for Answer. r = cos (? Find the slope of the. (20 votes) See 2 more replies. Given two points (x 1,y 1) and (x 2,y 2) on a line, the slope m of the line is Through differential calculus, one can calculate the slope of the line to a curve at a point. We'll show that the tangent lines to the curve y = x 3 – 3 x that are parallel the x-axis are at the points (1, –2. (a) Find dy dx. But that slope must be equal to zero, thus:. 
To find the slope, you divide the difference of the y-coordinates of 2 points on a line by the difference of the x-coordinates of those same 2 points. You have connected the points in your mind, so you "see" a curve. Free slope calculator - find the slope of a curved line, step-by-step This website uses cookies to ensure you get the best experience. Read on for another quiz question. Using these points we calculate the slope at point A to be:. Compute the distance of the lowest point of the curve from the P. Given equation of curve :-y = x^(1/4) Now, the slope of the tangent to the curve at a point (a,b) is given by the value of y'(x) at that point. But a slope is not a line, but represents the direction or angle of that line. Find the equation of the curve whose slope at any point is given by f'(x) = (x^(1/2)) + (x^2) and which passes through - Answered by a verified Tutor We use cookies to give you the best possible experience on our website. (B) Sketch a solution curve that passes through the point (0, 1) on your slope field. @farzad: I can not define a straight line between two points on the curve to find slope as the slope changes at every point. Really clear math lessons (pre-algebra, algebra, precalculus), cool math games, online graphing calculators, geometry art, fractals, polyhedra, parents and teachers areas too. Now, equation of a normal line at point (1, -3) and with slope -1 is. Price points: decreasing the price from 2. Computation of the slope of the tangent line to a curve at a point. A tangent line touches the curve at one point and has the same slope as the curve does at that point. To find the tangent line to the curve y = f(x) at the point, we need to determine the slope of the curve. The individual is consuming more of both goods at point B than at point C. The formula: m= lim(h approa. y=1/x P: (-4, -¼) Expert Answer 100% (1 rating) Previous question Next question Get more help from Chegg. At the given point find the slope of the curve 10 points!!? At the given point find the slope of the curve, the line that is tangent to the curve, or the line that is normal to the curve as. Suppose that a curve is given as the graph of a function, y= f(x). Measure the slope of this line. To find the gradient of a curve, you must draw an accurate sketch of the curve. Then, to find out what the maximum value is, we still need to plug x = 6 and y = 3 back into the objective function. Between those points, the slope is (4-8)/(4-2), or -2. A line normal to a curve at a given point is the line perpendicular to the line that's tangent at that same point. What is the curve equation to find the tangent line at a point (25,5) on it? EDIT: [As per the additional details, which I could relook only after about 9 hours of earlier presentation] 1) Differentiating the given one, dy/dx = 1/(2√x) 2) At x = 25, dy/dx = 1/(2√25) = 1/10; this is the slope of the tangent line at the given point; this is by the geometrical definition of differentiation] 3. Click here for the answer. Students are given four graphs and are required to find the gradient using a tangent at various points. PLEASE HELP!! i dont know where to start thank you!!!!. How to find the slope of a Curve at a Given point. Note again that the slope is negative because the curve slopes down and to the right. Find the equation of the curve given that it passes through (-2,1) Class 12. (c)Find the equations for the tangent lines to the curve at all points where the slope of the tangent line is 8. 
Mathematically, the slope of a curve is represented by rise over run or the change in the variable on the vertical axis divided by the change in the variable on the horizontal axis. Different words, same formula. The Point slope method uses the X and Y co-ordinates and the slope value to find the equation. r = 5 sin(0), 0 = t/6. Find the slope of the tangent line to the given polar curve at the point specified by the value of θ. See full list on tutorial. Determine the equation of the tangent by substituting the gradient of the tangent and the coordinates of the given point into the gradient-point form of the straight line equation, where Equation of normal is Y - y = ( dy. Find the slope intercept equation of a line (y=mx+b or y=mx+c) from two points with this slope intercept form calculator. Instantaneous velocity is given by , which is the slope of the tangent line to the curve at (a, f(a)). Slope Formula Calculator (Free online tool calculates slope given 2 points) The slope of a line characterizes the direction of a line. Find all points on the curve y = x 3 - 3 x where the tangent line is parallel to the x-axis. Find the slope of the curve at the given point P and an equation of the tangent line at P. State the formula for slope as: Compare slopes of graphs in terms of "more steep," "less steep," etc. No points possible; undefined expression. The Slope Formula In this lesson, students are given the coordinates of two points, and are asked to find the slope of the line that passes through the points (without graphing). Note that when \theta=\pi the curve hits the origin and does not have a tangent line. (See the figure. In effect, this would be the slope of the tangent line, as a. The Slope of a Tangent Line to a Curve. Horizontal and Vertical Lines. Ocampo Evelyn L. Find the points on curve. Find the equation of the tangent line to the graph of f at the point (1,1). A tangent line touches the curve at one point and has the same slope as the curve does at that point. asked in Science & Mathematics Mathematics · 8 years ago Find the slope of the tangent line to the given polar curve at the point specified by the value of θ. You can take whichever one you want, or even average the slopes on each side if you want. Finding the Slope of a Line from a Graph. The angle between the tangents at ant point P and the line joining P to the original, where P is a point on the curve in (x 2 + y 2) = c tan − 1 x y , c is a constnt, is View solution Find the slope of the tangent to the curve x = t 2 + 3 t − 8 , y = 2 t 2 − 2 t − 5 a t t = 2. y=5 - 7x?; P(-4, - 107). Then, you want to find the slope at x = 9, so you would substitute that in to your derivative. The slope at point A is 1/2, or. A number which is used to indicate the steepness of a curve at a particular point. The point is given, the only missing quantity is the slope. A line normal to a curve at a given point is the line perpendicular to the line that's tangent at that same point. Given that the curve y=x^3 has a tangent line that passes through point (0,2). ] Delta Notation. Find the slope of the tangent line to the given polar curve at the point specified by the value of \theta. y=x^2-5, P(2, -1) By signing up,. Designate the X and Y value for points 1 and 2. find the derivative of tan x, which sec^2 x step 2. Price points: decreasing the price from 2. One answer suggested using parametric line equations to find the intersection between two infinite lines and then resolving if the intersection point fell on the given line. 
To get a viewing window containing the specified value of x, that value must be between Xmin and Xmax. 1 shows points corresponding to \theta equal to 0, \pm\pi/3, 2\pi/3 and 4\pi/3 on the graph of the function. Find an equation of the curve that passes through the point (0, 1) and whose slope at (x, y) is 9xy?. The slope is 1/2 throughout the budget line. 1 - Enter the x and y coordinates of two points A and B and press "enter". Find an equation of the tangent line to this curve at the point(2, 0. The graph of z 1 shown in Lesson 13. Given that the curve y=x^3 has a tangent line that passes through point (0,2). From the point (3,2), we can draw a small line segment with slope 2. Answer: Again, we know that the slope of the tangent line at any point (x;y) on the curve is given by y0(x) = 3x2 4: Therefore, a point (x 0;y 0) on the curve has a tangent. \left(x^{2}+y^{2}\right… 03:37. Finding the Slope of a Line from Two Points 2 - Cool Math has free online cool math lessons, cool math games and fun math activities. We'll show that the tangent lines to the curve y = x 3 - 3 x that are parallel the x-axis are at the points (1, -2. Given the function, Y = 4 + 2x2, the first derivative gives us a slope of the tangent at a given point. The slope of a curve at a point is defined to be the slope of the tangent line. For the above example, the slope of the solution at the point (3,2) is 2 + 4 y' = = 2 3. Bring points "A" and "B" near the point where you want to find the slope. Plug the ordered pair into the derivative to find the slope at that point. Slope of the Tangent Line to a Curve to a Given Point. Finding a Tangent Line to a Graph. The larger the value is, the steeper the line. B) find the coordinates of the points on the curve where the tangents are vertical C) at the point (0,3) find the rate of change in the slope. Given that the curve y=x^3 has a tangent line that passes through point (0,2). Calculate the first derivative of f (x). Let's go through an example. 2 The Slope of a Curve at a Point notes by Tim Pilachowski Finding the slope of a line is fairly simple, once you get the hang of it, because the slope is the same everywhere on the line. Find out the coordinates of the points for which the slope of the tangent line to the curve y = x 3 - 12x + 1 is zero. To find the slope (derivative) of a function at a specified value of x, perform the following steps: Graph the function in a viewing window that contains the specified value of x. Tan Language Editor : Amihan B. Given the function, Y = 4 + 2x2, the first derivative gives us a slope of the tangent at a given point. Calculate the elevation point of the vertical curve with the given curve length, initial and final grade and the initial elevation. How do I find the slope of a curve at a point? The slope of a curve of y=f (x) at x=a is f' (a). (b) Sketch a solution curve that passes through the point (0, 1) on your slope field. From the point (3,2), we can draw a small line segment with slope 2. Compute the distance of the lowest point of the curve from the P. Using the exponential rule we get the following derivative,. Substitute both the point and the slope from steps 1 and 3 into point-slope form to find the equation for the tangent line. The results of the equation provide the slope of the line at a given point. Method 1 - use uncertainty of data points I could get the ratio of C/d by just looking at each data point. Using these points we calculate the slope at point A to be:. 
With the points plugged in, the formula looks like (3 - 2) / (4 - 1). To find the equation of a line we need to know a point on that line and the slope of that line (point slope form) y - y1 = m*(x - x1) (x1, y1) is the point on the line. x 2-y 2 = 2. 𝑥^2/2 + C 𝑦^3/3. Linear curves are simple, but how do we find the slope of any curve, y(x) at the point x? The gradient of the curve at point A is the same as that of the tangent at point A. If we know the slope m, but we do not know the coordinates of the point where the line is tangent to the curve, we can clear the x from the previous formula. Calculate the uncertainty in the slope as one-half of the difference between max and min slopes. Find the equation of a curve passing through the point (0, 2), given that the sum of the coordinates of any point on the curve exceeds the slope of the tangent to the curve at that point by 5 - Mathematics and Statistics. y=x^2-5, P(2, -1) By signing up,. This is the Point-Slope Format:. Finding the Equation of a Line Given a Point and a Slope 1 - Cool Math has free online cool math lessons, cool math games and fun math activities. This is the slope of the curve only at point A. Other names for f '(x): slope instantaneous rate of change speed velocity EX 2 Find the derivative of f(x) = 4x - 1. Its a nonlinear curve. The Slope Formula In this lesson, students are given the coordinates of two points, and are asked to find the slope of the line that passes through the points (without graphing). Plug what we've found into the equation of a line. So, we find equation of normal to the curve drawn at the point (π/4, 1). Replace x and y in that equation with. Recent surveys of professional economists also point to a lower probability of a recession in the next year than the model based on the unadjusted slope of the yield curve. To find the slope of the tangent at the point (0, 9) we substitute the x-coordinate into dy/dx: Now we have the slope: -63. This is curve sketching: being given a function and using that function to find the different properties of the function graph using the first derivative and second derivative to find: critical points, increasing/decreasing, points of inflection, and concavity. Finding a Tangent Line to a Graph. We will begin our study of calculus by looking at limits. Find the curve if it is required to pass through the point (1,1). The derivative of the function gives the slope of the tangent line at a given point. Finding a Normal Line to a Graph. It sometimes is useful to calculate the price elasticity of demand at a specific point on the demand curve instead of over a range of it. A tangent is a straight line which touches the curve at one point only. This is because it is the change in the y-coordinates divided by the corresponding change in the x -coordinates between two distinct points on the line. r = 5 sin(0), 0 = t/6. Substitute the given x-value into the function to find the y-value or point. Finding a Tangent Line to a Graph. Let us find the slope of f (x)=x^3-x+2 at x=1. To solve the problems in this lesson, students use the slope formula, which states that m = (y2 - y1) / (x2 - x1). 1 shows points corresponding to \theta equal to 0, \pm\pi/3, 2\pi/3 and 4\pi/3 on the graph of the function. The equation point slope calculator will find an equation in either slope intercept form or point slope form when given a point and a slope. @farzad: I can not define a straight line between two points on the curve to find slope as the slope changes at every point. 
Tan Language Editor : Amihan B. m m = = The derivative of y = 3x2 +3x y = 3 x 2 + 3 x. We need to find this slope to solve many applications since it tells us the rate of change at a particular instant. How to find the slope of a Curve at a Given point. The slope of the curve at point (0,1) : m = dy / dx = 1 / 2√ (1 + 4sin0) (0 + 4cos0) m = dy / dx = 1/2 (0+4) m = dy / dx = 1/2*4 = 2 m = 2. First you need to figure out the control points, which takes more time than evaluating the curve (because you have to do it in Python). Finding the equation of a line tangent to a curve at a point always comes down to the following three steps: Find the derivative and use it to determine our slope m at the point given Determine the y value of the function at the x value we are given. Enter the point and slope that you want to find the equation for into the editor. State the formula for slope as: Compare slopes of graphs in terms of "more steep," "less steep," etc. ? y = x 2 + 11x - 15, P(1,-3) My teacher is asking us to find the answer without the use of derivatives. Because the production possibilities curve for Plant 1 is linear, we can compute the slope between any two points on the curve and get the same result. the chosen chord length. One way of finding the slope at a given point is by finding the derivative. Choose a chord length (c), usually 25 or 50 feet 3. To find the tangent line at the point p= (a, f(a)), consider another nearby point q= (a+ h, f(a+ h)) on the curve. The vertical change between two points is called the rise, and the horizontal change is called the run. Substituting The Coordinates Of The Point Before Solving For dy / dx. There is no such thing as the "slope of a curve" per se; what you have to find is the slope of the line that hugs the curve closely at a given point, called the tangent line at that point. For this equation y = 3x + 2 is in slope intercept form. Before we can use the calculator it is probably worth learning how to find the slope using the slope formula. You are correct on that 2 points define a line. Find slope of tangent line to r(q) = 2 + 3 cos(8q) at q = 3 p/4. Graph functions, plot points, visualize algebraic equations, add sliders, animate graphs, and more. We can determine the station and elevation of points A and B by reducing this unequal tangent problem to two equal tangent problems.$${k_2} = - 2y_1 = -4. (b) Find an equation of the tangent line to the curve at P(1. The point-slope formula. 197 with a standard deviation 0. •calculate the equation of the normal to a curve at a given point Contents 1. 33333 falls between -3/2 and -1, so the optimal solution would be at the point (6,3). •calculate the equation of the normal to a curve at a given point Contents 1. Let's go through an example. Find the slope of the tangent line to the given polar curve at the point specified by the value of θ. Finding the slope of a curve at a point is one of two fundamental problems in calculus. The derivative of the function gives you the slope of the function at any point. Local Linearization: take normal slope of two points given to find the approximate slope at a certain point Linear Approximation: Find the slope using two points, write an equation, plug in the point you are trying to find. One of the major purposes of Calculus is to find the slope of a curvy function at a specific point. Click here👆to get an answer to your question ️ Find the slope of the tangent to the curve y = x^3 - 3x + 2 at the point whose x coordinate is 3. (a) Find dy dx. 
So, it is logical to think that the slope is zero at that "bottom" point and therefore the derivative is zero at that point too. When it comes to finding the slope $$(m)$$ of a curve at a particular point, you need to differentiate the equation of the curve. We would also like to be able to talk about the slope of a curve, but we will have to realize that the slope is not the same at different points on the curve. 1 : solution curve through 0, 2 2 : 1 : solution curve through 1, 0 Curves must go through the indicated points, follow the given slope lines, and extend to the boundary of the slope field. Learn how to find the slope and equation of the normal line to the Graph at particular Point.
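The same procedure can be checked numerically. Below is a minimal Python sketch (our own illustration; the helper names slope_at and tangent_line are made up for this example) that estimates the slope with a central-difference quotient and builds the tangent line from point-slope form, using the example y = x^3 - 3x + 2 at x = 3.

```python
def f(x):
    """Example curve y = x^3 - 3x + 2."""
    return x**3 - 3 * x + 2


def slope_at(g, x, h=1e-6):
    """Estimate the slope of y = g(x) at x with a central-difference quotient."""
    return (g(x + h) - g(x - h)) / (2 * h)


def tangent_line(g, x0, h=1e-6):
    """Return (m, b) so that y = m*x + b is the tangent line to y = g(x) at x = x0."""
    m = slope_at(g, x0, h)
    b = g(x0) - m * x0  # point-slope form y - g(x0) = m (x - x0), rearranged
    return m, b


# Slope at x = 3; the exact derivative 3x^2 - 3 gives 24 there.
print(slope_at(f, 3.0))

# Equation of the tangent line at x = 3: y = 24x - 52.
m, b = tangent_line(f, 3.0)
print(f"y = {m:.3f}x + ({b:.3f})")
```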
|
# How to compute the first eigenvalue of Hyperbolic cone over $${S^n}\left( {\frac{1}{2}} \right)$$

$${S^n}\left( {\frac{1}{2}} \right)$$ denotes an n-dimensional sphere with radius 1/2. Let $$M = R \times _{\cosh t} N$$ with the warped product metric $$d{s^2} = d{t^2} + \cosh ^2\left( t \right)\, ds_N^2$$, where $$N$$ (with $$\dim N = n - 1$$) is a compact manifold satisfying $$Ric \ge - \left( {n - 2} \right)$$. It should be mentioned that $$M$$ may not be a Riemannian manifold but an Alexandrov space. So how can one compute the first eigenvalue of the hyperbolic cone over $$M$$? If we restrict to the case $$N = {S^{n - 1}}\left( {\frac{1}{2}} \right)$$, an (n-1)-dimensional sphere with radius 1/2, what is the result?
|
## 17Calculus Infinite Series - Practice Problem 2438
Infinite Series Practice Problem 2438
$$\displaystyle{\sum_{n=1}^{\infty}{\frac{n}{n^2-\cos^2(n)}}}$$
Determine the convergence or divergence of the series.
If the series converges,
- determine the value the series converges to, if possible; and
- determine if the series converges absolutely or conditionally.
Choosing Which Tests To Try
Rejecting the Obvious
1. We can tell right away that this is not an alternating series, telescoping series or one of the special series.
2. We can tell that the limit of $$a_n$$ is zero. So the divergence test will not help us.
3. The form of $$a_n$$ has both a polynomial and a $$\cos^2(n)$$ term in the denominator. This combination is a problem to integrate, so the integral test is not a good choice.
What We Are Left With
4. We are left with the ratio test, the comparison tests and the root test. The root test will give us quite a complicated term in the denominator. So we will not try that one unless nothing else works.
Applying The Tests
### Ratio Test
$$\displaystyle{ a_n = \frac{n}{n^2-\cos^2(n)} }$$ and $$\displaystyle{ a_{n+1} = \frac{n+1}{(n+1)^2-\cos^2(n+1)} }$$
$$\displaystyle{ \begin{array}{rcl} \displaystyle{\lim_{n \to \infty}{ \left| \frac{a_{n+1}}{a_n} \right| }} & = & \displaystyle{\lim_{n \to \infty}{ \left| \frac{n+1}{(n+1)^2-\cos^2(n+1)} \frac{n^2-\cos^2(n)}{n} \right| }} \\ & = & \displaystyle{\lim_{n \to \infty}{ \left| \frac{n+1}{n} \frac{n^2-\cos^2(n)}{(n+1)^2-\cos^2(n+1)} \right| }} \\ & = & \displaystyle{\lim_{n \to \infty}{ \left| \left[ 1 + \frac{1}{n} \right] \frac{n^2-\cos^2(n)}{(n+1)^2-\cos^2(n+1)}\frac{1/n^2}{1/n^2} \right| }} \\ & = & \displaystyle{\lim_{n \to \infty}{ \left| \left[ 1 + \frac{1}{n} \right] \frac{1-\cos^2(n)/n^2}{(1+1/n)^2-\cos^2(n+1)/n^2} \right| } } \end{array} }$$
Now let's look at each piece.
$$\displaystyle{ \lim_{n \to \infty}{ 1/n } = 0 }$$
Using the pinching theorem, we know that $$\displaystyle{ \lim_{n \to \infty}{ \frac{\cos^2(n)}{n^2} } =0 }$$ and $$\displaystyle{ \lim_{n \to \infty}{ \frac{\cos^2(n+1)}{n^2} } =0 }$$. The details are in the section below.
So now we have $$\displaystyle{ \lim_{n \to \infty}{ \left| \left[ 1 + \frac{1}{n} \right] \frac{1-\cos^2(n)/n^2}{(1+1/n)^2-\cos^2(n+1)/n^2} \right| } = (1+0)\frac{1-0}{1-0} = 1 }$$
Conclusion
Since the limit $$\displaystyle{ \lim_{n \to \infty}{ \left| \frac{a_{n+1}}{a_n} \right| } = 1}$$ the ratio test is inconclusive.
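As a numerical sanity check (our own addition, not part of the original solution), the Python sketch below prints $$\left| a_{n+1}/a_n \right|$$ for increasing $$n$$; the values approach 1, consistent with the ratio test being inconclusive here.

```python
import math


def a(n):
    """General term a_n = n / (n^2 - cos^2(n))."""
    return n / (n**2 - math.cos(n) ** 2)


# The ratio |a_{n+1} / a_n| approaches 1 as n grows.
for n in (10, 100, 1000, 10000):
    print(n, abs(a(n + 1) / a(n)))
```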
### Limit Comparison Test
$$\displaystyle{ a_n = \frac{n}{n^2-\cos^2(n)} }$$
For large $$n$$ the $$n^2$$ term dominates in the denominator since $$\cos^2(n) \leq 1$$. In the numerator, we have $$n$$. So let's compare this series with the test series $$\sum{t_n}$$ where $$\displaystyle{t_n = \frac{1}{n} }$$.
Since $$t_n$$ is a p-series with $$p=1$$, the test series diverges. The limit comparison test requires us to set up the limit $$\displaystyle{ \lim_{n \to \infty}{\frac{a_n}{t_n}} }$$.
$$\begin{array}{rcl} \displaystyle{\lim_{n \to \infty}{\frac{a_n}{t_n}}} & = & \displaystyle{\lim_{n \to \infty}{\frac{n}{n^2-\cos^2(n)} \frac{n}{1} }} \\ & = & \displaystyle{\lim_{n \to \infty}{\frac{n^2}{n^2-\cos^2(n)} \frac{1/n^2}{1/n^2} }} \\ & = & \displaystyle{\lim_{n \to \infty}{\frac{1}{1-\cos^2(n)/n^2} }} \\ & = & \displaystyle{\frac{1}{1-0}} = 1 \end{array}$$
Since the limit is finite and positive and the test series diverges, the series $$\sum{a_n}$$ also diverges.
Conclusion
The series $$\sum{a_n}$$ diverges by the limit comparison test.
We evaluated the limit $$\displaystyle{ \lim_{n \to \infty}{\frac{\cos^2(n)}{n^2}} }$$ using the pinching theorem. The details can be found in the section below.
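Numerically, the comparison ratio settles down to 1 quickly. The short Python sketch below (added for illustration; the helper names are ours) evaluates $$a_n / t_n$$ for a few values of $$n$$.

```python
import math


def a(n):
    """General term a_n = n / (n^2 - cos^2(n))."""
    return n / (n**2 - math.cos(n) ** 2)


def t(n):
    """Comparison term t_n = 1/n."""
    return 1.0 / n


# The limit comparison ratio a_n / t_n approaches 1.
for n in (10, 100, 1000, 10000):
    print(n, a(n) / t(n))
```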
### Direct Comparison Test
$$\displaystyle{ a_n = \frac{n}{n^2-\cos^2(n)} }$$
For large $$n$$ the $$n^2$$ term dominates in the denominator since $$\cos^2(n) \leq 1$$. In the numerator, we have $$n$$. So let's compare this series with the test series $$\sum{t_n}$$ where $$\displaystyle{t_n = \frac{1}{n} }$$.
Since $$t_n$$ is a p-series with $$p=1$$, the test series diverges. So the direct comparison test requires us to set up the inequality as $$t_n \leq a_n$$.
$$\begin{array}{rcl} t_n & \leq & a_n \\ \displaystyle{\frac{1}{n}} & \leq & \displaystyle{\frac{n}{n^2 - \cos^2(n)}} \\ n^2 - \cos^2(n) & \leq & n^2 \\ -\cos^2(n) & \leq & 0 \\ \cos^2(n) & \geq & 0 \end{array}$$
The last inequality is always true since a squared term is always greater than or equal to zero. So the series diverges.
Conclusion
The series $$\sum{a_n}$$ diverges by the direct comparison test.
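The chain of inequalities can also be spot-checked numerically. This small Python sketch (our own illustration) verifies that $$1/n \leq a_n$$ holds for the first ten thousand terms, as the algebra above guarantees.

```python
import math


def a(n):
    """General term a_n = n / (n^2 - cos^2(n))."""
    return n / (n**2 - math.cos(n) ** 2)


# Verify t_n = 1/n <= a_n for a range of n (true because cos^2(n) >= 0).
print(all(1.0 / n <= a(n) for n in range(1, 10001)))  # expected: True
```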
Evaluating Limit With The Pinching Theorem
$$\begin{array}{rcccl} 0 & \leq & \displaystyle{\frac{\cos^2(n)}{n^2}} & \leq & \displaystyle{\frac{1}{n^2}} \\ \displaystyle{\lim_{n \to \infty}{0}} & \leq & \displaystyle{\lim_{n \to \infty}{\frac{\cos^2(n)}{n^2} } } & \leq & \displaystyle{\lim_{n \to \infty}{\frac{1}{n^2}}} \\ 0 & \leq & \displaystyle{\lim_{n \to \infty}{\frac{\cos^2(n)}{n^2} }} & \leq & 0 \end{array}$$
By the pinching theorem, $$\displaystyle{ \lim_{n \to \infty}{\frac{\cos^2(n)}{n^2} } = 0 }$$
Using the same logic, we can say that $$\displaystyle{ \lim_{n \to \infty}{\frac{\cos^2(n+1)}{n^2} } = 0 }$$
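A quick numerical check (our own addition) shows the squeeze at work: $$\cos^2(n)/n^2$$ is trapped between $$0$$ and $$1/n^2$$, and both bounds shrink to zero.

```python
import math

# 0 <= cos^2(n)/n^2 <= 1/n^2, and both bounds tend to 0 as n grows.
for n in (10, 100, 1000):
    middle = math.cos(n) ** 2 / n**2
    upper = 1 / n**2
    print(f"n = {n:5d}   cos^2(n)/n^2 = {middle:.3e}   1/n^2 = {upper:.3e}")
```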
Test Summary List and Answer
| test/series | works? | notes |
| --- | --- | --- |
| divergence test | no | limit is zero, so test is inconclusive |
| p-series | no | not a p-series |
| geometric series | no | not a geometric series |
| alternating series test | no | not an alternating series |
| telescoping series | no | not a telescoping series |
| ratio test | no | inconclusive |
| limit comparison test | yes | |
| direct comparison test | yes | best test |
| integral test | no | not integrable |
| root test | no | too complicated |
The series $$\displaystyle{\sum_{n=1}^{\infty}{\frac{n}{n^2-\cos^2(n)}}}$$ diverges.
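Since the general term behaves like $$1/n$$, the partial sums grow without bound but only logarithmically. The sketch below (added for illustration) computes a few partial sums and compares them with $$\ln(n)$$ to show the slow growth.

```python
import math


def a(n):
    """General term a_n = n / (n^2 - cos^2(n))."""
    return n / (n**2 - math.cos(n) ** 2)


# Partial sums grow roughly like ln(n) plus a constant, with no upper bound.
s, checkpoint = 0.0, 10
for n in range(1, 100001):
    s += a(n)
    if n == checkpoint:
        print(f"n = {n:6d}   partial sum = {s:.4f}   ln(n) = {math.log(n):.4f}")
        checkpoint *= 10
```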
Notes
The direct comparison test is probably the best test to use here since we did not have to evaluate a limit that involved using the pinching theorem. However, the limit comparison test worked well too.
### Topics You Need To Understand For This Page
- all infinite series topics
- L'Hopital's Rule
|
# A Tamper-Free Semi-Universal Communication System for Deletion Channels
Shahab Asoodeh, Yi Huang, and Ishanu Chattopadhyay — Computation Institute and Institute of Genomics and System Biology, The University of Chicago, Chicago, IL 60637; [email protected], [email protected], [email protected]
###### Abstract
We investigate the problem of reliable communication between two legitimate parties over deletion channels under an active eavesdropping (aka jamming) adversarial model. To this goal, we develop a theoretical framework based on probabilistic finite-state automata to define novel encoding and decoding schemes that ensure small error probability in both message decoding as well as tamper detecting. We then experimentally verify the reliability and tamper-detection property of our scheme.
## I Introduction
The deletion channel is the simplest point-to-point communication channel that models synchronization errors. In the simplest form, the inputs are either deleted independently with probability $\delta$ or transmitted over the channel noiselessly. As a result, the length of the channel output is a random variable depending on $\delta$. Surprisingly, the capacity of the deletion channel has been one of the outstanding open problems in information theory [1]. A random coding argument for proving a Shannon-like capacity result for the deletion channel (in general for all channels with synchronization errors) was given by Dobrushin [2], which was recently improved by Kirsch and Drinea [3] to derive several lower bounds. Readers interested in the most recent results on deletion channels are referred to the survey by Mitzenmacher [4], which provides a useful history and known results on deletion channels.
As the problem of computing capacity of deletion channels is infamously hard, we focus on another problem in deletion channels. In this paper, we study the behavior of the deletion channel under an active eavesdropper attack. Secrecy models in information theory literature, initiated by Yamamoto [5], assume that there exists a passive eavesdropper who can observe the symbols being transmitted over the channel. The objective is to design a pair of (randomized) encoder and decoder such that the message is decoded with asymptotically vanishing error probability at the legitimate receiver while ensuring that the eavesdropper gains negligible information about the message. In all secrecy models (see, e.g., [6, 7, 8, 9, 10, 11, 12]) the crucial assumption is that the eavesdropper can neither jam the communication channel between legitimate parties nor can she modify any messages exchanged between them. However, in many practical scenarios, the eavesdropper can potentially change the channel, for instance, add stronger noise to change the crossover probability of a binary symmetric channel or the deletion probability of a deletion channel.
In our adversarial model, we assume that two parties (say Alice and Bob) wish to communicate over a public deletion channel while an eavesdropper (say Eve) can potentially tamper with the statistics of the channel. We focus on the deletion channel and assume that Eve can cause additional bits to be deleted, and hence increase the deletion probability of the channel. The objective is to allow a reliable communication between Alice and Bob (with vanishing error probability) regardless of the eavesdropper’s action. To this goal, we design (i) a randomized encoder using probabilistic finite-state automata which, given a fixed message, generates a random vector as the channel input and (ii) a decoder which generates an estimate of the message only when the channel is not tampered. In case the channel is indeed tampered, the decoder can declare it with asymptotically small Type I and Type II error probabilities. It is worth mentioning that the rate of our scheme is (almost) zero and hence we do not intend to study the capacity of deletion channels.
Unlike the classical channel coding where the set of all possible channel inputs (aka, codebook) must be available at the decoder, our scheme requires that only the set of PFSA’s used in the encoder be available at the decoder. This model, which we call semi-universal, is contrasted with universal channel coding [13] where neither channel statistics nor codebook are known and the decoder is required to find the pattern of the message.
The rest of the paper is organized as follows. In Section II, we discuss briefly the notion of PFSA and its properties required for our scheme. Section III specifies the channel model, encoder, decoder, and different error events. In Section IV, we discuss the effects of deletion channels on PFSA. Section V concerns the theoretical aspects of our coding scheme and Section VI contains several experimental results.
Notation We use calligraphic uppercase letters for sets (e.g. ), sans serif font for functions (e.g. ), uppercase letters for matrices (e.g. ), bold lower case letters for vectors (e.g. ). Throughout, we use to denote a PFSA and and to denote its state and symbol, respectively. We use for a sequence of symbols or interchangeably, if its size is clear in context. Also, for th entry of vector , and for the th row or column of the matrix , respectively. We use to denote a vector with the entry indexed by and a matrix with the column indexed by . Finally, .
## Ii Probabilistic finite state automata
In this section, we introduce a new measure of similarity between two vectors. To do this, we first need to define probabilistic finite-state automata (PFSA).
###### Definition 1 (PFSA).
A probabilistic finite-state automaton is a quadruple $(\mathcal{S},\mathcal{X},T,P)$, where $\mathcal{S}$ is a finite state space, $\mathcal{X}$ is a finite alphabet, $T:\mathcal{S}\times\mathcal{X}\to\mathcal{S}$ is the state transition function, and $P:\mathcal{S}\times\mathcal{X}\to[0,1]$ specifies the conditional distribution of generating a symbol conditioned on the state.
In fact, a PFSA is a directed graph with a finite number of vertices (i.e., states) and directed edges emanating from each vertex to the other. An edge from state to state is specified by two labels: (i) a symbol that updates the current state from to , that is, , and (ii) the probability of generating when the system resides in state , i.e., . For instance, in the PFSA described in Fig. 1, thus, the system residing in states evolve to state with probability and it generates symbol . Clearly, for all .
Given two symbols and , one can define the transition function for the concatenation as . Letting denote the set of all possible concatenation of finitely many symbols from , one can easily proceed to define as above for each and . We say that a PFSA is strongly connected if for any pair of distinct states and , there exists a sequence such that . Let be the set of all strongly connected PFSAs. The significance of strongly connected PFSAs is that their corresponding Markov chains (i.e., the Markov chain with state space and transition matrix whose entry is ) has a unique stationary distribution (thus initial state can be assumed to be irrelevant).
###### Definition 2 (Γ-expression for PFSA).
We notice that a PFSA $g$ is uniquely determined by the matrices $\{\Gamma_{g,x}\}_{x\in\mathcal{X}}$ given by

$$(\Gamma_{g,x})_{i,j}=\begin{cases}P_g(s_i,x), & T_g(s_i,x)=s_j,\\ 0, & \text{otherwise.}\end{cases}$$

The state-to-state transition matrix is defined as

$$P_g=\sum_{x\in\mathcal{X}}\Gamma_{g,x}, \tag{1}$$

and the state-to-symbol transition matrix is given by

$$\tilde{P}_g=\big(\Gamma_{g,x}\mathbf{1}_{|\mathcal{S}|}\big)_{x\in\mathcal{X}},$$

where $\mathbf{1}_{|\mathcal{S}|}$ is the length-$|\mathcal{S}|$ all-one vector.
For the PFSA illustrated in Fig. 1, we have
$$\Gamma_{g,0}=\begin{pmatrix}.3&0&0&0\\0&0&.6&0\\.8&0&0&0\\0&0&.5&0\end{pmatrix},\qquad \Gamma_{g,1}=\begin{pmatrix}0&.7&0&0\\0&0&0&.4\\0&.2&0&0\\0&0&0&.5\end{pmatrix},$$

$$P_g=\begin{pmatrix}.3&.7&0&0\\0&0&.6&.4\\.8&.2&0&0\\0&0&.5&.5\end{pmatrix},\qquad\text{and}\qquad \tilde{P}_g=\begin{pmatrix}.3&.7\\.6&.4\\.8&.2\\.5&.5\end{pmatrix}.$$
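As an illustration (our own sketch, not part of the paper), the following Python snippet builds the $\Gamma$ matrices above and recovers $P_g$ and $\tilde P_g$ from them via Eq. (1) and the row-sum construction:

```python
import numpy as np

# Gamma matrices of the 4-state PFSA of Fig. 1 (values as printed above)
G0 = np.array([[.3, 0, 0, 0],
               [ 0, 0,.6, 0],
               [.8, 0, 0, 0],
               [ 0, 0,.5, 0]])
G1 = np.array([[ 0,.7, 0, 0],
               [ 0, 0, 0,.4],
               [ 0,.2, 0, 0],
               [ 0, 0, 0,.5]])

P = G0 + G1                                                  # state-to-state matrix, Eq. (1)
Ptilde = np.column_stack([G0.sum(axis=1), G1.sum(axis=1)])   # state-to-symbol matrix

print(P)        # rows sum to one
print(Ptilde)   # row s gives (P(s,0), P(s,1))
```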
###### Definition 3 (Generalized PFSA).
A generalized PFSA is a PFSA whose matrices $\Gamma_{g,x}$ can have more than one non-zero (positive) entry per row. In this case, we still have

$$\big(\Gamma_{g,x}\mathbf{1}_{|\mathcal{S}|}\big)_i=P_g(s_i,x).$$

However, the state transition might not be deterministic; instead it is a probability distribution over the next state.
Shannon [14] appears to be the first one who made use of PFSAs to describe stationary and ergodic sources. Given $g$, first a state $s_1$ is chosen randomly according to the stationary distribution; then a symbol $x_1$ is generated with probability $P_g(s_1,x_1)$, which takes the system from state $s_1$ to state $s_2=T_g(s_1,x_1)$. A new symbol $x_2$ is then generated with probability $P_g(s_2,x_2)$. Letting this process run for $n$ time units, we obtain a sequence $\mathbf{x}=(x_1,\dots,x_n)$. In this case, we say that $\mathbf{x}$ is a realization of $g$. According to Shannon, each state captures the "residue of influence" of the preceding symbol on the system.
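To make this sampling procedure concrete, here is a minimal Python sketch (our own illustration; the two-state example machine and all function names are ours, not the authors'):

```python
import numpy as np

def stationary(P):
    # left eigenvector of P for eigenvalue 1, normalized to a distribution
    w, v = np.linalg.eig(P.T)
    p = np.real(v[:, np.argmin(np.abs(w - 1))])
    return p / p.sum()

def realize(gammas, n, rng=np.random.default_rng(0)):
    # gammas: list of Gamma_{g,x}, one matrix per symbol x
    P = sum(gammas)
    s = rng.choice(len(P), p=stationary(P))       # initial state ~ stationary distribution
    out = []
    for _ in range(n):
        probs = [G[s].sum() for G in gammas]      # P(s, x) for each symbol x
        x = rng.choice(len(gammas), p=probs)
        out.append(int(x))
        s = int(np.argmax(gammas[x][s]))          # deterministic next state T(s, x)
    return out

# a small two-state example (an M2-style machine, see Section IV-D)
G0 = np.array([[0.3, 0.0], [0.8, 0.0]])
G1 = np.array([[0.0, 0.7], [0.0, 0.2]])
print(realize([G0, G1], 20))
```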
For , we denote by the fact that generates .
## Iii System Model and Setup
Suppose Alice has a message which takes value in a finite set and seeks to transmit it reliably to Bob over a deletion channel with deletion probability . The communication channel is assumed to be public, that is, an active eavesdropper, say Eve, can access and possibly tamper the channel. For simplicity, we assume that Eve may delete extra bits and thus changing the channel from to with .
The objective is to design a pair of encoder and decoder that enables Alice and Bob to reliably communicate over only when he is ensured that the channel is not tampered. In classical information theory, the decoder must be tuned with the channel statistics. Hence, reliable communication occurs only when Bob knows the deletion probability . However, Eve might have tampered the channel and increased deletion probability to , and since Bob’s decoding policy was tuned to , this might cause a decoding error –regardless of Bob’s decoding algorithm. Therefore, reliability of the decoding must be always conditioned on the fact that the channel has not been tampered during communication.
Motivated by this observation, we propose the following coding scheme. We first propose a two-step encoder: each message is first sent to a function which maps to a PFSA in , then another function generates a realization of PFSA and sends it over the memoryless channel . Therefore, the encoder function is the composition (see Fig. 2). Unlike the classical setting, Bob need not know the set of all channel inputs for each (aka codebook). Instead, we assume Bob knows (thus the name semi-universal scheme). The output of the channel is an -valued random vector whose length is a binomial random variable (corresponding to how many elements of are deleted). Upon receiving , Bob applies to generate where is an estimate of Alice’s message and specifies whether or not the channel has been tampered. He then declares as the message only when . Therefore, the goal is to design such that for sufficiently large
$$\Pr(T=0\mid \text{channel is tampered})+\Pr(T=1\mid \text{channel is not tampered})<\varepsilon, \tag{2}$$
and simultaneously
$$\Pr(M\neq\hat M\mid T=0)\leq\varepsilon, \tag{3}$$
for any uniformly chosen message . We say that the reliable tamper-free communication is possible if (2) and (3) hold simultaneously for any .
## Iv PFSA through deletion channel
In this section, we study the channel effect on PFSA’s by monitoring the change of the likelihood of being generated by a PFSA at the channel output. To do this, we first study the likelihood when in Section IV-A, and then move on to the case of positive in Section IV-B. One of the main results in this section is to show that the output of (i.e., ) can be equivalently generated by a generalized PFSA whose and state-to-state transition matrix follow simple closed forms (cf. Theorem 1). In section IV-C, we discuss some basic properties of that will be useful for later development. We conclude this section by introducing the class M2 of PFSAs which is closed under deletion. For notational brevity, we remove the subscript when it is clearly understood from the context.
### Iv-a PFSA over W(0): no deletion
Let a sequence of symbols be given. We define (or simply ) to be the probability that generates . Then we have
$$p(\mathbf{x}^n)=p(x_1)\,p(x_2|x^1)\cdots p(x_n|x^{n-1}),$$
where is the conditional probability of generating given that generated . It is clear from section II that
$$\begin{aligned} p_0&=p, & & \\ p(x_1)&=\big(p_0^T\tilde P\big)_{x_1}, & p_1^T&=\frac{p_0^T\Gamma_{x_1}}{\big\|p_0^T\Gamma_{x_1}\big\|_1}, \\ p(x_2|x^1)&=\big(p_1^T\tilde P\big)_{x_2}, & p_2^T&=\frac{p_1^T\Gamma_{x_2}}{\big\|p_1^T\Gamma_{x_2}\big\|_1}, \\ &\;\;\vdots & & \\ p(x_{n-1}|x^{n-2})&=\big(p_{n-2}^T\tilde P\big)_{x_{n-1}}, & p_{n-1}^T&=\frac{p_{n-2}^T\Gamma_{x_{n-1}}}{\big\|p_{n-2}^T\Gamma_{x_{n-1}}\big\|_1}, \end{aligned}\tag{4}$$

and finally, $p(x_n|x^{n-1})=\big(p_{n-1}^T\tilde P\big)_{x_n}$, where $(\cdot)^T$ denotes matrix transpose.
It is clear from the above update rule that any sequence $\mathbf{x}$ induces two probability distributions: one on the state space $\mathcal{S}$ and the other one on $\mathcal{X}$. We denote the former by $p_g(\mathbf{x})$ and the latter by $p(\cdot|\mathbf{x})$. More precisely, since

$$\big\|p_g^T(\mathbf{x})\Gamma_{g,x}\big\|_1 = p_g^T(\mathbf{x})\Gamma_{g,x}\mathbf{1}_{|\mathcal{S}|} = p_g^T(\mathbf{x})\big(\tilde P_g\big)_{\cdot,x} = \big(p_g^T(\mathbf{x})\tilde P_g\big)_x = p(x|\mathbf{x}),$$

we have

$$p_g^T(\mathbf{x}x)\,p(x|\mathbf{x}) = p_g^T(\mathbf{x})\Gamma_{g,x}. \tag{5}$$

We also call $p(\cdot|\mathbf{x})$ the symbolic derivative of $g$ induced by $\mathbf{x}$.
### Iv-B PFSA over W(δ): deletion with probability δ>0
Now we move forward to investigate the effect of deletion probability on PFSA transmission. The following result is a key for our analysis.
###### Theorem 1.
Let be a channel input and be a channel output with positive deletion probability . Then , where is a generalized PFSA identified by for all , where is the state-to-state transition matrix of and is as defined in (6).
###### Proof.
Assume Bob has observed . Then we have
$$\begin{aligned} p(x_i|x^{i-1}) &= (1-\delta)\big(p_{i-1}^T\tilde P\big)_{x_i}+\delta(1-\delta)\big(p_{i-1}^T P\tilde P\big)_{x_i}+\delta^2(1-\delta)\big(p_{i-1}^T P^2\tilde P\big)_{x_i}+\cdots \\ &= (1-\delta)\Big(p_{i-1}^T\Big(\sum_{k=0}^{\infty}\delta^k P^k\Big)\tilde P\Big)_{x_i} \\ &= \big(p_{i-1}^T Q(P,\delta)\tilde P\big)_{x_i}, \end{aligned}$$

where

$$Q(P,\delta)=(1-\delta)\sum_{k=0}^{\infty}\delta^k P^k=(1-\delta)(I-\delta P)^{-1}. \tag{6}$$

Analogous to (4), we can define the following distribution induced on $\mathcal{S}$:

$$p_i^T=\frac{p_{i-1}^T Q(P,\delta)\Gamma_{x_i}}{\big\|p_{i-1}^T Q(P,\delta)\Gamma_{x_i}\big\|_1}. \tag{7}$$

Comparing (7) with the expressions in (4), the result follows. ∎
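The matrix $Q(P,\delta)$ is simple to compute numerically. The sketch below (our own illustration, using the small M2-style machine from earlier) forms $Q(P,\delta)$ per Eq. (6) and the resulting generalized-PFSA matrices $Q(P,\delta)\Gamma_x$, and checks that their sum remains row-stochastic:

```python
import numpy as np

def Q(P, delta):
    # Q(P, delta) = (1 - delta) * (I - delta P)^{-1}, Eq. (6)
    n = P.shape[0]
    return (1 - delta) * np.linalg.inv(np.eye(n) - delta * P)

# example machine (mu = 0.3, nu = 0.8)
G0 = np.array([[0.3, 0.0], [0.8, 0.0]])
G1 = np.array([[0.0, 0.7], [0.0, 0.2]])
P = G0 + G1

delta = 0.4
Qm = Q(P, delta)
G0_d, G1_d = Qm @ G0, Qm @ G1        # Gamma matrices of the generalized PFSA g(delta)

print(G0_d + G1_d)                    # equals Q(P, delta) @ P
print((G0_d + G1_d).sum(axis=1))      # rows still sum to 1
```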
###### Remark 1.
Notice that while the row-stochastic matrix $P$ may not be invertible, $I-\delta P$ is non-singular for all $\delta<1$, as the eigenvalues of $P$ are at most one in absolute value. Moreover, it is clear from (6) that $Q(P,\delta)$ is also a row-stochastic matrix with $\mathbf{1}_{|\mathcal{S}|}$ being its eigenvector corresponding to eigenvalue one. We will give a closer look at the eigenvalues of $Q(P,\delta)$ in the next section.
### Iv-C Properties of the generalized PFSA
We start by analyzing the eigenspace of the state-to-state transition matrix of . Note that it follows from (1) that .
###### Theorem 2.
Let $p_g$ be the stationary distribution of a strongly connected $g$. Then the generalized PFSA $g(\delta)$ is also strongly connected with stationary distribution $p_g$.
###### Proof.
Let be an eigenvalue of . Then is an eigenvalue of . Define . Then the result follows from the following observations:
1. For , for all , and hence .
2. For , for all , and furthermore, . ∎
Then following is an immediate corollary.
###### Corollary 1.
We have for all $x\in\mathcal{X}$

$$p_g(x)=p_{g(\delta)}(x).$$
###### Proof.
We have
$$p_{g(\delta)}^T\tilde P_{g(\delta)} = p_g^T\tilde P_{g(\delta)} = p_g^T Q(P_g,\delta)\tilde P_g = p_g^T\tilde P_g. \qquad\square$$
A natural question is what happens when $\delta\to 1$. Letting $g(1)$ denote the machine corresponding to $\delta=1$, we now show that, quite expectedly, $g(1)$ is a single-state machine.
###### Theorem 3.
is a single-state PFSA.
###### Proof.
First note that the observations given in the proof of Theorem 2 imply that
$$\lim_{\delta\to 1}Q(P_g,\delta)=\mathbf{1}_{|\mathcal{S}|}\,p_g^T,$$
and consequently is a PFSA specified by for .
Suppose is observed. Following the argument given in section IV-B, we get
$$\begin{aligned} p_{g(1)}(\mathbf{x}x) &= p^T\big(\mathbf{1}p^T\Gamma_{x_1}\big)\big(\mathbf{1}p^T\Gamma_{x_2}\big)\cdots\big(\mathbf{1}p^T\Gamma_{x_n}\big)\big(\tilde P_{g(1)}\big)_{\cdot,x} \\ &= p^T\big(\mathbf{1}p^T\Gamma_{x_1}\big)\big(\mathbf{1}p^T\Gamma_{x_2}\big)\cdots\big(\mathbf{1}p^T\Gamma_{x_n}\big)\big(\mathbf{1}\,p^T\Gamma_{x}\mathbf{1}\big) \\ &= \big(p^T\mathbf{1}\big)\big(p^T\Gamma_{x_1}\mathbf{1}\big)\cdots\big(p^T\Gamma_{x_n}\mathbf{1}\big)\big(p^T\Gamma_{x}\mathbf{1}\big), \end{aligned}$$
and hence, by induction, for all . Since an i.i.d. process corresponds to a single-state PFSA, we conclude that is in fact a single-state PFSA. ∎
### Iv-D M2 Class of PFSA
We note that $g(\delta)$ of a PFSA $g$ is not necessarily a PFSA; for instance, the $\Gamma$-expression of the generalized PFSA obtained from the PFSA described in Fig. 1 has rows with more than one non-zero entry.
Nevertheless, we introduce M2, a class of PFSAs which is closed under deletion, i.e., $g\in$ M2 implies $g(\delta)\in$ M2 for all $\delta$. As this class is instrumental in our experimental results, we shall study it in more detail.
M2 is the collection of $2$-state PFSAs on a binary alphabet: $g(\mu,\nu)\in$ M2 is specified by the parameters $\mu,\nu\in(0,1)$ and

$$\Gamma_{g(\mu,\nu),0}=\begin{pmatrix}\mu&0\\\nu&0\end{pmatrix},\qquad \Gamma_{g(\mu,\nu),1}=\begin{pmatrix}0&1-\mu\\0&1-\nu\end{pmatrix}.$$

Fig. 3 illustrates $g(\mu,\nu)$ and its corresponding $g(\mu,\nu)(\delta)$, which is obtained from Theorem 1. Since $\Gamma_{g(\mu,\nu)(\delta),x}$ has exactly the same form – containing a single column of non-zero entries for all $x$ – it is clear that $g(\mu,\nu)(\delta)\in$ M2.
Since each $g(\mu,\nu)$ is specified by two numbers, we can parametrize M2 by a square in $(0,1)^2$. In Fig. 4, we show the effect of deletion probability on M2 machines. The key observation is that deletion probability drives the machines towards the $\mu=\nu$ line.
## V The convergence of likelihood
The goal of this section is to lay the theoretical ground for our algorithms for decoding and tamper detecting with PFSAs. In Section V-C, we employ maximum likelihood framework to decode the generating PFSA given the channel output. We show that likelihood is closely related to entropy rate and KL divergence of PFSAs (to be defined and calculated in V-A and V-B).
### V-a Entropy rate of PFSA
Let $g$ be a PFSA. We define $H_n(g)$ as follows:

$$H_n(g)\coloneqq-\sum_{|\mathbf{x}|=n}p_g(\mathbf{x})\log p_g(\mathbf{x}).$$

Then the entropy rate of $g$ is defined as

$$H(g)\coloneqq\lim_{n\to\infty}\frac{1}{n}H_n(g).$$
Note that $H(g)$ is in fact the entropy rate of the stochastic process corresponding to $g$ [15]. In the next theorem, we show that the above limit exists and that the entropy rate has a simple closed form.
###### Theorem 4.
We have
$$H(g)=\sum_{s\in\mathcal{S}}(p_g)_s\,H\big((\tilde P_g)_{s,\cdot}\big).$$
###### Proof.
See Appendix -A. ∎
It readily follows from the theorem above that the entropy rate for $g(\mu,\nu)\in$ M2 is

$$H(g(\mu,\nu))=\frac{\nu\,h_b(\mu)}{\bar\mu+\nu}+\frac{\bar\mu\,h_b(\nu)}{\bar\mu+\nu},$$

where $\bar\mu\coloneqq 1-\mu$ and $h_b(\cdot)$ is the binary entropy function.
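A small numerical check of this closed form against the general formula of Theorem 4 (our own sketch, assuming a binary alphabet and natural logarithms) is given below:

```python
import numpy as np

def hb(p):
    # binary entropy in nats (use log2 for bits)
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p*np.log(p) + (1-p)*np.log(1-p))

def entropy_rate_m2(mu, nu):
    # closed form for g(mu, nu): H = [nu*h(mu) + (1-mu)*h(nu)] / (1-mu+nu)
    return (nu*hb(mu) + (1-mu)*hb(nu)) / (1-mu+nu)

def entropy_rate_general(P, Ptilde):
    # Theorem 4: H(g) = sum_s (p_g)_s * H(row s of Ptilde), p_g the stationary distribution
    w, v = np.linalg.eig(P.T)
    pg = np.real(v[:, np.argmin(np.abs(w-1))]); pg /= pg.sum()
    return sum(pg[s]*hb(Ptilde[s, 0]) for s in range(len(pg)))

mu, nu = 0.3, 0.8
P = np.array([[mu, 1-mu], [nu, 1-nu]])   # for an M2 machine, P equals Ptilde
print(entropy_rate_m2(mu, nu), entropy_rate_general(P, P))
```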
Next, we show that deletion increases entropy rate, which will be critical for tamper detection purpose.
###### Theorem 5.
The map $\delta\mapsto H\big(g(\mu,\nu)(\delta)\big)$ is monotonically increasing when $\mu\neq\nu$.
###### Proof.
We have
$$\mu(\delta)=\frac{\mu-\delta(\mu-\nu)}{1-\delta(\mu-\nu)},\qquad \nu(\delta)=\frac{\nu}{1-\delta(\mu-\nu)},$$

and

$$H\big(g(\mu,\nu)(\delta)\big)=\frac{\nu}{1-\mu+\nu}\,h_b\!\left(\frac{\mu-\delta(\mu-\nu)}{1-\delta(\mu-\nu)}\right)+\frac{1-\mu}{1-\mu+\nu}\,h_b\!\left(\frac{\nu}{1-\delta(\mu-\nu)}\right).$$

We can then write

$$\frac{d}{d\delta}H\big(g(\mu,\nu)(\delta)\big)=\frac{\alpha\bar\mu\nu}{(1-\alpha\delta)^2\,\bar\alpha}\,\log\frac{(\mu-\delta\alpha)(\bar\nu-\delta\alpha)}{\bar\mu\nu},$$

where $\alpha\coloneqq\mu-\nu$. It is straightforward to check that the derivative is always positive when $\mu\neq\nu$. ∎
### V-B KL divergence of two PFSAs
Let $g_1$ and $g_2$ be two PFSAs over the same alphabet. The $n$-th order KL divergence between $g_1$ and $g_2$ is the KL divergence on the space of length-$n$ sequences, i.e.,

$$D_n(g_1\|g_2)=\sum_{|\mathbf{x}|=n}p_{g_1}(\mathbf{x})\log\frac{p_{g_1}(\mathbf{x})}{p_{g_2}(\mathbf{x})}.$$

Analogous to the entropy rate, we can define the KL divergence between $g_1$ and $g_2$ as

$$D_{KL}(g_1\|g_2)\coloneqq\lim_{n\to\infty}\frac{1}{n}D_n(g_1\|g_2).$$
Theorem 6 below shows that this limit exists and also derives a closed form for the KL divergence between two PFSAs. But before we can state the theorem, we need to introduce a very useful construction on two PFSAs, called the synchronous composition.
###### Definition 4 (synchronous composition).
Let $g_1$ and $g_2$ be two PFSAs with the same alphabet $\mathcal{X}$, and let $g_c$ be the probabilistic automaton specified by the quadruple $(\mathcal{S}_c,\mathcal{X},T_c,P_c)$, where

$$\mathcal{S}_c=\mathcal{S}_1\times\mathcal{T}=\{(s,t)\}_{s\in\mathcal{S}_1,\,t\in\mathcal{T}}$$

is the Cartesian product of the state spaces $\mathcal{S}_1$ and $\mathcal{T}$ of $g_1$ and $g_2$, and

$$T_c\big((s,t),x\big)=\big(T_1(s,x),\,T_2(t,x)\big),\qquad P_c\big((s,t),x\big)=P_1(s,x),$$

for all $s\in\mathcal{S}_1$, $t\in\mathcal{T}$, and $x\in\mathcal{X}$. Then the synchronous composition of $g_1$ and $g_2$ is defined to be any absorbing strongly connected component of $g_c$, i.e., a strongly connected component without any out-going edges.
It is not clear that there is only one absorbing strongly connected component in . However, as proved in Theorem 8 in Appendix -B, is equivalent to irrespective of the choice of absorbing strongly connected component, i.e., for .
In Figs. 6, 7, 8, and 9, we provide examples of synchronous compositions for several and which shed light on the fact that the synchronous composition of two strongly connected PFSA might not be strongly connected.
###### Theorem 6.
Let and be the stationary distribution of . Then we have
###### Proof.
See Appendix -B. ∎
In light of this theorem, one can easily show
$$D_{KL}(g_1\|g_2)=\frac{\nu_1\,D_{KL}(\mu_1\|\mu_2)}{\bar\mu_1+\nu_1}+\frac{\bar\mu_1\,D_{KL}(\nu_1\|\nu_2)}{\bar\mu_1+\nu_1},$$

where the divergences on the right-hand side are between the corresponding Bernoulli distributions.
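A direct evaluation of this closed form for two M2 machines (our own sketch; parameter values are arbitrary examples) is:

```python
import numpy as np

def d_bin(p, q):
    # binary KL divergence D(p || q) in nats
    return p*np.log(p/q) + (1-p)*np.log((1-p)/(1-q))

def dkl_m2(m1, m2):
    # KL divergence rate between M2 machines g1 = (mu1, nu1) and g2 = (mu2, nu2)
    (mu1, nu1), (mu2, nu2) = m1, m2
    return (nu1*d_bin(mu1, mu2) + (1-mu1)*d_bin(nu1, nu2)) / (1-mu1+nu1)

print(dkl_m2((0.3, 0.8), (0.4, 0.6)))
```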
### V-C Convergence of log likelihood
According to the Shannon–McMillan–Breiman theorem [15, Theorem 16.8.1], we have $-\frac1n\log p_g(\mathbf{x}^n)\to H(g)$ with probability one for any sequence $\mathbf{x}^n$ generated by $g$. A natural question is what the log-likelihood converges to if the sequence is generated by a different machine. The following theorem states that the log-likelihood converges to the entropy of the generating machine plus the KL divergence, which accounts for the mismatch.
###### Theorem 7.
For any , we have with probability one
$$-\frac1n\sum_{i=1}^n\log p_{g'}\big(x_i|x^{i-1}\big)\;\to\;H(g)+D_{KL}(g\|g'),$$

for any PFSA $g'$.
###### Proof.
First note that
$$-\frac1n\sum_{i=1}^n\log p_{g'}\big(x_i|x^{i-1}\big)=-\frac1n\log p_g(\mathbf{x})+\frac1n\sum_{i=1}^n\log\frac{p_g\big(x_i|x^{i-1}\big)}{p_{g'}\big(x_i|x^{i-1}\big)}. \tag{8}$$

Clearly, the first term in the above sum converges to $H(g)$. To show the convergence of the second term, let $Z_i\coloneqq\log\frac{p_g(x_i|x^{i-1})}{p_{g'}(x_i|x^{i-1})}$. Notice that for any PFSA in M2, the conditional probability $p_g(x_i|x^{i-1})$ depends only on the previous symbol $x_{i-1}$, and hence the process $\{Z_i\}$ is a Markov process. Let $\mathcal{Z}_0$ and $\mathcal{Z}_1$ denote the sets of indices $i$ such that $x_{i-1}=0$ and $x_{i-1}=1$, respectively. Then we have

$$\frac1n\sum_{i=1}^n Z_i=\frac1n\sum_{i\in\mathcal{Z}_0}Z_i+\frac1n\sum_{i\in\mathcal{Z}_1}Z_i. \tag{9}$$

It is straightforward to show that for all $i\in\mathcal{Z}_0$

$$Z_i=\mathbf{1}\{x_i=0\}\log\frac{\mu_g}{\mu_{g'}}+\mathbf{1}\{x_i=1\}\log\frac{\bar\mu_g}{\bar\mu_{g'}},$$

and for all $i\in\mathcal{Z}_1$

$$Z_i=\mathbf{1}\{x_i=0\}\log\frac{\nu_g}{\nu_{g'}}+\mathbf{1}\{x_i=1\}\log\frac{\bar\nu_g}{\bar\nu_{g'}}.$$
It follows from (9) that
$$\begin{aligned}\frac1n\sum_{i=1}^n Z_i &= \frac1n\Big(\log\frac{\mu_g}{\mu_{g'}}\Big)\sum_{i=1}^n\mathbf{1}\{x_{i-1}=0,x_i=0\}+\frac1n\Big(\log\frac{\bar\mu_g}{\bar\mu_{g'}}\Big)\sum_{i=1}^n\mathbf{1}\{x_{i-1}=0,x_i=1\}\\ &\quad+\frac1n\Big(\log\frac{\nu_g}{\nu_{g'}}\Big)\sum_{i=1}^n\mathbf{1}\{x_{i-1}=1,x_i=0\}+\frac1n\Big(\log\frac{\bar\nu_g}{\bar\nu_{g'}}\Big)\sum_{i=1}^n\mathbf{1}\{x_{i-1}=1,x_i=1\}\\ &\;\xrightarrow[n\to\infty]{}\;D_{KL}(g\|g').\end{aligned}$$
For ease of presentation, we define
$$L\big(g',\mathbf{x}^n\leftarrow g\big)\coloneqq-\frac1n\sum_{i=1}^n\log p_{g'}\big(x_i|x^{i-1}\big).$$

When the generating machine is not known, we write $L(g',\mathbf{x}^n)$ for the likelihood of $g'$ generating $\mathbf{x}^n$.
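This negative log-likelihood rate can be computed with the belief-state recursion of Eqs. (4)/(7). The sketch below is our own illustration (function names and the example machines are ours), not the authors' code:

```python
import numpy as np

def neg_log_likelihood_rate(gammas, x):
    """L(g', x^n): -(1/n) * sum_i log p_{g'}(x_i | x^{i-1}),
    computed with the state-distribution recursion of Eq. (4)."""
    P = sum(gammas)
    w, v = np.linalg.eig(P.T)                     # start from the stationary distribution
    p = np.real(v[:, np.argmin(np.abs(w - 1))]); p = p / p.sum()
    total = 0.0
    for xi in x:
        q = p @ gammas[xi]                        # unnormalized posterior over states
        pxi = q.sum()                             # p(x_i | x^{i-1})
        total += np.log(pxi)
        p = q / pxi                               # normalized state distribution
    return -total / len(x)

def m2(mu, nu):
    # Gamma matrices of an M2 machine g(mu, nu)
    return [np.array([[mu, 0.0], [nu, 0.0]]),
            np.array([[0.0, 1 - mu], [0.0, 1 - nu]])]

x = [0, 1, 1, 0, 0, 1, 0, 1, 1, 1]
print(neg_log_likelihood_rate(m2(0.3, 0.8), x),
      neg_log_likelihood_rate(m2(0.7, 0.2), x))
```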
## Vi Algorithm and simulation
### Vi-a Decoding
In this and the following section, we assume that we have a set of PFSAs , with for all . We will briefly discuss heuristics on how to generate a set of PFSAs that are good for tamper detecting and decoding in Section VI-C.
We saw in Theorem 7 that
$$L\big(g_j(\delta),\mathbf{x}^n\leftarrow g_i(\delta)\big)\;\to\;H\big(g_i(\delta)\big)+D_{KL}\big(g_i(\delta)\,\|\,g_j(\delta)\big), \tag{10}$$
which motivates the following definition for the decoding function in Fig. 2
$$\psi(\mathbf{x})=\operatorname*{arg\,min}_{m\in\mathcal{M}}\;L\big(g_m(\delta),\mathbf{x}^n\big).$$
We apply this decoding strategy in Fig. 5 when and two different message sets with or .
### Vi-B Tamper detecting
We assume that an active eavesdropper tampers with the channel in such a way that the deletion probability increases from $\delta$ to some $\delta'>\delta$. Following Theorems 5 and 7, we get

$$L\big(g_j(\delta),\mathbf{x}\leftarrow g_i(\delta')\big)\;\to\;H\big(g_i(\delta')\big)+D_{KL}\big(g_i(\delta')\,\|\,g_j(\delta)\big)\;\geq\;H\big(g_i(\delta)\big), \tag{11}$$

where the inequality is due to Theorem 5. Hence, tampering with the channel results in an increase in the likelihood. This leads to our tamper detecting procedure detailed in Algorithm 1.
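To convey the spirit of the decoder and the likelihood-threshold detector (this is our own sketch, not the authors' Algorithm 1; the threshold `eps`, the specialization to M2 machines, and all names are assumptions of ours):

```python
import numpy as np

def m2_nll_rate(mu, nu, x):
    # negative log-likelihood rate L(g(mu,nu), x) for an M2 machine:
    # after emitting 0 the machine is in state 1 (emits 0 w.p. mu),
    # after emitting 1 it is in state 2 (emits 0 w.p. nu)
    ll, prev = 0.0, None
    for xi in x:
        if prev is None:
            p0 = nu / (1 - mu + nu)          # stationary probability of emitting 0
        else:
            p0 = mu if prev == 0 else nu
        ll += np.log(p0 if xi == 0 else 1 - p0)
        prev = xi
    return -ll / len(x)

def m2_entropy(mu, nu):
    hb = lambda p: -(p*np.log(p) + (1-p)*np.log(1-p))
    return (nu*hb(mu) + (1-mu)*hb(nu)) / (1-mu+nu)

def decode_and_detect(machines, x, eps=0.05):
    # machines: dict message -> (mu, nu) of the *deleted* machine g_m(delta)
    scores = {m: m2_nll_rate(mu, nu, x) for m, (mu, nu) in machines.items()}
    m_hat = min(scores, key=scores.get)
    # per Eq. (11), tampering pushes the best score above H(g_m(delta))
    tampered = scores[m_hat] > m2_entropy(*machines[m_hat]) + eps
    return m_hat, tampered
```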
### Vi-C Generate machines with good separation
For a fixed number of messages, we need to choose a set of M2 PFSAs with the best decoding and tamper detection performance. It is important to indicate that (1) the decoding error will be significantly lowered by increasing the pairwise KL divergence between the machines, according to (10), and (2) the tampering detection error will be improved by making sure the entropy increase under additional deletion is large, according to (11) in Section VI-B. However, there is a trade-off here – to increase pairwise KL divergence, we want the machines to be spread more evenly in the parameter space, while, according to Theorem 5, to increase the entropy gap, we need the machines to stay away from being single-state, i.e., away from the $\mu=\nu$ line.
Here, we describe briefly how we design
|
Tunnel and Underground Space. December 2018. 651-669
https://doi.org/10.7474/TUS.2018.28.6.651
## 1. INTRODUCTION
The commonly used tunnelling methods may be grouped into two categories; “mechanical excavation” and “drilling and blasting”. While mechanical excavation is capable of providing higher advance rates under certain conditions, more control on strata, and safer work environment, drilling and blasting performs better in short lengths and abrasive rocks (Robbins, 2000). In order to choose between those two categories and/or between the different methods of mechanical excavation, one has to consider several factors such as feasibility, installation problems, ability of negotiation with adverse geological conditions, total cost and advance rate (Terezopoulos, 1987). Those factors finally influence the tunnelling process in two general forms: Blasting/Cutting performance and drill bit/cutter wear (Thuro and Plinninger, 2003). Therefore, determining the performance of mechanical excavators is of crucial importance from the very early phase of feasibility studies.
Although there are different empirical performance prediction equations for each type of commonly used mechanical excavators, the specific energy (SE) based approach serves as a standard procedure that is applicable to all excavation machines (Rostami, 2011). SE is defined as the energy required for cutting a unit volume of rock (Rostami, 2011). The SE based approach may be mathematically explained as follows (Rostami et al., 1994):
$$IPR\;=\;\frac{HP\times\eta}{SE}$$ (1)
where IPR is instantaneous production rate in m3/hour, HP is the machine power in kW, $$\eta$$ is the efficiency of the machine, and SE is specific energy in kWh/m3 for full-scale cutting tools (disc, pick, etc.). HP is one of the machine's capacity indices (e.g. thrust and torque) and is determined by the type of machine. On the other hand, the SE is affected by the cutting conditions (i.e. depth of penetration and cut spacing) and rock properties (i.e. strengths of rock, brittleness of rock, weakness plane, etc.). The SE is generally obtained from the full-scaled linear cutting machine (LCM) test, which is applicable to control the cutting conditions during the rock cutting by using full-scaled cutting tools (Jeong et al., 2011). The rock specimen of the LCM is typically made of a preconditioned piece of rock fixed in a stiff frame. The rock surface is conditioned so that it can be representative of the actual excavation surface. Depending on the desired experiment plan, the same cutting process is repeated using different spacings between the adjacent cuts and different depths of cut. Fig. 1 shows how SE reacts to changes in depth of cut and cut spacing. As the size of the cutter is the same as that of a real one, the results of this increasingly popular test bear minimum amount of uncertainty and can cover anomalous aspects of rock behavior that are rather inexplicable by its physical properties (Bilgin et al., 2013). The outcomes of this test may be used to either predict the performance of the mechanical excavators or enhance it through choosing the optimum cut spacing to depth of cut ratio (Bilgin et al., 2013). Eq. 2 shows how SE is determined based on the cutting force measured as a result of full-scale cutting experiment (Pomeroy, 1963; Roxborough, 1973):
$$SE\;=\;\frac{FC}{Q\times3.6}$$ (2)
where FC is the cutting force in kN, Q is defined as yield or the volume of rock cut in unit length of cutting (m3/km), and SE is the specific energy in kWh/m3.
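As a small worked illustration of the unit handling in Eq. 2 (our own sketch; the numerical values are placeholders, not measured data):

```python
def specific_energy(fc_kN, q_m3_per_km):
    # SE = FC / (Q * 3.6)  [kWh/m^3], Eq. 2; FC/Q alone is in MJ/m^3 since kN*km = MJ
    return fc_kN / (q_m3_per_km * 3.6)

se = specific_energy(fc_kN=15.0, q_m3_per_km=0.9)   # hypothetical cutting force and yield
print(f"SE = {se:.2f} kWh/m^3 = {se*3.6:.2f} MJ/m^3")
```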
##### Fig. 1.
Effect of line spacing and depth of cut on specific energy (Bilgin et al., 2013)
The full-scaled LCM directly measures the cutting force and the volume of cut rock during rock cutting process, but it requires a large amount of time and effort for preparing the full-scale specimen and performing the test (Jeong et al., 2012). To overcome the limitation of the test, the statistical analysis could be a reasonable way to estimate the SE. However, it is essential to establish the sufficient database and it is important to determine an appropriate statistical analysis method for estimation of SE.
The purpose of this research is to investigate the effect of intact rock properties, i.e. Uniaxial Compressive Strength (UCS) and Brazilian Tensile Strength (BTS), and operational properties, i.e. penetration depth (P) and cut spacing (S), on SE. Within the context of the present research, a total of 46 full-scale linear cutting test results, carried out on various types of rock using pick cutters having a conical shape, were used. The data was obtained from Copur et al. (2003), Balci et al. (2004) and Wang et al. (2018). Table 1 shows the descriptive statistics for the database used in this study. Copur et al. (2012) categorized the different types of cutters with respect to the maximum UCS values for the rocks to be cut. According to their classification, pick cutters may be used for cutting rocks with UCS values of up to 120 MPa. Fig. 2 shows the distribution of the UCS values used in this study over the range of application for pick cutters. For collecting all of the SE data used in this research, the cutting speed has been constant. It has been 12.7 mm/s for the data generated by Copur et al. (2003) and Balci et al. (2004), and 13 mm/s for the data generated by Wang et al. (2018). It should be added that, according to He and Xu (2016), for constant cutting speeds in the range 4-20 mm/s, the effect of the cutting speed on SE may be safely ignored.
Table 1. Descriptive statistics for the data base used in this study
| | Number | Minimum | Maximum | Mean | Standard Deviation |
| --- | --- | --- | --- | --- | --- |
| UCS (MPa) | 46 | 6 | 174 | 47.53 | 37.17 |
| BTS (MPa) | 46 | 0.2 | 11.6 | 3.94 | 2.41 |
| P (mm) | 46 | 3 | 12 | 7.61 | 2.25 |
| S (mm) | 46 | 9 | 45 | 23.93 | 10.52 |
| S/P | 46 | 2 | 5 | 3.15 | 0.92 |
| SE (MJ/m3)* | 46 | 4.14 | 57.24 | 22.03 | 15.24 |
P: Depth of Cut; S: Cut Spacing; SE: Specific Energy; Tip angle= 80, Attack Angle= 55.5, Skew Angle= 0; Data Source: Copur et al. (2003), Balci et al. (2004), Wang et al. (2018)
*1kWh=3.6MJ
##### Fig. 2.
The distribution of UCS values used in this study and their corresponding rock type. The highlighted part is the range of application for pick cutters according to Copur et al. (2012)
After preliminary analysis of the established database, it was revealed that the conventional multiple linear regression (MLR) method is not capable of producing results with the desired accuracy (Eq. 3).
$$SE=(2.77\times BTS)+(0.18\times UCS)-(2.779\times P)+23.721,\;R^2=76.7\%,\;MSE=52.94$$ (3)
$$MSE=\frac{\sum_{i=1}^n\;(y_i-{\widehat y}_i)^2}n$$ (4)
where $$y_i$$ is the i-th measured value, $${\widehat y}_i$$ is the i-th predicted value, and n is the number of cases. Hence, a new genetic algorithm-based method called Gene Expression Programming (GEP) was implemented for data analysis (Ferreira, 2006). The advantage of GEP is that it can fit a highly accurate mathematical equation to the proposed data without any need for prior knowledge of the structure of the equation. That advantage makes GEP a better choice in comparison to artificial neural networks, as they are intrinsically not capable of generating a differentiable equation for a function fitting problem.
In addition, GEP’s ability to autonomously evolve many different structures as candidate fitting functions makes it more favorable in comparison to optimization techniques (such as particle swarm optimization), which require a predefined function structure, such as y=alog(x)+bx+c, where a, b, and c are the optimizable parameters. Compared to genetic programming (GP), GEP has proven to be more efficient mainly because it is not prone to generating syntactically incorrect genes. Different genetic operators, such as mutation or recombination (see Section 2, “Gene Expression Programming”), disturb the structure of chromosomes/genes in both genetic programming (GP) and GEP. While GP produces many instances of syntactically incorrect genes after application of genetic operators, GEP does not have that problem. Thus, the genes/chromosomes generated by GEP can evolve in a more efficient way using more diverse genetic operators than GP (Ferreira, 2006).
Çanakcı et al. (2009), Khandelwal et al. (2016), Faradonbeh et al. (2016), and Faradonbeh et al. (2017) showed the reliability and the high accuracy of GEP for solving an assortment of rock mechanics function fitting problems. Çanakcı et al. (2009) used GEP for predicting compressive and tensile strength of Gaziantep basalt from Turkey; Khandelwal et al. (2016) developed a model for predicting flow of air in a single rock joint; Faradonbeh et al. (2016) investigated the issue of predicting the rock flying distance in blasting operations in mines; and Faradonbeh et al. (2017) used GEP for predicting the performance of roadheaders.
## 2. GENE EXPRESSION PROGRAMMING (GEP)
GEP is a model/program generating evolutionary algorithm that can generate a population of chromosomes/individuals which are interpreted as mathematical equations and are visually expressed as tree structures. Chromosomes/individuals are able to evolve through generations such that the best fitness for each generation is at least as good as that of the previous generation. For a function fitting problem, the “fitness” can be defined as one, or a weighted combination of, mean squared error (MSE), root mean squared error (RMSE), correlation coefficient (R), determination coefficient (R2), etc.
Each chromosome consists of a number of genes that are connected to each other by linking functions, which can be any mathematical operator such as addition, division, etc. Each gene has a head and a tail part. While the head can be constructed by a combination of members of the functions set and terminals set, the tail part can only contain elements of the terminals set and feeds the functions in the head along with the terminals in the head itself. The functions and terminals sets are sets of mathematical functions, such as + or ×, and matrices of numerical values (e.g. values of UCS or BTS for all samples), respectively. The number of the units that form the tail part is determined based on the number of the units in the head, which is defined by the user. Fig. 3 shows a chromosome with two genes. In the expression tree, each circle is called a node. Fig. 4 shows the expression tree for the chromosome’s first gene, and its mathematical expression.
Similar to what happens in the nature, each generation, except the first one, is created by the children of the individuals of the previous generation. The individuals forming the first generation are generated randomly. Within each generation, the individuals are given the chance of reproduction proportional to their fitness. The evolution procedure is carried out by means of genetic operator(s), which can be one, or a combination of, mutation, inversion, different types of transposition, and different types of recombination. Evolution continues for a certain number of times or until the desired fitness is reached (Ferreira, 2006).
##### Fig. 3.
A schematic view of a chromosome with two genes
##### Fig. 4.
Expression tree and mathematical expression of the genes shown in Figure 3; “a” is a numerical value.
Depending on the mutation rate defined by the user, mutations are free to happen at any randomly selected unit across a gene, given that the resulting gene’s head may consist of functions and/or terminals while its tail is exclusively made of terminals (Fig. 5). Although it can take any value between zero and one, the mutation rate is usually set such that mutation happens for two units in each chromosome. It should be noted that mutation occurs within all of the individuals in the new generation (Ferreira, 2006).
##### Fig. 5.
An example for mutation in a gene
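To make the head/tail constraint on mutation concrete, the following Python sketch (our own illustration; the symbol sets, gene, and rate are placeholders, not the authors' implementation) applies point mutation to a Karva-style gene:

```python
import random

FUNCTIONS = ['+', '*', '-', '/']
TERMINALS = ['UCS', 'BTS', 'P', 'S', 'S/P', '?']   # '?' stands for a random constant

def mutate(gene, head_len, rate=0.15, rng=random.Random(0)):
    """Point mutation on a gene (list of symbols): positions in the head may become
    functions or terminals; positions in the tail may only become terminals."""
    new = list(gene)
    for i in range(len(new)):
        if rng.random() < rate:
            pool = FUNCTIONS + TERMINALS if i < head_len else TERMINALS
            new[i] = rng.choice(pool)
    return new

gene = ['+', '*', 'UCS', 'P', 'S', 'BTS', 'S/P']   # head_len = 3, tail_len = 4
print(mutate(gene, head_len=3))
```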
Through the inversion process, a randomly selected sequence from head of a randomly selected gene is inverted. It should be noticed that the start and end points of the sequence are randomly selected within gene’s head. Inversion is restricted to only the head part of the genes (Fig. 6).
##### Fig. 6.
An example of inversion operation in a gene
This restriction helps the structural form of the chromosome to be maintained. In other words, by limiting inversion to the head, there will be no risk of having a function at the tail of a gene. Across each generation, the number of the chromosomes that undergo inversion depends on the arbitrarily defined inversion rate (Ferreira, 2006).
Transposition operator, in simple words, randomly selects a piece of a gene and relocates it to a randomly selected place across the same chromosome. Three different types of transposition are defined in GEP algorithm. The first type is Insertion Sequence (IS) elements, and it is shown in Fig. 7. This type of transposition involves randomly selecting the start and ending positions (both positions can be either in head or tail of the gene) of a piece of a randomly selected gene and inserting that piece into the head of the same gene (except the first unit of the head or root) within the same chromosome. The second type is Root Insertion Sequence (RIS) elements, and it is shown in Fig. 8. The RIS element is the same as the first with two differences. The selected piece of gene in RIS elements type, unlike in IS elements, has to begin with a function and is inserted to the root of the same gene.
##### Fig. 7.
An example for transposition of insertion sequence elements operation. Please note that the part highlighted in light grey is the selected insertion sequence and the part highlighted in dark grey is removed at the end of the head
##### Fig. 8.
An example for transposition of root insertion sequence elements operation. Please note that the part highlighted in light grey is the selected sequence and the part highlighted in dark grey is removed at the end of the head
The third type, gene transposition (shown in Fig. 9), works in a larger scale than the other types. Gene transposition randomly selects a gene in a randomly selected chromosome and inserts it to the place of the first gene in that chromosome. Obviously, this operator makes a difference only for non-commutative linking functions.
##### Fig. 9.
An example for gene transposition
The number of times that each of the different types of transposition are repeated depends on the different types of transposition rates defined by the user (Ferreira, 2006). One-point, two-point, and gene recombination are the three different types of recombination that GEP algorithm uses as genetic operators. All of them involve randomly selecting two different chromosomes that are supposed to exchange some portions with each other. One-point recombination is carried out by randomly selecting a position along the chromosome and exchanging the parts of the chromosomes that start at that point and end at the end of the chromosome (Fig. 10). In two-point recombination, two positions are randomly selected along the chromosome and the portions limited to those two points are exchanged (Fig. 11). In gene recombination, whole randomly selected genes are exchanged between the two randomly selected chromosomes (Fig. 12). The number of times that each of the different types of recombination are repeated depends on the different types of recombination rates defined by the user (Ferreira, 2006).
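As a minimal sketch of the simplest of these operators (our own illustration, not the authors' code), one-point recombination on two equal-length chromosomes can be written as:

```python
import random

def one_point_recombination(c1, c2, rng=random.Random(1)):
    # swap the suffixes of two equal-length chromosomes after a random cut point
    cut = rng.randrange(1, len(c1))
    return c1[:cut] + c2[cut:], c2[:cut] + c1[cut:]

a = list("ABCDEFGH")
b = list("abcdefgh")
print(one_point_recombination(a, b))
```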
The functions set used in this study is:
Functions Set= {+, ×, -, ÷, Power, ln, Exp, Sin, Cos, Tan}
where “ln(x)” is the natural logarithm of the variable “x” and “Power” and “Exp” are defined as follows:
$$\mathrm{Power}\;(x)=Ax^B,\;A\;and\;\;B\;\mathrm{are}\;\mathrm{real}\;\mathrm{numbers}$$ (5)
$$\mathrm{Exp}\;(x)=Ce^{Dx},\;C\;and\;\;D\;\mathrm{are}\;\mathrm{real}\;\mathrm{numbers}$$ (6)
##### Fig. 10.
An example of one-point recombination. The parts highlighted in grey are exchanged between chromosomes
The terminals set used in this study is defined as follows:
Terminals Set= {UCS, BTS, P, S, S/P, Random Numbers}
where P, S, S/P are 46 × 1 matrices containing the values of cutting depth, cut spacing, and cut spacing to cutting depth ratio, respectively. Random Numbers is also a 46 × 1 matrix the elements of which are a repeated numerical value. Although GEP is theoretically capable of generating some random integers in the genes’ outputs (please refer to Fig. 4, Gene 1’s output), Random Numbers are added to the terminals set in order to provide the algorithm with more freedom for creating models/genes with a wide range of numerical constants including integer and non-integer numbers.
##### Fig. 11.
An example of two-point recombination. The parts highlighted in grey are exchanged between chromosomes
##### Fig. 12.
An example of gene recombination. The genes highlighted in grey are exchanged between chromosomes
Finally, Fig. 13 shows the flowchart for the basic GEP algorithm according to the processes mentioned above.
##### Fig. 13.
The standard flowchart of GEP algorithm (Ferreira, 2006)
## 3. PARTICLE SWARM OPTIMIZATION (PSO)
The presence of parameters such as A, B, C, D (Eqs. 5 and 6), and random numbers in genes creates an opportunity for application of an optimization method, which will lead to a more efficient evolution process for GEP. To take that opportunity, Particle Swarm Optimization (PSO) method was applied wherever those parameters appeared in a chromosome that had already reached an acceptable level of fitness using only GEP.
PSO is a population based optimization method inspired by movements of swarms such as a flock of birds or a school of fish that are in search of food or getting away from a threat, etc. (Kennedy and Eberhart, 1995). In this algorithm, a swarm of “particles/potential answers” is randomly distributed over a search space with number of dimensions equal to the number of optimizable parameters, e.g. A-B plane as the search space if the chromosomes contains only the “Power” function (Eq. 5). Each member is regarded as a candidate solution for the optimization problem at hand (Fig. 14).
##### Fig. 14.
Random distribution of particles or candidate solutions (P1-P10) over A-B plane
Then, an iteration process starts. Through the process, the particles move over the search space with velocities defined for each of them according to their fitness. The search space is explored until the satisfactory answer is found or the maximum number of iterations is reached. “Fitness” can be defined depending on the user’s preference. In this research the fitness was defined by Mean Squared Error (MSE) (Eq. 4) associated with each swarm member/candidate solution. The velocity of each particle is updated with reference to the best particle in the population (the bird closest to the source of food or the lowest MSE among all of the particles) and the particle’s best (the closest each bird has been to the source of food through its moves or the lowest MSE the particle has experienced) within all of the iterations. In other words, in each iteration, a particle moves toward its own best and the best particle across all iterations which may be called Globally Best Particle. For a particle P(x,y) in x-y surface, the updated velocity is defined by the following equation:
$$\begin{array}{l}V_{updated}=(Inertia\times V)+\Big(rand()\times LRPI\times\big[P(x_{Particle's\;best},\,y_{Particle's\;best})-P(x,y)\big]\Big)\\\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+\Big(rand()\times LRSI\times\big[P(x_{Globally\;Best\;Particle},\,y_{Globally\;Best\;Particle})-P(x,y)\big]\Big)\end{array}$$ (7)
where $$V_{updated}$$ is the updated particle’s velocity for the current iteration, V is the particle’s velocity in the previous iteration, Inertia is a user-defined quantity that adjusts the extent of V’s impact on $$V_{updated}$$ and is usually set to a number from [0.9-1.2], rand( ) is a random number from [0,1], LRPI is the Learning Rate for Personal Impact and is a quantitative parameter recommended to be set to 2, and LRSI is the Learning Rate for Social Impact and is also a quantitative parameter recommended to be set to 2 (Kennedy and Eberhart, 1995). The flowchart of PSO is presented in Fig. 15.
##### Fig. 15.
The standard flowchart for Particle Swarm Optimization algorithm
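A compact sketch of the velocity update of Eq. 7 in Python follows (our own illustration; it draws a fresh random factor per dimension, which is one common variant, and the numerical inputs are placeholders):

```python
import random

def update_velocity(v, x, p_best, g_best, inertia=0.2, lrpi=2.0, lrsi=2.0,
                    rng=random.Random(0)):
    """One PSO velocity update: inertia term plus attraction toward the particle's
    own best position and the globally best particle (Eq. 7)."""
    return [inertia*vi
            + rng.random()*lrpi*(pb - xi)
            + rng.random()*lrsi*(gb - xi)
            for vi, xi, pb, gb in zip(v, x, p_best, g_best)]

v_new = update_velocity(v=[0.1, -0.2], x=[1.0, 2.0],
                        p_best=[0.8, 2.5], g_best=[0.5, 1.5])
print(v_new)
```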
## 4. THE GEP-PSO ALGORITHM AND THE RESULTS
For the current research, a customized combination of GEP and PSO algorithms was used to find an equation that relates UCS, BTS, P, S, and S/P to SE. Fig 16 shows a flowchart of the algorithm used in this study. At first, GEP is applied. The evolution of GEP continues until it can find at least one equation/chromosome with an acceptable associated MSE value, which is smaller than or equal to the MSE value associated with the equation generated by MLR (Eq. 3). At this stage, except for the initial generation, the values for A, B, C, D (Eqs. 5 and 6), and Random Numbers are set to one. In this manner, an extra emphasis is put on finding the correct structure of the equation/chromosome rather than the best values for those parameters. When the acceptable MSE is reached, the chromosomes producing acceptable results are subjected to PSO optimization, to check whether the MSE can be improved to at least half of the MSE associated with the MLR method (Eq. 3). In other words, structural evolution is carried out by GEP and PSO is used for numerical evolution of A, B, C, D, and Random Numbers. If the final goal is reached after optimization, the algorithm stops; otherwise it will start again by creating a new initial population for GEP (Fig. 16).
##### Fig. 16.
The hybrid GEP-PSO algorithm used in this study
The difference between the basic GEP algorithm and the modified GEP algorithm used here is that, instead of connecting all the genes to each other by the same (static) linking function, the gene outputs are used as terminals for another chromosome that uses the following functions set:
Functions Set= {+, ×, -, ÷}
In this manner, the algorithm is enabled to investigate all the different combinations of the genes and linking functions and select the best rather than merely adding or multiplying the genes outputs to calculate the final chromosome output. Fig. 17 shows the simple example of the difference between the chromosomes output with dynamic and static linking functions. And Table 2 summarizes the settings used in the GEP-PSO algorithm.
##### Fig. 17.
An example of the difference between the outputs of the chromosomes with dynamic and static linking functions
Table 2. The setting used in GEP-PSO code
| Algorithm | Parameter | Value |
| --- | --- | --- |
| GEP | Population Size (Number of Chromosomes) | 100 |
| GEP | Number of Genes Per Chromosome | 4 |
| GEP | Head Length | 7 |
| GEP | Tail Length | 8 |
| GEP | Functions Set | {+, ×, -, ÷, Power, ln, Exp, Sin, Cos, Tan} |
| GEP | Terminals Set | {UCS, BTS, P, S, S/P, Random Numbers} |
| GEP | Functions Set used for Dynamic Linking Function Selection | {+, ×, -, ÷} |
| GEP | Mutation Rate | 0.2 |
| GEP | Inversion Rate | 0.1 |
| GEP | Insertion Sequence Transposition Rate | 0.1 |
| GEP | Root Insertion Sequence Transposition Rate | 0.1 |
| GEP | Gene Transposition Rate | 0.1 |
| GEP | One-point Recombination Rate | 0.4 |
| GEP | Two-point Recombination Rate | 0.2 |
| GEP | Gene Recombination Rate | 0.1 |
| GEP | Fitness Function | Eq. 4 |
| PSO | Number of Particles | 50 |
| PSO | Inertia | 0.2 |
| PSO | Learning Rate for Personal Influence | 2 |
| PSO | Learning Rate for Social Influence | 2 |
| PSO | Maximum Number of Unsuccessful Iterations | 40 |
| PSO | Selection Range for A and C (Eqs. 5 and 6) and Random Numbers | [-1000,1000] |
| PSO | Selection Range for B and D (Eqs. 5 and 6) | [-10,10] |
| PSO | Fitness Function | Eq. 4 |
The GEP-PSO algorithm, written in Matlab (2017), generated the following equation, which explains the variation in Specific Energy (SE) based on variations of UCS, BTS, depth of cut (P), and cut spacing (S):
$$\begin{array}{l}\mathrm{Gene}\;1=UCS/P\\\mathrm{Gene}\;2=\big(-6.26\times\ln(P)^{-2.53}\big)+\big(\tfrac SP\big)\times2.6\,(-0.14-BTS)\\\mathrm{Gene}\;3=0.54\times\exp\!\left(-1.55\times\cos\!\left(\left(830.92\times\left(\tfrac SP\right)^{9.93}\right)+\left(1000\times\left(\tfrac SP\right)^{-6.79}\right)\times\tan(UCS)\right)\right)\\\mathrm{Gene}\;4=\ln(S)\\SE=(\mathrm{Gene}\;1\times\mathrm{Gene}\;3)-\left(\dfrac{\mathrm{Gene}\;2}{\mathrm{Gene}\;4}\right)\end{array}$$ (8)
The results generated by the GEP-PSO code are compared against those generated by MLR in Table 3. As Table 3 shows, the MSE associated with MLR model is more than two times larger than the MSE associated with Eq. 8. In addition, the R2 value associated with Eq. 8, is about 0.13 higher than that of Eq. 3. Therefore, it may be concluded that the GEP-PSO algorithm generates a significantly better result in comparison to the conventional MLR method. The predicted SE values by the prediction models (from MLR and GEP-PSO) are visually compared against the measured values from LCM in Fig. 18.
Table 3. The results generated by MLR and the GEP-PSO code
| Statistical Model | Coefficient of Determination (R2) | Mean Squared Error (MSE) | Maximum Percentage Error | Average Percentage Error |
| --- | --- | --- | --- | --- |
| Eq. 3 (MLR) | 0.77 | 52.94 | 180.73 % | 39.17 % |
| Eq. 8 (GEP-PSO) | 0.90 | 21.77 | 114.77 % | 24.76 % |
##### Fig. 18.
Prediction results of specific energy by (a) MLR and (b) GEP-PSO models
## 5. THE EFFECT OF TIP, ATTACK, AND SKEW ANGLES ON SPECIFIC ENERGY
Besides the basic cutting parameters (i.e., penetration depth and cut spacing) considered in this study, many other parameters affect the cutting performance of a pick cutter (Fig. 19). Jeong (2017) showed that the skew and attack angles have a significant effect on specific energy and cutting forces of a pick cutter. Changes in the values of those angles can change the area of the cutter in contact with rock. Therefore, they can consequently change the amount of stress transmitted to the rock by the cutters. Although a similar graph could not be provided for the tip angle due to a shortage of proper data, it can be assumed to have a similar effect on SE, as the tip angle can similarly change the area of the cutter in contact with rock. Therefore, it can be inferred that if the data over which an SE prediction function is fitted contains a variation of those angles, the generated prediction models will be more reliable. In the present research, the effect of those angles is not taken into consideration because the established database is generated using only one value for each of the angles (Tip angle = 80°, Attack angle = 55.5°, and Skew angle = 0°). Based on the provided explanations, at least a part of the error associated with the developed GEP-PSO model (Eq. 8) can be attributed to not using those angles in the database that fed the algorithm.
##### Fig. 19.
Definitions for Tip (Φ), Attack (α), Tilt (φ), and Skew (θ) angles (Jeong and Jeon, 2018)
## 6. CONCLUSIONS
In conclusion, it may be stated that the developed GEP-PSO model is capable of predicting SE values with a dramatically higher accuracy than the conventional multiple linear regression technique. The proposed model and approaches may be used as a useful tool for estimating the cutting performance and the operational parameters of mechanical excavators, because it may be helpful in reducing the effort required for the full-scaled LCM tests to find the preferable design parameters of the mechanical excavators (i.e., P, S and SE). The range of application for the developed model is limited to the ranges of the data it has been trained with (minimum and maximum values for UCS, BTS, P, S, and S/P shown in Table 1, while tip angle, attack angle, and skew angle are 80, 55.5, and 0 degrees, respectively).
Considering the significant effect of tip, skew, and attack angles on SE, for future work, more testing data will be added to the database used in this study in order to investigate the effect of variations in those angles on SE. It is hoped that, in future work, the GEP-PSO algorithm can be improved such that even more accurate models with less sophisticated mathematical equations can be reached.
## Acknowledgements
This study was funded by the Korea Agency for Infrastructure Technology Advancement under the Ministry of Land, Infrastructure and Transport in Korea (Project No.: 18SCIP-B105148-04).
## References
1. Balci, C., M.A. Demircin, H. Copur and H. Tuncdemir, 2004, Estimation of optimum specific energy based on rock properties for assessment of roadheader performance, J. South African Inst. Min. Metall. 104, 633-641.
2. Bilgin, N., H. Copur and C. Balci, 2013, Mechanical Excavation in Mining and Civil Industries, CRC Press, London, 89p.
3. Çanakcı, H., A. Baykasoğlu and H. Güllü, 2009, Prediction of compressive and tensile strength of Gaziantep basalts via neural networks and gene expression programming, Neural Comput. Appl. 18, 1031. doi:10.1007/s00521-008-0208-0
4. Copur, H., C. Balci, N. Bilgin, D. Tumac and E. Avunduk, 2012, Predicting cutting performance of chisel tools by using physical and mechanical properties of natural stones, Proc. of European Rock Mechanics Symposium 2012, May, Stockholm.
5. Copur, H., N. Bilgin, H. Tuncdemir and C. Balci, 2003, A set of indices based on indentation tests for assessment of rock cutting performance and rock properties, J. South African Inst. Min. Metall, 589-600.
6. Ferreira, C., 2006, Gene expression programming: mathematical modeling by an artificial intelligence, Springer, Berlin. doi:10.1007/3-540-32849-1_2
7. He, X. and C. Xu, 2016, Specific energy as an index to identify the critical failure mode transition depth in rock cutting, Rock Mech. Rock Eng. 49, 1461-1478. doi:10.1007/s00603-015-0819-6
8. Jeong, H.Y., S.W. Jeon, J.W. Cho, S.H. Chang and G.J. Bae, 2011, Assessment of cutting performance of a TBM disc cutter for anisotropic rock by linear cutting test, Tunn. Undergr. Sp. 21, 508-517.
9. Jeong, H.Y., S.W. Jeon and J.W. Cho, 2012, A study on punch penetration test for performance estimation of tunnel boring machine, Tunn. Undergr. Sp. 22, 144-156. doi:10.7474/TUS.2012.22.2.144
10. Jeong, H.Y., 2017, Assessment of rock cutting efficiency of pick cutters for the optimal design of a mechanical excavator, Ph.D. thesis, Seoul National University, Seoul.
11. Jeong, H.Y. and S.W. Jeon, 2018, Characteristics of size distribution of rock chip produced by rock cutting with a pick cutter, Geomechanics and Engineering 15.3, 811-822.
12. Kennedy, J. and R. Eberhart, 1995, Particle swarm optimization, Proc. ICNN'95 - International Conference on Neural Networks, Vol. 4, 1942-1948. doi:10.1109/ICNN.1995.488968
13. Khandelwal, M., D.J. Armaghani, R.S. Faradonbeh, P.G. Ranjith and S. Ghoraba, 2016, A new model based on gene expression programming to estimate air flow in a single rock joint, Environ. Earth Sci. 75, 739. doi:10.1007/s12665-016-5524-6
14. Matlab, 2017, Version R2017b, The MathWorks, Inc., Natick, Massachusetts, United States.
15. Pomeroy, C.D., 1963, The breakage of coal by wedge action; factors influencing breakage by any given shape of tool, Colliery Guard, November, 672-677.
16. Robbins, R.J., 2000, Mechanization of underground mining: a quick look backward and forward, Int. J. Rock Mech. Min. Sci. 37, 413-421. doi:10.1016/S1365-1609(99)00116-1
17. Rostami, J., 2011, Mechanical Rock Breaking, in: SME Mining Engineering Handbook, Third Edition, Dearbon, 388p.
18. Rostami, J., L. Ozdemir and D.M. Neil, 1994, Performance prediction: a key issue in mechanical hard rock mining, Min. Eng. 46, 1263-1267.
19. Roxborough, F.F., 1973, Cutting rock with picks, Min. Eng. 132, 445-454.
20. Faradonbeh, R.S., D. Jahed and M. Monjezi, 2016, Genetic programming and gene expression programming for flyrock assessment due to mine blasting, Int. J. Rock Mech. Min. Sci. 88, 254-264. doi:10.1016/j.ijrmms.2016.07.028
21. Faradonbeh, R.S., A. Salimi, M. Monjezi, A. Ebrahimabadi and C. Moormann, 2017, Roadheader performance prediction using genetic programming (GP) and gene expression programming (GEP) techniques, Environ. Earth Sci. 76, 584. doi:10.1007/s12665-017-6920-2
22. Terezopoulos, N.G., 1987, Influence of geotechnical environments on mine tunnel drivage performance, in: Advances in Mining Science and Technology, Elsevier, Amsterdam, pp. 139-156.
23. Thuro, K. and R.J. Plinninger, 2003, Hard rock tunnel boring, cutting, drilling and blasting: rock parameters for excavatability, Proc. 10th Int. Congr. on Rock Mech., September, Sandton.
24. Wang, X., Q.F. Wang, Y.P. Liang, O. Su and L. Yang, 2018, Dominant Cutting Parameters Affecting the Specific Energy of Selected Sandstones when Using Conical Picks and the Development of Empirical Prediction Models, Rock Mech. Rock Eng. 51.10, 1-18. doi:10.1007/s00603-018-1522-1
|
# Current density and electron current in a 100 watt lightbulb filament
## Question
The current in a 100 watt lightbulb is 0.710 A. The filament inside the bulb is 280 mm in diameter.

Part A: What is the current density in the filament? Express your answer to three significant figures and include the appropriate units.

Part B: What is the electron current in the filament? Express your answer using three significant figures, in electrons per second.
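A quick Python sketch of the computation (our own worked example, not the site's solution). The stated "280 mm" is almost certainly a garbled 0.280 mm, which is what the sketch assumes:

```python
import math

I = 0.710                 # current in A
d = 0.280e-3              # filament diameter in m (assuming the prompt's "280 mm" means 0.280 mm)
e = 1.602e-19             # elementary charge in C

A = math.pi * (d / 2) ** 2        # cross-sectional area of the filament
J = I / A                         # Part A: current density (A/m^2)
electrons_per_s = I / e           # Part B: electron current (electrons per second)

print(f"J = {J:.3e} A/m^2")
print(f"electron current = {electrons_per_s:.3e} electrons/s")
```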
|
# zbMATH — the first resource for mathematics
Collected works of John Tate. Part I (1951–1975). Edited by Barry Mazur and Jean-Pierre Serre. (English) Zbl 1407.01030
Providence, RI: American Mathematical Society (AMS) (ISBN 978-0-8218-9092-9/hbk; 978-1-4704-3021-4/ebook). xxvii, 716 p. (2016).
Even someone only vaguely familiar with the work of John Tate will be able to guess that his collected works begin with his “Fourier analysis in number fields and Hecke’s zeta-functions”, Tate’s thesis written in 1950 and first published in the Brighton proceedings [in: J. W. S. Cassels (ed.) and A. Fröhlich (ed.), Algebraic number theory. London etc.: Academic Press. 305–347 (1967)], where Tate worked out Emil Artin’s suggestion to derive the functional equation of Hecke’s zeta functions using the newly developed tool of ideles.
Later, Tate worked on the Galois cohomology of number fields (where he formulated a generalization of Artin’s reciprocity law as an isomorphism of Tate cohomology groups), function fields, elliptic curves and abelian varieties; the keywords here are Tate cohomology groups, Poitou-Tate duality, and Tate-Shafarevich groups. The Galois-cohomological approach to global class field theory is summarized in his survey [in: Algebraic number theory. London etc.: Academic Press. 162–203 (1967; Zbl 1179.11041)] in the Brighton proceedings.
In between fundamental work on Lubin-Tate formal groups, $$p$$-divisible groups, group schemes and rigid analytic spaces, Tate published on some concrete problems, such as symbols in arithmetic [in: Actes Congr. Intern. Math. 1970, 1, 201–211 (1971; Zbl 0229.12013)] and Milnor groups [with H. Bass, Lect. Notes Math. 342, 349–446 (1973; Zbl 0299.12013)], the non-existence of elliptic curves defined over the rationals with rational torsion points of order $$13$$ [with B. Mazur, Invent. Math. 22, 41–49 (1973; Zbl 0268.14009)] or his beautiful survey on the arithmetic of elliptic curves [Invent. Math. 23, 179–206 (1974; Zbl 0296.14018)]. The last 70 pages of this first volume of Tate’s collected works present letters of Tate to Dwork, Serre, Springer, Birch and Atkin. A detailed review of Tate’s collected works was published by J. S. Milne [Bull. Am. Math. Soc., New Ser. 54, No. 4, 551–558 (2017; Zbl 1369.00040)]; see also [J. S. Milne, in: The Abel Prize 2008–2012. Heidelberg: Springer. 259–340 (2014; Zbl 1317.01011)].
The individual articles are: “Fourier analysis in number fields and Hecke’s zeta-functions”, “A note on finite ring extensions” (with E. Artin) [Zbl 0043.26701], “On the relation between extremal points of convex sets and homomorphisms of algebras” [Zbl 0043.11403], “Genus change in inseparable extensions of function fields” [Zbl 0047.03901], “On Chevalley’s proof of Lüroth’s theorem” (with S. Lang) [Zbl 0047.03802], “The higher dimensional cohomology groups of class field theory” [Zbl 0047.03703], “The cohomology groups in algebraic number fields” [in: Proceedings of the international congress of mathematicians 1954. Amsterdam, September 2–9. Vol. II. Short lectures. Groningen: Erven P. Noordhoff N. V.; Amsterdam: North-Holland Publishing Co. 66–67 (1954)], “On the Galois cohomology of unramified extensions of function fields in one variable” (with Y. Kawada) [Zbl 0068.03402], “On the characters of finite groups” (with R. Brauer) [Zbl 0065.01401], “Homology of Noetherian rings and local rings” [Zbl 0079.05501], “WC-groups over $$p$$-adic fields” [Zbl 0091.33701], “On the inequality of Castelnuovo-Severi” (with A. Mattuck) [Zbl 0081.37604], “On the inequality of Castelnuovo-Severi, and Hodge’s theorem” [unpublished], “Principal homogeneous spaces over abelian varieties” (with S. Lang) [Zbl 0097.36203], “Principal homogeneous spaces for abelian varieties” [Zbl 0116.38201], “A different with an odd class” (with A. Fröhlich and J.-P. Serre) [Zbl 0105.02903], “Nilpotent quotient groups” [Zbl 0125.01503], “Duality theorems in Galois cohomology over number fields” [Zbl 0126.07002], “Ramification groups of local fields” (with S. Sen) [Zbl 0136.02702], “Formal complex multiplication in local fields” (with J. Lubin) [Zbl 0128.26501], “Algebraic cycles and poles of zeta functions” [Zbl 0213.22804], “Elliptic curves and formal groups” (with J. Lubin and J.-P. Serre) [unpublished], “On the conjectures of Birch and Swinnerton-Dyer and a geometric analog” [Zbl 0199.55604], “Formal moduli for one-parameter formal Lie groups” (with J. Lubin) [Zbl 0156.04105], “The cohomology groups of tori in finite Galois extensions of number fields” [Zbl 0146.06501], “Global class field theory” [Zbl 1179.11041], “Endomorphisms of Abelian varieties over finite fields” [Zbl 0147.20303], “The rank of elliptic curves” (with I. R. Shafarevich) [Zbl 0168.42201], “Residues of differentials on curves” [Zbl 0159.22702], “$$p$$-divisible groups” [Zbl 0157.27601], “The work of David Mumford” [Zbl 0333.01015], “Classes d’isogénie des variétés abéliennes sur un corps fini (d’après Z. Honda)” [Zbl 0212.25702], “Good reduction of abelian varieties” [Zbl 0172.46101], “Group schemes of prime order” (with F. Oort) [Zbl 0195.50801], “Symbols in arithmetic” [Zbl 0229.12013], “Rigid analytic spaces” [Zbl 0212.25601], “The Milnor ring of a global field” [Zbl 0299.12013], “Appendix to The Milnor ring of a global field” [unpublished], “Letter from Tate to Iwasawa on a relation between $$K_2$$ and Galois cohomology” [Zbl 0284.12004], “Points of order $$13$$ on elliptic curves” (with B. Mazur) [Zbl 0268.14009], “The arithmetic of elliptic curves” [Zbl 0296.14018], “The 1974 Fields Medals. I: An algebraic geometer” [Zbl 1225.01087], “Algorithm for determining the type of a singular fiber in an elliptic pencil” [Zbl 1214.14020].
##### MSC:
• 01A75 Collected or selected works; reprintings or translations of classics
• 11-03 History of number theory
• 14-03 History of algebraic geometry
• 11Gxx Arithmetic algebraic geometry (Diophantine geometry)
• 14Gxx Arithmetic problems in algebraic geometry; Diophantine geometry
• 14Kxx Abelian varieties and schemes
|
# Seminars and talks
Over the years I must have given hundreds of seminars, talks, and short research courses. Most of these are lost forever. In January 03 I started to keep a record of these. Here is a table of recent seminars where you can find links to the slides and relevant notes where available.
Here is a brief description of the content of these talks.
#### List of seminars given in reverse chronological order
27-may-07 (Manchester) Galois connections done properly -- and -- Classical rings of fractions (two separate topics)
Slides available for first part.
15-May-07 and 22-may-07 (Manchester) From rings of fractions to localizations
A wander around these topics as part of a longish teaching seminar.
Slides available
02-May-07 (Leeds) Is the Ackermann function optimal?
A standard construction, originally due to Ackermann, produces a recursive function that is not primitive recursive. This can be relativized to produce a jump on the poset of degrees up to primitive recursive equivalence. What is there between a degree and its jump? Quite a lot.
Slides and a write-up available.
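For concreteness, here is a minimal Python sketch of the two-argument (Ackermann-Peter) form of the function the talk refers to; the talk itself may use a different variant, and the recursion-limit adjustment is only there because even small inputs recurse deeply.

```python
import sys

def ackermann(m, n):
    """Total recursive, but grows too fast to be primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

sys.setrecursionlimit(100_000)            # naive recursion gets deep very quickly
print(ackermann(2, 3), ackermann(3, 3))   # 9 and 61
```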
18-April-07 (Swansea - BMC) Is the Ackermann function optimal?
See next talk, above.
20-February-07 and 27-February-07 (Manchester) Ranking techniques for posets, with an eye on modules
A short course of two 90-minute sessions. Slides are available.
02-December-06 (Cambridge) The topological content of Monotone Bar Induction and the Fan Principle
Joint work with Nicola Gambino. A short account of the classical meaning of the two choice principles, ending with an observation that distinguished between them.
28-November to 01-December-06 (Cambridge) An introduction to ordinal notations
A short course of about four hours. Slides are available and I intend to write an account of the topic.
16-November-06 (MFG) Ordinal notations
A run through of part of the material for the short course above.
23-May-06 (Lisbon) Syntactic measures of complexity for numeric functions
There is a standard class of first order functions over the natural numbers -- the Grzegorczyk hierarchy of moderate length -- that can be stratified by the ordinals up to epsilon_0. These functions can be seen in different ways.
• The functions dominated by some member of a fast growing hierarchy
• The provably total functions
• The functions named in a certain lambda calculus, a version of Gödel's T
By modifying the lambda-calculus -- tiering the types -- we severely weaken its strength. The ordinal measure epsilon_0 drops to 3, that is we stay within the Kalmar elementary functions. I will explain the background to this technique and try to indicate why it works.
19-May-06 (Coimbra) Tiering without tears.
This talk is essentially the same as the one in Lisbon above.
15-May-06 (Coimbra) One talk with three titles.
Boolean reflections of frames.
Higher level assemblies of frames.
Ranking techniques for frames.
The category Frm of frames is an algebraic (point-free) modification of the category of topological spaces. In particular, each topology is a frame. A study of Frm gives us many algebraic techniques not available in the point-sensitive setting. The category Frm includes the category CBA of complete boolean algebras and complete morphisms. At first sight it seems that CBA is a reflective subcategory of Frm, but there is a mysterious set-theoretic obstruction: some frames can be reflected into CBA, and some cannot. The category Frm is much richer than the category of spaces. In particular, each frame A has an associated larger frame NA, its assembly, the frame of all nuclei on A. When A is the topology of a space S, a nucleus on A is essentially a Grothendieck topology for S. The assembly construction N can be iterated through the ordinals
A --> NA --> N^2A --> N^3A --> ...
and this tower stabilizes precisely when A has a boolean reflection. Some properties of this tower can be measured by an extension of the Cantor-Bendixson process on topological spaces. This extension seems to be new even for spaces. I will explain what I know about this tower, finishing with an example where N^3A is boolean but N^2A is not. Not a lot seems to be known beyond this level.
30-March-06 (MFG) How long is a piece of string?
I describe a method of generating arbitrarily long strings of poset adjunctions using pairwise distinct monotone maps. I show that a particular instance of this method produces the simplicial category.
09-March-06 (MFG) Styles of derivation for Propositional Calculus, and beyond.
After a brief mention of the Hilbert style I describe the Gentzen and Natural styles, what they are good for, how they interact, and how they relate to other parts of mathematics.
02-March-06 (MFG) How big is the continuum?
Some 'undergraduate' material leading up to König's little theorem that, for instance, the continuum cannot have size aleph_omega.
07-December-05 (MFG) Trees, fans, football, and freedom of choice.
The first part of a joint seminar with Nicola Gambino on the use of choice principles to prove spatiality.
27-July-05 (MFG) What is a Grothendieck topology?
I may write up a set of notes covering this and the previous talk (below).
20-July-05 (MFG) What is a sheaf?
I may write up a set of notes covering this and the next talk (above).
14-July-05 (MFG) A rational approach to enumeration.
03-May-05 (MFG) From Gödel's Theorem to Topological Spaces with some Modal Logic for softies.
Spring-Summer-04
During this period the MFG had a working seminar on aspects of lattice constructions and event structures. This was to help various research students with their work, hence the rather odd mixture of topics. I gave quite a few seminars/lectures in these sessions, as listed below. There were no slides or written material for these sessions.
• 01-July-04 Ordinals done informally
• 24-June-04 The quest for an adjunction
• 17-June-04 Simple event structures an morphisms
• 09-June-04 Introduction to the coverage technique for posets
• 03-June-04 More on event structures
• 27-May-04 More on lattice completions
• 20-May-04 Event structures based on consistency properties
• 13-May-04 Universal algebra for Sup-semilattices
• 06-May-04 Event structures based on conflict relations
• 28-April-04 Various completions of posets
11-February-04 (MFG) Ehrenfeucht-Fraïssé games for quantification
18-December-03 (MFG christmas oration) A 'constructive' proof of the completeness of propositional calculus
24-October-03 (MFG) Scatological spaces
A short account of equilogical spaces as generalizations of topological spaces based on neighbourhood spaces.
05-September-03 (St Andrews) The intrinsic complexity of natural number functions
Abstract: In 1936 Turing (who later spent some time in Manchester) characterized the 'computable natural number functions'. He did this using a 'model of computation', by describing what we now call a Turing machine. Since then many other models of computation have been described, some easier to use than others. Arising out of this we now have a highly structured notion of the 'complexity' of natural number functions, at least in the first order setting. At the higher levels of complexity, that is beyond the Turing computable functions, this is concerned with the Turing degrees. At the lower levels, that is within the Turing computable functions, this complexity is usually attached to some model of computation and measures how many resources are needed to evaluate the function.
Around the same time that Turing was working in England, a group (Church, Kleene, Rosser, and others) were working on the same problem in Princeton. Originally they developed a $\lambda$-calculus to analyse the notion, but abandoned that approach when Turing's result appeared. Since then until quite recently that tool has been in abeyance (but used for other purposes).
This $\lambda$-calculus approach is quite attractive since it seems to lead to a notion of 'intrinsic complexity', that is, a measure that doesn't depend on a model of computation. It also handles higher order functions quite easily. Essentially the very syntax used to specify the functions is a (rather too fine) measure of the complexity.
In this talk I will outline the milestones in this approach, describe some of the recent developments, and suggest some possible paths for future research.
28-August-03 (MFG) Why some people can't count (Tiering without tears)
A run through of the above talk.
07-August-03 (MFG) Quantales and linear logic
A talk based on part of the paper LINK
05-June-03 (MFG) Categories of contexts
30-May-03 (MFG) Uniform Type Systems
08-may-03 (Bath) The tiering method of controlling complexity
Abstract: If left to their own devices, recursions (on the natural numbers) can very quickly produce extremely fast growing and complex functions. However, when set up in a formal way, the very syntax of the specifying recursions gives an accurate calibration of the complexity of the specified function. It would be nice if this kind of calibration method could be used for functions that never get much beyond the feasible ones. In recent years several methods of taming recursion have been investigated, one of which is tiering (sometimes known as stratification).
I will first outline the pre-history of this topic (the untyped and the simply typed lambda calculus, and Gödel's T), and then I will describe how tiering works.
If you don't know anything about the lambda calculi, the first part of the talk can be seen as an introduction to these.
07-May-03 (Swansea) Notations for iterations
Abstract: Some of you may have seen some of the topic of 'ordinal notations' and been impressed, or perhaps not. The standard approach to this topic obscures a lot of what is going on, and often misses the point. In fact, it is about the nesting of iterations and how complicated these can become. In this talk I will explain some of the background from the standpoint of iterations. I will then go on to describe an applied lambda-calculus with explicit notations for these gadgets. Finally, I will indicate how these notations are related to the standard Bachmann notations.
The first part of the talk should be understandable without a knowledge of ordinals.
06-May-03 (Swansea) Forms of recursion
Abstract: Recursion theory as currently understood is concerned with the analysis of Turing degrees. In this topic any two Turing computable functions are deemed to be equivalent. However, there is an earlier, less expensive, set of material which is concerned with recursion and is in danger of being forgotten.
Some of the better known forms of recursion go by the names 'primitive recursion', 'course-of-values recursion', 'tail recursion', and the like. These are all concerned with converting given natural number functions into natural number functions. However, there are many other kinds of recursion, not all concerned with natural number functions, and some concerned with higher order functions.
In this talk I will try to put these methods into a general setting, and suggest ways that this material could be turned into at least one course that could be taught at a postgraduate level (and perhaps earlier).
16-April-03 (MFG) What is the second order lambda calculus?
20-February-03 (MFG) !What is Linear Logic?
31-January-03 (Birmingham) What is and what is not Point-free and Point-sensitive topology
Abstract: In this context point-sensitive topology is what is usually called point-set topology (as developed in Kelley's book). This is part of a larger subject, 'topology', which includes algebraic topology, homotopy theory, homological algebra, perhaps some algebraic geometry, and other things. Of course, it impinges on many other topics, and it is not clear where 'topology' ends and other subjects begin.
Alongside this there is another area of mathematics which is even less well defined. It impinges on such topics as ring theory, algebraic number theory, module theory, algebraic geometry, and some parts of 'topology'. For instance, the study and use of quantales is a part of this area. This area of mathematics uses entirely algebraic methods. Point-free topology is a part of this area, and consists of the study of certain lattices called frames.
There is a direct relationship between point-free and point-sensitive, but the information flow goes entirely one way. I will show, by an example, how the entirely algebraic view of the point-free subject leads to a much deeper understanding of that part of the point-sensitive subject which instigated, and made necessary, its development. The secret is to remember a part of the larger algebraic area which seems to have nothing to do with topology.
Before this I did not keep systematic records. The following is what I can remember doing or is recorded elsewhere (and neither of these implies the other).
16-May-02 (MFG) Some observations on the Vietoris construction
I have since written a set of notes on this, GIVE LINK.
Early 02 (MFG) As part of a short series on monads and triples I gave two talks
• 08-March-02 The $\beta$-triple, including a lesson in the history of topology
• 15-Februrary-02 A quick survey of adjunctions
23-November-01 (MFG) Number structures and recursion SORT OUT
30-August,10,14,18-September-01 (MFG-short course)
$\Omega$-valued sets
Notes available on web page -- SORT OUT
Harold Simmons
|
# The FixedPoint Class
class fixedpoint.FixedPoint(init, /, signed=None, m=None, n=None, *, overflow='clamp', rounding='auto', overflow_alert='error', implicit_cast_alert='warning', mismatch_alert='warning', str_base=16)
Parameters
• init (float or int or str or FixedPoint) – Initial value. This argument is required and positional, meaning it cannot be keyworded and must come first in the list of arguments.
• signed (bool) – Signedness, part of the Q format specification. When left unspecified, sign() is used to deduce signedness. This argument can be keyworded.
• m (int) – Number of integer bits, part of the Q format specification. When left unspecified, min_m() is used to deduce initial integer bit width, after which trim() is used after rounding to minimize integer bits. This argument can be keyworded.
• n (int) – Number of fractional bits, part of the Q format specification. When left unspecified, min_n() is used to deduce fractional bit width. This argument can be keyworded.
• overflow (str) –
Specifies what shall happen when the value overflows its integer bit width. Valid options are:
• 'clamp' (default when left unspecified)
• 'wrap'
• rounding (str) –
Specifies how superfluous fractional bits are rounded away. Valid options are:
• 'convergent' (default for signed when left unspecified)
• 'nearest' (default for unsigned when left unspecified)
• 'in'
• 'out'
• 'up'
• 'down'
• overflow_alert (str) –
Specifies the notification scheme when overflow occurs. Valid options are:
• 'error' (default when left unspecified)
• 'warning'
• 'ignore'
• mismatch_alert (str) –
Specifies the notification scheme when 2 FixedPoints with non-matching properties undergo arithmetic. Valid options are:
• 'error'
• 'warning' (default when left unspecified)
• 'ignore'
• implicit_cast_alert (str) –
Specifies the notification scheme when implicit casting is performed and the resultant FixedPoint is not valued the same as the original number. Valid options are:
• 'error'
• 'warning' (default when left unspecified)
• 'ignore'
• str_base (int) –
Casting a FixedPoint to a str generates a bit string in the base specified by str_base. Valid options are:
• 16 (default when left unspecified)
• 10
• 8
• 2
Raises
• if init is a str and any of signed, m, or n are not specified.
• if more than m + n bits are present in init (when init is a str).
• if an invalid Q format is specified.
• TypeError – if init is not an int, float, str, or FixedPoint and cannot be cast to a float.
• FixedPointOverflowError – if overflow_alert is 'error' and m is too small to represent init.
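A minimal constructor sketch based on the signature and defaults documented above, assuming the fixedpoint package is installed; the commented values follow from the Q format descriptions on this page rather than from a verified run.

```python
from fixedpoint import FixedPoint

# Explicit Q format: signed, 4 integer bits, 4 fractional bits.
x = FixedPoint(-3.25, signed=True, m=4, n=4)
print(x.qformat)   # 'Q4.4' per the qformat property described below
print(float(x))    # -3.25 is exactly representable with 4 fractional bits

# Q format deduced from the initial value via sign(), min_m(), and min_n().
y = FixedPoint(3.25)
print(y.qformat, float(y))
```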
from_int(val)
Parameters
val (int) – Value to set the FixedPoint to.
Set the value of the FixedPoint from an integer value. Affects only integer bits (since integers require no fractional bits). Must fit into the Q format already designated by the object, otherwise overflow will occur.
from_float(val)
Parameters
val (float) – Value to set the FixedPoint to.
Set the value of the FixedPoint. Must fit into the Q format already designated by the object, otherwise rounding and/or overflow will occur.
from_string(val)
from_str(val)
Parameters
val (str) – Value to set the FixedPoint bits to.
Directly set the bits of the FixedPoint, using the Q format already designated by the object. May be a decimal, binary, octal, or hexadecimal string, the latter three of which require a '0b', '0o', or '0x' radix, respectively.
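A small sketch of the from_* setters described above; the bit string passed to from_string is interpreted against the object's existing Q format, so the value noted in the comment assumes the UQ4.4 layout chosen here.

```python
from fixedpoint import FixedPoint

z = FixedPoint(0, signed=False, m=4, n=4)   # UQ4.4, initially 0

z.from_int(5)                 # only the integer bits are affected
z.from_float(2.75)            # must fit UQ4.4, otherwise rounding/overflow occurs
z.from_string('0b00101100')   # raw bits 0010.1100, i.e. 2.75 in UQ4.4
print(float(z))
```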
FixedPoint Properties
signed
Type
bool
Getter
True for signed, False for unsigned.
Setter
Set signedness.
Raises
Change signedness of number. Note that if the MSb is 0, the value of the number will not change. Overflow occurs if the MSb is 1.
m
Type
int
Getter
Number of integer bits in the FixedPoint number.
Setter
Set the number of integer bits in the FixedPoint number.
Raises
When the number of integer bits increases, sign extension occurs for signed numbers, and 0-padding occurs for unsigned numbers. When the number of integer bits decreases, overflow handling may occur (per the overflow property) if the FixedPoint value is too large for the new integer bit width.
n
Type
int
Getter
Number of fractional bits in the FixedPoint number.
Setter
Set the number of fractional bits in the FixedPoint number.
Raises
When the number of fractional bits increases, 0s are appended to the fixed point number. When the number of fractional bits decreases, rounding may occur (per the rounding property), which in turn may cause overflow (per the overflow property) if the integer portion of the rounded result is too large to fit within the current integer bit width.
str_base
Type
int
Getter
Base of the string generated by str.
Setter
Set the base of the string generated by str.
Using the builtin python str function on a FixedPoint casts the object to a string. The string is the bits of the FixedPoint number in the base specified by str_base, but without the radix. Must be one of:
• 16
• 10
• 8
• 2
overflow
Type
str
Getter
The current overflow scheme.
Setter
Set the overflow scheme.
Overflow occurs when the number of bits required to represent a value exceeds the number of integer bits available (m). The overflow property of a FixedPoint specifies how to handle overflow. Must be one of:
• 'clamp'
• 'wrap'
rounding
Type
str
Getter
The current rounding scheme.
Setter
Set the rounding scheme.
Rounding occurs when fractional bits must be removed from the object. Some rounding schemes can cause overflow in certain circumstances. Must be one of:
• 'convergent'
• 'nearest'
• 'in'
• 'out'
• 'up'
• 'down'
overflow_alert
Type
str
Getter
The current overflow_alert scheme.
Setter
Set the overflow_alert scheme.
When overflow occurs, the overflow_alert property indicates how you are notified. Must be one of:
• 'error'
• 'warning'
• 'ignore'
mismatch_alert
Type
str
Getter
The current mismatch_alert scheme.
Setter
Set the mismatch_alert scheme.
When 2 FixedPoints interact to create another FixedPoint, the properties assigned to the new object must be resolved from the 2 original objects. Whenever properties between these 2 objects do not match, the mismatch_alert property indicates how you are notified. Must be one of:
• 'warning'
• 'error'
• 'ignore'
implicit_cast_alert
Type
str
Getter
The current implicit_cast_alert scheme.
Setter
Set the implicit_cast_alert scheme.
Some operations allow a FixedPoint to interact with another object that is not a FixedPoint. Typically, the other object will need to be cast to a FixedPoint, and is done so internally in the class method. If error exists after the cast to FixedPoint, the implicit_cast_alert property indicates how you are notified. Must be one of:
• 'warning'
• 'error'
• 'ignore'
bits
Type
FixedPointBits
Getter
Bits of the fixed point number.
This is the read-only bits of the FixedPoint, stored as an integer.
Indexing, slicing, and mapping is available with the FixedPointBits class.
bitmask
Type
int
Getter
Bitmask of the FixedPoint number.
Integer bitmask, equivalent to $$2^{m + n} - 1$$.
clamped
Type
bool
Getter
True if the value of the FixedPoint number is equal to its minimum or maximum value. False otherwise.
qformat
Type
str
Getter
Q format of the FixedPoint number.
The string takes the form UQm.n, where:
• U is only present for unsigned numbers
• m is the number of integer bits
• n is the number of fractional bits
Arithmetic Operators
__add__(augend)
__iadd__(augend)
__radd__(addend)
Note
These are the + and += operators.
Parameters
Returns
Sum of addend and augend
Return type
FixedPoint
Raises
Note
$$\it{sum} = \it{addend} + \it{augend}$$
Addition using the + and += operators are full precision; bit growth will occur:
If both augend and addend are unsigned, the result is unsigned, otherwise the result will be signed.
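A short sketch of full-precision addition between two FixedPoints; the exact resulting Q format is determined by the bit-growth rules above, so it is printed rather than hard-coded here.

```python
from fixedpoint import FixedPoint

a = FixedPoint(1.5,  signed=False, m=2, n=1)   # UQ2.1
b = FixedPoint(2.25, signed=False, m=3, n=2)   # UQ3.2
s = a + b            # full precision: bit growth instead of rounding or overflow
print(float(s))      # 3.75
print(s.qformat)     # grown Q format; both operands unsigned, so unsigned result
```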
__sub__(subtrahend)
__isub__(subtrahend)
__rsub__(minuend)
Note
These are the - and -= operators.
Parameters
Returns
Difference of minuend and subtrahend
Return type
FixedPoint
Raises
Note
$$\it{difference} = \it{minuend} - \it{subtrahend}$$
Subtraction using the - and -= operators are full precision; bit growth will occur.
If both minuend and subtrahend are unsigned, the result is unsigned, otherwise the result will be signed.
Overflow can occur for unsigned subtraction.
__mul__(multiplier)
__imul__(multiplier)
__rmul__(multiplicand)
Note
These are the * and *= operators.
Parameters
Returns
Product of multiplicand and multiplier
Return type
FixedPoint
Raises
Note
$$\it{product} = \it{multiplicand} \times \it{multiplier}$$
Multiplication using the * and *= operators are full precision; bit growth will occur.
If both multiplicand and multiplier are unsigned, the result is unsigned, otherwise the result will be signed.
__pow__(exponent)
__ipow__(exponent)
Note
These are the ** and **= operators.
Parameters
exponent (int) – The exponent to the FixedPoint base. Must be positive.
Returns
Result of the base raised to the exponent power.
Return type
FixedPoint
Note
$$\it{result} = \it{base}^{\it{exponent}}$$
Exponentiation using the ** and **= operators are full precision; bit growth will occur.
The result has the same signedness as the base.
Only positive integers are supported as the exponent.
Comparison Operators
__lt__(other)
__le__(other)
__gt__(other)
__ge__(other)
__eq__(other)
__ne__(other)
Note
These are the <, <=, >, >=, == and != operators.
Parameters
other (FixedPoint or int or float) – Numeric object to compare to
Returns
True if the comparison is true, False otherwise
Return type
bool
__cmp__(other)
Parameters
other (FixedPoint or int or float) – Numeric object to compare to
Returns
• a negative number if the object is < other
• 0 if the object == other
• a positive number if the object is > other
Return type
int
Generic comparison object. Not used for comparisons in python 3 but used internally by all other comparisons.
Bitwise Operators
__lshift__(nbits)
__ilshift__(nbits)
Note
These are the << and <<= operators.
Parameters
nbits (int) – Number of bits to shift left.
Return type
FixedPoint
Bit shifting does not change the FixedPoint’s Q format. The nbits leftmost bits are discarded.
To keep bits after shifting, multiply the object by $$2^{nbits}$$ instead of using the << or <<= operator.
If nbits < 0, bits are shifted right using >> or >>= by abs(nbits) instead.
__rshift__(nbits)
__irshift__(nbits)
Note
These are the >> and >>= operators.
Parameters
nbits (int) – Number of bits to shift right.
Returns
Original FixedPoint with bits shifted right.
Return type
FixedPoint
Bit shifting does not change the FixedPoint’s Q format. The nbits rightmost bits are discarded.
To keep bits after shifting, multiply the object by $$2^{-nbits}$$ instead of using the >> or >>= operator.
For signed numbers, sign extension occurs.
If nbits < 0, bits are shifted left using << or <<= by abs(nbits) instead.
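A sketch of the shift semantics just described: the Q format stays fixed, so shifted-out bits are simply discarded (with sign extension on right shifts of signed numbers). The bit values in the comments follow from those rules rather than from a verified run.

```python
from fixedpoint import FixedPoint

x = FixedPoint(0b0110, signed=False, m=4, n=0)   # bits 0110
y = x << 1    # Q format unchanged; leftmost bit discarded -> bits 1100
z = x >> 2    # Q format unchanged; two rightmost bits discarded -> bits 0001
print(bin(x), bin(y), bin(z))
```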
__and__(other)
__iand__(other)
__rand__(other)
Note
These are the & and &= operators.
Parameters
other (int or FixedPoint) – Object to bitwise AND with
Returns
Original object’s bits bitwise ANDed with other’s bits.
Return type
FixedPoint
When ANDing 2 FixedPoints, the binary point is not aligned.
After ANDing, the result is masked with the leftmost FixedPoint.bitmask and assigned to the bits of the return value.
__or__(other)
__ior__(other)
__ror__(other)
Note
These are the | and |= operators.
Parameters
other (int or FixedPoint) – Object to bitwise OR with
Returns
Original object’s bits bitwise ORed with other’s bits.
Return type
FixedPoint
When ORing 2 FixedPoints, the binary point is not aligned.
After ORing, the result is masked with the leftmost FixedPoint.bitmask and assigned to the bits of the return value.
__xor__(other)
__ixor__(other)
__rxor__(other)
Note
These are the ^ and ^= operators.
Parameters
other (int or FixedPoint) – Object to bitwise XOR with
Returns
Original object’s bits bitwise XORed with other’s bits.
Return type
FixedPoint
When XORing 2 FixedPoints, the binary point is not aligned.
After XORing, the result is masked with the leftmost FixedPoint.bitmask and assigned to the bits of the return value.
Unary Operators
__invert__()
Note
This is the unary ~ operator.
Returns
Copy of original object with bits inverted.
Return type
FixedPoint
__pos__()
Note
This is the unary + operator.
Returns
Copy of original object.
Return type
FixedPoint
__neg__()
Note
This is the unary - operator.
Returns
Negated copy of the original object.
Return type
FixedPoint
Raises
In an attempt to minimize user error, unsigned numbers cannot be negated. The idea is that you should be doing this very intentionally.
Built-in Function Support
__abs__()
Note
This is the built-in abs() function.
Returns
Absolute value.
Return type
FixedPoint
Raises
FixedPointOverflowError – if the absolute value of a negative-valued number is larger than the Q format allows (raised only if overflow_alert is 'error').
Signedness does not change.
__int__()
Note
This is the built-in int function.
Returns
Only the integer bits of the FixedPoint number.
Return type
int
Fractional bits are ignored, which is the same as rounding down.
__float__()
Note
This is the built-in float function.
Returns
Floating point cast of the FixedPoint number.
Return type
float
When casting to a float would result in an OverflowError, float('inf') or float('-inf') is returned instead.
Warning
A typical Python float follows IEEE 754 double-precision format, which means there are 52 mantissa bits and a sign bit (you can verify this by examining sys.float_info). Thus for FixedPoint word lengths beyond 52 bits, the float cast may lose precision or resolution.
__bool__()
Note
This is the built-in bool function.
Returns
True if FixedPoint.bits are non-zero, False otherwise.
Return type
bool
__index__()
Note
This is the built-in hex(), oct(), and bin() functions.
Returns
Bits of the FixedPoint number.
Return type
int
Calling hex(), oct(), or bin() on a FixedPoint generates a str with the FixedPoint.bits represented as a hexadecimal, octal, or binary string. The radix prepends the bits, which do not contain any left-padded zeros.
__str__()
Note
This is the built-in str function.
Returns
Bits of the FixedPoint number, left padded to the number of bits in the number, in the base specified by the str_base property.
Return type
str
Calling str() will generate a hexadecimal, octal, or binary string (according to the str_base property setting) without the radix, and 0-padded to the actual bit width of the FixedPoint number. Decimal strings are not 0-padded.
This string represents the bits of the number, thus will always be non-negative.
Signedness does not change.
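A sketch of str() together with the str_base property; the binary string noted in the comment assumes the UQ4.2 layout shown, and the hexadecimal rendering is the same bits without a radix prefix.

```python
from fixedpoint import FixedPoint

a = FixedPoint(10.5, signed=False, m=4, n=2, str_base=2)
print(str(a))      # bit string padded to 6 bits, e.g. '101010' for 10.5 in UQ4.2
a.str_base = 16
print(str(a))      # the same bits rendered in hexadecimal, no '0x' radix
```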
__format__()
Note
This is the built-in str.format() and format() function, and also applies to f-strings.
Returns
Formatted string, various formats available.
Return type
str
A FixedPoint can be formatted as a str, float, or int would, depending on the format string syntax.
Table 1 Standard Format Specifier Parsing Summary

| format_spec type | Formatted Type | Formatted Value (given x = FixedPoint(...)) |
| --- | --- | --- |
| 's' | str | str(x) (depends on x.str_base) |
| 'q' | str | x.qformat |
| 'b' (binary) | int | x.bits |
| 'd' (decimal) | int | x.bits |
| 'o' (octal) | int | x.bits |
| 'x' (lowercase hexadecimal) | int | x.bits |
| 'X' (uppercase hexadecimal) | int | x.bits |
| '...m' 1 | int | x.bits['int'] (integer bits only) |
| '...n' 1 | int | x.bits['frac'] (fractional bits only) |
| 'e', 'E', 'f', 'F', 'g', 'G', '%' | float | float(x) |

1 Append to the specifier of another formatted int. E.g., 'bn' would format the fractional bits of x in binary.
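A sketch of the format specifiers summarized in the table above; exactly which string each line produces depends on the object's Q format and str_base setting, so the comments describe the shape of the output rather than exact text.

```python
from fixedpoint import FixedPoint

x = FixedPoint(5.25, signed=False, m=4, n=4)
print(f"{x:s}")     # same as str(x), base chosen by x.str_base
print(f"{x:q}")     # the Q format string, e.g. 'UQ4.4'
print(f"{x:b}")     # all bits, formatted as a binary int
print(f"{x:bn}")    # fractional bits only, in binary (see footnote 1)
print(f"{x:.3f}")   # float-style formatting applied to float(x)
```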
__len__()
Note
This is the built-in len() function.
Returns
Number of bits in the FixedPoint.
Return type
int
__repr__()
Note
This is the built-in repr() function, which is also the output shown when a FixedPoint is not assigned to a variable.
Returns
Python executable code; a str representation of the object.
Return type
str
This generates a code string that will exactly reproduce the FixedPoint’s value and properties.
Bit Resizing Methods
resize(m, n, /, rounding=None, overflow=None, alert=None)
Parameters
Raises
FixedPointOverflowError – if resizing causes overflow (raised only if alert - or overflow_alert if alert is not specified - is 'error').
Fractional bits are resized first, then integer bits. Bit sizes can grow or shrink from their current value.
Rounding, overflow handling, and overflow alert notification severity can be temporarily modified within the scope of this method. I.e., specifying the rounding, overflow, or alert arguments will only take effect within this method; it will not permanently change the property settings of the object. If left unspecified, the current property setting is used.
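A sketch of resize() with temporary rounding and alert overrides, per the description above; the overrides apply only inside this call, and the printed result follows from the rounding rules rather than a verified run.

```python
from fixedpoint import FixedPoint

x = FixedPoint(3.1875, signed=False, m=4, n=4)    # UQ4.4
x.resize(3, 2, rounding='down', alert='warning')  # overrides used for this call only
print(x.qformat, float(x))                        # UQ3.2; fraction rounded toward -inf
```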
trim(ints=None, fracs=None)
Parameters
• ints (bool) – Set to True to trim off superfluous integer bits
• fracs (bool) – Set to True to trim off superfluous fractional bits
Trims off excess bits, including:
• up to n trailing 0s
• for unsigned numbers:
• for signed numbers:
• up to m - 1 leading 0s for positive numbers, leaving one leading 0 in front of the first 1 encountered
• up to m - 1 leading 1s, for negative numbers, leaving one leading 1 in front of the first 0 encountered
Resultant Q format is always valid. For the FixedPoint value of 0, resulting Q format is [U]Q1.0.
Opt to trim off only fractional bits or only integer bits by setting fracs or ints, respectively, to True. When left unspecified, both integer and fractional bits are trimmed.
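A sketch of trim(): starting from a deliberately oversized Q format, trimming removes the superfluous leading and trailing bits, leaving the minimal valid Q format for the stored value.

```python
from fixedpoint import FixedPoint

x = FixedPoint(2.0, signed=False, m=8, n=4)   # far more bits than 2.0 needs
x.trim()                                      # drop excess leading and trailing zeros
print(x.qformat, float(x))                    # minimal Q format, value unchanged
```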
Rounding Methods
__round__(n)
Note
This is the built-in round() function.
Parameters
n (int) – Number of bits remaining after round
Returns
A copy of the FixedPoint rounded to n bits.
Return type
FixedPoint
Raises
FixedPointOverflowError – if rounding causes overflow (raised only if the overflow_alert property setting is 'error').
Rounds a copy of the FixedPoint using the rounding scheme specified by the rounding property setting.
Refer to FixedPoint.resize() for more details.
__floor__()
Note
This is the built-in math.floor() function. It does not modify the object given to it, but creates a copy and operates on it instead.
Return type
FixedPoint
Rounds to the integer closest to $$-\infty$$, but does not modify the fractional bit width.
__ceil__()
Note
This is the built-in math.ceil() function. It does not modify the object given to it, but creates a copy and operates on it instead.
Return type
FixedPoint
Raises
FixedPointOverflowError – if the integer value of the FixedPoint is already at its maximum possible value (raised only if overflow_alert is 'error')
Rounds to the integer closest to $$+\infty$$, leaving 0 fractional bits. For values other than 0, this requires m to be non-zero.
__trunc__()
Note
This is the built-in math.trunc() function. It does not modify the object given to it, but creates a copy and operates on it instead.
Return type
FixedPoint
Rounds to the integer closest to $$-\infty$$, leaving 0 fractional bits. If m is 0, it is changed to 1, otherwise m is not modified.
round(n)
Parameters
n (int) – Number of fractional bits remaining after rounding
Raises
FixedPointOverflowError – if rounding causes overflow (raised only if overflow_alert is 'error')
Rounds the FixedPoint using the rounding scheme specified by the rounding property setting.
convergent(n)
round_convergent(n)
Parameters
n (int) – Number of fractional bits remaining after rounding
Raises
FixedPointOverflowError – if rounding causes overflow (raised only if overflow_alert is 'error')
Rounds to n fractional bits, biased toward the nearest value with ties rounding to the nearest even value.
round_nearest(n)
Parameters
n (int) – Number of fractional bits remaining after rounding
Raises
FixedPointOverflowError – if rounding causes overflow (raised only if overflow_alert is 'error')
Rounds the FixedPoint to n fractional bits, biased toward the nearest value with ties rounding to $$+\infty$$.
round_in(n)
Parameters
n (int) – Number of fractional bits remaining after rounding
Rounds the FixedPoint to n fractional bits toward 0.
round_out(n)
Parameters
n (int) – Number of fractional bits remaining after rounding
Raises
FixedPointOverflowError – if rounding causes overflow (raised only if overflow_alert is 'error')
Rounds the FixedPoint to n fractional bits, biased toward the nearest value with ties rounding away from 0.
round_down(n)
Parameters
n (int) – Number of fractional bits remaining after rounding
Rounds the FixedPoint to n fractional bits toward $$-\infty$$.
round_up(n)
Parameters
n (int) – Number of fractional bits remaining after rounding
Raises
FixedPointOverflowError – if rounding causes overflow (raised only if overflow_alert is 'error')
Rounds the FixedPoint to n fractional bits toward $$+\infty$$.
keep_msbs(m, n, /, rounding=None, overflow=None, alert=None)
Parameters
Raises
FixedPointOverflowError – if rounding causes overflow (raised only if alert - or overflow_alert if alert is not specified - is 'error')
Rounds away LSb(s), leaving m + n bit(s), using the rounding scheme specified, then interprets the result with m integer bits and n fractional bits.
The rounding, overflow handling, and overflow alert notification schemes can be temporarily modified within the scope of this method. I.e., specifying the rounding, overflow, or alert arguments will only take effect within this method; it will not permanently change the property settings of the object. The current property setting for any of these unspecified arguments is used.
While other rounding functions cannot round beyond the fractional bits in a FixedPoint, keep_msbs() will keep an arbitrary number of the FixedPoint’s most significant bits, regardless of its current Q format. The resulting Q format must be valid.
Overflow Handling
clamp(m, /, alert=None)
Parameters
Raises
FixedPointOverflowError – if new integer bit width is too small to represent the FixedPoint object value (raised only if alert - or overflow_alert if alert is not specified - is 'error')
Reduces the number of integer bits in the FixedPoint to m, clamping to the minimum or maximum value on overflow.
The overflow alert notification scheme can be temporarily modified within the scope of the method by using the alert argument. When left unspecified, the overflow_alert property setting is used.
wrap(m, /, alert=None)
Parameters
Raises
FixedPointOverflowError – if new integer bit width is too small to represent the FixedPoint object value (raised only if alert - or overflow_alert if alert is not specified - is 'error')
Reduces the number of integer bits in the FixedPoint to m, masking away the removed integer bits.
The overflow alert notification scheme can be temporarily modified within the scope of the method by using the alert argument. When left unspecified, the overflow_alert property setting is used.
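A sketch contrasting clamp() and wrap() when the integer width is reduced below what the value needs; overflow_alert is set to 'ignore' here so the overflow is handled silently instead of raising, and the commented results follow from the clamp/wrap descriptions above.

```python
from fixedpoint import FixedPoint

x = FixedPoint(13, signed=False, m=4, n=0, overflow_alert='ignore')
y = FixedPoint(13, signed=False, m=4, n=0, overflow_alert='ignore')

x.clamp(3)   # 13 does not fit in 3 unsigned integer bits -> clamps to the maximum, 7
y.wrap(3)    # same reduction, but the removed MSb is masked away -> 13 & 0b111 = 5
print(int(x), int(y))
```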
keep_lsbs(m, n, /, overflow=None, alert=None)
Parameters
Raises
FixedPointOverflowError – if new m + n bits is too small to represent the FixedPoint value (raised only if alert - or overflow_alert if alert is not specified - is 'error')
Removes MSb(s), leaving m + n bit(s), using the overflow scheme specified, then interprets the result with m integer bits and n fractional bits.
The overflow handling and overflow alert notification schemes can be temporarily modified within the scope of this method. I.e., specifying the overflow or alert arguments will only take effect within this method; it will not permanently change the property settings of the object. The current property setting for any of these unspecified arguments is used.
While other overflow handling functions cannot remove MSbs beyond their integer bits in a FixedPoint, keep_lsbs() will keep an arbitrary number of the FixedPoint’s least significant bits, regardless of its current Q format. The resulting Q format must be valid.
Context Management
__enter__()
__exit__(exc_type, *args)
__call__(*, safe_retain=False, **props)
Note
This is the built-in with statement in conjunction with the () operator.
Parameters
• safe_retain (bool) – Set to True to retain the changes made within the context as long as no exceptions were raised. Set to False (or leave unspecified) if the changes made within the context are to be undone when the context exits.
• props
Any keyword-able argument from the FixedPoint constructor, including:
• signed (bool)
• m (int)
• n (int)
• overflow (str)
• rounding (str)
• str_base (int)
Raises
While the __call__ method is not typically associated with the context manager, the FixedPoint class uses this method to assign attributes temporarily (or permanently, with appropriate use of the safe_retain keyword) to the FixedPoint called, within the context of the with statement.
Using the __call__ method is optional when safe_retain does not need to be True.
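A sketch of the context-manager usage described above: properties passed to the call are applied inside the with block and, because safe_retain defaults to False, are undone when the block exits.

```python
from fixedpoint import FixedPoint

x = FixedPoint(1.25, signed=False, m=2, n=2)

with x(rounding='down', str_base=2):
    print(x.rounding, x.str_base)   # temporary settings inside the context

print(x.rounding, x.str_base)       # original settings restored on exit
```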
static enable_logging()
Enables logging to fixedpoint.log, located in the root directory of the fixedpoint module.
On initial import, logging is disabled.
Any time this method is called, fixedpoint.log is erased.
static disable_logging()
Disables logging to fixedpoint.log.
classmethod sign(val)
Parameters
val (FixedPoint or int or float) – Value from which to discern the sign.
Returns
• -1 if val < 0
• +1 if val > 0
• 0 if val == 0
Return type
int
Determine the sign of a number.
classmethod min_m(val, /, signed=None)
Parameters
• val (int or float) – Value to analyze
• signed (bool) – True if signed, False if unsigned
Returns
Minimum value for FixedPoint.m for which val can be represented without overflow.
Return type
int
Calculate the minimum value for FixedPoint.m for which val can be represented without overflow. If signed is not specified, it is deduced from the value of val. When val < 0, signed is ignored.
Worst case rounding is assumed (e.g., min_m(3.25) returns 3, in case 3.25 needs to be rounded up to 4).
classmethod min_n(val)
Parameters
val (float) – Value to analyze
Returns
Minimum value for FixedPoint.n for which val can be represented exactly.
Return type
int
|
# DPAT/last.txt at master · clr2of8/DPAT · GitHub
Pluggakuten.se / Forum
It gives a formula to count objects, where two objects that are related by a symmetry (rotation or reflection, for example) are not to be counted as distinct. Statement of the lemma; usage and proof of Burnside's lemma: the number of objects equals the average number of symmetrical pictures. Also known as Burnside's counting theorem, the Cauchy-Frobenius lemma, or Burnside's formula, Burnside's lemma is a result in group theory. Let $G$ be a finite group acting on a set $X$, and for each $g$ in $G$ let $X^g$ denote the fixed-point set of $g$. Burnside's lemma then says that the number of orbits, $r$, is
$$r = \frac{1}{|G|}\sum_{g \in G}|X^g|,$$
in other words the number of orbits equals the arithmetic mean of the sizes of the fixed-point sets. The Pólya-Burnside Lemma reduces all problems of symmetry to simply counting the number of invariant elements for each permutation. The key is that for many puzzles, this counting is significantly easier than any other equivalent problem-solving technique. So it makes sense to first consider a … Burnside's lemma. Now that all preparations are done, Burnside's lemma gives a straight-up formula for the answer of the problem: That's it!
Apr 3, 2010: Then in 1904, Burnside published his representation-theoretic proof of Lemma 4.1.2 [4], from which it easily follows that all groups of order $p^a q^b$ … The following lemma uses the only fact about permutation group theory needed … of which Burnside's Lemma is an immediate corollary, since B'(g) is the inventory. Apr 16, 2011: … and the solvability of finite groups of order divisible by at most two distinct primes; far behind would come the so-called "Burnside lemma". Answers to Selected Problems on Burnside's Theorem: 1. Determine the number of ways in which the four corners of a square can be colored with two colors. Burnside's lemma can be used to compute the number of distinct colorings (independent of rotation) of a cube with three colors.
The Burnside Lemma · The identity element e acts as the identity on S; that is, for each s in S, e·s = s · The group actions … Aug 14, 2020: (Burnside's Lemma) The number of orbits in a G-set S is $|S/G| = \frac{1}{|G|}\sum_{g \in G}|S^g|$. (Note that there is ample evidence that Burnside didn't actually …) In this talk, we'll examine one such tool: "The Lemma that is not Burnside's", first discovered by Augustin Cauchy in 1845.
Finitely generated modules over a principal ideal domain, with application to the Jordan normal form. Here's another problem from a previous Ad Infinitum contest on Hackerrank. These Ad Infinitum contests are math-based contests, so it is likely that Burnside's Lemma has appeared in them, although I could find only this one. Burnside's lemma, by Nguyễn Trung Tuân: Algebra, College Math, Combinatorics, Mathematical Olympiad, March 25, 2010. Let … be a set and … be a group.
### Theorems: Lemma, Cantor's Theorem, Gödel's Incompleteness Theorem
The colors? No, orbits? 2021-01-25: Burnside's Lemma is also sometimes known as the orbit counting theorem.
Burnside's lemma is a result in group theory that can help when counting objects with symmetry taken into account.
These all … 2019-09-18: 2 Burnside's Lemma. 2.1 Group Theory. We will first clarify some basic notation.
This is nice in multiple ways. 2018-10-13: Burnside's Lemma: Orbit-Stabilizer Theorem. Problem: Given a 3-by-3 grid and 5 colors, how many different ways are there to color the grid, given that two configurations are considered the same if they can be reached from one another through rotations (0, 90, 180, 270 degrees)?
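A Python sketch of the grid-coloring count just described: for each rotation we count its cycles on the nine cells, raise the number of colors to that power, and average over the group, exactly as Burnside's lemma prescribes. Under these assumptions the answer works out to (5^9 + 5^5 + 2·5^3)/4 = 489125.

```python
def orbit_count(perms, num_colors):
    """Burnside: average of num_colors ** (number of cycles) over all group elements."""
    total = 0
    for p in perms:                      # p maps cell index -> image cell index
        seen, cycles = set(), 0
        for start in range(len(p)):
            if start in seen:
                continue
            cycles += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = p[i]
        total += num_colors ** cycles
    return total // len(perms)           # Burnside guarantees exact division

def rot90(i):                            # 90-degree rotation of a 3x3 grid
    r, c = divmod(i, 3)
    return 3 * c + (2 - r)

identity = list(range(9))
r90  = [rot90(i) for i in range(9)]
r180 = [r90[r90[i]] for i in range(9)]
r270 = [r90[r180[i]] for i in range(9)]

print(orbit_count([identity, r90, r180, r270], 5))   # 489125
```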
GROUP ACTIONS ON SETS. • G's action on … (if |G| = p², with p prime, then G is abelian). • Burnside's lemma.
## Burnsides lemma – Wikipedia
Polynomial factorization. Irreducible polynomials. Finite fields. Error-correcting linear binary codes. Solution: We use the so-called Burnside's lemma, which says that the number of orbits |O| arising when a group G acts on a set S is given by the formula … Group actions on sets. Burnside's lemma. Rings and fields.
It provides a formula to count the number of objects, where two objects that are symmetric by rotation or re… Burnside's Lemma states that the number of orbits $|X/G|$ of a set $X$ under the action of a group $G$ is given by: $$|X/G| = \frac{1}{|G|}\sum_{g \in G}|X^g|$$ where $X^g$ denotes the set of elements in $X$ fixed under the action of $g$. If you do not count mirror-image bracelets as the same, the problem can be solved with Burnside's Lemma in the general case (which is university-level math), and in the case where the number of beads in the bracelet is a prime p, the number of bracelets equals … So, for example, for the number 5 the answer becomes: … Burnside's lemma. I want to solve part (c) with Burnside's lemma (I understand it can probably be solved with ordinary combinatorics), but I WANT to learn Burnside's lemma. I have three boxes, right? Is that what is meant by S_6? So now I must find those elements, g ∈ G, for this problem.
|
# Why is the stress-energy tensor symmetric?
The relativistic stress-energy tensor $T$ is important in both special and general relativity. Why is it symmetric, with $T_{\mu\nu}=T_{\nu\mu}$?
As a secondary question, how does this relate to the symmetry of the nonrelativistic Cauchy stress tensor of a material? This is apparently interpreted as being due to conservation of angular momentum, which doesn't seem connected to the reasons for the relativistic quantity's symmetry.
-
In the most general context of special relativity, one may define the tensor so that it's not symmetric. There are various special situations in which the symmetry may be proven. In GR, $T^{\mu\nu} =\partial {\mathcal L}_{\rm matter} / \partial g_{\mu\nu}$ which is manifestly symmetric. – Luboš Motl Jun 19 '13 at 14:28
@LubošMotl: Why not make that into an answer so I can upvote it? – Ben Crowell Jun 19 '13 at 14:29
We had redundant tags for stress-energy-tensor and energy-momentum-tensor. Many questions had both tags. I went through and changed all the energy-momentum-tensor tags to stress-energy-tensor. – Ben Crowell Jun 19 '13 at 14:44
@Ben Crowell: Please read my message in hbar chat. – Qmechanic Jun 19 '13 at 14:53
## 1 Answer
Here is my own answer to the first part of the question. I don't know the answer to the second part.
Let's pick a local set of Minkowski coordinates $(t,x,y,z)$. Then $T_{\mu\nu}$ represents a flux of the $\mu$ component of energy-momentum through a hypersurface perpendicular to the $\nu$ axis. For example, say we have a bunch of particles at rest in a certain frame, and consider $T_{tt}$. The time component $p_t$ of the energy-momentum vector is mass-energy. Since these particles are at rest, their mass-energy is all in the form of mass. If we make a hypersurface perpendicular to the $t$ axis, i.e., a hypersurface of simultaneity, then all these particles' world-lines are passing through that hypersurface, and that's the flux that $T_{tt}$ measures: essentially, the mass density.
This makes it plausible that $T$ has to be symmetric. For example, let's say we have some nonrelativistic particles. If we have a nonzero $T_{tx}$, it represents a flux of mass through a hypersurface perpendicular to $x$. This means that mass is moving in the $x$ direction. But if mass is moving in the $x$ direction, then we have some $x$ momentum $p_x$. Therefore we must also have a $T_{xt}$, since this momentum is carried by the particles, whose world-lines pass through a hypersurface of simultaneity.
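As a quick illustration of that argument (not part of the original answer), here is a hedged sympy sketch checking that the stress-energy tensor of pressureless dust, $T^{\mu\nu}=\rho u^\mu u^\nu$, is symmetric by construction; the variable names and setup are mine.

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)           # rest-frame mass-energy density
ux, uy, uz = sp.symbols('u_x u_y u_z', real=True)
ut = sp.sqrt(1 + ux**2 + uy**2 + uz**2)          # four-velocity normalization (c = 1)
u = sp.Matrix([ut, ux, uy, uz])

T = rho * u * u.T                                # dust: T^{mu nu} = rho u^mu u^nu
print(sp.simplify(T - T.T))                      # zero matrix, so T is symmetric
```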
More rigorously, the Einstein field equations say that the Einstein curvature tensor $G$ is proportional to the stress-energy tensor. Since $G$ is symmetric, $T$ must be symmetric as well.
-
|
# How much should it cost to add a circuit breaker?
In this question, I asked about problems between my circuit breaker for a circuit shared by my microwave and sump pump. The conclusion was that another circuit should be added to support the microwave on its own. I have open spots for breakers in the box. Assuming I have an electrician do this job, how much should I expect it to cost? If location is a variable, I'm in central Maryland.
Note that the new circuit could be added to support only the sump-pump in an unfinished basement. So my thinking is that it should be one of the easiest additions possible.
• You might want to mention where you are (contractor prices can vary considerably). In my area (suburbs of Philadelphia, PA), I figure on $60 - $75 for anyone to just show up. The actual work is on top of that figure. – Michael Kohne Aug 16 '10 at 2:06
• That's precisely why my first paragraph ends with I'm in central Maryland. – Jeffrey Blake Aug 16 '10 at 2:24
It is fairly simple to add an additional breaker to a box but of course there would be some sort of minimum charge to come out and also the cost of the breaker, so probably around $100. BUT that assumes the actual line for the sump pump/microwave (the one you want to put on the new circuit) is all by itself. I have seen some interesting wiring in old houses where one thing is wired to the next which is wired to the next and eventually all of it goes back to the same breaker. The electrician might need some time to figure out how the microwave and/or sump pump are currently wired before being able to propose a solution. If this is the case then I would expect a much higher bill (since this is no longer a trivial "install a new breaker" job).

• Wiring from one outlet to another is common in both old and new construction. – Brad Gilbert Aug 16 '10 at 15:59
• Thanks for providing some numbers and reasoning. I guess I won't really know for sure until I get someone out for it, but this helps me know what to expect! – Jeffrey Blake Aug 17 '10 at 4:35

The only way to get an accurate figure for your area is to call a bunch of electricians near you, and they won't be able to give you a really accurate number without seeing it. They should be able to give you a range, though. The sump pump should be on its own 20A circuit without a GFCI. (GFCI may now be required by NEC 2008 edition. Will update when I know for sure.) The lights and outlets can stay on the same circuit if the outlets have GFCI protection. They're probably going to want to move the microwave to a different circuit while they're at it - unrelated locations really shouldn't share a common circuit. If they are making any modifications to a circuit that has anything not allowed by the current electrical code (which it does), they will have to make it all correct.

• why should the sump not use a GFCI? In my case, the sump was plugged into a basement circuit wired from metal conduit, and the inspector made us use a GFCI since it was in the basement. – mohlsen Aug 20 '10 at 21:29
• @mohlsen : ever had a GFCI circuit seemingly randomly trip on you? Now imagine that you haven't had to run the pump in 10 months, and you're asleep or away at work when the rains come ... if the circuit was tripped, no sump, and flooded basement. – Joe Aug 22 '10 at 13:09
• Yes- GFCI breakers used to be recommended (required?) for sump pump installs, but because of the risk of flooding, the NEC now states specifically that sump pumps should not be GFCI-protected. They decided that a flooded basement is more likely to get somebody electrocuted than a sump pump without a GFCI. Just don't go swimming in the basement! mohlsen, I would check regularly to make sure the interrupter hasn't tripped. If it is having false trips, I would call up your inspector and ask if you can change it out. – nstenz Aug 22 '10 at 16:58
• makes complete sense. I am going to change it out to be a standard plug. – mohlsen Aug 23 '10 at 23:42
• I may be incorrect. The NEC 2008 edition now applies in my jurisdiction, and one of the books I'm reading says that all unfinished basement receptacles must have GFCI protection, without exception (2008 code change). I'm going to try to find the section in the actual NEC, which can be read for free if you sign up at nfpa.org (but there's no search or printing). So it definitely depends on your local regulations. – nstenz Aug 25 '10 at 2:00

## Parts are cheap

$5 for the breaker ($9 if Square D QO, $50-ish if an obsolete panel like Pushmastic, FPE, Zinsco etc.)

$15-ish for the electrical wire.
???? for the labor.
The labor will vary wildly by the practical difficulty of routing the cable, and will depend on things like the total distance, level of finished-ness in the route areas (finished vs unfinished basement), whether the last guy left you some conduit to use, stuff like that.
That variability is precisely why costing questions are a bad fit for this stack.
|
## Nonorientable Surface
A surface such as the Möbius Strip on which there exists a closed path such that the directrix is reversed when moved around this path. The Euler Characteristic of a nonorientable surface is at most 1. The real Projective Plane is also a nonorientable surface, as are the Boy Surface, Cross-Cap, and Roman Surface, all of which are homeomorphic to the Real Projective Plane (Pinkall 1986). There is a general method for constructing nonorientable surfaces which proceeds as follows (Banchoff 1984, Pinkall 1986). Choose three Homogeneous Polynomials $f_1$, $f_2$, $f_3$ of Positive Even degree and consider the Map

$(x, y, z) \mapsto (f_1(x,y,z),\ f_2(x,y,z),\ f_3(x,y,z)) \qquad (1)$

Then restricting $x$, $y$, and $z$ to the surface of a sphere by writing

$x = \cos\theta\,\sin\phi \qquad (2)$
$y = \sin\theta\,\sin\phi \qquad (3)$
$z = \cos\phi \qquad (4)$

and restricting $\theta$ to $[0, 2\pi)$ and $\phi$ to $[0, \pi/2]$ defines a map of the Real Projective Plane to $\mathbb{R}^3$.
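As an illustration of this construction (not part of the original entry), here is a minimal numpy sketch: taking the even homogeneous polynomials $yz$, $zx$, $xy$ and restricting them to the hemisphere as above produces the classical Roman surface; the variable names are mine.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 200)
phi = np.linspace(0.0, np.pi / 2.0, 100)
theta, phi = np.meshgrid(theta, phi)

# points of the (upper hemi)sphere, equations (2)-(4)
x = np.cos(theta) * np.sin(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(phi)

# image under the map (1) with f1 = yz, f2 = zx, f3 = xy: a model of the
# real projective plane known as the Roman surface
X, Y, Z = y * z, z * x, x * y
```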
In 3-D, there is no unbounded nonorientable surface which does not intersect itself (Kuiper 1961, Pinkall 1986).
See also Boy Surface, Cross-Cap, Möbius Strip, Orientable Surface, Projective Plane, Roman Surface
References
Banchoff, T. "Differential Geometry and Computer Graphics." In Perspectives of Mathematics: Anniversary of Oberwolfach (Ed. W. Jager, R. Remmert, and J. Moser). Basel, Switzerland: Birkhäuser, 1984.
Gray, A. "Nonorientable Surfaces." Ch. 12 in Modern Differential Geometry of Curves and Surfaces. Boca Raton, FL: CRC Press, pp. 229-249, 1993.
Kuiper, N. H. "Convex Immersion of Closed Surfaces in E³." Comment. Math. Helv. 35, 85-92, 1961.
Pinkall, U. "Models of the Real Projective Plane." Ch. 6 in Mathematical Models from the Collections of Universities and Museums (Ed. G. Fischer). Braunschweig, Germany: Vieweg, pp. 63-67, 1986.
|
mathlib documentation
category_theory.lifting_properties
Lifting properties #
This file defines the lifting property of two arrows in a category and shows basic properties of this notion. We also construct the subcategory consisting of those morphisms which have the right lifting property with respect to arrows in a given diagram.
Main results #
• has_lifting_property: the definition of the lifting property
• iso_has_right_lifting_property: any isomorphism satisfies the right lifting property (rlp)
• id_has_right_lifting_property: any identity has the rlp
• right_lifting_property_initial_iff: spells out the rlp with respect to a map whose source is an initial object
• right_lifting_subcat: given a set of arrows F : D → arrow C, we construct the subcategory of those morphisms p in C that satisfy the rlp w.r.t. F i, for any element i of D.
Tags #
lifting property
@[class]
structure category_theory.has_lifting_property {C : Type u} (i p : category_theory.arrow C) :
Prop
• sq_has_lift : ∀ (sq : i ⟶ p),
The lifting property of a morphism i with respect to a morphism p. This can be interpreted as the right lifting property of i with respect to p, or the left lifting property of p with respect to i.
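For readers skimming this page, here is a plain LaTeX rendering of the square involved (my own sketch, not generated from the Lean source; it assumes amsmath): a square from i to p is a commutative diagram as below, and a lift is a diagonal arrow making both triangles commute.

```latex
\[
\begin{array}{ccc}
A & \xrightarrow{\;f\;} & X \\
\downarrow{\scriptstyle i} & \overset{\exists\, l}{\nearrow} & \downarrow{\scriptstyle p} \\
B & \xrightarrow{\;g\;} & Y
\end{array}
\qquad l \circ i = f,\qquad p \circ l = g
\]
```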
@[protected, instance]
def category_theory.has_lifting_property' {C : Type u} {i p : category_theory.arrow C} (sq : i ⟶ p) :
theorem category_theory.iso_has_right_lifting_property {C : Type u} {X Y : C} (i : category_theory.arrow C) (p : X ≅ Y) :
Any isomorphism has the right lifting property with respect to any map.

A → X
↓i  ↓p≅
B → Y
Any identity has the right lifting property with respect to any map.
theorem category_theory.right_lifting_property_initial_iff {C : Type u} (i p : category_theory.arrow C) :
∀ {e : i.right ⟶ p.right}, ∃ (l : i.right ⟶ p.left), l ≫ p.hom = e
An equivalent characterization for right lifting with respect to a map i whose source is initial. ∅ → X ↓ ↓ B → Y has a lifting iff there is a map B → X making the right part commute.
theorem category_theory.has_right_lifting_property_comp {C : Type u} {X Y Z : C} {i : category_theory.arrow C} {f : X ⟶ Y} {g : Y ⟶ Z} :
The condition of having the rlp with respect to a morphism i is stable under composition.
def category_theory.right_lifting_subcat (R : Type u) :
Type u
The objects of the subcategory right_lifting_subcategory are the ones in the underlying category.
Equations
Instances for category_theory.right_lifting_subcat
@[protected, instance]
Equations
The objects of the subcategory right_lifting_subcategory are the ones in the underlying category.
Equations
theorem category_theory.id_has_right_lifting_property' {C : Type u} {D : Type v₁} {F : D → category_theory.arrow C} (X : C) (i : D) :
theorem category_theory.has_right_lifting_property_comp' {C : Type u} {D : Type v₁} {X Y Z : C} {F : D → category_theory.arrow C} {f : X ⟶ Y} (hf : ∀ (i : D), ) {g : Y ⟶ Z} (hg : ∀ (i : D), ) (i : D) :
def category_theory.right_lifting_subcategory {C : Type u} {D : Type v₁} (F : D → category_theory.arrow C) :
Given a set of arrows in C, indexed by F : D → arrow C, we construct the (non-full) subcategory of C spanned by those morphisms that have the right lifting property relative to all maps of the form F i, where i is any element in D.
Equations
|
## What you need to know
A vector is something with both magnitude (size) and direction. On a diagram, they are denoted by an arrow, where the length of the arrow tells us the magnitude and the way the arrow is pointing tells us the direction.
When we add vectors, we add them end-to-end. For example, if you add two vectors $\mathbf{a}$ and $\mathbf{b}$, then the result is the vector $\mathbf{a}+\mathbf{b}$, which takes you from the start of $\mathbf{a}$ to the end of $\mathbf{b}$ (right).
The negative of a vector has the same magnitude of the original vector, it just goes in the exact opposite direction. When we subtract vectors, for example $\mathbf{a}-\mathbf{b}$, we add on the negative of the vector that is being subtracted, i.e. we add the vector $-\mathbf{b}$ onto the vector $\mathbf{a}$, as seen in the picture.
We can multiply a vector by a number. As with normal multiplication, it’s just the same as adding the vector to itself multiple times, so the result of multiplying $\mathbf{a}$ by 3 is
$3\mathbf{a}=\mathbf{a}+\mathbf{a}+\mathbf{a}$
Then, they are added end-to-end, just how we’ve already seen. Note: all vectors here are written in bold. When you’re writing this by hand, you should underline each letter that represents a vector.
Vectors are often split up into two parts – an $x$ part, which tells us how far the vector moves left or right, and a $y$ part, which tells us how far a vector moves up or down. When splitting up vectors like this, we express them as column vectors, where the top number is the $x$ part and the bottom number is the $y$ part.
For example, the vector $\mathbf{a}$ goes 3 spaces to the right and 2 spaces up, so would be expressed like $\begin{pmatrix}3\\2 \end{pmatrix}$. If the vector goes left, the $x$ value is negative, and if it goes down, the $y$ value is negative.
To add/subtract column vectors, we add/subtract the $x$ and $y$ values separately. For example,
$\begin{pmatrix}-3\\4\end{pmatrix}+\begin{pmatrix}5\\2\end{pmatrix}=\begin{pmatrix}2\\6\end{pmatrix}$
To multiply a column vector by a number, we multiply both values in the vector by that number, e.g.
$5\times\begin{pmatrix}2\\-3\end{pmatrix}=\begin{pmatrix}10\\-15\end{pmatrix}$
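If you like to check this sort of arithmetic on a computer, here is a minimal Python/numpy sketch of the two calculations above (purely illustrative, not part of the course material).

```python
import numpy as np

a = np.array([-3, 4])
b = np.array([5, 2])

print(a + b)                   # [2 6]    - the x parts and y parts are added separately
print(5 * np.array([2, -3]))   # [10 -15] - both components are multiplied by 5
```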
It’s important to understand that the vectors you see diagrams of and vectors written in column form are just different ways of working with the same thing. Suppose you have two column vectors, which you then add together to get another vector. If you then drew those two column vectors on a diagram and added them end-to-end (like we saw above), the resulting vector would be precisely what you got when you added the two column vectors.
For the foundation course it is sufficient to have a good understanding how to represent vectors on a diagram and as a column, as well as knowing how to add/subtract/multiply them in both forms. However, for the higher course, there is a little more.
Firstly, a vector that goes from some point $A$ to another point $B$ may be denoted like $\overrightarrow{AB}$. If a vector starts from the origin and goes to some point, say $A$, it will be written with an O, like $\overrightarrow{OA}$.
The way to add these vectors is explained by the picture on the right. As you can see, we still add them end-to-end. So, if you add the vector $\overrightarrow{AB}$ to the vector $\overrightarrow{BC}$, then the result is a vector that starts at $A$ and ends at $C$, thus is denoted by $\overrightarrow{AC}$. The fact that it goes via $B$ doesn’t matter, what matters is where it starts and ends.
The easiest way to see a typical structure of a higher vector question is to see an example. Let’s go.
Example: In the diagram below, we have vectors $\overrightarrow{AB}=3\mathbf{a}$ and $\overrightarrow{AC}=4\mathbf{b}$ are shown. Point $D$ lies on the line $BC$ such that $BD:DC=1:3$. Write the vector $\overrightarrow{AD}$ in terms of $\mathbf{a}$ and $\mathbf{b}$.
There’s a lot going on here, but we’ll break it down. To find $\overrightarrow{AD}$, we are going to add the vectors
$\overrightarrow{AB}+\overrightarrow{BD}$
Added end-to-end, they will give us a vector that starts at $A$ and ends at $D$. For the first one, we don’t need to do anything – we know $\overrightarrow{AB}=3\mathbf{a}$. The second part will require some work.
Firstly, recognise that if $BD:DC=1:3$, then $D$ is $\frac{1}{4}$ of the way along the line from $B$ to $C$. Therefore, if we get the vector that goes from $B$ to $C$ and divide it by 4, we have $\overrightarrow{BD}$. Now, to get from $B$ to $C$ using the vectors we’ve been given, we can go via $A$. To get from $B$ to $A$, we have to go backwards along $\overrightarrow{AB}$, so we will take the negative of it. Therefore, we get
$\overrightarrow{BC}=-\overrightarrow{AB}+\overrightarrow{AC}=-3\mathbf{a}+4\mathbf{b}$
Dividing this by 4, we get
$\overrightarrow{BD}=\dfrac{1}{4}\overrightarrow{BC}=-\dfrac{3}{4}\mathbf{a}+\mathbf{b}$
Therefore, adding this to $\overrightarrow{AB}$, we get the answer to be
$\overrightarrow{AD}=\overrightarrow{AB}+\overrightarrow{BD}=3\mathbf{a}+\left(-\dfrac{3}{4}\mathbf{a}+\mathbf{b}\right)=\dfrac{9}{4}\mathbf{a}+\mathbf{b}$
Thus, we have our answer in terms of $\mathbf{a}$ and $\mathbf{b}$.
Note: another common type of question is “prove one vector is parallel to another”. To do this, you work out both vectors in question, and then if one is a multiple of the other, they are parallel.
## Example Questions
#### 1) Let $\mathbf{a}=\begin{pmatrix}3\\8\end{pmatrix}$ and $\mathbf{b}=\begin{pmatrix}-7\\2\end{pmatrix}$. Write $2\mathbf{a}+\mathbf{b}$ as a column vector.
Firstly, to multiply $\mathbf{a}$ by 2, we must multiply both of its components by 2:
$2\mathbf{a}=2\times\begin{pmatrix}3\\8\end{pmatrix}=\begin{pmatrix}6\\16\end{pmatrix}$
Then, to add this to $\mathbf{b}$, we must add the $x$ values and $y$ values separately. Doing so, we get the answer to be
$2\mathbf{a}+\mathbf{b}=\begin{pmatrix}6\\16\end{pmatrix}+\begin{pmatrix}-7\\2\end{pmatrix}=\begin{pmatrix}-1\\18\end{pmatrix}$
#### 2) In the diagram below, we have vectors $\overrightarrow{AE}=3\mathbf{a}-2\mathbf{b}$ and $\overrightarrow{DC}=2\mathbf{a}+4\mathbf{b}$. $E$ and $B$ are the midpoints of $AD$ and $AC$ respectively. Find an expression for $\overrightarrow{EB}$ in terms of $\mathbf{a}$ and $\mathbf{b}$ and state whether or not it is parallel to $\overrightarrow{DC}$.
There are a lot of steps here, so take your time to read through it and make sure you understand.
We will find $\overrightarrow{EB}$ by doing
$\overrightarrow{EB}=\overrightarrow{EA}+\overrightarrow{AB}$
The first vector is straightforward, because we know $\overrightarrow{AE}$, and that is just the same vector in the opposite direction. So, we get
$\overrightarrow{EA}=-\overrightarrow{AE}=-(3\mathbf{a}-2\mathbf{b})=-3\mathbf{a}+2\mathbf{b}$
Now we need $\overrightarrow{AB}$. Since $B$ is the midpoint of $AC$ (given in the question), we must have that $\overrightarrow{AB}=\frac{1}{2} \overrightarrow{AC}$. Therefore, looking at the diagram, we get that
$\overrightarrow{AB}=\dfrac{1}{2}\overrightarrow{AC}=\dfrac{1}{2}\left(\overrightarrow{AD}+\overrightarrow{DC}\right)$
We’re given the second part of this, $\overrightarrow{DC}=2\mathbf{a}+4\mathbf{b}$, and since $E$ is the midpoint of $AD$, we can also work out the first part:
$\overrightarrow{AD}=2\overrightarrow{AE}=2(3\mathbf{a}-2\mathbf{b})=6\mathbf{a}-4\mathbf{b}$
Now, at long last, we have everything we need and can go back through our work, filling in the gaps. Now we have $\overrightarrow{AD}$, we get that
$\overrightarrow{AB}=\dfrac{1}{2}\left(6\mathbf{a}-4\mathbf{b}+2\mathbf{a} +4\mathbf{b}\right)=\dfrac{1}{2}\left(8\mathbf{a}\right)=4\mathbf{a}$
Therefore, finally we have that
$\overrightarrow{EB}=\overrightarrow{EA}+\overrightarrow{AB}=-3\mathbf{a}+2\mathbf{b}+4\mathbf{a}=\mathbf{a}+2\mathbf{b}$
If $\overrightarrow{EB}$ and $\overrightarrow{DC}$ are parallel, then one must be a multiple of the other. Well, if we multiply $\overrightarrow{EB}$ by 2 then we get
$2\times\overrightarrow{EB}=2(\mathbf{a}+2\mathbf{b})=2\mathbf{a}+4\mathbf{b}=\overrightarrow{DC}$
Therefore, we’ve shown that $2\overrightarrow{EB}=\overrightarrow{DC}$, and thus the two lines must be parallel.
## Vectors Revision and Worksheets
MME Vectors
Level 6-7
Vectors
Level 6-7
Vectors 2
Level 6-7
Vectors 3
Level 6-7
Vectors 4
Level 6-7
Vectors 5
Level 6-7
Column Vectors
Level 6-7
Whether you are a GCSE Maths tutor in London or a Harrogate Maths tutor, you will find the vector resources on this page useful. From parallel vector questions to proof and perpendicular vectors, there are many different types of questions that can be used to stretch higher ability students in this topic. Browse our vector worksheets and revision tests and see what you want to incorporate into your resource bank.
|
# Finding the equation of a circle given two points on the circle
11. Find the equation of the circle which touches $x^{2} + y^{2} - 6x + 2y + 5 = 0$ at $(4, -3)$ and passes through $(0, 7)$.
My textbook has a worked example for obtaining the equation of a circle from three points on the circle. It also talks you through obtaining the equation of a circle if you're given two points on the circle and if it touches an axis (in this case you know a coordinate of the centre will be $\pm r$, where $r$ is your radius.)
I am reasonably well-practised at using these two techniques. I have also been practising finding the length of a tangent from a given point.
However, I have so far been unable to make the "jump" to this question, I suspect there's something I'm not seeing. So I was hoping for a hint that would help me work out how to approach this question.
Lets say the equation of the circle is given by:$$(x-a)^2+(y-b)^2=r^2$$Make use of the fact that the circle passes through $(4,-3)$ and $(0,7)$ to form two equations.
You are also told that it touches the circle:$$x^{2} + y^{2} - 6x + 2y + 5 = 0$$at $(4,-3)$ which mean the tangents of both circles at this point must be equal. This gives you a third equation.
You now have three equations and three unknowns which you should be able to solve.
• So we have $(4 - a)^{2} + (-3 - b)^{2} = r^{2}$, $a^{2} + (7 - b)^{2} = r^{2}$ and, unless I'm much mistaken, we also have the equation of the tangent to the circle at $(4, -3)$, which is $4y - 7x + 40 = 0$. But, I'm sorry, I'm not sure how to go from here to three valid equations pertaining to our circle in $a$, $b$ and $r$. – Au101 Apr 27 '15 at 22:24
• Use implicit differentiation to work out the slope of the tangent line at $(4,-3)$ for both circles. These slopes must be equal - this gives you your third equation. – Mufasa Apr 27 '15 at 22:30
Let the circle $$(x-3)^2+ (y+1)^2 = 5$$ touch the circle $C$ at $(4, -3)$. The line connecting the centers of the two circles has slope $$\frac{-1-(-3)}{3-4} = -2,$$ therefore the center of $C$ is at $(3+t, -1-2t)$ for some $t.$
Since both $(4,-3)$ and $(0,7)$ are on $C,$ equating the radius squared we have $$(3+t-4)^2 + (-1-2t+3)^2 =(3+t-0)^2 +(-1-2t-7)^2,$$ which gives $$-8(3+t)+16 - 6(1+2t)+9=14(1+2t)+49\ \to\ 48t=-68,\ t = -\tfrac{17}{12}$$
$$\text{center of } C \text{ is } \left(\tfrac{19}{12}, \tfrac{11}{6}\right), \text{ radius is } \sqrt{\tfrac{4205}{144}} \approx 5.40$$
• Sounds good, but - forgive me - why $(3 + t, -1 - 2t)$. I understand that the centre of $(x - 3)^{2} + (y + 1)^{2} = 5$ (the given circle) is $(3, -1)$, but why do we add $t$ to the $x$-coordinate and take $2t$ from the $y$-coordinate, I don't think I've followed. – Au101 Apr 27 '15 at 23:14
• @Au101, the reason is that the centers and the common point of contact are collinear. please see my edit. – abel Apr 27 '15 at 23:41
I've come back to this question after a while and have found a solution which agrees with that in the textbook. My method is based primarily on the tips given by @Mufasa.
Let the circle which touches $x^{2} + y^{2} − 6x + 2y + 5 = 0$ at $(4,−3)$ and passes through $(0,7)$ be $C_{1}$.
Let the centre of $C_{1}$ be $(p, q)$ and let the radius be $a$.
$(4 - p)^{2} + (-3 - q)^{2} = a^{2}$
$p^{2} + (7 - q)^{2} = a^{2}$
Let the circle $x^{2} + y^{2} − 6x + 2y + 5 = 0$ be $C_{2}$.
The centre of $C_{2}$ is $(3, -1)$.
The gradient of the radius of $C_{2}$ to $(4, -3)$ is $-2$.
$\therefore$ The gradient of the tangent to $C_{2}$ at $(4, -3)$ is $\frac{1}{2}$.
$\therefore$ The gradient of the tangent to $C_{1}$ at $(4, -3)$ is $\frac{1}{2}$.
$\therefore$ The gradient of the radius of $C_{1}$ to $(4, -3)$ is $-2$.
$\therefore \dfrac{q - (-3)}{p - 4} = -2$
$2p + q - 5 = 0 \qquad (1)$
$(4 - p)^{2} + (-3 - q)^{2} = a^{2} \qquad (2)$
$p^{2} + (7 - q)^{2} = a^{2} \qquad (3)$
Equating (2) and (3), we ultimately get:
$p = \dfrac{5}{2}q - 3 \qquad (4)$
Substituting $p$ into (1), we get:
$q = \dfrac{11}{6}$
Substituting $q$ into (4), we get:
$p = \dfrac{19}{12}$
Substituting $p$ and $q$ into (3) we ultimately get:
$a^{2} = \dfrac{4,205}{144}$
This gives us our equation of $C_{1}$:
$$\left(x - \frac{19}{12}\right)^{2} + \left(y - \frac{11}{6}\right)^{2} = \frac{4,205}{144}$$
Expanding and simplifying, we get:
$$6x^{2} + 6y^{2} - 19x - 22y - 140 = 0.$$
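As a quick cross-check (my own addition, using sympy; the symbol names mirror the working above), solving equations (1)-(3) symbolically reproduces the centre and radius found here.

```python
import sympy as sp

p, q, a2 = sp.symbols('p q a2')

eq1 = sp.Eq(2*p + q - 5, 0)                    # gradient condition, equation (1)
eq2 = sp.Eq((4 - p)**2 + (-3 - q)**2, a2)      # (4, -3) lies on the circle, equation (2)
eq3 = sp.Eq(p**2 + (7 - q)**2, a2)             # (0, 7) lies on the circle, equation (3)

print(sp.solve([eq1, eq2, eq3], [p, q, a2]))   # p = 19/12, q = 11/6, a^2 = 4205/144
```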
|
# Dedualizing Complexes of Bicomodules and MGM Duality Over Coalgebras
Authors
• 1 University of Haifa, Department of Mathematics, Faculty of Natural Sciences, Mount Carmel, Haifa, 31905, Israel , Haifa (Israel)
• 2 National Research University Higher School of Economics, Laboratory of Algebraic Geometry, Moscow, 119048, Russia , Moscow (Russia)
• 3 Institute for Information Transmission Problems, Sector of Algebra and Number Theory, Moscow, 127051, Russia , Moscow (Russia)
• 4 Charles University, Faculty of Mathematics and Physics, Department of Algebra, Sokolovská 83, 186 75 Prague 8, Prague, Czech Republic , Prague (Czechia)
Type
Published Article
Journal
Algebras and Representation Theory
Publisher
Springer Netherlands
Publication Date
Oct 12, 2017
Volume
21
Issue
4
Pages
737–767
Identifiers
DOI: 10.1007/s10468-017-9736-6
Source
Springer Nature
Keywords
We present the definition of a dedualizing complex of bicomodules over a pair of cocoherent coassociative coalgebras $\mathcal{C}$ and $\mathcal{D}$. Given such a complex $\mathcal{B}^{\bullet}$, we construct an equivalence between the (bounded or unbounded) conventional, as well as absolute, derived categories of the abelian categories of left comodules over $\mathcal{C}$ and left contramodules over $\mathcal{D}$. Furthermore, we spell out the definition of a dedualizing complex of bisemimodules over a pair of semialgebras, and construct the related equivalence between the conventional or absolute derived categories of the abelian categories of semimodules and semicontramodules. Artinian, co-Noetherian, and cocoherent coalgebras are discussed as a preliminary material.
|
Probability
Definition Of Probability
Probability is a numerical measure of the likelihood of occurrence of an event. The value of probability lies between 0 and 1.
If all outcomes of an experiment are equally likely, then the probability is given by: Probability of an event = (number of favorable outcomes) ÷ (total number of outcomes).
Example of Probability
The probability of picking a blue marble from a basket containing 10 blue marbles (and nothing else) is 1.
Suppose you toss a fair coin. Then the probability of tossing a head or a tail is 1/2.

Example: A basket holds 60 fruits in total: 30 of one kind, 20 of another, and 10 peaches. If one fruit is picked at random, what is the probability that it is a peach?

A. 1/3
B. 1/2
C. 1/6
D. 1/5
Solution:
Step 1: Total number of fruits in the basket = 30 + 20 + 10 = 60.
Step 2: Number of peaches in the basket = 10.
Step 3: The probability of taking a peach from the basket = 10/60 = 1/6. [From the definition.]
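The same arithmetic in a couple of lines of Python (illustrative only, not part of the original page):

```python
from fractions import Fraction

total_fruits = 30 + 20 + 10             # Step 1
peaches = 10                            # Step 2
print(Fraction(peaches, total_fruits))  # Step 3: 1/6
```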
|
MathSciNet bibliographic data MR356049 (50 #8520) 55E45 Thomas, Emery; Zahler, Raphael S. Nontriviality of the stable homotopy element $\gamma_1$. J. Pure Appl. Algebra 4 (1974), 189–203. Article
|
# Implementing a programmable variable resistor
The R-2R network can provide digital-to-analogue conversion of arbitrary precision using resistances of only two values. Is there a corresponding circuit architecture that switches resistance values to provide a variable resistance?
I want to implement a digitally programmable resistance. The dream solution would be something that could produce all of the E24 series over 5 decades, i.e. 100, 110, 120, ... 82M, 91M, 100M, but I would settle for a proof of concept - a design (scalable) that could accept say, a 4-bit digital input and give 16 exponentially or linearly increasing resistance values. The unit needs to be a two-terminal fully isolated unit and would probably use reed relay switches to provide isolation and minimize switch resistance losses.
A trivial solution to this would use a resistor of every required value, a corresponding reed relay switch, and a whole heap of logic to turn on the appropriate switch. I'm hoping there is a circuit that uses one switch per bit and fewer resistors.
• This is called a digital potentiometer. – Hearth May 19 '17 at 21:05
• To add on to that, it should be noted that digital potentiometers are very sensitive to overvoltage and excessive power, not at all like the robust resistors you'd probably be familiar with. If you want a high-power device, you might be able to do something with a power MOSFET (or MOSFET module) in linear mode with a control system measuring the source-drain voltage and adjusting the gate-source voltage to keep the current proportional to the voltage. – Hearth May 19 '17 at 21:09
• Thanks for commenting. It's not really a digital potentiometer as that would be a three-terminal device. While a digital potentiometer would work if I only used two of the terminals, I'm not sure that they are available with full isolation between the control and signal. I won't necessarily have any more access to the system under test than the two termination points of the resistor I'm wanting to emulate, so an active solution using FETs is probably out of the question. – rossmcm May 19 '17 at 22:14
• Well, as you mentioned, you could just use two of the terminals! Really, though, Spehro's answer below is much better than what I said. I don't know why I didn't think of a decade box with relays..! – Hearth May 19 '17 at 22:17
So 20 resistor values will give you from 1 ohm to 1.048575 megohm in steps of 1 ohm (however it will surely not be monotonic!). The n'th resistor, for n from 0 to 19, would be $2^n\Omega$, so 1$\Omega$, 2$\Omega$, 4$\Omega$, 8$\Omega$, 16$\Omega$, ..., 524288$\Omega$.
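The start of this answer appears to be cut off; the idea, as I read it, is a series chain of binary-weighted resistors with a relay shorting out each one, so the total resistance simply equals the control code. A minimal Python sketch of that assumption (the function and names are mine, not from the answer):

```python
def chain_resistance(code: int, bits: int = 20) -> int:
    """Total series resistance in ohms: resistor n (2**n ohms) is in circuit when bit n of code is 1."""
    return sum(2 ** n for n in range(bits) if code & (1 << n))

print(chain_resistance(0b1010))       # 10 ohms
print(chain_resistance(2 ** 20 - 1))  # 1048575 ohms - the 1.048575 megohm maximum mentioned above
```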
|
# Sidon Sets and Diophantine Equation
Suppose $$X$$ is a subset of $$\{1, \cdots, n\}$$ such that the equation $$ax_i+bx_j=cx_k+dx_{\ell}$$ where $$a+b=c+d,$$ $$a,b,c,d \in \mathbb{N}$$ and $$x_i, x_j, x_k, x_{\ell} \in X,$$ has only trivial solution. A solution is trivial if $$x_i=x_j=x_{k}=x_{\ell}.$$
What can we say about the size of $$X?$$ Is it possible that $$|X|\geq n^{1-o(1)}$$?
I think the answer is related to Sidon Sets, but I could not find any references. Any help is greatly appreciated.
• are $a,b,c,d$ fixed? Else there are solutions like $a=c,b=d$, $x_i=x_k$, $x_j=x_l$. – Fedor Petrov Oct 7 '18 at 19:53
• also maybe you need the reverse inequality for $|X|$? – Fedor Petrov Oct 7 '18 at 20:01
• Then "is it true" must be read as "is it possible"? And what is $\epsilon$? – Fedor Petrov Oct 7 '18 at 20:07
• what does it mean "$\epsilon$ is a positive constant depends on $n$?" – Fedor Petrov Oct 7 '18 at 20:53
• in any case: you may construct the set by adding the elements one by one, this allows to get about $c\cdot n^{1/3}$ elements for free. Is it enough? – Fedor Petrov Oct 7 '18 at 20:54
Such $$X$$ do indeed exist, and are explicitly constructed in I.Z. Ruzsa, Solving a linear equation in a set of integers, Acta Arith., LXV.3 (1993), pp. 259-282, Theorem 7.5. The whole paper is devoted to upper and lower bounds on sizes of solution-free sets to equations such as the one you are asking about. A mathscinet forward search from that paper should yield further results.
|
# Covid 12/24: We’re F***ed, It’s Over
post by Zvi · 2020-12-24T15:10:02.975Z · LW · GW · 147 comments
## Contents
The Numbers
Predictions
Deaths
Positive Test Percentages
Positive Tests
Test Counts
Covid Machine Learning Projections
Europe
The English Strain
All I Want For Christmas is a Covid Vaccine
Theoretical Vaccine Auction
The Chosen Alternative To That Auction
Face Saving
Quest for the Test
In Other News
What Happens Now?
UPDATE 7/21/2021:
None
UPDATE 7/21/2021: As you doubtless know at this point, it was not over. Given the visibility of this post, I'm going to note here at the top that the prediction of a potential large wave of infections between March and May did not happen, no matter what ultimately happens with Delta (and the prediction was not made with Delta in mind anyway, only Alpha). Some more reflections on that at the bottom of this post here [LW · GW].
A year ago, there were reports coming out of China about a new coronavirus. Various people were saying things about exponential growth and the inevitability of a new pandemic, and urging action be taken. The media told us it was nothing to worry about, right up until hospitals got overwhelmed and enough people started dying.
This past week, it likely happened again.
A new strain of Covid-19 has emerged from southern England, along with a similar one in South Africa. The new strain has rapidly taken over the region, and all signs point to it being about 65% more infectious than the old one, albeit with large uncertainty and error bars around that.
I give it a 70% chance that these reports are largely correct.
There is no plausible way that a Western country can sustain restrictions that can overcome that via anything other than widespread immunity. This would be the level required to previously cut new infections in half every week. And all that would do is stabilize the rate of new infections.
Like last time, the media is mostly assuring us that there is nothing to worry about, and not extrapolating exponential growth into the future.
Like last time, there are attempts to slow down travel, that are both not tight enough to plausibly work even if they were implemented soon enough, and also clearly not implemented soon enough.
Like last time, no one is responding with a rush to get us prepared for what is about to happen. There are no additional pushes to improve our ability to test, or our supplies of equipment, or to speed our vaccine efforts or distribute the vaccine more efficiently (in any sense), or to lift restrictions on useful private action.
Like last time, the actions urged upon us to contain spread clearly have little or no chance of actually doing that.
The first time, I made the mistake of not thinking hard enough early enough, or taking enough action. I also didn’t think through the implications, and didn’t do things like buying put options, even though it was obvious. This time, I want to not make those same mistakes. Let’s figure out what actually happens, then act upon it.
We can’t be sure yet. I only give the new strain a 70% chance of being sufficiently more infectious than the old one that the scenario fully plays out here in America before we have a chance to vaccinate enough people. I am very willing to revise that probability as new data comes in, or based on changes in methods of projection, including projections of what people will decide to do in various scenarios.
What I do know is we can’t hide our heads in the sand again. Never again. When we have strong Bayesian evidence that something is happening, we need to work through that and act accordingly. Not say “there’s no proof” or “we don’t know anything yet.” This isn’t about proof via experiment, or ruling out all possible alternative explanations. This is about likelihood ratios and probabilities. And on that front, as far as I can tell, it doesn’t look good. Change my mind.
The short term outlook in America has clearly stabilized, with R0 close to 1, as the control system once again sets in. Cases and deaths (and test counts) aren’t moving much. We have a double whammy of holidays about to hit us in Christmas and New Year’s, but after that I expect the tide to turn until such time as we get whammied by a new more infectious strain.
Instead of that being the final peak and things only improving after that, we now face a potential fourth wave, likely cresting between March and May, that could be sufficiently powerful to substantially overshoot herd immunity.
Let’s run the numbers.
## Predictions
Last week’s prediction: 13.1% positive rate on 11.5 million tests, and an average of 2,850 deaths per day.
Results: 13.7% positive rate on 10.7 million tests, with an average of 2,677 deaths.
We didn’t test substantially more people than last week, and the positive test percentage didn’t fall much, and the death rate didn’t rise much. Everything in a holding pattern. Some of that could be pending Christmas issues. Fool me twice, and all that.
This next week is Christmas plus New Year’s, so reporting issues are inevitable. Another week for wide error bars.
Prediction: 13.6% positive rate on 10.1 million tests, and an average of 2,500 deaths per day.
Note that I expect the deaths decline, at least, to be about reporting rather than about actual numbers, which I don’t think will start to decline much for a bit longer.
## Deaths
Death rates didn’t rise all that much due partly to the decline in the Midwest, but they are still up in the other three regions and substantially in the West and South. It seems clearly a few weeks too early to hit peak deaths based on when we had peak infections.
Going forward, the vaccine will start to be effective for those who get infected next week, so to the extent we are protecting residents of nursing homes, we’ll see that effect in the death rate start to be noticeable in late January. I expect deaths to be in decline by then.
## Positive Test Percentages
Quiet on all fronts. Midwest clearly in slow decline, looks like other regions also ready to follow, as soon as we get past the holidays. There’s also an overall one-time bump coming of unknown size, but after that we should be clear until the new English or South African strain becomes a problem.
## Positive Tests
Also, this seems like a nice graphic:
But it’s also potentially misleading, because the increased cases in the deep South are mostly increased testing. The California increase is largely real. It would make sense that California would be the place that has stalled its crisis for the longest, between not having cold weather and imposing draconian restrictions all year, and so perhaps it’s finally time for them to face the music.
## Test Counts
This is the silent scandal no one is talking about. Why are we no longer expanding testing? It seems clear now that our capacity hasn’t been expanding in December. It’s clear that demand greatly exceeds supply, and that more testing would be a huge help. When things were improving slowly, at least they were improving. Now it looks like we are stalled out, well short of where we need to be. Vaccinations are important, but until we get a lot farther along on those, so are tests.
## Covid Machine Learning Projections
Machine learning projections say infections have been static since about November 25, which mostly matches the testing data. We can assume that the projections will keep saying similar things for the next two weeks.
Their predicted total infected is up to 19.2% on December 9, stabilized at 623k new infections per day. The total is up from 17.9% on December 2. As a reminder, I consider these lower bound estimates.
The immunity effects here compound fast. Even if you assume people get infected completely at random, going from 17.9% to 19.2% immune reduces R0 by 1.6%, which reduces infection levels by that much every five days or so, or 9% per month, and we’re introducing that effect permanently each week. After a month of this level of effect, you’ll see a 16% decline from newly immune people alone. After two months, infection levels would be cut in half.
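A quick back-of-the-envelope check of those numbers (my own sketch, assuming fully random infection and a roughly five-day serial interval):

```python
immune_before, immune_after = 0.179, 0.192
step = (1 - immune_after) / (1 - immune_before)  # multiplier on R0 from one week of new immunity
print(f"{1 - step:.3f}")       # ~0.016, i.e. about a 1.6% cut
print(f"{1 - step**6:.2f}")    # ~0.09, i.e. roughly 9% over a month of five-day cycles
```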
Selection of who gets infected makes the effect bigger, and also we get to add vaccinations.
Of course, the control systems ensure it does not work that way, as people will notice things improving and take more risks, but it’s worth noting that things will start rapidly getting better if we can only hold onto our current levels of prevention, and let immunity from all sources do the job from there.
## Europe
Going to use short-dated graphs to improve readability. If you want the longer view you can get it at OurWorldInData, or previous weekly Covid posts.
The positive test percentages chart is so incomplete and all over the place that I’m going to stop posting it, but you can go to the source if you still want it.
Deaths in Europe continue to run close to those in the United States, suggesting the Europeans are finding cases less often than we are, or have worse medical care or are worse at protecting vulnerable populations.
Then there’s that United Kingdom graph going rapidly vertical in infections. Turns out, there’s a reason, and it’s not that they lifted their restrictions…
## The English Strain
The big news this week is that England has identified a new strain of Covid-19 that is ‘up to 70% more infectious’. The new strain dominates in southern England, including London, and the graphs tell a rather clear story.
Oh no.
You’re probably wondering the same question I was when I read that, which is that we know that ‘up to’ means we’re not willing to commit to anything at all (did you know I am up to 15 feet tall? It’s true!), but what the hell does ‘70% more infectious’ mean?
It could mean a lot of different things.
To me, there are two natural hypotheses for what it means.
One sensible definition of this is that 70% more people get infected each day, so it raises R0 by that percentage. If previously things were stable, 70% more infectious would cause infections to rise 70% each serial interval, which I’ve been approximating at about five days. So if it was this, things would about double each week.
Alternatively, it could mean that any given physical interaction was 70% more likely to infect you. This seems unlikely to be it, because how would anyone know what this value was, but it still makes at least some intuitive sense and has some practical value. So if before, if you went home for Christmas and someone was infected, you’d have a 10% chance of getting Covid-19, now that number is 17%.
The difference between those two is that if you get exposed multiple times, you can only get infected once, so the first scenario is a bigger jump in cases than the second one. Depending on how much ‘overkill’ you think takes place when people get infected, the difference could be big or it could be small.
As a third option, it could mean any given exposure is effectively as if you were exposed to 70% more virus. Chance of catching the virus is non-linear with viral load, so in some ways this is a more than 70% increased risk (if previously load was below threshold to get infected, and now it isn’t), and in other ways it could be less once you get to the other end of the curve. This also changes the distribution of initial viral loads, in ways that might be good or bad for outcomes and death rates. If you are reliably getting higher initial loads, that’s bad. If you are getting infected despite low loads then, given that we know you’re infected, that’s good, perhaps quite good.
What I definitely didn’t consider was the possibility that this was measuring the length of the doubling time because that’s not remotely a fixed number and using this doesn’t make any sense and arrrrrgh and then I saw this [LW(p) · GW(p)] on this post [LW · GW]:
What is still driving me crazy is that CellBioGuy presumed that this was what they meant. I mean, I certainly hope it is what they meant in terms of ‘that physical property of the world would kill less people’ but I can’t help but notice it would be completely insane in a way that even my model of predicted general insanity isn’t handling well. The model isn’t even handling the presumption by CellBioGuy very well here.
So it turns out that CellBioGuy was wrong here, and this refers to the sensible thing of “percent rise in infections each cycle” R0 thing:
(Do check out the rest of that comment by CellBioGuy [LW(p) · GW(p)] anyway – even though the presumption in question turned out to be wrong, the rest of the comment has a lot of good gears-level details on various issues, but it’s too long to put here in full. I’d like to know how well the rest holds up.)
So on the plus side, statistics are being reported in a way that is relative sane.
On the minus side, this seems rather like it can be summed up as: We’re fucked, it’s over.
This is estimated as a 65% increase in infectiousness. If we want to stabilize infections in an area that was previously stable we’d need what would previously have been an R0 of about 0.6. If you have an R0 of 0.6 that means you would have previously been cutting infections in half each week or so.
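For the arithmetic behind that (my sketch, using the 65% estimate and a roughly five-day serial interval):

```python
old_r0_needed = 1 / 1.65                   # what the old strain's R0 must be for the new one to sit at 1.0
print(round(old_r0_needed, 2))             # ~0.61
print(round(old_r0_needed ** (7 / 5), 2))  # ~0.5 - infections would have been halving every week
```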
Does that sound like something any Western country could possibly accomplish from here? What would even trying to do that even look like? Is there any chance people would stand for what was necessary to do that?
And that’s only what it takes to get a holding pattern.
Under such dire circumstances, a phase 4 lockdown has been invoked. What does that mean? Glad you asked, here are the guidelines
The missing restrictions that stick out are not shutting down houses of worship, and allowing people to move house. Also funerals can go up to 30 people, whereas weddings are capped at 6. Also ‘support groups’ can meet up to 15 people and I don’t see anything saying it needs to be outdoors.
Whereas essentially any social contact of any kind is forbidden (e.g.: “You cannot meet people in a private garden, unless you live with them or have formed a support bubble with them”), your ‘bubble’ is highly restricted in its sizing, and you’re not allowed to be outside without a ‘reasonable excuse’ although those include groceries, going to the bank or exercising.
So basically, if you’re outside where it’s safe, they’ll harass you and maybe worse. Whereas if you stay inside, technically it’s not allowed but in practice it’s a lot less likely anything happens to you, unless the anything in question is ‘you catch Covid-19.’ The rules are porous enough that they aren’t enforceable against the things that are risky but enforceable enough to shut down the relatively safe actions that keep people sane. And with weird exceptions for remarkably large indoor gatherings for certain events that are textbook superspreaders.
All of which is what our model expects to see, and none of which seems likely to be remotely sufficient if the new strain is as infectious as they estimate.
The strain has already been seen in several other countries. Flights between the United States and the United Kingdom have not been shut down. Many European countries are shutting down some travel, which will slow things down a bit, but headlines like this one…
…illustrate that slowing things down is all that’s being aimed at. Which is good, because it’s too late anyway. There would not be any drivers to test if this was a real attempt at containment.
If the estimate of 65% more infectious is correct: The strain doubles every week under conditions where other strains are stable.
My father sent me this video (24 min) that makes the case for all of this being mostly a nothingburger. Or, to be more precise, he says he has only low confidence instead of moderate confidence that the new strain is substantially more infectious, which therefore means don’t be concerned. Which is odd, since even low confidence in something this impactful should be a big deal! It points to the whole ‘nothing’s real until it is proven or at least until it is the default outcome’ philosophy that many people effectively use.
Note that he also suggests the new strain is likely to be less virulent, and make us less sick, which could also be part of why it’s more infectious. If so, that’s great news (I can think of a scenario where it is actually bad news, but it’s an unlikely corner case).
He also points out correctly that a lot of nations don’t do much sequencing, so we should assume the new variant can’t be contained to England at this point. Doesn’t mean we shouldn’t try in order to slow it down, but such efforts will still fail.
The video seems strong on the scientific details, and the speaker strikes me as an excellent explainer/teacher, which is why I’m willing to link to it.
Alas, as is often the case with academics that are good at learning and explaining scientific things, the epistemology is bonkers. His core argument is: “You cannot use epidemiological data to prove a biological property.” With a side of Covid-19 being spread mostly by super-spreaders (true) and thus the new variant could be winning at random.
All of which is not how knowledge or Bayes’ rule works. It’s not how any of this works.
There is a valid point here, of course. Relying solely on the numerical growth of the strain or of infections in England generally, without looking at the context, doesn’t provide that much evidence. There are often other explanations. And his points about mutation in and of itself being commonplace and mostly harmless are well taken.
That doesn’t change that evidence is evidence, and a likelihood ratio a likelihood ratio. Experiments are not some special class of thing that are the only way one can make predictions or assign probabilities, and it’s weird that people can be so good at academic scientific thinking while not understanding this, and in fact it seems that when we train people to do academic science we also train them to not think about other information and to be careful not to use Bayes’ rule and to ridicule anyone who tries to use non-academic information in order to know things.
With that in mind, we look at the evidence and think about possible explanations, and mostly I find that there aren’t other plausible ones worth assigning much weight to.
But then again, when you consider the context of England being under lockdown conditions that had previously turned the tide, and that have stabilized the situation elsewhere in Europe…
And combine it with the share of infections from the new strain from the earlier chart, and work out what those combine to imply…
This definitely does qualify under “hot damn, look at this chart.” This is a huge, dramatic increase in infections happening very quickly. A doubling in one week.
Note also that the warnings went out to the public on December 19. The out-of-sample data from the next few days strongly reinforce the hypothesis that we’re screwed.
The other plausible causes of such a rapid rise are not present. England didn’t suddenly relax its conditions this much. The law of large numbers is more than sufficient to make me very dismissive of ‘random chance via superspreader events’ as an explanation. How big are these superspreader events?
If my understanding of the situation is correct, there is only one conclusion:
This variant cannot be stopped short of mass vaccinations. It is not going to be stopped short of mass vaccinations.
All that is left is a holding action. Realistically, we can’t make enough of a dent to turn the tide of cases of the new strain until at least May. That’s about twenty weeks from now. That’s twenty doublings. So for every case that’s escaped to the United States so far, we can expect (does quick math) a million cases. Then another few doublings in June and July.
I do think we can do better than that, because it appears the tide is starting to turn now on the old strain, we’ll get a bunch of incremental help from increased immunity along the way, and control systems will set in.
But mostly, it seems like if you have vaccines and people who don’t want to die, you might want to hurry. I’ll get back to gaming the scenario out in the conclusion section.
(This prediction is repeated at the end of this post.)
## All I Want For Christmas is a Covid Vaccine
Track it here.
Or at Bloomberg here.
So, then, about that vaccine effort. How goes the distribution of the vaccine? Well (Twitter video link), funny story
From CNN:
It seems that Pfizer executives are sitting around baffled that they have millions of additional doses of vaccine, and those doses are sitting on shelves unused, with continuous risk of spoilage, while they help no one. We have confirmation that this is explicitly not holding back second doses, and is coming as a complete surprise to Pfizer.
As a reminder, using a few million additional doses will cut infections by about 1% compounding each week, if distributed at random. If used selectively, an extra few million can cause a double digit drop in the death rate a month out, due to how concentrated deaths are among the elderly.
Distinctly from the issue of vaccine going to waste, the estimates given to states were too high, for which (Twitter video link) the head of Operation Warp Speed has taken responsibility. Such admissions are so virtuous and unexpected that they are clear signs of competence and trustworthiness. I agree strongly with Alex Tabarrok here, and my esteem for those involved went way up rather than down.
Oh, and also:
Versus:
So there’s that. While regular folk are told to cancel the holidays. Merry Christmas, suckers.
Still, CNN again:
If true, that would be two weeks behind schedule, but assuming all of this is on a different track than manufacturing, so long as the process doesn’t keep falling further behind, I guess it’s not actually so bad? That’s a big ‘so long as.’ But it’s a plausible one, as initial difficulties don’t have to translate into difficulties once things are running and it’s not Christmas.
Then there’s the issue of that second dose. The science behind why you need two doses is strong and makes sense, except for the little matter of the data. It’s clear that the first dose alone is much more than half as effective as two, and we don’t have enough doses. And the first thing I asked when I saw that result was something that is once again being pointed out. Booster shots do not have to be given two weeks after the first shot. The measles booster comes a year later. Multiple sources confirm that there is no reason to expect a six-months-later second dose to be any less effective a booster.
We are mostly wasting an entire half of our vaccine supply by dosing exactly the worst people with it – the people who got the first dose. I can see double dosing those in nursing homes anyway, because they’re 1% of the population and over a third of deaths, so even a small additional boost is still worthwhile, and their immune systems are weak so it’s reasonable to worry the first dose alone won’t get the job done there. But beyond that, there’s no excuse I can see beyond saying the rules are the rules.
Here’s where we are, it seems:
Doses that arrive on July 31 are not doses I expect to prevent many infections. And this total does not leave much margin for error, we barely have enough doses if we don’t use AstraZeneca and insist on everyone getting two shots.
How goes the vaccine quest elsewhere?
In Germany, not so well.
A real case can be made that, given our inability to do suppression properly, a Western government’s policy ends up mostly coming down to its effect on the vaccination schedule. Did it advance vaccine research?
On those fronts, we come out well ahead of the European Union. Does anything else matter by comparison? And how did we manage to do that?
Casey Mulligan reminds us that one of the things Trump has been doing for a while is getting the FDA to kill fewer people by speeding up and streamlining its processes. So when the time came for Operation Warp Speed, the concept was shovel ready. The “experts” all said eighteen months minimum and the “experts” got ignored. We certainly could have pushed much harder much faster, even given initial conditions. I’m not going to stop pointing that out. The decision to balk at buying extra doses might be the worst single decision of the last four years, of any kind. But when I model alternative administrations, from either tribe, I model a much slower vaccine effort. When we reach the end, it’s not clear that won’t matter more than every other decision combined.
In England, it seems they are going to be testing a vaccine grown using tobacco plants? Approval to begin trials is in. This seems like a great illustration of our regulatory state, because you can’t take the life-saving vaccine they grow with the tobacco plants, but you can take the tobacco in your pipe and smoke it even though we know it kills you. Okay, then.
## Theoretical Vaccine Auction
If you want to properly allocate a scarce resource like a vaccine, obviously you use an auction. I said it last week, people righteously said that things are not worth to the customer what the customer will pay for them because poor people have less money than rich people, and no, sorry, that’s not how this works, that’s not how any of this works.
The good news for those who disagree is that there is not the slightest danger of any attempt at an efficient allocation of the vaccine.
Externalities, on the other hand, are definitely real. To the extent that they are different for different people, we should take that into account. In theory, an auction can do that because others can subsidize the bids of those whose vaccination is beneficial to them. In a first best situation lots of people would subsidize lots of others near them and the government at various levels would also supplement bids, but presumably people mostly won’t get such acts together and it won’t happen except highly locally, inside families and corporations.
My guess is that for all the talk of externalities, they’re not that different for different people other than those who were previously being irresponsible, and subsidizing them for that seems like it has some large moral hazard issues, plus the worst offenders probably already had it. The exception would be those who provide a lot of value if they can expose themselves to the virus in ways they otherwise couldn’t, but who are not being paid enough for that service to place the bids themselves. Health care workers in hospitals likely qualify, but you can fix that if you subsidize the hospital to bid for them, and generally that class of solution seems like it should work.
Thus, the obvious solution of “N-lot, pay N+1th-highest-bid auction that repeats every so often” seems like it’s just correct on first principles, and then once you run the first one likely you can slowly reduce price to keep supply and demand balanced without having to go through all the trouble of more auctions?
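For concreteness, here is a minimal sketch of the kind of auction described above, assuming sealed bids for identical doses; the bidder names and amounts are made up for illustration.

```python
# Minimal sketch of an "N-lot, pay the (N+1)th-highest-bid" auction: sell N
# identical doses, award them to the N highest bidders, and charge everyone
# the same clearing price, namely the highest losing bid.
# Bidder names and bid amounts are made-up illustrations.

def uniform_price_auction(bids, n_doses):
    """bids: dict mapping bidder -> bid. Returns (winners, clearing_price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [name for name, _ in ranked[:n_doses]]
    # The price is set by the first losing bid; if everyone wins, no bid
    # constrains the price and we fall back to zero.
    clearing_price = ranked[n_doses][1] if len(ranked) > n_doses else 0
    return winners, clearing_price

bids = {"A": 5000, "B": 1200, "C": 900, "D": 450, "E": 300}
print(uniform_price_auction(bids, n_doses=3))  # (['A', 'B', 'C'], 450)
```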
So that’s not that interesting a paper and also it’s about four lines long, so it was odd when I followed a Marginal Revolution link to this paper about how to auction off the vaccine only to find a much longer paper.
Which goes on to suggest this:
Which is exactly why no one wants to participate in auctions or read economics papers. I believe that the paper’s approach boils down to “everyone writes down how much they value each person getting the vaccine at each possible point in time, drawing that curve for everyone” and then the auctioneer solves for the best possible solution that maximizes surplus, and everyone pays the marginal cost imposed on others behind them, as measured by how much they don’t want to wait one more slot to get their vaccine.
Which, I mean sure, that’s technically correct of course, the best kind of correct, but it’s also making my brain hurt reading it and that’s despite intuitively knowing the answer the moment the question was asked. No one is going to want to think about let alone write down full utility curves, the practical cost would be enormous.
And getting people in the wrong spot in line by a little bit is at worst a small mistake, and also not that meaningful given the logistics of vaccine administration. The vaccine is not exactly being teleported around.
Thus, I’m pretty sure in practice this reduces to ‘find the price that mostly clears the market and charge that price, then adjust it to keep clearing the market.’ The whole thing where others can bid on your behalf is how ‘people can buy things with dollars’ works already.
In response to the people saying ‘but poor people are poor and rich people are rich, you bastards’ they say the following:
This is, essentially, distribute a valuable asset according to politics, then allow trade. Which is a worse version of full free trade, since it involves political reallocation of wealth and (as noted by Myerson) results in a lot of profitable trades not happening because of various frictions, and also makes people feel bad about not selling their vaccine dose and also about selling their vaccine dose, because choices are bad. It seems obviously better than allocating via politics alone, since most of the trades that happen will be hugely profitable to both sides – the person selling will get a payment much, much bigger than they’d have found necessary, and the person buying will pay much, much less than the amount they value the vaccine. The ZOPA (zone of possible agreement) here is mostly huge. I have a hard time imagining that choices being bad overcomes that.
What I personally would love is a form of abstract that is “the paper that we would have written if we were only trying to get the central point across and didn’t care about formal anything at all.” Not quite You Have About Five Words [LW · GW] (I keep thinking the ‘have’ is ‘get’ and having to correct myself, maybe the title should change) but mostly not that much more than that. A one-page Econ 101 cheat sheet can cover actual everything the class teaches, and most economics papers have at most one paragraph of actual information.
Here, it sure seems like the whole paper boils down to “Standard auction theory applies here.”
It usually does.
This, for example, is a great use case. Not Covid-19 related or in any way essential, but pretty great.
Another modest proposal is to fix the prices and then let people pay them, skipping over the auction as such, so we know we can raise $50 billion for vaccine production and distribution (and motivation, shall we say) by letting a few people skip the line. That makes it easy to see it’s an absurdly good deal, but also either wouldn’t get enough takers or is leaving a lot of money on the table, in exchange for being easy to think about.

Also, that post points out something that seems important. What could be a better way to motivate people wanting the vaccine, than to show our richest, most famous and most powerful people paying really big bucks to get the shot a few months sooner?

## The Chosen Alternative To That Auction

Meanwhile, in “vaccine prioritization via politics and power might occasionally have some issues” news (Twitter video link to protesters protesting during meeting, and thread):

If I am interpreting this correctly, those with power used politics to get vaccinated first, ahead of nurses and residents. Residents, of course, are indentured servants giving four years of work under terrible conditions with hugely below market pay in exchange for the right to join a monopolistic guild that enforces scarcity of health care provision via government restrictions on the supply of residencies and slots in medical school.

This is exactly how political distributions of goods always go, except that this time it was a really bad look to give administrators in offices the vaccine before doctors who were actually treating Covid-19 patients. That’s what “The list created by the algorithm was supposed to be vetted” means here: they were supposed to do a check to make sure their naked appropriation of vaccine had some plausible deniability attached at some level, whereas it turned out it didn’t.

But of course, as the thread points out, blaming ‘the algorithm’ or saying ‘there are problems with the algorithm’ is the obvious nonsense. Yes, there was a ‘problem with the algorithm’ and the ‘problem’ was ‘the people with power over the contents of the algorithm gave themselves top priority.’ This wasn’t some complex calculation. It didn’t involve machine learning. The result wasn’t surprising to anyone who came up with the criteria. The criteria were giving resources to the powerful rather than the powerless, via deeming them more ‘important’ or ‘vital’ or something.

This time, thankfully, they got caught. Yet they still got their vaccine shots. Thus, this apology is the height of hypocrisy:

Did not anticipate, you say. Equitable distribution, you say. In case you ever wonder what equitable distribution means when those with power say it, now you know. You gotta love the line “Though our intent was to ensure the development of an ethical process, we recognize that the plan has significant gaps.”

There’s also this:

Also, I’m not going to link to any of the sources for this (to avoid heat vs. light and toxoplasma of rage issues, among other reasons), or offer a hot take on it, but there are a lot of “experts” and “ethicists” who have stated outright that we shouldn’t prioritize older people over younger people because older people are disproportionately white, so giving vaccine priority to the people most likely to die from Covid-19 is racist. It is unclear to me to what extent this is driving policy decisions, but it seems like it came close to happening, with the CDC only reversing after a public outcry. The mayor of New York has endorsed this perspective explicitly.

So. Yeah.
But congratulations to Texas, among others (via CNBC):

## Face Saving

Kerry gets it, in a thread I will post here in full:

The government anticipates that people will not Act Responsibly on Christmas, and thus is pre-emptively punishing them in order to send a message. What’s interesting is her proposal that we can understand such acts by the Very Serious People as being primarily about face.

This reaches many of the same conclusions and makes many of the same predictions as many of the models and gears I’ve been using, but seems importantly different in ways that require more attention. These dynamics have been mostly left out of my models, and that seems like it might have been a mistake. I am going to think more about whether it makes sense to incorporate such dynamics more centrally.

## Quest for the Test

The good news is a $5 paper test strip has been approved by the FDA for home use. Of course, doing so legally will require an additional $25 so a digital MD service can watch you do it, which also means all sorts of coordination problems and activation energy and having to be observed by medical people silently judging you. Because of the need to beware trivial inconveniences [LW · GW], it seems like the additional costs will be hugely destructive of the potential value here, although massive value should still remain.

Perhaps the next one will do better? As usual there are claims it is coming Real Soon Now:

Alas, I expect that last mile to find a way to at least inflict a lot of damage, even if cleared.

Whereas this plan from Canada seems awesome, taking the methods of continuous rapid testing that worked in some universities and applying them more generally to businesses that value being open.

And there’s this plan from Wisconsin: Free testing on demand, sample collection at home, even if it isn’t rapid, and even if they require a zoom call during the sample collection. And you need an email for each of your children, even if they’re two weeks old. Still. That’s not bad!

A sign they do not understand the proper goal of testing is that they don’t grok that people might want to do this periodically without any particular reasons to worry:

Another sign is, look for the mysteriously missing question on this FAQ list:

Oh yeah, that question. How long until I get my test results? Good question!

## In Other News

This analysis of how Covid-19 spreads seems excellent as far as I can tell, making a strong case that it’s mostly or entirely aerosol transmission, and that this fits the observed data fine, thanks. People I respect led me to it and also had the same take. If there’s anything wrong here, please speak up.

CDC issues the following guidelines and it only took until (checks calendar) December 21:

Any further questions? Never fear, they have an FAQ.

Also, yes, air ducts can almost certainly spread the virus, and six feet is not a magic number.

If you’re looking for treatments, this Quora response seems to be a real attempt at providing guidelines one could use. As usual I’m going to tread lightly on the treatment side and not take a stand.

Not a metaphor: Yes, you literally have to report as a ‘severe adverse event’ requiring investigation when someone in your vaccine trial is struck by lightning:

Official in Buffalo gives press conference announcing 60 people died, reporters only ask questions about the division champion Buffalo Bills. Official then tells them to ‘get their priorities straight’ but their priorities are already quite straight.
The reporters know there are a lot of deaths, and a lot of Covid-19, but what actionable information about that could the official tell them that they don’t already know? What impacts people’s lives? The team could be in an unknown amount of trouble, and that matters to people in Buffalo. Whereas, barring a change in policy that would have already been announced, they know exactly how screwed they are personally, and what they have to do. The true objection here is that when people are dying it’s wrong to not waste one’s time showing one’s concern about that, and instead care about “trivial” things like the thing most or at least many people in Buffalo have cared most about the last few months. Or years.

Is using the bathroom essential? Second worst person Mayor De Blasio’s administration decided that it was not:

When a predictable uproar followed, he laid the blame on actual worst person Governor Andrew Cuomo, and requested that they gracefully allow New Yorkers to use indoor plumbing, a request that it seems has been granted:

Also, among other rules, employees cannot drink alcohol under any circumstances, which seems rather cruel except for the part where they never enforce it. Right now I bet a lot of them could use a drink.

This seems to be what you get when you shut everything down for months and months on end:

Why It Took So Long for the Army to Make Masks. The army masks are cloth masks with the right color scheme and symbols. That’s it. They are now getting around to being able to make them. The money quote for me is this:

According to the Army: The CCFC was designed, developed, and produced along an expedited timeline. It normally takes 18–24 months for DLA (Defense Logistics Agency) to have the item available for order once the technical description, design, and components are approved and submitted. The CCFC, from inception to issuance, is slated to take less than one year.

The army is bragging that they figured out how to make masks this fast. If there is a war, perhaps someone should tell the army. Seems like information they would want to know.

No news in the latest comparison of the Pfizer and Moderna vaccines. Bottom line is, they’re the same, except Moderna’s logistics are better.

Would I accept the Chinese vaccine? I mean, yes, I still think it’s better than nothing, but wow do they continue to be bad at science in a way that seems almost intentional:

I know you think you’re helping but maybe just send paid time off?:

Not Covid, rather about hearing aids, once more with feeling: FDA Delenda Est.

Not Covid: MIRI gives its yearly research update. Disappointing to learn that research they couldn’t talk about didn’t pan out, and also disappointing that they still can’t talk about what it was, especially given hopes that the ideas might still have merit. Good to know they are willing to pivot to other things and aren’t going to either keep going down a known blind alley or join the gradient descent crowd. Also great to hear they are considering fleeing Berkeley, which I strongly endorse doing almost no matter the destination. Would love to do some coordination on that, but I’d definitely settle for the New Hampshire scenario, which I think would greatly enhance their ability to think.

Also not Covid, other LessWrong news: The 2018 review is ongoing, but not many reviews are being written [? · GW]. I’d encourage those who qualify to help change that. I’m especially curious to hear people’s takes on my nominated posts.
Universal, the cult of the expert as explained by the Washington Post freelancer system:

## What Happens Now?

There are many unknowns that have a dramatic effect on the path forward, and how the endgame plays out. A few big ones dominate.

I do not see it as plausible that the new strain is confined to England. It has already been seen in several other nations, and the numbers in England are not compatible with it not being essentially everywhere already. It’s still right to cut off flights and travel now to slow things down, but the barn door is already wide open.

The biggest question, instead, is whether the new English strain (or another similar strain like the one found in South Africa) is really 65% more infectious as measured by infections per infected individual. If that number is greatly exaggerated, then the strain will take a lot longer to get to where it matters, and even when it does the control systems and vaccinations can keep things mostly in check, and we are still on something not too different from the previous track. My guess is that aside from England itself, we could mostly deal with about a 33% more infectious strain given our timetable before it does that much damage.

If the 65% number is accurate, however, we are talking about the strain doubling each week. A dramatic fourth wave is on its way. Right now it is the final week of December. We have to assume the strain is already here. Each infection now is about a million by mid-May, six million by end of May, full herd immunity overshoot and game over by mid-July, minus whatever progress we make in reducing spread between now and then, including through acquired immunity. Which will help somewhat, but likely only buy a few weeks at most.

One worry I have is that the control system could actively make things worse between now and then, and accelerate the timeline. By the end of January, we should see up to a one third drop in the death rate purely from protecting nursing home residents, and then it will drop further as we protect other elderly people. If, as I suspect, the control system mostly acts on the death rate, they will use this as a reason to loosen and take more risk, and infection numbers will rise, or not fall the way we would have otherwise expected. Then once the new strain arrives, it will be like March and April 2020, where by the time the deaths start spiking it is very late in the game, and there have already been three or more additional doublings.

The good news is that when this happens, most of our most vulnerable will have had the opportunity for vaccination. Assuming most of them take it, the (need for) hospitalization rate should be dramatically lower, and the IFR should also be dramatically lower if the hospitals don’t collapse. The hope is that with enough of the vulnerable protected, even a gigantic surge in cases might not collapse the system. That’s the hope. It is basically the best (realistic) case scenario.

It seems to be too late to speed up vaccine production much, although I see some reports Joe Biden is considering using the Defense Production Act to do so, in which case why the hell aren’t we doing whatever it takes to get that happening already? Have we considered throwing relatively tiny amounts of money at the problem? Maybe not quite that relatively tiny? No? Oh well. But I digress.

On the good news front, our tests still mostly seem to work fine for the new strain, although there is some small worry.
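To make the doubling-each-week arithmetic above concrete, here is a small sketch. The single seed infection, clean weekly doubling, and twenty-week horizon are simplifying assumptions; real spread would be slowed by immunity and behavior changes.

```python
# Sketch of the arithmetic above: if the new strain doubles every week,
# one infection in the final week of December becomes roughly a million
# weekly infections by mid-May (about 20 weeks later).
# Clean doubling and a single seed case are simplifying assumptions.

weekly = 1          # one infection now
cumulative = 0
for week in range(1, 21):        # ~20 weeks: late December to mid-May
    weekly *= 2                  # doubling each week
    cumulative += weekly

print(f"Week 20: ~{weekly:,} new infections that week "
      f"(~{cumulative:,} cumulative from a single seed case)")
```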
There is the concern that the vaccine might not be effective against the new strain, but based on my looking into this, the prior on there being much effect here should not be so high, except insofar as the new strain is more infectious, so everything will be somewhat less effective at preventing spread. But if the virus has escaped from the vaccine already while also becoming more infectious, the timeline does not allow us to adjust even if we acted correctly, let alone acting realistically. The best we could hope for in that case is maybe to protect some of the most vulnerable, but I do not expect a Biden administration to allow movement fast enough to do that in this scenario. Instead, we’d see completely overwhelmed medical systems across the country, and that would be that.

On the potentially very good news front is the other big question. Is the new strain less virulent (dangerous) than the old one, and if so by how much? I don’t think this is a favorite to be true, but I do think there’s a substantial chance (30%?) that it is, and we should investigate. All the news reports say that there’s no reason to expect that the mutation is more dangerous, which is true, but also not the change one would expect to often happen. The real question is whether it is now less dangerous, and for now the answer is we don’t know. If the effect size is big we will presumably figure this out by the end of January. Keeping an eye on England’s IFR will be important.

There are plausible physical mechanisms that suggest that some of these mutations may have led to less virulence. One piece of evidence that the strain is less virulent is that it is more infectious! Being less virulent causes it to be more infectious, so if the strain is more infectious, there’s a decent chance part of that was caused by less virulence.

Note that if we do face that scenario in the fourth wave, unless virulence has gone down quite a bit, you now face an even more stark version of what you did preparing for the third wave. If we are in this scenario, it is inevitable that there will be a several week period, at a minimum, in which it is super easy to catch Covid-19, with a super infectious strain everywhere, and at a time when there will be no hospitals to help you, and no vaccine that works. In that scenario, actually protecting yourself becomes vital if you haven’t been infected by the time the climax is approaching. As in essentially not being anywhere near a human outside your bubble, for weeks on end, at least until the hospitals stabilize. It also becomes realistic again to worry about our supply chains or civil disorder. I expect everything to hold, but that can’t be assumed.

Then there’s the other nightmare. When this starts to happen, how will the authorities react? Benevolent authorities would be responding now by making every attempt to speed up our vaccine efforts, and to prioritize the most vulnerable while we can. But if we had such benevolent authorities, they would have already been doing that. When something goes from super overdetermined to super duper overdetermined, it’s not generally the time to expect people to suddenly come up with the right answer when they didn’t before. Still, there’s some hope that this would happen, as the ‘don’t let it get so bad they sell out of pitchforks’ mechanics kick in.
The other problem, as I’ve noted, is that there is not that much room to move faster without much more aggressive disregarding of barriers put in to prevent action, which would cause a ton of cognitive dissonance, and is not Biden’s style.

The question is what our actual, present authorities will do. Like England, they might go for a full ‘lockdown.’ My assumption is that if they do this, people will end up largely ignoring it, and it won’t be enforced. There’s no will to make real restrictions stick even in blue tribe areas, and red tribe areas would be in open revolt, especially when it comes from a Biden administration. People have had enough, and they’ll have been told the story of how things were supposed to be returning to normal. And even when they do that, they’ll still provide enough loopholes that it presumably wouldn’t work even if people didn’t go into open defiance of the rules. Still, will they try anyway, doing epic additional economic damage and potentially causing open conflict? I worry about this a lot.

Thus, I don’t know if one should buy stocks or sell stocks on this, or even if one should buy volatility on this or not. If things crest once more and then burn out, that is not bad for stocks. It’s only bad if we get a dramatic response, which wouldn’t help the problem much but would also give us a whole additional set of problems.

We need to get ready, now, to do what we can to stop this from happening when the moment arrives, if the moment arrives. Sure, if it worked one could argue in favor, but the math seems rather clear that it won’t do any good. When politicians start trying to make it to next week without taking blame, and attempt to destroy everyone’s lives to do it, we need to find a way to stop them.

Or we could figure out how to make our efforts actually work. It’s not impossible. Is there still time to get our house in order and deal with this head on, if we want to? Hell yeah. But we can’t do it by Sacrificing to the Gods and taking sledgehammers to everything people like doing. We can’t even do it via doing that and being willing to shut down economically important activity, not for long enough to work. I’d be pleasantly surprised if we could even stabilize growth rates that way, and it’s completely unsustainable.

What we could do is prepare a testing regime now. Have that in place a few months from now, for real, and do the kind of testing on everyone that some universities do with their students. If we took all the regulatory gloves off, my guess is we wouldn’t even have to subsidize to get there on time. Add in proper focus on Vitamin D, airflow, outdoor activity and so on. We could still make it, and it would be an essentially free action. We could probably start this on January 20 and still make it.

To be clear: We won’t attempt to do this, and it won’t happen. But it’s important to note that we could, in at least some sense, still win, while also noting that we will lose. You never know. Maybe someone, somewhere, is actually listening.

In summary, I am attempting here to do now what I failed to do in December or January, which is to actually model what happens next based on exponential growth and realistic reactions by authorities. My guess is that the English strain is probably sufficiently infectious to get there before enough vaccinations can happen to let realistic measures contain the pandemic. Right now I’m giving the strain something like a 70% chance to be sufficiently infectious, and if it is, I don’t see a way around this outcome.
This has counterintuitive implications, both for public policy and for individuals. As always, one’s approach to the pandemic must be to either succeed if one can do so at a cost worth paying, or fail gracefully if one cannot succeed. Thus, one could plausibly make the case either for being even more careful in response, or for folding one’s hand entirely. You can raise, or you can fold, but you can’t play passive and call all bets and hope to go to showdown.

It’s also important to figure out whether the new strain is less virulent than the old one, and if so to what extent, since that could change the math on sensible courses of action quite a bit. I haven’t fully wrapped my head around the implications, and I doubt many others have either. Simply saying it is “good news” does not begin to cover how this changes correct actions. Let’s actually engage with the physical situation and model out what happens, and learn from last time out’s mistakes.

UPDATE 7/21/2021: As you doubtless know at this point, it was not over. Given the visibility of this post, I'm going to note here at the top that the predicted potential large wave of infections between March and May did not happen, no matter what ultimately happens with Delta (and the prediction was not made with Delta in mind anyway, only Alpha). I've talked extensively about the situation across many other posts, but for visibility I'm going to summarize my view of what happened here.

First, the early reports said that Alpha (this post calls it the English Strain, which is now called Alpha) was 65% more infectious than baseline. Instead, it looks like Alpha was about 40% more infectious. I noted that on our timetable, we could deal with about a 33% more infectious strain under the predicted-at-the-time vaccine timetable before major damage was done.

Second, when I was writing this post, I expected vaccine deployment to be substantially slower in its first few months than it ultimately was. I made a clear mistake not making this assumption explicit, but we got a lot more shots into arms than I expected during this period - I was thinking 100 million shots in 100 days (e.g. Biden's stated goal) and we did substantially better than that.

Together, these two incorrect assumptions explain why the wave did not occur, although they alone are insufficient to explain why things continued to improve so rapidly. My guess is that I was also somewhat underestimating seasonality, and that Alpha scared people sufficiently that the control systems acted differently than they otherwise would have, which also should have been considered more.

I gave the headline scenario about a 70% chance of playing out. It didn't play out, and the above reasons are why (in my current model) it failed to play out, and how I put a 70% probability on something that didn't happen. Looking at what did happen, if we had indeed been facing a 65% more infectious strain at that time rather than 40%, we would have faced a wave of some size even with the other factors being better than I anticipated, but not the size of wave I was predicting; it would have taken a number more like 75% to cause a crisis before vaccinations could kick in. On reflection, given what I knew at the time, if we take the distribution of possible properties of the Alpha strain as a given, I think I should then have been predicting more like a 40%-50% chance of the scenario rather than 70%.
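As a rough illustration of why the 65% versus 40% estimate mattered so much, here is a sketch converting an infectiousness boost into weekly case growth. The baseline R of 1 (the control-system equilibrium) and the ~5-day generation time are my assumptions for illustration, not numbers from the post.

```python
# Sketch: convert "X% more infectious" into growth in weekly cases, assuming
# the old strain was hovering near R = 1 and a ~5 day generation time.
# Both assumptions are rough modeling choices for illustration.

R_OLD = 1.0
GENERATION_DAYS = 5

for boost in (0.33, 0.40, 0.65):
    r_new = R_OLD * (1 + boost)
    weekly_growth = r_new ** (7 / GENERATION_DAYS)  # growth factor per week
    print(f"{boost:.0%} more infectious -> R ~ {r_new:.2f}, "
          f"cases grow ~{weekly_growth:.2f}x per week")
```

Under these assumptions, a 65% boost is roughly the doubling-each-week scenario, while a 40% boost grows far more slowly and gives vaccination more time to catch up.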
Contrasting the situation with Alpha, where I thought it was at 65% more infectious than baseline and that it would cause a large wave before we could sufficiently vaccinate, with Delta, where it's perhaps 120% more infectious than baseline but we've already done a lot of vaccinations, and now we are seeing rises in cases, is interesting, and will be an interesting test of whether the other assumptions I was making were accurate.

## 147 comments

Comments sorted by top scores.

comment by ShemTealeaf · 2020-12-24T16:17:05.984Z · LW(p) · GW(p)

I said it last week, people righteously said that things are not worth to the customer what the customer will pay for them because poor people have less money than rich people, and no, sorry, that’s not how this works, that’s not how any of this works.

It seems easy to construct a scenario where this is untrue, or at least conflicts with an intuitive definition of "value". If I'm trying to auction off a rare food item in a room with Jeff Bezos and a starving person with no money, Bezos can easily win the auction if he has the slightest desire for the food. A tiny rounding error on his fortune is more than the starving person's entire life is worth (in a monetary sense). Bezos clearly puts a higher monetary value on the food, but it seems absurd to suggest that this is an example of the food being allocated to the person who values it the most.

To use a more realistic example, it's hard for me to agree that a billionaire values their tenth vacation home more than a homeless person who is in danger of freezing in the winter.

I'm generally in favor of free markets, and maybe allowing Jeff Bezos to do whatever he wants produces an overall better world than the alternative. However, it seems disingenuous to say that his vast fortune means that he can value an item of trivial importance more than other people value anything at all.

Replies from: Larks, TAG, Joe_Collman, tlhonmey, Zolmeister

comment by Larks · 2020-12-24T22:24:48.895Z · LW(p) · GW(p)

At the moment, the poor person and the rich person are both buying things. If the rich person buys more vaccine, that means they will buy less of the other things, so the poor person will be able to have more of them. So the question is about the ratios of how much the two guys care about the vaccine and how much they care about the other thing... and the answer is the rich guy will pay up for the vaccine when his vaccine:other ratio is higher than the other guy's. This is the efficient allocation.

It might be the case that it is separately desirable to redistribute wealth from the rich guy to the poor guy. This would indeed allow the poor guy to buy more things. But, conditional on a certain wealth distribution, it is best to allow market forces to allocate goods within that distribution.

(For simplicity I have ignored macroeconomics in this post, but the same argument broadly goes through if you don't.)

Replies from: ShemTealeaf, scarcegreengrass

comment by ShemTealeaf · 2020-12-25T14:25:11.362Z · LW(p) · GW(p)

At the moment, the poor person and the rich person are both buying things. If the rich person buys more vaccine, that means they will buy less of the other things, so the poor person will be able to have more of them. So the question is about the ratios of how much the two guys care about the vaccine and how much they care about the other thing... and the answer is the rich guy will pay up for the vaccine when his vaccine:other ratio is higher than the other guy's.
This is only true if the rich person is already spending as much money as possible, so an increase in spending on Item A must cause a decrease in spending on Item B. For someone like Jeff Bezos, an increase in spending on Item A probably just results in slightly less money spent by his great-grandchildren in 100 years.

It might be the case that it is separately desirable to redistribute wealth from the rich guy to the poor guy. This would indeed allow the poor guy to buy more things. But, conditional on a certain wealth distribution, it is best to allow market forces to allocate goods within that distribution.

I don't see why this has to be true in all scenarios. If we want to make sure that the starving guy gets some of the food, can't we just allocate the food to him directly, rather than having to give him enough money to win a bidding war with Jeff Bezos? Perhaps we desire a system where, in general, Jeff Bezos can use his money to do whatever he wants, but we have safeguards in place to prevent him from outbidding a starving guy on the food he needs to survive. I recognize that this may not be efficient in monetary terms, but it could be efficient in terms of overall human utility.

Replies from: siclabomines

comment by siclabomines · 2020-12-28T23:57:16.771Z · LW(p) · GW(p)

I think I disagree a bit with both (but what do I know).

For someone like Jeff Bezos, an increase in spending on Item A probably just results in slightly less money spent by his great-grandchildren in 100 years.

This doesn't seem to me to be the right way to think about it. Short term, spending more on Item A will result in lower spending on Item B, lower investment in his companies, or a lower transfer of money from him to someone else (like through lower savings). Or more money being spent overall if he just uses up cash he had hidden in his pillow, which increases prices for everyone (but this will be made up for in some future).

If we want to make sure that the starving guy gets some of the food, can't we just allocate the food to him directly, rather than having to give him enough money to win a bidding war with Jeff Bezos?

Who produces the food and can set the prices? If it's private companies, then they wouldn't sell it to the state for cheaper than to Bezos, so it would be as expensive to the state as giving that same money to the poor and letting them outbid Bezos. If the state owns the stuff, then [insert standard anti-socialism arguments]. If the prices are fixed by the state, then it's inefficient and there may not be enough production for all. If the prices are fixed by the state but depend on the person -- or on how many of X you have bought this month or stuff like that -- then that introduces whole new types of messes.

Replies from: jmh

comment by jmh · 2020-12-29T03:23:49.958Z · LW(p) · GW(p)

I agree that in a very strict sense money spent on X definitionally cannot be spent on something else (you already spent it so don't have it). But does that type of tautological view matter here? (And if that is not the basis of your point and I'm misunderstanding, sorry.)

This kind of reminds me of the old Richard Pryor movie Brewster's Millions. I suspect the numbers have changed, but at one point I recall Bezos had a net worth of 90 billion. Even with a paltry 1% return on total assets that's like 900 million a year. Could you spend all that?
I'm not sure I could really keep track of an income stream like that -- I suspect I would often be doing the equivalent of dropping a million on the floor now and then and forgetting to pick it up for a week or so.

So the rich buying more vaccines doesn't really equate to not buying enough of other things that prices for those other things drop enough to make more available to the poor (who are budget constrained much more realistically than are the rich).

Replies from: siclabomines

comment by siclabomines · 2020-12-29T23:26:11.341Z · LW(p) · GW(p)

Yeah, I wasn't trying to be tautological. I am under the impression that you are thinking something like: "Bezos has ~100 billion to spend. If he spends 1 million in X, then he has 1 million less to spend on the rest. But he won't even get to spend it in his lifetime, so that extra million in X doesn't change how much he would spend in Y. Therefore, it's wrong to say that Y will become more available because Bezos spent in X."

I don't think that's the right way to think about all this. (Warning: oversimplification coming.) Bezos earns some income, say, in a year. Almost all of it will be spent. Most will be invested and not consumed, so it will still increase his net worth, but that demand for stuff is still there, affecting the economy. Bezos is already probably spending about as much as he can, and what he is not spending he is saving, which probably means transferring it to someone else who will spend it. So, if he spends USD 10 in X, it's reasonable imho to "expect" the economy to get USD 10 less spending in non-X stuff (on avg).

comment by scarcegreengrass · 2020-12-27T14:46:00.758Z · LW(p) · GW(p)

Are you saying there would be a causal link from the poor person's vaccine:other ratio to the rich person's purchasing decision? How does that work?

comment by TAG · 2020-12-25T23:52:21.537Z · LW(p) · GW(p)

I said it last week, people righteously said that things are not worth to the customer what the customer will pay for them because poor people have less money than rich people, and no, sorry, that’s not how this works, that’s not how any of this works

I pointed out the error last week, too. I'm disappointed to see it doubled down on.

comment by Joe_Collman · 2020-12-24T17:19:02.793Z · LW(p) · GW(p)

Agreed. To be fair to Zvi, he did make clear the sense in which he's talking about "value" (those who value them most, as measured by their willingness to pay) [ETA: "their willingness and ability to pay" may have been better], but I fully agree that it's not what most people mean intuitively by value.

I think what people intuitively mean is closer to: I value X more than you if:
1) I'd pay more for X in my situation than you'd pay for X in my situation.
2) I'd pay more for X in your situation than you'd pay for X in your situation.
(more generally, you could do some kind of summation over all situations in some domain)

The trouble, of course, is that definitions along these lines don't particularly help in constructing efficient systems. (but I don't think anyone was suggesting that they do)

Replies from: ShemTealeaf

comment by ShemTealeaf · 2020-12-24T19:58:45.161Z · LW(p) · GW(p)

Agreed on all points, except about how clear the author was being about the use of the word "value". Although he does make the reference to willingness to pay, his rhetorical point largely depends on people interpreting value in the colloquial sense.
He writes, in the previous post:

If we’re not careful, next thing you know we’ll have an entire economy full of producing useful things and allocating them where they are valued most and can produce the most value. That would be the worst.

Imagine if you alter the phrasing to this, which is roughly equivalent under the "value = willingness + ability to pay" paradigm:

If we’re not careful, next thing you know we’ll have an entire economy full of producing useful things and allocating them to people who can pay the most money for them and where they can generate the most wealth for those people. That would be the worst.

Many people might reasonably object to that scenario, even though it sounds silly when we phrase their objection as "I think we should allocate resources to people who value them less". My own feelings are probably closer to the author's than those of the hypothetical objectors, but I'd prefer it if we could avoid these kinds of rhetorical techniques.

comment by tlhonmey · 2021-01-08T02:16:56.820Z · LW(p) · GW(p)

The market cares for individuals about as much as evolution does. Yes. Bezos can bid more for the meal than the hungry drifter. Why is that the case? It's because Bezos is instrumental to offering a useful service to literally billions of people and the drifter... isn't.

It seems cruel. It is cruel. But it's a cruel world we live in. It is perfectly possible for preventing Bezos from being mildly "hangry" at an inopportune time to alleviate more suffering worldwide than preventing one shiftless vagrant from starving to death. And that's not intuitively obvious because your social instincts are programmed for a world where you know everyone in your entire community personally and can see exactly what they're contributing with your own eyes. Set the situation in a small tribe where you're choosing between feeding the shaman who knows where the watering holes are and what food is safe to eat and is obviously critical to the survival of everyone, or feeding the aged cripple who can barely walk unassisted, and your social instincts will likely choose correctly. You'll still be sad, but finite resources often mean hard choices.

We use markets to decide things like this because they're the most efficient way we know of to deal with the scope of the calculation being far too big for our puny brains to handle all at once. But that cuts us off from always being able to understand the why of the calculation result. And so when the market hands down a result that is painful to look at we reflexively want to call it a "market failure" and "correct" it.

But there are consequences for doing that. The fact that the immediate consequences of the market's choice are obvious and the long-term consequences of overriding that choice are invisible (except with great effort) doesn't mean that there are no costs. There is no free lunch. Every choice must be paid for. While you might sometimes "beat the market" and spot the more optimal solution, the sheer difference in data-crunching capability means that, most of the time, you're going to be wrong. Even if the consequences don't hit for years. Even if you've stopped paying attention by the time the piper comes around to collect his due.

At the end of the day what keeps the system running is that 98% of us are decent people who care about others, even strangers. Jeff Bezos certainly could outbid the starving vagrant. But, unless this were the last meal in the world, would he? Likely not.
And if he did, I expect he could be easily persuaded to purchase a more normal meal for the fellow -- it's a trivial cost to him and most normal people would get a good feeling from doing it. And even if he didn't, in a full economy that high bid lowers the cost of another meal and encourages an increase in meal production, so that the 2% of selfish people are still at least pulling their own weight.

It's not that a market always makes things perfect for everyone. It's that, in the long run, it screws up less often than the other systems we currently know of.

comment by Zolmeister · 2020-12-25T18:35:14.363Z · LW(p) · GW(p)

To use a more realistic example, it's hard for me to agree that a billionaire values their tenth vacation home more than a homeless person who is in danger of freezing in the winter.

I don't see "value" as a feeling. A freezing person might desire a warm fire, but their value of it is limited by what can be expressed. That said, a person is a complex asset, and so the starving person might trade in their "apparent plight" (e.g. begging). For example, the caring seller of the last sandwich might value alleviating "apparent plight" more than millions of shares of AMZN. Whether they do or don't exactly determines the value of an individual's suffering against some other asset, in terms of the last sandwich.

comment by Tamay · 2021-04-25T20:10:47.080Z · LW(p) · GW(p)

Four months later, the US is seeing a steady 7-day average of 50k to 60k new cases per day. This is a factor of 4 or 5 less than the number of daily new cases that were observed over the December-January third wave period. It seems therefore that one (the?) core prediction of this post, namely, that we'd see a fourth wave sometime between March and May that would be as bad or worse than the third wave, turned out to be badly wrong.

Zvi's post is long, so let me quote the sections where he makes this prediction:

Instead of that being the final peak and things only improving after that, we now face a potential fourth wave, likely cresting between March and May, that could be sufficiently powerful to substantially overshoot herd immunity.

and,

If the 65% number is accurate, however, we are talking about the strain doubling each week. A dramatic fourth wave is on its way. Right now it is the final week of December. We have to assume the strain is already here. Each infection now is about a million by mid-May, six million by end of May, full herd immunity overshoot and game over by mid-July, minus whatever progress we make in reducing spread between now and then, including through acquired immunity.

It seems troubling that one of the most upvoted COVID-19 posts on LessWrong is one that argued for a prediction that I think we should score really poorly. This might be an important counterpoint to the narrative that rationalists "basically got everything about COVID-19 right"*.
What I failed to do was offer to bet here on the 4th wave question. I think the only time that I tried to make a bet on this topic was in a discussion on Facebook (set to friends-only) that began with "Well, it's time to pull the fire alarm on the UK mutation." I commented on the post on 12/26/20 with the following:

Would you be interested in operationalizing a bet on this? (If you don't think it's good practice to bet money on COVID infections/cases/deaths or otherwise aren't interested in betting money, we can just make it a reputational bet.) I get the sense that like Zvi you are being too pessimistic about how bad the new strain will be for the US relative to how bad things would have been even without the new strain.

However, the bet never came to fruition.

comment by jsteinhardt · 2020-12-25T05:00:31.523Z · LW(p) · GW(p)

Zvi, I agree with you that the CDC's reasoning was pretty sketchy, but I think their actual recommendation is correct while everyone else (e.g. the UK) is wrong. I think the order should be something like: Nursing homes -> HCWs -> 80+ -> frontline essential workers -> ... (Possibly switching the order of HCWs and 80+.)

The public analyses saying that we should start with the elderly are these two papers:

Notably, neither paper even considers vaccinating essential workers as a potential intervention. The only option categories are by age, comorbidities, and whether you're a healthcare worker. The first paper only considers age and concludes unsurprisingly that if your only option is to order by age, you should start with the oldest. In the second paper, which includes HCWs as a category (modeling them as having higher susceptibility but not higher risk of transmitting to others), HCWs jump up the queue to right after the 80+ age group (!!!). Since the only factor being considered is susceptibility, presumably many types of essential workers would also have a higher susceptibility and fall into the same group.

If we apply the Zvi cynical lens here, we can ask why these papers perform an analysis that suggests prioritizing healthcare workers but don't bother to point out that the same analysis applies to 10% of the population (hint: there are only enough vaccines for less than 10% of the population, and the authors are in the healthcare profession).

The actual problem with the original CDC recommendations was that essential workers is so broad a category that it encompasses lots of people who aren't actually at increased risk (because their jobs don't require much contact). The new recommendations revised this to focus on frontline essential workers, a more focused category that is about half of all essential workers. This is a huge improvement, but I think even the original recommendations are better than the UK approach of prioritizing only based on age.

Remember, we should focus on results. If the CDC is right while everyone else is wrong, even if the stated reasoning is bad, pressuring them to conform to everyone else's worse approach is even worse.

Replies from: rockthecasbah, JesperO

comment by rockthecasbah · 2021-01-02T19:42:40.300Z · LW(p) · GW(p)

Many people on this website are hardcore social distancers, interacting only with essential workers. To them it seems natural that essential workers are the majority of the transmission and do not have immunity yet. But most people aren't social distancing very hard at all. In Nashville, where I currently am, the bars and restaurants are often full. My immune brother went to house parties and indoor concerts on New Year's Eve.
I doubt that essential workers constitute even a majority of current transmission. So we vaccinate 80 million people and reduce transmission by 50%, maybe. That would take months. Meanwhile, there are only 50 million Americans over 65, doing >90% of the dying, and we could vaccinate them in just two months.

TL;DR: The transmission argument for essential workers assumes people comply with social distancing. People aren't doing that anymore, so vaccinate the vulnerable.

Replies from: jsteinhardt, tlhonmey

comment by jsteinhardt · 2021-01-16T06:11:45.627Z · LW(p) · GW(p)

This isn't based on personal anecdote; studies that try to estimate this come up with 3x. See eg the MicroCovid page: https://www.microcovid.org/paper/6-person-risk

Replies from: rockthecasbah

comment by rockthecasbah · 2021-01-18T03:39:23.783Z · LW(p) · GW(p)

That seems plausible right now, in January, at our current level of social distancing compliance. But why would the degree of distancing stay constant over vaccination? It hasn't even stayed constant the last 8 months when nobody has been vaccinated.

So far we have a clear pattern. People voluntarily comply when the issue seems important because there are lots of infections, hospitalizations and deaths. During lulls the issue becomes less available and compliance drops. In the best case for essential worker vaccination, it produces a lull in February-March. But if you actually drop the reproduction rate then that 3x factor goes away immediately.

Unless you have a reliable plan to get people to keep social distancing even when things seem over, vaccinating the vulnerable saves lives in expectation.

comment by tlhonmey · 2021-01-08T18:27:37.251Z · LW(p) · GW(p)

Don't forget there's another factor: Coming down with COVID can easily take someone out of the workforce for a couple of weeks. The essential workers may be at less risk of dying, but depending on how you define "essential", having a large portion of them down for the count could put quite a crimp in your ability to hand out vaccines.

comment by JesperO · 2021-01-02T07:37:43.005Z · LW(p) · GW(p)

Even if this is right, it still seems incredibly dysfunctional for the CDC (and other governing bodies) to not use age categories among healthcare workers, and other essential worker categories.

Replies from: jsteinhardt

comment by jsteinhardt · 2021-01-16T06:13:56.568Z · LW(p) · GW(p)

That seems irrelevant to my claim that Zvi's favored policy is worse than the status quo.

comment by waveman · 2020-12-24T22:30:34.080Z · LW(p) · GW(p)

One point not noted anywhere as far as I can see is that, by allowing the pandemic to spread to millions of people, the risk of a more dangerous or virulent strain appearing increased enormously. If the pandemic had been kept to relatively small numbers, as in Taiwan, New Zealand, Australia, Vietnam, Thailand (and China, if you believe that their statistics on this, unlike all their other statistics, are correct), etc., this new more infective strain would likely never have appeared.

Replies from: yitz, jmh

comment by Yitz (yitz) · 2020-12-25T15:10:10.888Z · LW(p) · GW(p)

This is a really good point, and I haven't seen it mentioned anywhere either.

comment by jmh · 2020-12-29T03:32:28.363Z · LW(p) · GW(p)

What is the underlying argument here? I don't understand viral mutations sufficiently to know, but have assumed they are largely random events. If so, isn't this claim a bit like a fair coin landing heads for the past 10 flips and claiming the odds of tails have not increased?
Replies from: Alexei

comment by Alexei · 2020-12-29T16:51:34.378Z · LW(p) · GW(p)

Odds are the same, but it’s the difference between 10 flips and 100.

comment by Rob Bensinger (RobbBB) · 2020-12-28T02:18:04.880Z · LW(p) · GW(p)

Anonymous comments from Dec. 26:

I think Zvi is (still) overly confident in pessimism (or, at least, headline-Zvi, as opposed to actual-content-Zvi, which is somewhat more hedged). The new variant is probably pretty bad and the distribution of predictions should look worse compared to what it did a week ago - but there is a ton of uncertainty still, and the headline is unjustified.

Sidenote on predictions: Zvi previously predicted that social distancing would be a thing of the past by the time summer ended, that COVID-19 would cease to be the dominant news story (which would remain the case indefinitely), and that we would have given up to the point of approaching herd immunity by now. Social distancing, though weaker than it once was, is still very much in effect, COVID-19 continues to dominate our collective attention, and we have limited the pandemic to the point where we can still potentially save millions of lives with the vaccines.

Replies from: maia, RobbBB

comment by maia · 2020-12-28T13:43:29.996Z · LW(p) · GW(p)

Zvi and many others on LW (including myself in this) totally failed to predict how people in the US would react to this virus. From this, I've updated that we're bad at predicting how politics and humans will respond to novel unexpected events, and probably a bit overly pessimistic about other humans' ability to be persuaded by rational argument and Real Bad Stuff happening.

I think that predicting the course of the disease or predicting whether a certain variant is more infectious is mostly a different kind of prediction from predicting people's behavior. Unfortunately, people's behavior in response to the new strain is also really important for how bad it will get, so ¯\_(ツ)_/¯

Replies from: habryka4

comment by habryka (habryka4) · 2020-12-28T19:32:24.526Z · LW(p) · GW(p)

a bit overly pessimistic about other humans' ability to be persuaded by rational argument and Real Bad Stuff happening.

I am surprised that you say "overly pessimistic"? It seems that the outcome we got of hovering around R=1 with the economy being shut down in large parts for almost a full year was basically the worst-case outcome, with nobody I know being pessimistic enough about how bad it went (it appears to me that the costs of runaway herd immunity would have been drastically less, especially given that we are likely to reach something like 40% prevalence anyways before the vaccine).

Replies from: maia, TAG

comment by maia · 2020-12-29T16:37:55.998Z · LW(p) · GW(p)

Depends on what you mean by "pessimistic," I guess. I think my model back in March was that basically everyone would dismiss COVID as being "like the flu"; tons of people would die; but no one would really pay much attention to it. Instead, people actually freaked out about it and lots of people actually got overly into enforcing quarantine restrictions on each other. I was expecting that people would fail to even parse the small chance of death by COVID as sufficiently important to be worth worrying about, and that didn't turn out to be true.

I agree that the real outcome is much worse than we could have done overall, with e.g. mass testing or challenge trials -- though I don't agree that this is clearly worse than runaway herd immunity.
Back-of-the-envelope calculation: if 1% of the US population died of COVID, that's around 3 million deaths, which in VSL terms is around $30T. The US's GDP for one year is around $20T. My error bars on the validity of VSL at that scale are pretty large, as is my uncertainty about comparing GDP to VSL, and other non-GDP considerations of lockdown... but the two certainly seem comparable in magnitude, and I weakly think that lockdown is better in terms of total social utility.
(edit: of course we're at ~300k deaths now, which changes the analysis by 10% or so; still seems like the calculation comes out about the same as of right now. The effect of another huge wave could change this calculation substantially.)
Replies from: habryka4
comment by habryka (habryka4) · 2020-12-29T23:23:45.503Z · LW(p) · GW(p)
It is very unlikely that we would have gotten to 100% prevalence in any world, which would be necessary for 1% of the U.S. to die. With all the superspreader dynamics, you would likely reach herd immunity at something like 50% prevalence, maybe even less than that, and we are about 40% of the way there.
This also completely ignores age-related effects. The average life-years lost per covid death is ~10-15, since the average person who dies of COVID is much older. This all results in a VSL cost closer to 1/10th of the value you cited (a factor of ~2 for only getting to 50% prevalence and a factor of 5 for age-related effects).
Then, if you take into account that we aren't actually on track to prevent most of the relevant deaths (we are already at 20% historical prevalence and are likely to reach 30%-40% before widespread vaccine adoption, suggesting that we are likely to end up with 60-70% of the relevant deaths, compared to the world with zero lockdowns), the numbers really aren't looking good.
This makes the total calculation come out to something more like a counterfactual $1T in VSL terms. I think with that, it's looking pretty unlikely to me that the lockdowns we had were worth it, even just taking into account the economic effect of this year (not to mention that the total impact and length of lockdowns is likely to extend substantially into 2021). I also think this year will have a large number of highly negative long-run effects on institutions, political tension and long-term health for lots of people that will make the overall cost of the lockdown greatly exceed the deaths that were prevented.
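As a rough sketch, here is the arithmetic of the two estimates above in one place; the population, VSL per life, and adjustment factors are the commenters' stated or implied assumptions, not established figures:

```python
# Toy version of the back-of-the-envelope above; all inputs are the
# commenters' stated or implied assumptions, not established figures.
us_population = 330e6
vsl_per_life = 10e6        # implied by "~3M deaths ~ $30T"
us_gdp = 20e12             # ~$20T per year

deaths_naive = 0.01 * us_population            # "1% of the US population" ~ 3.3M
cost_naive = deaths_naive * vsl_per_life       # ~$33T, comparable to a year of GDP

# Adjustments from the reply above (assumed factors, not measurements):
prevalence_factor = 0.5    # herd immunity at ~50% prevalence, not 100%
age_factor = 1 / 5         # rough discount for life-years lost per COVID death
averted_fraction = 0.3     # only ~30-40% of the relevant deaths actually prevented

cost_adjusted = cost_naive * prevalence_factor * age_factor * averted_fraction
print(cost_naive / 1e12, cost_adjusted / 1e12)   # ~33.0 and ~1.0 (trillions of dollars)
```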
Replies from: Unnamed, rockthecasbah
comment by Unnamed · 2020-12-30T03:35:18.802Z · LW(p) · GW(p)
Back in March, there was a lot of concern that uncontrolled spread would overwhelm the medical system and some hope that delay would improve the standard of care. Do we have good estimates now of those two effects? They could influence IFR estimates by a fair amount.
Also, my understanding is that the number of infections could've shot way past herd immunity levels. Herd immunity is just the point at which the number of active infections starts declining rather than increasing, and if there are lots of active infections at that time then they can spread to much of the remaining people before dwindling.
comment by rockthecasbah · 2021-01-02T19:56:26.100Z · LW(p) · GW(p)
I agree
comment by TAG · 2020-12-28T20:44:22.576Z · LW(p) · GW(p)
it appears to me that the costs of runaway herd immunity would have been drastically less
Well, the financial costs would have been. How much value are you assigning to a life?
Replies from: Benito
comment by Ben Pace (Benito) · 2020-12-28T21:08:33.555Z · LW(p) · GW(p)
Habryka's point is that roughly the same number of people would've died either way, because in both situations the same number of people will get it – something like 40%. But in one of them we *also* shut down the economy for over a year. That's why it's the worst of both worlds – worst of deaths, worst of economy.
Replies from: TAG
comment by TAG · 2020-12-28T21:23:30.467Z · LW(p) · GW(p)
You mean it's a comparison between no shutdown and ineffective shutdown? Why not consider effective shutdown as a further alternative?
Replies from: Benito
comment by Ben Pace (Benito) · 2020-12-28T21:30:35.947Z · LW(p) · GW(p)
You can consider it, but you cannot get it in most Western countries. Either way, it doesn't change Habryka's point that we're currently in the worst possible world.
Replies from: TAG
comment by TAG · 2020-12-28T22:00:32.394Z · LW(p) · GW(p)
For some value of "we".
Have a look at the state of Victoria in Australia, which went from close to 700 cases a day in early August to zero in about 8 weeks. https://www.covid19data.com.au/victoria
comment by Rob Bensinger (RobbBB) · 2020-12-28T02:19:00.458Z · LW(p) · GW(p)
(Sharing because I wanted this comment to be part of the centralized discussion on LW, not because I endorse the comment.)
comment by Andrew_Clough · 2020-12-24T21:14:44.806Z · LW(p) · GW(p)
For me an important factor is that we have three different points of data that suggest the new strain is more infectious. First, it's rapidly replacing the existing strain in areas where it is present. Second, those areas are seeing surges of infections that don't occur in other areas. Third, it seems like individuals infected with the new strain have 3 or 4 times the viral load of individuals with the previous strain - which would neatly explain higher transmissibility. I'm going with an 85% chance that this is genuinely more transmissible.
I'm not at all sure our current wave will fade before the new strain starts making an impact so I'm 50% on two waves.
Higher peak viral load does correlate with more severe symptoms, but not that strongly. I think it's unlikely that this strain is less virulent than the previous one, because most transmission happens before symptom onset and there isn't as much selective pressure for that as there would be for a virus with more normal kinetics. Post herd immunity there'll probably be selective pressure for longer incubation periods, and that might lead to less virulence, but that's further down the road. Because most severe disease happens when viral load has gone down, I figure it's most likely that how well the virus is able to fool the host's immune system causes both peak viral load and severe disease, but I'm very unsure about this. Still, this is only a half order of magnitude in max viral load, which varies by many orders of magnitude between individuals and is still only weakly correlated with disease severity, so even if it has an effect I don't expect it will be large.
Replies from: Owain_Evans
comment by Owain_Evans · 2020-12-25T10:40:59.071Z · LW(p) · GW(p)
Other sources of evidence (albeit weaker): the nature of the mutations (some of which have been studied prior to emergence of the new strain), the related evidence from South Africa.
comment by waveman · 2020-12-24T22:23:31.291Z · LW(p) · GW(p)
Does that [cutting cases by 50% per week] sound like something any Western country could possibly accomplish from here?
Yes. Have a look at the state of Victoria in Australia, which went from close to 700 cases a day in early August to zero in about 8 weeks. https://www.covid19data.com.au/victoria
I sympathise with people in countries run by incompetent buffoons (i.e. most of Europe and the Americas) but it is not inevitable.
Overall a terrific post - your point about the need to act before there is certainty is solid gold.
Replies from: vsm, Baisius
comment by vsm · 2020-12-25T11:54:52.162Z · LW(p) · GW(p)
Also New Zealand, which has a handful of new cases trickling in from arrivals, but approximately zero community transmission due to the managed quarantine at the border. Even if this new strain has the increased transmissibility I expect NZ to not be anywhere near overwhelmed.
comment by Baisius · 2020-12-25T13:51:05.517Z · LW(p) · GW(p)
One of the differences is that transmission is, for obvious reasons, much, much easier to control on an island. Hawaii isn't doing nearly as badly as the rest of the United States, for example.
comment by Juan Cambeiro (juan-cambeiro) · 2020-12-28T15:11:54.851Z · LW(p) · GW(p)
The forecasting community is very concerned. The community prediction for a new Metaculus question on whether a single variant that is at least 30% more transmissible than preexisting variants infects 10M worldwide before mid-2021 is currently 87%
For what it's worth, I personally have a fairly solid forecasting track record and my own predictions are as follows, see more here:
-80% chance this specific new strain is >30% more transmissible
-90% chance *any* new strain before mid-2021 is >30% more transmissible
-65% chance this specific new strain is >50% more transmissible
-71% chance *any* new strain before mid-2021 is >50% more transmissible
29 Dec update:
-85% chance this specific new strain is >30% more transmissible
-93% chance any new strain before mid-2021 is >30% more transmissible
-70% chance this specific new strain is >50% more transmissible
-75% chance any new strain before mid-2021 is >50% more transmissible
FWIW I don't foresee any of these estimates surpassing 95% confidence until we get some virological data to confirm the epidemiological data we're seeing
Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2020-12-29T14:41:56.330Z · LW(p) · GW(p)
Do you have an opinion on what stocks will move as a result?
comment by velcro · 2020-12-30T22:26:40.971Z · LW(p) · GW(p)
There is a lot of good information here, but unfortunately a lot of hyperbole, and a lack of sources to allow us to check your numbers.
First, the headline, which conveys emphatic certainty. Contrast that with the body, which says
"all signs point to it being about 65% more infectious than the old one, albeit with large uncertainty and error bars around that. "
"I give it a 70% chance that these reports are largely correct."
(Bolding mine)
Next:
The media told us it was nothing to worry about, right up until hospitals got overwhelmed and enough people started dying.
This is a gross generalization, similar to those made by people who set out to demonize the media regardless of the facts. A quick Google search shows dozens of warnings from CNN from January to late March.
https://www.cnn.com/2020/04/13/world/cnn-coronavirus-coverage/index.html
The first US death was mid February. We reached 1000 deaths ("enough" is a very broad term) around March 26.
Back to the first quote above - it mentions "all signs" pointing to something. Best I can tell, it was one study. Please correct me if I am wrong.
What evidence went into the 70% chance estimate?
Under "The Numbers/Predictions" heading- where did the predictions come from? What assumptions were made in creating the predictions? We have no idea.
Under "Deaths" through "Test Counts" - where did the tabular data come from? There is a source for one chart, but that is it. Your comment on the chart seems dubious.
the increased cases in the deep South are mostly [due to] increased testing.
but you only show testing data for NY and USA. Furthermore, if the increased cases are only due to increased testing, positivity should be flat. Your data show it is rising in the South, which supports the premise that the cases are increasing independent of testing.
More hyperbole here:
This definitely does qualify under “hot damn, look at this chart.” This is a huge, dramatic increase in infections happening very quickly. A doubling in one week.
Here is the graph as presented.
Here is the same data in a larger context.
The new variation was detected first in the UK in September. Cases went from about 20 on September 1 to 370 in mid November. Then they *dropped* to 213 before jumping up to 500.
This does not seem to be caused by something in September. If it were, the exponential growth rate we see in December would have started back then.
France had a higher rate of increase, probably not due to this new strain, and nonetheless brought it under some control.
The rate of increase in the UK since Dec 6 is about the same as the US in November.
So while it is possible the new strain is causing the recent surge, it is quite likely that other things have a larger influence.
And finally
Multiple sources confirm that there is no reason to expect a six-months-later second dose to be any less effective a booster.
So why not link to those multiple sources? The nearby link to marginalrevolution.com does mention the possibility of a 6-month-later dose being an option, but provides no data to back that up, other than a single example of a vaccine for a completely different disease.
Replies from: henryaj, charbel-raphael-segerie
comment by henryaj · 2021-01-03T23:52:03.760Z · LW(p) · GW(p)
Cases went from about 20 on September 1 to 370 in mid November. Then they *dropped* to 213 before jumping up to 500.
The UK had a national lockdown in November, and lifted it at the start of December.
comment by Charbel-Raphaël Segerie (charbel-raphael-segerie) · 2021-01-01T14:29:08.722Z · LW(p) · GW(p)
upvoted.
But ln(370/20)/ln(2) = 4.2. This means that the new strain doubled 4 times between September and mid-November, suggesting a typical doubling time of just over two weeks.
This is approximately what is observed at the end of December.
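A minimal numeric check of that calculation, using the case counts quoted above and treating "September 1 to mid-November" as roughly 75 days (an assumption for the sake of the arithmetic):

```python
from math import log

cases_sep1, cases_mid_nov = 20, 370
doublings = log(cases_mid_nov / cases_sep1) / log(2)   # ~4.2 doublings
days_elapsed = 75                                        # ~Sep 1 to ~Nov 15
print(doublings, days_elapsed / doublings)               # implied doubling time, in days
```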
But indeed, I don't understand why the number of infected people suddenly decreases at the end of November. An explanation would be helpful.
Where can we find the source saying that there were about 20 cases of new strains in September?
Replies from: neel-nanda-1
comment by Neel Nanda (neel-nanda-1) · 2021-01-06T09:22:07.506Z · LW(p) · GW(p)
But indeed, I don't understand why the number of infected people suddenly decreases at the end of November. An explanation would be helpful.
As henryaj says above, the UK was in a national lockdown Nov 5 - Dec 2. Accounting for a lag in catching it -> positive test, that matches the graph reasonably well
comment by Lukas_Gloor · 2020-12-24T16:38:35.166Z · LW(p) · GW(p)
I looked into things a bit and think it's 85% likely that the new variants in the UK and SA are significantly more transmissible, and that this will lead to more severe restrictions globally in the next few months because no way they aren't already in lots of places. I also think there's a 40% chance the SA variant is significantly more deadly than previous variants, but not sure if that means 50% higher IFR or 150% higher (I have no idea what prior to use for this).
Update December 26th: The longer we hear no concerning news about the lethality of the SA variant, the more likely it is that it's indeed benign and that initial anecdotal reports of it being surprisingly aggressive in young-ish people without comorbidities were just rumours. Right now I'm at 20% for it being significantly more deadly, and it's falling continuously.
Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2020-12-24T16:56:21.877Z · LW(p) · GW(p)
To be clear, I don't mean to take a stance on how much more transmissible it is exactly, 33% or 65% or whatever. I think it's 85% likely that it's a difference that's significant enough to affect things, but it's less clear whether it's significant enough that previous containment strategies become vastly more costly or even unworkable.
comment by Lukas_Gloor · 2020-12-25T15:53:43.639Z · LW(p) · GW(p)
Why are we seeing new variants emerge in several locations independently in a short time window? Is it that people are looking more closely now? Or does virus evolution have a kind of "molecular clock" based on law of large numbers? Or is the "clock" here mostly the time it takes a more infectious variant to become dominant enough to get noticed, and the count started whenever plasma therapy was used or whatever else happened with immunocompromised patients? Should we expect new more infectious variants to spring up all over the world in high-prevalence locations in the next couple of weeks anyway, regardless of whether the UK/SA/Nigeria variants made it there via plane?
Replies from: ZachWeems, Lukas_Gloor
comment by ZachWeems · 2020-12-26T10:02:37.794Z · LW(p) · GW(p)
AFAICT the reason immunocompromised patients are important is they can stay infected for several months. I read a paper recently where such a patient held on for about 5 months, and by my count, samples averaged 3 mutations per month (although I'm sure there's a better way to adjust the numbers than what I did). So there's time to infect enough IC'd patients, plus n months to evolve in them. If antibodies are a necessary ingredient that would delay these steps more. Then there's time for the highly fit strain to outcompete other strains, which is proportional to . And finally, time to establish the strain is growing and time to check for evidence of causality.
IIRC the UK strain became a major issue later, but the UK has nearly the best genome surveillance which is why the announcements happened so close to each other. Fuzzy on the timelines, but I think SA announced theirs later. Maybe SA decided to call the press due to the UK announcement instead of waiting for better proof? And/or sped up the search for evidence.
Regardless, assuming both are legit, the close announcement times seem to be mostly coincidence. But I think we should expect other strains with large jumps in R to start being an issue soon, even if most won't be recognized as quickly.
Replies from: ZachWeems
comment by ZachWeems · 2020-12-26T10:16:55.763Z · LW(p) · GW(p)
I notice I'm confused: SA's variant, if legitimately due to a huge jump in R, doesn't have huge numbers of mutations.
If the UK variant had a 45% jump in R, SA's has a 20% jump, and jumps of >20% are much more commonly due to IC'd patients, then it seems reasonable that the super-fit, highly mutated strains show up alongside the more mundanely fit, moderately mutated ones. The super-fit ones take longer to bake but they take off faster. But then again I'm trying to make a theory to explain 2 data points that I'm not 100% sure are both correct, so as much as this feels correct it probably isn't.
Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2020-12-27T11:34:01.737Z · LW(p) · GW(p)
So the emerging wisdom is that the SA variant is less contagious, or are you just using 20% as an example? The fact that SA is currently at the height of summer, and that they went from "things largely under control" to "more hospitalizations and deaths than the 1st wave in their winter" in a short amount of time, makes me suspect that the SA variant is at least as contagious as the UK variant. (I'm largely ignoring politicians bickering at each other over this, and of course if there's already been research on this question then I'll immediately quit speculating!)
Replies from: ZachWeems
comment by ZachWeems · 2021-01-24T04:37:16.981Z · LW(p) · GW(p)
Oops, missed this. I don't check LW messages much.
20% was not an exact value. At the time I wasn't aware of any estimates. Since then I've heard that the standard curve fit returns ~50% growth per 6.5 days, some or all of which may be due to immune escape.
I had a couple assumptions that made me think the SA strain was less contagious in expectation:
1. High contagiousness is more likely when high mutation numbers were seen, and correspondingly emergence would tend to be later. The SA variant gained local dominance earlier than the UK.
2. There was (and is) much less data on the SA variant. Due to the high variation in number of infectees per sick person, my prior is that on average, a variant that seems to be gaining ground is not as infectious as a curve fit implies, because luck could be a big factor and is more common than extreme fitness.
comment by Lukas_Gloor · 2020-12-25T15:56:59.052Z · LW(p) · GW(p)
Conditional on a 4th wave in the US happening in 2021, I wonder if it's >20% likely that it's going to be due to a variant that evolved on US soil.
comment by nostalgebraist · 2020-12-25T04:01:26.889Z · LW(p) · GW(p)
So basically, if you’re outside where it’s safe, they’ll harass you and maybe worse. Whereas if you stay inside, technically it’s not allowed but in practice it’s a lot less likely anything happens to you, unless the anything in question is ‘you catch Covid-19.’ The rules are porous enough that they aren’t enforceable against the things that are risky but enforceable enough to shut down the relatively safe actions that keep people sane. And with weird exceptions for remarkably large indoor gatherings for certain events that are textbook superspreaders.
All of which is what our model expects to see, and none of which seems likely to be remotely sufficient if the new strain is as infectious as they estimate.
Tier 4's bundle of restrictions is almost identical to those from England's "second lockdown" in November. (See e.g. here.) But you write as though you believe the "second lockdown" was impactful:
[...] the context of England being under lockdown conditions that had previously turned the tide [...]
How effective are these kind of measures at controlling things (a) before the new strain and (b) with the new strain?
This heavily discussed paper from Dec 23 addresses question (b), using the same model the authors previously applied to question (a) in this paper. These papers are worth reading and I won't attempt to summarize them, but some relevant points:
• The authors argued for the "second lockdown" in the 2nd linked paper on the basis of its projected impacts on mobility, and thus on R, etc.
• The 2nd linked paper was later updated with data from November, showing that their model did quite well at predicting the effect on mobility, R, etc.
• The 1st linked paper (on new strain) approximates Tier 4 as being equivalent to "second lockdown" in its effects
• The 1st linked paper (on new strain) is worth reading in its entirety as it provides some (provisional) quantitative backing to intuitions about the impact of various measures (Tier 4 / Tier 4 + school closures / Tier 4 + school closures + XYZ amount of vaccination)
Replies from: henryaj
comment by henryaj · 2021-01-03T23:55:54.313Z · LW(p) · GW(p)
Not the OP so can't answer for him, but qualitatively the second (November) lockdown was quite different from the first (March) lockdown - much more leeway given on exercising outdoors, workplaces largely stayed open (even if people were working from home). In March, police officers would move people along if they were sitting on a park bench (as that's not exercise); the second time round things were much less strictly enforced. Rules around forming 'bubbles' with other households also didn't exist in March.
Tier 4 is essentially the same as the November lockdown but you can meet one other person outdoors.
Replies from: neel-nanda-1
comment by Neel Nanda (neel-nanda-1) · 2021-01-06T09:18:59.109Z · LW(p) · GW(p)
A really key difference between March and November is that schools were open in November but not March. Though the UK is now in a third lockdown, and it looks like schools won't be re-opening
comment by lsusr · 2020-12-24T20:08:53.938Z · LW(p) · GW(p)
Also, that post points out something that seems important. What could be a better way to motivate people wanting the vaccine, than to show our richest, most famous and most powerful people paying really big bucks to get the shot a few months sooner?
I did not think of this at all until you pointed it out.
comment by WilliamKiely · 2020-12-26T07:37:05.932Z · LW(p) · GW(p)
IMO the title is overly dramatic and seems to claim that the news about the new strain is more significant than I actually think it is, in terms of how much it should cause us to update our views of what infection risk and US COVID-19 deaths will be in 2021.
Replies from: WilliamKiely, WilliamKiely
comment by WilliamKiely · 2020-12-27T00:21:35.811Z · LW(p) · GW(p)
In this post Zvi doesn't try to forecast how many infections/cases/deaths there would be in the US without this new strain (unless I missed it... it is a long post). Yet he really should, because doing so will lead one to realize that the US is likely going to be at or close to herd immunity by ~May-June anyway, so a new transmissible strain that becomes dominant in the US around that same period can't plausibly make as huge of a difference as Zvi seems to be saying in this post.
Good Judgment's median estimate for "How many total cases of COVID-19 in the U.S. will be estimated as of 31 March 2021?" is ~130M currently. And Good Judgment's median estimate for "When will enough doses of FDA-approved COVID-19 vaccine(s) to inoculate 100 million people be distributed in the United States?" is ~May 1st currently. https://goodjudgment.io/covid/dashboard/
Assuming that 20% of vaccines go to people who had already been infected, this would mean that by May approximately 220M people in the US (220M ≈ 140M infected + 0.8 × 100M vaccinated) will be immune, or about 66% of the population. This could easily be higher or lower, but the point is that we're going to be at or close to herd immunity by the time Zvi says this new viral strain would start becoming dominant in the US.
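A small sketch of that arithmetic; the infection and vaccination figures are the Good Judgment medians cited above (extrapolated to May), and the 20% overlap is the stated assumption:

```python
us_population = 330e6
infected_by_may = 140e6      # extrapolation from ~130M estimated cases by March 31
vaccinated_by_may = 100e6    # Good Judgment median for doses distributed by ~May 1
overlap = 0.20               # assumed share of vaccinations going to already-infected people

immune = infected_by_may + (1 - overlap) * vaccinated_by_may   # ~220M
print(immune, immune / us_population)                          # ~220M, ~0.66 of the population
```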
In short, the news would be much worse if this new viral strain had spread to the degree that it has now several months ago. But in reality, I think we'll be at or close to herd immunity already by the time it becomes prominent, so it won't make that much of a difference.
EDIT: I misread Zvi's piece initially and mistakenly thought he wrote that the new strain wouldn't become dominant in the US until May. I now see that he says "Instead of that being the final peak and things only improving after that, we now face a potential fourth wave, likely cresting between March and May, that could be sufficiently powerful to substantially overshoot herd immunity." Taking this view as true instead makes me see the new strain as significantly worse news: specifically, this two-month shift might be sufficient to make an additional ~10-15% of the population get infected/sick before herd immunity is reached. (I still think the post title is overblown, but this is still a significant update for me.)
Replies from: WilliamKiely
comment by WilliamKiely · 2021-09-08T05:03:44.904Z · LW(p) · GW(p)
My 8-months-ago self would be surprised to learn that the US average COVID-19 deaths/day has risen again to 1,300 deaths/day. I don't understand why this happened. Does anyone know? Is it a combination of vaccine and/or natural immunity not lasting? Or is it that there are still a lot of unvaccinated people? Or were my estimates of how many Americans had been infected so far too high?
Replies from: lsusr
comment by lsusr · 2021-09-14T19:43:59.107Z · LW(p) · GW(p)
Don't forget Delta.
Replies from: WilliamKiely
comment by WilliamKiely · 2021-09-25T19:29:41.723Z · LW(p) · GW(p)
Right, not sure how that escaped my mind.
comment by WilliamKiely · 2020-12-26T07:39:54.660Z · LW(p) · GW(p)
Some of my thoughts that lead me to think this are in my comments on this Metaculus question: https://pandemic.metaculus.com/questions/3988/how-many-total-deaths-in-the-us-will-be-directly-attributed-to-covid-19-in-2021/
comment by Owain_Evans · 2020-12-24T20:17:35.854Z · LW(p) · GW(p)
What can countries/states do? Impose hard lockdowns, focus test/trace/isolate resources on the new strain, stop travel, get people wearing N95s, create extra hospitals, vaccinate (using less effective vaccines as well as Pfizer/Moderna), run challenge trials to see how vaccines protect against new strain and against transmission, and ... hope for the best. One source of uncertainty is how much news of a complete collapse of hospitals in some region will impact behavior in regions that haven't collapsed yet. (I fear a "boy who cried wolf" scenario, where people think, "We never needed those temporary hospitals last time").
What can individuals do? If the new strain is not more severe, then the risk for young and healthy people remains low. Presumably staying at home and receiving deliveries still has very low risk of infection. People who might need hospital care for non-Covid reasons should make plans. (If health care collapses, how much bigger is the risk from Covid for young people? You'll probably get priority but standard of care will drop substantially.)
Replies from: waveman, RyanCarey
comment by waveman · 2020-12-24T22:35:12.390Z · LW(p) · GW(p)
then the risk for young and healthy people remains low
Don't confuse risk of death with risk overall. I personally know several young people who suffered months of debilitation, in some cases still feeling sick months later. From some limited studies this appears to be common, and permanent damage appears to be not uncommon, including damage to testes, brain/memory, etc.
At best we do not know the long term effects. Tell someone suffering from shingles (which was eventually found to be a result of prior chicken pox infection) that chicken pox is a "mild" condition.
Replies from: Owain_Evans
comment by Owain_Evans · 2020-12-25T10:32:00.434Z · LW(p) · GW(p)
I stand by my claim. We know the effects 10 months out. If some studies have convinced you otherwise, it would be useful to cite the evidence (maybe in a separate post).
comment by RyanCarey · 2020-12-24T22:13:18.646Z · LW(p) · GW(p)
Yeah, I was thinking something similar. It seems the bottom line is we'll have to stay at home and receive deliveries for most of the next 4-8 months, while vaccines and infections bring the world toward herd immunity. So as individuals, we should make sure we're suitably located and supplied for that scenario.
comment by frcassarino · 2020-12-25T16:24:00.819Z · LW(p) · GW(p)
Question for Zvi:
I might be coping here. What do you make of the fact that the UK does most of the checking for new Covid strains? https://pbs.twimg.com/media/EqCPxtmXcAAUdzt?format=jpg&name=small
Isn't it weird that the new more infectious Covid strain that will take over the world just so happens to originate in the only place where we are checking for Covid strains?
Some possibilities:
- It didn't originate in the UK. It's already widespread in many countries, and the UK just happened to detect it.
- There's actually a bunch of more infectious strains all over the place, but they just haven't been detected. The higher infectiousness has already been impacting the covid case numbers for a while.
Replies from: jorge-velez
comment by Annapurna (jorge-velez) · 2020-12-25T16:49:27.857Z · LW(p) · GW(p)
Some possibilities:
- It didn't originate in the UK. It's already widespread in many countries, and the UK just happened to detect it.
- There's actually a bunch of more infectious strains all over the place, but they just haven't been detected. The higher infectiousness has already been impacting the covid case numbers for a while.
Looking at daily infections in Western countries, they came down significantly after restrictions were reimposed starting in September. Why is it that only in the UK (and the Netherlands + a handful of others) are daily infections rising at an alarming rate despite restrictions still being in place?
Replies from: frcassarino, Venusian
comment by frcassarino · 2020-12-25T19:05:48.611Z · LW(p) · GW(p)
It's a great point. So is your guess that the fact that the new strain originated in the exact country where we do most of the checking for new strains is just a coincidence? And not some form of survivorship bias?
Replies from: jorge-velez
comment by Annapurna (jorge-velez) · 2020-12-25T22:12:54.211Z · LW(p) · GW(p)
There's a possibility that it didn't originate in the UK, but it likely originated either in the UK or one of the other handful of countries that have seen significant spikes in infections despite no loosening of restrictions.
comment by Venusian · 2020-12-25T19:39:25.701Z · LW(p) · GW(p)
I wonder if the sudden increase is not just the result of some holiday or cold weather some time before. What are the chances that a new strain would dramatically increase daily cases in two countries within a few days of each other? Notably, it started to increase in the Netherlands a few days before the UK. If anything this would point to it coming from a third country, yet it would still be odd that the outbreaks progress roughly in the same manner.
comment by Zvi · 2020-12-24T16:51:19.793Z · LW(p) · GW(p)
Note to mods: I would like to modify this version to add a prediction widget for the proposition "The new English Strain is at least 50% more infectious than the currently dominant American strain of Covid-19" and one for "There will be an additional distinct large wave of Covid-19 infections in the United States 2021". But I forgot where the post is that explains how to create such things, and I couldn't find it.
Replies from: habryka4, WilliamKiely
comment by habryka (habryka4) · 2020-12-24T19:13:32.945Z · LW(p) · GW(p)
Done! For future reference, instructions are here [LW · GW].
## Create a question
1. Go to elicit.org/binary and create your question by typing it into the field at the top
2. Click on the question title, and click the copy button next to the title – it looks like this:
3. Paste the URL into your LW post or comment. It'll look like this in the editor:
comment by WilliamKiely · 2020-12-25T00:37:38.863Z · LW(p) · GW(p)
How are these questions being operationalized?
Replies from: WilliamKiely
comment by WilliamKiely · 2020-12-26T07:43:58.987Z · LW(p) · GW(p)
I choose an operationalization for the second question in this comment: https://www.lesswrong.com/posts/CHtwDXy63BsLkQx4n/covid-12-24-we-re-f-ed-it-s-over?commentId=A38t5Ffxbm6GhpXuk [LW(p) · GW(p)]
comment by Rob Bensinger (RobbBB) · 2020-12-28T00:41:22.405Z · LW(p) · GW(p)
Comments from an anonymous source (who I vouch for as a careful thinker) on Dec. 22 and 24:
Hmmm, if Trevor [Bedford] is taking this seriously then I’m updating towards this one maybe mattering
The April mutation scare I found totally unconvincing
I’m still not totally convinced fwiw
[...] There’s some evidence [the new strain] increases viral load, so same vectors, just more likely to get a big enough dose
[...] The last bit of hubbub about new strains turned out to be a nothing burger. And unlike Zvi I think superspreader dynamics are totally fine to explain things
If you look across the whole history of each country over time, there are tons of random unexplained massive rise in cases
Is every one of those a new mutation? Absolutely not
[...] How many random unexplained huge rises have we seen across how many countries?
Most countries have had 1-3 big waves, and they’re only sometimes concurrent with each other, across dozens of major tracked countries
So at least several dozen quasi-independent observations [...] even more so if we use sub-regions within countries
Replies from: PPaul
comment by PPaul · 2020-12-30T00:06:22.776Z · LW(p) · GW(p)
More thoughts from Trevor Bedford. He is more convinced that it is more transmissible.
comment by Rob Bensinger (RobbBB) · 2020-12-28T17:42:25.954Z · LW(p) · GW(p)
Social media comments from David Schneider-Joseph (copied here so they're part of the central discussion, not to endorse them):
David (Dec. 21): A very informative article about the new more transmissible Covid strain. https://unherd.com/2020/12/how-dangerous-is-the-covid-mutation/
Anonymous (Dec. 24): [links to Zvi's post above, arguing the link above isn't wrong but understates the real-world consequences]
David (Dec. 25): I can't say I came away from that post believing that "we're fucked, it's over" (like, beyond what someone might have thought two weeks ago, at least) was a well-argued or reasonable conclusion. The view I held/hold is: "we should be concerned — perhaps very much so — and we might be fucked, but also there's a strong chance not". The "maybe not" doesn't mean "don't be concerned", and he seems to be conflating the two, even while criticizing others for doing exactly that.
There's just a huge amount of uncertainty right now about exactly how contagious this strain is, and he's taking the dead center of the confidence or credible intervals from some very, very early modeling, and possibly not properly accounting for the founder effect along with many other sources of variance over time in number of cases that aren't due to changes in the virus itself. And you can't just say "law of large numbers" (for a complex, dynamic, nonlinear process in which LLN may not apply) or "evidence is evidence" (when there's a ton of noise in our epidemiological data which can make it very difficult to interpret) as a way to dismiss these possibilities.
(Everything else aside, he also made at least one numerically substantial mistake in the extrapolation process: from a chart of daily new cases in the UK, going from ~305/M on Dec 16 to ~500/M on Dec 23, he called that "doubling in one week". Then he extrapolates that, for every arrival of the new strain in the US today, that amounts to 1M infections in 20 weeks. But that's actually a ratio of 1.64x, which extrapolates out to 20k infections in 20 weeks. (Of course the reality is much more complicated than either simplistic extrapolation.))
This is all to say that I think we should be concerned about the new strain, but he's treating some very preliminary analysis as if it's analogous to our state of knowledge about Covid in, say, early February 2020, as opposed to, like, late December 2019.
(Perhaps a tangent, but then again maybe not since this seems to undergird his thinking on this subject: I really don't think it's accurate to say that non-Bayesians are "bonkers". In fact I find it kind of bonkers to say that. One reason is that any attempt to be a Bayesian in the real world runs into huge problems identifying prior probability distributions in any meaningful way which is also inter-subjectively useful. Another, more significant problem with his take on it is that likelihood ratios for "non-academic information", as he puts it, are almost impossible to meaningfully pin down, so it makes them rather thin gruel for an objective discussion, as opposed to confirming pre-existing biases. Not saying such information should be ignored entirely, but nor can they just be plugged into Bayes' Rule in some mechanistic ritual.)
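A quick check of the numerical point in the parenthetical above, using the same per-million case figures quoted from the chart:

```python
cases_dec16, cases_dec23 = 305, 500          # ~daily new cases per million, as quoted
weekly_ratio = cases_dec23 / cases_dec16     # ~1.64, not a full doubling
print(weekly_ratio ** 20)                    # ~2e4: the "20k infections in 20 weeks" figure
print(2 ** 20)                               # ~1e6: what a true weekly doubling would give
```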
comment by Baisius · 2020-12-24T16:33:54.647Z · LW(p) · GW(p)
"It’s clear that [testing] demand greatly exceeds supply."
Is this true? Are people that want tests unable to get them? Or do people just not want to get tested? Or did we tell people early in the pandemic that testing capacity was limited and that they shouldn't get tested unless they've been exposed and people still believe that now? I suspect it's the last.
I've had three tests, all paid for 100% by insurance (I think it would be $64 w/o insurance, IIRC). Turnaround time on the first (early June) was three days, second (Nov) was three days, and the third (a few days ago) was a day and a half. The only thing I had to do each time was tell them that I was exposed at work, which was varyingly true. My brother gets tested every other week through work. Fiancée got tested at CVS and they didn't even ask her that screening question. I haven't had a single experience leading me to believe tests are in short supply.
Also, yes, air ducts can almost certainly spread the virus, and six feet is not a magic number.
Don't tell my employer that. The guy in the office next to me got it (from a meeting at work) and gave it to the girl across from me. There are a total of four offices in my building. I was deemed "not to be a close contact".
The money question to me WRT the UK strain is what it's doing in other countries. Are other countries where the strain has been detected seeing the same rise in cases? Are they doing enough sequencing to know it even if they are?
Replies from: jonluca
comment by jonluca · 2020-12-24T23:22:28.753Z · LW(p) · GW(p)
Anecdotal, but I've lived in a few cities in the United States over the past few months, and testing still varies wildly:
In Los Angeles, during July and August, testing was great. I could get tested whenever I wanted, in a drive through test that never took more than 20 minutes. There were a ton of people getting tested, but the overall operation was efficient, so they were able to process a huge volume of cars (6 lanes, probably around 4 minutes per car) per day. I got tested > 5, < 15 times here.
In New York (Manhattan) things were not as easy. I've been able to do walk-in testing at a lab, but it was a 2-hour wait. I've done this twice, and once left due to the wait. This is what is available to the general public. Note that I also get tested through work, and that's self-service with reports within 24 hours, but this is not available to the public. I've been tested >20 times for work, and only twice on my own.
In Montana, we could only get a COVID test if we were 1) prescribed one by the doctor and 2) scheduled an appointment before hand and 3) had an exposure or "reason" we were being tested.
Anecdotally I'd agree that testing demand greatly exceeds supply. A few key locations have matched their testing supply with the demand, but by and large the demand still exceeds it across the majority of the United States. I don't think the public consciousness still thinks of testing as in the early days (don't get tested unless you show symptoms) - most large metropolitan areas are now used to relatively frequent tests when available, and more rural places simply don't have the same access to testing, so they don't get tested unless they know they've been exposed.
Again, this is anecdotal and not grounded in facts or figures from objective sources. These are just my observations over >20 tests in the past 150 days.
Replies from: henryaj
comment by henryaj · 2020-12-28T13:35:58.007Z · LW(p) · GW(p)
Amazing how it differs by region. Here in the UK, anecdotally tests are pretty easy to come by and turned around rapidly - but are still restricted only to those with canonical COVID symptoms (fever/cough/change in sense of smell or taste).
comment by Zvi · 2022-01-14T14:38:51.544Z · LW(p) · GW(p)
Focusing on the Alpha (here 'English Strain') parts only and looking back, I'm happy with my reasoning and conclusions here. While the 70% prediction did not come to pass and in hindsight my estimate of 70% was overconfident, the reasons it didn't happen were that some of the inputs in my projection were wrong, in ways I reasoned out at the time would (if they were wrong in these ways) prevent the projection from becoming true. And at the time, people weren't making the leap to 'Alpha will take over, and might be a huge issue in some worlds depending on its edge in spreading and how fast we vaccinate' at all.
We also saw with Omicron how, when the variables turn out differently, we do see the thing I was pointing towards, and how people are slow to recognize that it might happen or is going to happen. I do think this had the virtue of advancing the understanding of what was plausibly going to happen. If it overshot a bit in terms of how likely its core predictions were to come true, that's something to improve going forward, but it was very much a 'man in the arena' situation, and much better than my 'don't be confident, so say little or nothing' approach that I shared with most others in Jan/Feb 2020.
To what extent this justifies inclusion in a timeless list is up for grabs, but I think it's important that the next time we notice something like this, we speak up fast and loud (while also striving for good calibration on the chance it happens, and its magnitude)
comment by hamnox · 2020-12-31T06:13:59.688Z · LW(p) · GW(p)
Your post inspired me to research the mutations of SARS-COV-2 a bit myself, so I would have context before coming back to properly engage with your post. I still am very overwhelmed, but thank you for writing it.
comment by Erich_Grunewald · 2020-12-29T20:36:47.610Z · LW(p) · GW(p)
Deaths in Europe continue to run close to those in the United States, suggesting the Europeans are finding cases less often than we are, or have worse medical care or are worse at protecting vulnerable populations.
Note that Europe & the EU both have significantly higher median ages than does the U.S. (~42 & 42.6 versus 38.1).
comment by Owain_Evans · 2020-12-24T20:02:41.361Z · LW(p) · GW(p)
It should be possible to make rough estimates of the chance that the UK strain has reached country X by looking at the spread within the UK (where there's some coverage) and extrapolating based on the volume of travel within the UK and between the UK and country X. If the UK data is too sparse now, it should be possible to do this in a week or two.
comment by Rain · 2020-12-24T16:44:20.013Z · LW(p) · GW(p)
Neat. I work for DLA. Thanks for the update.
comment by MondSemmel · 2022-01-15T17:48:20.878Z · LW(p) · GW(p)
Although Zvi's overall output is fantastic, I don't know which specific posts of his should be called timeless, and this is particularly tricky for these valuable but fast-moving weekly Covid posts. When it comes to judging intellectual progress, however, things are maybe a bit easier?
After skimming the post, a few points I noticed: besides the headline prediction, which did not come to pass, this post also includes lots of themes which have stood the test of time or remained relevant since its publication: e.g. the FDA dragging its feet wrt allowing Covid tests, which is somehow still the case a year later; governments worldwide utterly failing at buying capacity or otherwise incentivizing more production of vaccines or tests; endorsing an analysis that Covid spreads via aerosol transmission; etc.
I do have one outsized nitpick with this essay, however. The headline "We’re F***ed, It’s Over" is alarmist, which is in principle fine given its dire prediction, and might make sense to regular readers of Zvi's blog. But once such a post is shared more widely, sometimes the only thing a reader sees is the headline, and it seems to me like this headline cannot, and will not, be properly understood by those who do not know who Zvi is, and how he reasons. Even on LW, many people will not read the entirety of such a long essay, and might lack the context to understand the headline.
Regarding that context:
full herd immunity overshoot and game over by mid-July
...
This has counterintuitive implications, both for public policy and for individuals. As always, one’s approach to the pandemic must be to either succeed if one can do so at a cost worth paying, or fail gracefully if one cannot succeed. Thus, one could plausibly either make the case for being even more careful in response, or to folding one’s hand entirely. You can raise, or you can fold, but you can’t play passive and call all bets and hope to go to showdown.
This perspective makes sense. In these terms, the original prediction suggests shifting one's strategy towards folding or losing gracefully, rather than fighting a losing battle. But that's because this pandemic was never an x-risk. We could afford to play this game of containing the disease (possibly unfortunately so, as argued in this comment chain [LW · GW]), whether we had a realistic shot at victory or not. But game over, in this context, is indeed merely meant as a game loss, by someone who professionally played card games with plenty of variance, and who knows that folding is an entirely valid strategy in such a situation.
But that's not how I expect most people to interpret that headline? Even now, part of me interprets it as claiming "we're all going to die". And if, for example, a sufficient number of x-risk researchers wrote the same headline, that interpretation might be entirely accurate.
In this perspective, there's only one real game we play, and that game must be won. As one fictional character put it:
There can only be one king upon the chessboard.
There can only be one piece whose value is beyond price.
That piece is not the world, it is the world's peoples...
While survives any remnant of our kind, that piece is yet in play...
And if that piece be lost, the game ends.
To conclude, I understand that these posts are written at a speed premium, and would not complain about a random sentence like this; but as a headline of a >270-karma post, this is rather suboptimal.
comment by romeostevensit · 2021-01-03T18:31:21.852Z · LW(p) · GW(p)
"Under this model we estimated a growth rate of 71.5 per year, corresponding toa doubling time of 3.7 days (95% CrI: 2.4 – 4.9) and a reproduction number of 2.27 (1.84 – 2.73)"
https://www.imperial.ac.uk/media/imperial-college/medicine/mrc-gida/2020-12-31-COVID19-Report-42-Preprint-VOC.pdf
comment by Steven Byrnes (steve2152) · 2020-12-25T12:16:30.009Z · LW(p) · GW(p)
I dunno, it's not so clear to me that we should expect "fourth wave in the US" as opposed to "fourth wave in one or two cities of the US". Think of how it took 12 months to get from Wuhan to where we are today ... I don't think that's all isolation fatigue, I think it's partly that region-to-region spread takes a while. Trevor Bedford offered the mental picture: Wuhan was a spark in November, and 10 weeks later it was a big enough fire to throw off sparks that started fires in Italy, NYC, etc., then 10 weeks later those fires were big enough to throw off sparks to other cities etc., and many rural areas were finally getting their first outbreaks in the most recent couple months. Maybe something like, "R0>1 only because of a small number of unusually infectious people (either due to quirks of biology or behavior / situation), so it's likelier than you would think for a contagion to never start in a particular place, or to fizzle out immediately by chance". Then maybe it takes until late summer for the new strain to really get going in most places, by which time you have vaccines and weather / sunlight mitigating it.
...But maybe NYC and Boston are first in line again. Strong travel connections to England...? :-/
I dunno, I could see it going either way, behaviors (especially travel levels) are different now than last year, as are immunity levels.
Low confidence, I haven't thought too hard about it.
comment by Ben Pace (Benito) · 2020-12-24T20:40:22.637Z · LW(p) · GW(p)
This article says it's worth watching but doesn't want to ring the alarm yet. It notes that the prior on mutations being bad is low, as there's been a lot of them with this virus. (H/T Alyssa Vance.)
Replies from: James_Miller
comment by James_Miller · 2020-12-25T15:06:45.405Z · LW(p) · GW(p)
I did a series of podcasts on COVID with Greg Cochran, and Greg was right early on. Greg has said from the beginning that the risk of a harmful mutation is reasonably high because the virus is new, meaning there are likely lots of potential beneficial mutations (from the virus's viewpoint) that have not yet been found.
https://soundcloud.com/user-519115521
Replies from: Owain_Evans
comment by Owain_Evans · 2020-12-25T16:44:15.169Z · LW(p) · GW(p)
There was a huge number of cases before September around the world. Why didn't we see the new more transmissible variants earlier? (One source could be cross-over from some animals, another is the rare cases of extremely long-lasting Covid infection. Curious if people are doing Bayesian calculations for this.)
Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2020-12-25T18:06:25.268Z · LW(p) · GW(p)
It could be the time lag from when antibody-based plasma therapy (if that makes sense, I'm not even sure that's how it works) started to be used somewhat widely, plus the time it takes for a new variant to spread enough to get noticed.
Replies from: James_Miller
comment by James_Miller · 2020-12-25T23:10:05.898Z · LW(p) · GW(p)
Yes, the more people infected with the virus, and the longer the virus is in people, the more time there is for a successful mutation to arise.
comment by avturchin · 2020-12-24T20:22:34.343Z · LW(p) · GW(p)
In South Africa infections grew almost 10 times in a month. https://www.worldometers.info/coronavirus/country/south-africa/
There is also quick growth in the Czech Republic and the Netherlands. It looks like new strains are already there. Also, what worries me is what happens when these new strains from different places recombine.
Replies from: ChristianKl, yitz, felix-karg
comment by ChristianKl · 2020-12-25T21:57:04.352Z · LW(p) · GW(p)
Also, what worries me is what happens when these new strains from different places recombine.
As far as we know, we don't have evidence that indicates that recombination is likely. It's a virus, not a bacterium that can simply exchange plasmids.
Replies from: avturchin
comment by avturchin · 2020-12-26T11:19:10.726Z · LW(p) · GW(p)
But for the flu virus, reassortment (the more correct word here) happens from time to time, when two viruses infect the same cell and exchange genes.
comment by Yitz (yitz) · 2020-12-25T15:12:23.006Z · LW(p) · GW(p)
I'm curious why this response is downvoted. (I don't have enough knowledge on this topic to judge the quality of responses here)
comment by Felix Karg (felix-karg) · 2020-12-25T17:21:12.236Z · LW(p) · GW(p)
Correct me if I'm wrong, but I was under the impression that recombination is not a thing for RNA-based lifeforms? That, and it would require at least some form of 'pollination', I believe?
Replies from: avturchin
comment by avturchin · 2020-12-25T20:30:23.962Z · LW(p) · GW(p)
I have seen claims that the origin of the coronavirus could be explained via recombination, but I would like to learn more about it.
comment by billmei · 2020-12-30T20:35:52.061Z · LW(p) · GW(p)
In response to "What does 70% more infectious mean?", I found this slide deck, with the relevant part on slide 17, which I will reproduce below (N501Y is the name of the relevant mutation):
For example, under the additive assumption, an area with an R of 0.8 without the new variant would have an R of 1.19 [1.04-1.35] if only N501Y was present
...
For example, under the multiplicative assumption, an area with an R of 0.8 without the new variant would have an R of 1.18 [1.02-1.40] if only N501Y was present
TL;DR: it appears "up to 70% more infectious" is based on observed data, so that if you previously observed an R of 0.8, you should expect to observe a new R of somewhere between 1.0 and 1.4 with the new strain, ceteris paribus.
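To make the additive/multiplicative distinction concrete, here is what the quoted central estimates imply when read back out; this is a rough reading of the numbers above, not a calculation from the slide itself:

```python
r_baseline = 0.8
r_additive, r_multiplicative = 1.19, 1.18   # central estimates quoted above, N501Y alone

print(r_additive - r_baseline)              # ~0.39 added to R under the additive assumption
print(r_multiplicative / r_baseline)        # ~1.48x R under the multiplicative assumption
```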
EDIT: I previously included this bottom section, which MichaelLowe points out doesn't make sense and I also realized is wrong; see comment thread below.
I found the Wikipedia article on the new strain to have the most clear explanation of the calculation, that this is the result of the observed doubling time being reduced from 6.5 days to 6.4 days; I'll quote the relevant bit:
Data seen by NERVTAG included a genomic analysis which showed that the relative prevalence odds of this variant doubled every 6.4 days. With a presumed generational interval of 6.5 days, this resulted in a selection coefficient of ln(2) * 6.5/6.4 ≈ 0.70.
They also found a correlation between higher reproduction rate and detection of lineage B.1.1.7. While there may be other explanations, it is likely that this variant is more transmissible; laboratory studies will clarify this.
Replies from: MichaelLowe
comment by MichaelLowe · 2020-12-30T20:47:25.453Z · LW(p) · GW(p)
Thanks for the explanation. I do not understand the formula however. As I read your explanation, if both strains had the exact same doubling time of 6.5 days, one strain would still be ln(2) *6.5/6.5 = 0.69 more infectious than the other one, so I must be misunderstanding.
Replies from: billmei
comment by billmei · 2020-12-31T02:47:19.572Z · LW(p) · GW(p)
Good catch! I watched the section of the YouTube video linked by the citation on Wikipedia, and the original formula they give is this:
We are trying to solve for the selection coefficient, which I interpret as "how much of an advantage does this strain have over the previous strain".
It is here that I realize I don't know how the Wikipedia editor found the 6.4 number; I couldn't find it anywhere in the citation. The calculation they perform with the log odds comes from the YouTube video, which in the cited segment is actually talking about a different lineage, B.1.177 (this is different from B.1.1.7! Did the editor confuse these two?)
Reading the slide deck more closely, it says:
Logistic growth model indicates VUI grows +71% (95%CI: 67%-75%) faster per generation (6.5 days)
Limitations: Sample frequency is noisy & overdispersed in ways not captured by this model
So it turns out that this log odds calculation is not relevant to how we get this "70%" number; it was actually simply interpolated from the data by performing a logistic regression.
EDIT: I have now edited Wikipedia to remove the original calculation using the log odds.
comment by WilliamKiely · 2020-12-26T07:32:53.157Z · LW(p) · GW(p)
Operationalizing "There will be an additional distinct large wave of Covid-19 infections in the United States 2021" as "The 7-day average of new cases according to Worldometers will decrease by at least 33% from a previous value in 2021 and then later increase to at least 150% of the previous high", I'm forecasting 38%.
(EDIT: Update 12/28: I updated my forecast to 48% after realizing that I had my timing wrong on when the new strain might become dominant in the US. Previously I thought Zvi said something like 'not until May, or maybe June or July', but I now see he actually said "Instead of that being the final peak and things only improving after that, we now face a potential fourth wave, likely cresting between March and May, that could be sufficiently powerful to substantially overshoot herd immunity." If the new wave actually becomes dominant in March (or early April) (instead of May or later, as I mistakenly thought Zvi was saying before) and is as transmissible as it seems, that will probably be soon enough that there will still be enough not-immune people for there to be a significant surge in cases to cause the above forecasting question to resolve positively.)
This operationalization isn't that great because changes in numbers of tests could affect it a lot, but at least it's concrete.
Alternatively we could operationalize it in terms of the midpoint newly infected estimate at https://covid19-projections.com/ . Doing this and using the same 33% and 150% as above, I'd forecast 32%.
(For the Elicit question in the post, I went with the first operationalization and said 38% (EDIT 12/28: Now 48%.))
Replies from: WilliamKiely
comment by WilliamKiely · 2022-01-06T18:34:08.968Z · LW(p) · GW(p)
My operationalization of this question that I defined above on 12/26/20:
"There will be an additional distinct large wave of Covid-19 infections in the United States in 2021" as "The 7-day average of new cases according to Worldometers will decrease by at least 33% from a previous value in 2021 and then later increase to at least 150% of the previous high."
The peak 7-day average of US cases on Worldometers was [256,191 cases/day](https://www.worldometers.info/coronavirus/country/us/) on January 11, 2021.
For this question to resolve positively per my operationalization, the 7-day average of US cases on Worldometers would need to exceed 150% of that number, i.e. 384,286.5 cases/day.
That in fact happened: On 12/31/2021, Worldometers reports 397,409 cases/day, so the question resolves positively.
Had 2021 ended one day sooner, the question would have resolved negatively, as the 7-day average of cases on 12/30/2021 was 353,932 cases/day, still below the necessary 384,286.5 cases/day threshold. Wow!
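As a quick sanity check on the arithmetic (a throwaway Python sketch using only the numbers quoted above):

previous_peak = 256_191            # 7-day average on January 11, 2021
threshold = 1.5 * previous_peak    # 150% of the previous high
print(threshold)                   # 384286.5 cases/day

print(397_409 > threshold)         # True  -> 12/31/2021 clears it, question resolves positively
print(353_932 > threshold)         # False -> 12/30/2021 was still below the threshold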
Replies from: WilliamKiely
comment by WilliamKiely · 2022-01-06T18:36:19.636Z · LW(p) · GW(p)
My final forecast was 25%, so I get the worst Brier Score of anyone. I also clearly failed to anticipate new variants. The fact that Omicron came so close to the end of 2021 doesn't save my forecast from the fact that it negligently ignored the possibility of variants.
And to be fair to everyone else, they probably didn't have my specific operationalization in mind, which clearly could matter a lot to the resolution of the question, so I'll let others judge their own forecasts.
comment by Ulisse mini (ulisse-mini) · 2020-12-25T19:06:11.838Z · LW(p) · GW(p)
In regard to priorities between young frontline workers and the at-risk elderly. I hope they're optimizing for saving life-years, and not lives (ie. if a healthy 20yo has 60yrs ahead of them, and a healthy 70yo has 10yrs ahead of them, saving the 20yo saves 6x as many life-years)
Other than that interesting post, I'll be keeping an eye on that new strain.
comment by Baisius · 2020-12-24T17:38:27.512Z · LW(p) · GW(p)
I downloaded the dataset from OurWorldinData and estimated the % of cases coming from the UK strain based on the GSAID chart (Note: It's unclear what exactly they mean by "Week 47", so the percentages might be off by up to 7 days, but I did my best.) If there is better data on this, (In particular, an updated GSAID dataset) please let me know, I couldn't find any. I would be happy to redo the below analysis with data past 11/16.
On the below charts, the top chart is total Covid cases. The bottom chart is total Covid cases broken down by strain. I started this by saying I thought you were overreacting, but then I deleted that sentence, because now I'm not sure. One thing you will notice that is lost in the above is that the emergence of the new strain was immediately followed by a large reduction in total cases. It was already at 10% of sequenced strains by ~11/16. This gives me some hope that things are not as dire as maybe they appear. Without the data from 11/17 onward, it seems really hard to say what's happening in the UK with respect to the new strain. But I'm finding several sources now that are saying current infections in the UK are ~50% new strain. So that would be consistent with the possibility that lockdowns leveled out the old strain infections at ~15000 per day and the rise over the last few weeks is all attributable to the new strain. But I think ultimately the data I have is inconclusive.
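For what it's worth, the decomposition described here is just multiplying total cases by the sequenced share of each strain; a minimal Python sketch with placeholder numbers (not the actual OurWorldInData or GISAID figures):

totals = [22000, 18000, 15500, 16000, 19000, 25000]       # hypothetical UK daily cases
new_strain_share = [0.02, 0.04, 0.06, 0.10, 0.16, 0.25]   # hypothetical share of sequenced samples

old_strain = [t * (1 - s) for t, s in zip(totals, new_strain_share)]
new_strain = [t * s for t, s in zip(totals, new_strain_share)]

for day, (old, new) in enumerate(zip(old_strain, new_strain)):
    print(f"day {day}: old strain ~ {old:,.0f}, new strain ~ {new:,.0f}")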
comment by wlw · 2020-12-29T03:03:43.174Z · LW(p) · GW(p)
Can you explain what you mean by this in regards to Los Angeles?
"This seems to be what you get when you shut everything down for months and months on end"
Restrictions are pretty strong at the moment, but as far as I can tell from looking at LA county data, the restrictions that went into effect as the end of November approached followed a trend of hospitalizations rising consistently from the end of October. And given the day-to-day essential freedom of activity I enjoyed from May through then, I'm not sure how I'd justify characterizing the county as everything-shut-down unless I shrank the definition of "everything." I didn't actually go to the pool or beach parties, but I got the invites.
I appreciate a good deal of the rest of the analysis, just puzzled by this bit. Is it a statement of how civil direction feels, or is it about specific limitations (perhaps including some I don't much notice)?
comment by WilliamKiely · 2020-12-25T15:11:41.775Z · LW(p) · GW(p)
FYI, I'm not actually forecasting 50% on the two Elicit questions at the end of the post. Tapping on the distributions caused me to unintentionally make forecasts on them. I was able to modify the forecasts, but saw no way to remove them, so just set them to 50% so as to hopefully mislead others as little as possible. (While I'd like to actually make forecasts on these questions, I think how they are operationalized matters a lot and yet I did not see any operationalization provided for them.)
Replies from: Zolmeister
comment by Zolmeister · 2020-12-25T17:24:11.065Z · LW(p) · GW(p)
Tap again directly on your prediction to remove it.
Replies from: WilliamKiely
comment by WilliamKiely · 2020-12-26T07:14:21.217Z · LW(p) · GW(p)
Thanks! On mobile I had to zoom in to reliably tap directly on the bar, which I didn't try originally.
comment by Annapurna (jorge-velez) · 2020-12-25T14:57:41.007Z · LW(p) · GW(p)
The first time, I made the mistake of not thinking hard enough early enough, or taking enough action. I also didn’t think through the implications, and didn’t do things like buying put options, even though it was obvious. This time, I want to not make those same mistakes. Let’s figure out what actually happens, then act upon it.
I acted in March, and I want to act again. As a former money manager, I know exactly how I would hedge this risk. My current issue is that I am not fully convinced that this issue will lead to weakness in capital markets.
Let's say we use the US stock market as the hedging tool (an S&P 500 ETF like SPY or VOO). What would need to happen for it to fall a considerable amount (10%+)?
Additional restrictions across the board for a considerable amount of time without appropriate stimulus.
I don't have the confidence to believe that the statement above will happen. We are about to enter an era of a democratic president, democratic majority in the house of representatives, and a potential tied senate. I am confident that more stimulus will pass if needed.
I am not sure how politically viable it is to enact additional restrictions in Europe / North America. I am sure some mayors, governors, heads of state will enact additional restrictions, but I doubt it will be a coordinated effort. Furthermore, I am also not confident the populace will follow said restrictions.
So at the moment, I am convinced that this is serious, that it will probably cause significantly more loss of life in the next 6 months, but it won't affect capital markets the way COVID-19 did back in March.
On a personal level, we are increasing our protections towards our elders. Here's to hoping they get vaccinated by the end of February.
comment by henryaj · 2020-12-24T21:14:01.405Z · LW(p) · GW(p)
Meta: anyone else getting a bunch of broken images in this post?
Replies from: habryka4, Beckeck
comment by habryka (habryka4) · 2020-12-24T21:23:14.985Z · LW(p) · GW(p)
I am getting it in an incognito window. They are hosted on Gdrive, so that's probably the reason. I will move them over to our in-house CDN.
Replies from: habryka4
comment by habryka (habryka4) · 2020-12-24T21:43:55.566Z · LW(p) · GW(p)
Ok, that took me 20 minutes, but they should now all be on a better CDN. Please let me know if anyone still runs into problems with the images not loading!
comment by Beckeck · 2020-12-25T02:51:43.076Z · LW(p) · GW(p)
Yes, but not on the blog https://thezvi.wordpress.com/
Replies from: habryka4
comment by habryka (habryka4) · 2020-12-25T04:45:00.726Z · LW(p) · GW(p)
Huh, very weird. If you have a screenshot of broken images you can copy-paste here, would be useful. I tried to fix all of them, so would be surprised if they are broken for anyone.
comment by sudoLife · 2022-01-07T16:46:23.064Z · LW(p) · GW(p)
The measles booster comes a year later. Multiple sources confirm that there is no reason to expect a six-months-later second dose to be any less effective a booster.
About this, strange stuff seems to be happening in Lithuania (that's Europe). After around 6 months, you get an invitation for a booster. Previously, you could just test for antibodies and, if they were high enough, postpone your booster. This seemed reasonable.
Now they are simply canceling green passes (this thing lets you in malls and other non-essential amenities etc etc) for those who are due to get their third shot. This leads many people to justly question their intentions. The trust is at an all-time low here.
There's also a very curious document which supposedly states that the EU is promising to buy enough vaccines to jab everyone around 8 times (disclaimer: I have not read the document deeply enough, so I'm looking for anybody adept at legal language).
comment by tlhonmey · 2021-01-08T17:38:42.088Z · LW(p) · GW(p)
The big problem I see is that so many of the sources of information about what's going on have their own preferred solutions, and their own side-agendas that they're taking advantage of the crisis to push.
And that wouldn't be an inherent problem except that a lot of them have built up quite a history of telling any lie that seems convenient for getting people to do as they're told.
And then they wonder why even their more reasonable demands are met with skepticism and pushback.
Here where I live the lockdown rules are obviously nonsensical and arbitrary and designed to hurt certain demographics that have annoyed the governor in the past. Meanwhile they just as obviously won't add much to stopping the spread beyond what people are already doing on their own. Restaurants are only allowed to do takeout orders. Stores have to limit themselves to a small portion of capacity. But we're keeping the schools open... Attendance is mandatory per the truancy laws. And the teachers at the school aren't allowed to ask or talk about potential covid cases because that would be a HIPAA violation...
Even if you assume the people in charge aren't malicious, in a lot of ways they're totally incompetent. And on top of that they're dishonest, so even if what they suggest sounds like a good idea it takes a lot of effort to determine if they're actually basing it on the truth or not.
Yup. We're doomed. Either the virus will get us, or the panicking masses will submit to totalitarianism out of fear. Doesn't matter which. We're doomed.
comment by Craig Fratrik (craig-fratrik) · 2020-12-29T16:51:36.258Z · LW(p) · GW(p)
You highlight the growth in UK cases, so I tried to make a rudimentary way of tracking such growth in other, affected countries.
https://cov-lineages.org/global_report.html - https://archive.is/Tq5cb - Keeps track of which countries have tested positive. ( https://twitter.com/AineToole posts when new reports come out)
Findings
• Ireland seems to be on a steeper path than UK, although they are starting from a much better position.
• Israel is as steep, but predates the UK, so presumably from other reasons.
• The new variant doesn't seem to show up in the others yet.
comment by rew31 · 2020-12-27T13:41:57.000Z · LW(p) · GW(p)
I think at least some of the reason for the UK's vertical line is increased testing capacity (up by about 60% since 1 Dec). If you look at cases per test comparing 1 December to nowish the number is up about 40% (I picked a couple of points on the 7 day average). Which is quite different to the doubling being presented raw.
https://coronavirus.data.gov.uk/
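The adjustment described here is essentially comparing positivity (cases per test) rather than raw cases; a small Python sketch with made-up test counts chosen to match the quoted ~60% testing increase and ~40% cases-per-test increase:

tests_dec1, cases_dec1 = 400_000, 16_000   # hypothetical 7-day averages around 1 December
tests_now,  cases_now  = 640_000, 36_000   # testing up ~60%, raw cases more than doubled

print(f"raw case growth: {cases_now / cases_dec1:.0%} of the 1 Dec level")            # 225%
print(f"cases per test:  {(cases_now / tests_now) / (cases_dec1 / tests_dec1):.0%}")  # ~141%, i.e. up ~40%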
comment by TheMajor · 2020-12-26T13:44:20.056Z · LW(p) · GW(p)
My father sent me this video (24 min) that makes the case for all of this being mostly a nothingburger. Or, to be more precise, he says he has only low confidence instead of moderate confidence that the new strain is substantially more infectious, which therefore means don’t be concerned. Which is odd, since even low confidence in something this impactful should be a big deal! It points to the whole ‘nothing’s real until it is proven or at least until it is the default outcome’ philosophy that many people effectively use.
I think this is a great video, it explained a lot of things very clearly. I'm not a biologist/epidemiologist/etc., and this video was very clear and helpful. In particular the strong prior "a handful of mutations typically does not lead to massive changes in reproduction rate" is a valuable insight that makes a lot of sense.
That being said, the main arguments against this new strain variant being a large risk seem to be:
• The prior mentioned above.
• The fact that current estimates of increased transmission rates are based on PCR testing, which does not identify variants.
• The possibility of alternative explanations for the increase in nationwide infections in the UK, which have not been sufficiently ruled out (in particular superspreaders).
• I think he is claiming that the NERVTAG meeting minutes are drawing a causal link between the lower ct value of this variant on PCR tests and its increased transmissibility, and that this is an uncertain inference to draw.
However, personally I think the strongest case for the increased transmissibility of this new variant comes not from indirect evidence as presented above, but from the direct observation of exponential growth in the relative number of cases over multiple weeks/months. See for example the ECDC threat assessment brief or the PHE technical briefing. These seem to strongly imply that, while being agnostic about the mechanism, this new variant is spreading very rapidly. So all things considered the linked video makes me update only very weakly towards a lower probability of this new variant being massively transmissible - a good explanation for the growth shown in both reports is still missing if it is not inherently more transmissible.
comment by Calley Wang (calley-wang) · 2020-12-25T01:47:43.667Z · LW(p) · GW(p)
I don't see much confusion about why cases are growing in England despite being under lockdown. I think it's the same reason why cases are growing in Southern California despite being under lockdown. People are tired of restrictions, enforcement and public messaging is a total joke, etc. and all of this is happening during Christmas time.
Replies from: jorge-velez, TAG
comment by Annapurna (jorge-velez) · 2020-12-26T02:05:30.129Z · LW(p) · GW(p)
Can you back this statement with data?
comment by TAG · 2020-12-26T00:20:37.035Z · LW(p) · GW(p)
So the mutated strain is irrelevant?
comment by rockthecasbah · 2020-12-25T00:06:13.087Z · LW(p) · GW(p)
…illustrate that slowing things down is all that’s being aimed at. Which is good, because it’s too late anyway. There would not be any drivers to test if this was a real attempt at containment.
If the estimate of 65% more infectious is correct: The strain doubles every week under conditions where other strains are stable.
I disagree. It's popular to say "mass testing isn't as good as shutting down the economy". But there are three problems with that argument.
1. We don't have the policy levers (state capacity or political will) to shut down the economy more than currently.
2. Shutting down the economy further would cause more costs than gains.
3. The evidence from Slovakia indicates that mass testing does work - https://www.medrxiv.org/content/medrxiv/early/2020/12/04/2020.12.02.20240648.full.pdf
We need more experiments and new policies. Not to stay stuck in an endless lockdown-no-lockdown debate.
Replies from: rockthecasbah
comment by rockthecasbah · 2020-12-25T00:07:56.821Z · LW(p) · GW(p)
Misunderstood that the drivers are leaving England for the rest of Europe. The statements make sense now.
comment by again72al · 2020-12-25T04:12:41.775Z · LW(p) · GW(p)
For the vaccine part, the amino acids sequence of the new variation does not change too much for the vaccine to be ineffective (according to the professional), so it's not over "yet".
Replies from: RedMan
comment by RedMan · 2020-12-29T10:59:15.282Z · LW(p) · GW(p)
Not sure why the down votes on this one. One of the arguments made at the outset was that a possible vaccine would, for technical reasons specific to coronaviruses generally, cover likely mutants. I don't think the new strain changes anything, but it might politically justify continuing restrictions
comment by Juan Cambeiro (juan-cambeiro) · 2020-12-28T15:14:25.942Z · LW(p) · GW(p)
The forecasting community is very concerned. The community prediction for a new Metaculus question (https://pandemic.metaculus.com/questions/6031/more-transmissible-variant-to-infect-10m/) on whether a single variant that is at least 30% more transmissible than preexisting variants infects 10M worldwide before mid-2021 is currently 87%:
For what it's worth, I personally have a fairly solid forecasting track record of my own, and my predictions using info available up until yesterday are as follows (see more here https://twitter.com/juan_cambeiro/status/1343403346837860352):
-80% chance this specific new strain is >30% more transmissible
-90% chance *any* new strain before mid-2021 is >30% more transmissible
-65% chance this specific new strain is >50% more transmissible
-71% chance *any* new strain before mid-2021 is >50% more transmissible
|
### Show Posts
### Messages - lilywq
1
##### Chapter 1 / How to solve the question on section 1.4 of the text book Page42
« on: October 01, 2020, 09:02:38 PM »
On page 42 of the textbook, here is the question:
Find the limit at ∞ of the given function, or explain why it does not exist.
24. h(z) = Arg(z), z≠0
I wonder, does this limit exist? And if it does not, why not?
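One way to see the answer: the limit does not exist, because the value of $\operatorname{Arg}(z)$ as $|z|\to\infty$ depends on the direction of approach. For example,
\begin{align*}
\lim_{t\to+\infty}\operatorname{Arg}(t) = 0, \qquad \lim_{t\to+\infty}\operatorname{Arg}(it) = \frac{\pi}{2},
\end{align*}
so there is no single value that $\operatorname{Arg}(z)$ approaches at $\infty$.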
2
##### Quiz-4 / Quiz-4 TUT0102
« on: October 18, 2019, 09:22:41 PM »
\begin{align*}
y''+y &= 3\sin(2t)+t\cos(2t)
\end{align*}
First consider the homogeneous differential equation to find the complementary solution:
\begin{align*}
y''+y &= 0
\end{align*}
Assume $y=e^{rt}$ is a solution of the equation; then
\begin{align*}
r^2+1&=0\\
r^2&=-1\\
r &=\pm i
\end{align*}
Therefore the roots are $r=\pm i$, and the complementary solution of the given differential equation is
\begin{align*}
y_c(t)&=c_{1}\cos(t)+c_{2}\sin(t)
\end{align*}
The particular solution for the given differential equation is of the following form:
\begin{align*}
y_{p} &= (At+B)\cos(2t)+(Ct+D)\sin(2t)
\end{align*}
Differentiate $y_{p}$ with respect to $t$ as follows:
\begin{align*}
y'_{p} &= -2(At+B)\sin(2t)+A\cos(2t)+2(Ct+D)\cos(2t)+C\sin(2t)\\
y'_{p}&=(-2At-2B+C)\sin(2t)+(2Ct+2D+A)\cos(2t)\\
y''_{p}&=(-4At-4B+4C)\cos(2t)-(4Ct+4D+4A)\sin(2t)
\end{align*}
Substitute $y_{p}$ and $y''_{p}$ into the equation $y''+y = 3\sin(2t)+t\cos(2t)$:
\begin{align*}
(-4At-4B+4C)\cos(2t)-(4Ct+4D+4A)\sin(2t)+(At+B)\cos(2t)+(Ct+D)\sin(2t) &= 3\sin(2t)+t\cos(2t)\\
-3At\cos(2t)+(-3B+4C)\cos(2t)-3Ct\sin(2t)+(-3D-4A)\sin(2t) &= 3\sin(2t)+t\cos(2t)
\end{align*}
Compare the coefficients of $t\cos(2t)$ on both sides:
\begin{align*}
-3A&=1\\
A&=-\frac{1}{3}
\end{align*}
Compare the coefficients of $\sin(2t)$ on both sides:
\begin{align*}
-3D-4A&=3\\
-3D-4\left(-\frac{1}{3}\right)&=3\\
D&=-\frac{5}{9}
\end{align*}
Compare the coefficients of $t\sin(2t)$ on both sides:
\begin{align*}
-3C&= 0\\
C&=0
\end{align*}
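To finish the comparison (the step the solution above stops short of): matching the coefficients of $\cos(2t)$ on both sides gives $-3B+4C=0$, so with $C=0$ we get $B=0$. The particular and general solutions are therefore
\begin{align*}
y_p(t) &= -\frac{1}{3}t\cos(2t)-\frac{5}{9}\sin(2t),\\
y(t) &= c_1\cos(t)+c_2\sin(t)-\frac{1}{3}t\cos(2t)-\frac{5}{9}\sin(2t).
\end{align*}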
3
##### Quiz-3 / TUT0102 Quiz3
« on: October 11, 2019, 03:02:25 PM »
\begin{align*}
y^"+8y^{'}-9y=0 && {y(1)=1,y'(1)=0}\\
r^2+8r-9&=0\\
(r+9)(r-1)&=0\\
r=-9 \text{ or } r=1\\
y&=c_{1}e^{-9t}+c_{2}e^t\\
y(1)&=c_1e^{-9(1)}+c_{2}e^{1}\\
1&=c_{1}e^{-9}+c_{2}e\\
\text{Differentiate y with respect to t, we get}\\
y^{'}&= -9c_{1}e^{-9t}+c_{2}e^{t}\\
y^{'}(1)&= -9c_{1}e^{-9(1)}+c_{2}e^{1}\\
0 &= -9c_{1}e^{-9}+c_{2}e\\
9c_{1}e^{-9}&=c_{2}e\\
c_{1}&=\frac{c_{2}e^{10}}{9}\\
\text{Substitute } c_{1}=\frac{c_{2}e^{10}}{9} \text{ in } 1=c_{1}e^{-9}+c_{2}e\\
1&=(\frac{c_{2}e^{10}}{9})e^{-9}+c_{2}e\\
1&=\frac{10}{9}c_{2}e\\
c_{2}&=\frac{9}{10e}\\
c_{1}&=\frac{1}{10}e^9\\
\text{Therefore, the general solution of the initial value problem(1) is}\\
y=\frac{1}{10}e^{9(1-t)}+\frac{9}{10}e^{t-1}
\end{align*}
4
##### Quiz-2 / Quiz-2 / TUT0102
« on: October 04, 2019, 04:44:01 PM »
\begin{align*}
x^2y^3 + x(1+y^2)y' &= 0, && \mu = \frac{1}{xy^3}\\
\frac{1}{xy^3}\left(x^2y^3 + x(1+y^2)y'\right) &= 0 \\
x+y^{-3}(1+y^2)y' &= 0\\
M &= x, \quad N=y^{-3}(1+y^2)\\
M_y(x,y) &= 0=N_x(x,y)\\
\psi_x(x,y) &= M(x,y)=x, \quad \psi_y(x,y)=N(x,y)=y^{-3}(1+y^2)\\
\psi &= \frac{x^2}{2}+h(y)\\
\psi_y &= h'(y)=y^{-3}(1+y^2)=y^{-3}+y^{-1}\\
h(y) &= \ln|y|-\frac{1}{2y^2}\\
\psi &= \frac{x^2}{2}+\ln|y|-\frac{1}{2y^2}=c\\
x^2+2\ln|y|-y^{-2} &= C
\end{align*}
|
# pll_trainer: helps you practice PLL's
Discussion in 'Software Area' started by badmephisto, Dec 13, 2007.
837
5
Aug 29, 2007
This program basically randomly chooses a PLL, and then it has a timer inside it that you use to record times for every PLL. It then tracks the stats, like average and standard deviation for each PLL, and shows them to you in (what I think is) a nice format. This can help you see what PLL's you need to work on, or get new algorithms for, and you can also see how you are progressing in speed, which I think is nice. You can also save/load sessions, and create reports of your times if you wish to share them with others.
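The actual program is Visual Basic and its source isn't shown here, but the stats it describes — a running average and standard deviation per PLL case — can be sketched in a few lines of Python (the case names and times below are made up):

import statistics
from collections import defaultdict

times = defaultdict(list)   # PLL case name -> list of recorded times in seconds

def record(case, seconds):
    times[case].append(seconds)

def report():
    for case, ts in sorted(times.items()):
        avg = statistics.mean(ts)
        std = statistics.stdev(ts) if len(ts) > 1 else 0.0
        print(f"{case}: avg {avg:.2f}, std {std:.2f}, num {len(ts)}, best {min(ts):.2f}")

record("T", 1.92)
record("T", 2.05)
record("Y", 2.69)
report()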
I've been using a simpler version for a while now, but i decided to make it a little more user-friendly and just release it to the public, so hopefully someone will find it useful.
It is made in Visual Basic; Yes i know the language is crap but it is so simple to make quick and dirty programs in it that I couldn't resist. Therefore, this is only guaranteed to run on XP or Vista, but you may be able to run it under Linux with Wine.
---------
edit: To run under Linux with Wine: (thanks to tegalogic)
Linux/Wine: Download MSVBVM60.DLL (the Visual Basic DLL) and put it in the ~/.wine/drive_c/windows/system32 directory, then using the terminal, navigate to where you extracted the trainer, then type
wine PLL_Trainer.exe
--------
I'd appreciate any comments you guys have about it, if you decide to try it.
Screenshot: http://www.cdf.utoronto.ca/~g6karpat/pll_trainer/scr1.jpg
edit: fixed the bug with overflow.
edit2: now tracks your records for each PLL as well
edit3: released 1.1: you now have a chance to have the program generate the PLL's with correct probabilities, i.e. the chance of a PLL coming up in the program is equal to the chance you have of it coming up in a real-world solve.
edit4: released 1.2: now you can ENABLE/DISABLE PLLs, so if you dont know some PLL you can just disable it by clicking on its picture and it will not be generated for you. This was a hotly requested feature
Last edited: May 25, 2008
Luke8 likes this.
2. ### FUGuest
Simple but should be pretty useful. I was wondering when you time a PLL, do you start with the cube in your hands?
Edit: I tried the program out. How the heck do you recognise the G's by the diagrams?
Last edited by a moderator: Dec 14, 2007
3. ### Lt-UnReaLMember
Dec 6, 2006
Rochester, NY
WCA:
2008CLAY01
Pretty cool program. However I popped my cube one time, so the timer was going in the 40's while I fixed my cube. So I stopped the timer and it said "runtime error overflow 6" and crashed.
EDIT: Just did like 20 PLLs and it crashed, it gave an error but I pressed spacebar to start the timer and didn't see...I didn't save either. So, hopefully you can find these errors.
Last edited: Dec 14, 2007
4. ### ToddMember
227
0
Jul 12, 2007
Mentally rotate the image to look like how you execute it?
837
5
Aug 29, 2007
1: the program simply loads all PLL's from the directory. To rotate them as you want, just use Windows Image viewer or something to rotate the actual picture, and it will appear like that in the program
2: Yes I am aware of the overflow issue, i just got it myself. I am internally tracking times in milliseconds, and i was using integers to compute averages. I am now using LONG's, so hopefully that's fixed.
3: You start with the cube in your hands, naturally. It takes too much time to pick it up, and it's way too much work in my opinion
So, issues were fixed, re-download the fixed version, should be ok now
edit: and I'm sorry you lost your session; when it happened to me I wasn't pleased either
Last edited: Dec 14, 2007
6. ### Lucas GarronSuper-Duper ModeratorStaff Member
Auto-save the current session every 12 attempts or so?
And inbetween, write to a delta/recovery log?
By the way, I'll try this some.
I've timed my PLLs before (and CCT is nice for this, with a stackmat connection), but it seems basically good.
I might think of a lot of things. For now:
• Can you allow a menu option to set the scale to xx.yy instead of the max?
7. ### Let1HangMember
8
0
Dec 13, 2007
Thank you for this nice little app. Great timing too... I've got just 2 more PLLs to learn. This will definitely help me lower my times and recognition.
837
5
Aug 29, 2007
I'm not sure i understand what you mean... you want to be able to set that bar to be anything? Right now it shows average time, not max time.
I made a few modifications to the program - it now tracks your records for each PLL, which motivates you much more to do better, because you feel like you HAVE to break the records. Btw, does anyone want to share times?
Corners_Three_Cycle_Clockwise_(A1):_____avg: 2.75, std: 0.65, num: 3
Corners_Three_Cycle_Anti-Clockwise_(A2):avg: 2.64, std: 0.49, num: 3
Parallel_Corners_Swap_(E):______________avg: 3.04, std: 0.12, num: 4
Adjacent_Edges_Swap_(Z):________________avg: 3.34, std: 0.59, num: 8
Opposite_Edges_Swap_(H):________________avg: 2.72, std: 0.30, num: 9
Edges_Three_Cycle_Anti-Clockwise_(U1):__avg: 1.60, std: 0.33, num: 5
Edges_Three_Cycle_Clockwise_(U2):_______avg: 2.11, std: 0.31, num: 7
Push_Push_(J1):_________________________avg: 2.96, std: 0.39, num: 9
Push_Push_Upside_Down_(J2):_____________avg: 2.21, std: 0.35, num: 4
T_perm_(T):_____________________________avg: 1.92, std: 0.16, num: 7
Lucky_7_(R1):__________________________avg: 2.68, std: 0.30, num: 10
Lucky_7_Upside-Down_(R2):______________avg: 3.70, std: 0.28, num: 10
Parallell_Lines_(F):___________________avg: 3.49, std: 0.36, num: 7
Edges+Corners_Three_Cycle_(G1):________avg: 3.04, std: 0.11, num: 6
Edges+Corners_Three_Cycle_(G2):________avg: 3.04, std: 0.34, num: 6
Edges+Corners_Three_Cycle_(G3):________avg: 2.65, std: 0.21, num: 7
Edges+Corners_Three_Cycle_(G4):________avg: 2.38, std: 0.19, num: 6
V_Perm_(V):____________________________avg: 2.89, std: 0.33, num: 10
N_Perm_(N1):___________________________avg: 4.35, std: 0.51, num: 9
N_Perm_(N2):___________________________avg: 3.65, std: 0.21, num: 7
Y_Perm_(Y):____________________________avg: 2.69, std: 0.38, num: 6
Total Average: 2.99
I'm desperately in need of a better N perm
9. ### Lt-UnReaLMember
Dec 6, 2006
Rochester, NY
WCA:
2008CLAY01
Nice program now. This will help me get sub 3 sec for PLL.(or something like that if I am not sub 3 already)
Last edited: Dec 19, 2007
10. ### PedroMember
Mar 17, 2006
Uberlandia, MG - Brazil
WCA:
2007GUIM01
PedroSG
it works fine for me in Vista
nice program, man
I averaged 2.48 for all PLLs, but your program seems to like E and V perms a lot
Corners_Three_Cycle_Clockwise_(A1):_____avg: 2,40, std: 0,44, num: 8
Corners_Three_Cycle_Anti-Clockwise_(A2):avg: 1,70, std: 0,25, num: 13
Parallel_Corners_Swap_(E):______________avg: 3,14, std: 0,54, num: 12
Adjacent_Edges_Swap_(Z):________________avg: 2,40, std: 0,51, num: 10
Opposite_Edges_Swap_(H):________________avg: 1,75, std: 0,35, num: 10
Edges_Three_Cycle_Anti-Clockwise_(U1):__avg: 1,54, std: 0,29, num: 5
Edges_Three_Cycle_Clockwise_(U2):_______avg: 1,65, std: 0,35, num: 5
Push_Push_(J1):_________________________avg: 1,94, std: 0,32, num: 15
Push_Push_Upside_Down_(J2):_____________avg: 1,88, std: 0,26, num: 12
T_perm_(T):_____________________________avg: 1,90, std: 0,49, num: 6
Lucky_7_(R1):__________________________avg: 2,29, std: 0,30, num: 12
Lucky_7_Upside-Down_(R2):______________avg: 2,36, std: 0,27, num: 14
Parallell_Lines_(F):___________________avg: 3,16, std: 0,68, num: 5
Edges+Corners_Three_Cycle_(G1):________avg: 2,87, std: 0,47, num: 4
Edges+Corners_Three_Cycle_(G2):________avg: 2,47, std: 0,40, num: 11
Edges+Corners_Three_Cycle_(G3):________avg: 2,31, std: 0,37, num: 9
Edges+Corners_Three_Cycle_(G4):________avg: 2,29, std: 0,27, num: 11
V_Perm_(V):____________________________avg: 2,43, std: 0,35, num: 16
N_Perm_(N1):___________________________avg: 3,72, std: 0,40, num: 10
N_Perm_(N2):___________________________avg: 3,22, std: 0,40, num: 6
Y_Perm_(Y):____________________________avg: 2,28, std: 0,44, num: 8
Total Average: 2,48
11. ### van21691Member
55
0
May 25, 2007
Anaheim, CA
WCA:
2007RODR02
any other dl link with .zip instead of .rar?
837
5
Aug 29, 2007
hmm, good point. ill see what i can do
13. ### van21691Member
55
0
May 25, 2007
Anaheim, CA
WCA:
2007RODR02
837
5
Aug 29, 2007
ok you can now download it as a .zip as well here:
http://www.cdf.utoronto.ca/~g6karpat/pll_trainer/Pll_Trainer.zip
also,
release 1.1: you now have a chance to have the program generate the PLL's with correct probabilities, i.e. the chance of a PLL coming up in the program is equal to the chance you have of it coming up in a real-world solve.
edit: oh and the record system was introduced, so the app tells you how well you are doing w.r.t. your best solve for that pll yet. I find this VERY useful, if you still have 1.0 you should really get 1.1. Also bug fixes
Last edited: Dec 25, 2007
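"Generate the PLLs with correct probabilities" presumably comes down to weighted random selection, since in a real solve some cases turn up more often than others. A minimal Python sketch (the weights here are placeholders, not the program's actual values), which also covers the enable/disable feature mentioned in the first post:

import random

pll_weights = {          # hypothetical relative weights per case
    "T": 2, "Y": 2, "J1": 2, "J2": 2,
    "N1": 1, "N2": 1, "E": 1, "H": 1,
}

def next_case(enabled=None):
    """Pick the next PLL to drill, optionally restricted to enabled cases."""
    cases = [c for c in pll_weights if enabled is None or c in enabled]
    weights = [pll_weights[c] for c in cases]
    return random.choices(cases, weights=weights, k=1)[0]

print(next_case())                    # weighted over all cases
print(next_case(enabled={"T", "Y"}))  # only the cases the user left enabled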
15. ### DenePremium Member
6,912
44
Dec 5, 2007
WCA:
2009BEAR01
masterNZ
Oooh goody. I will look at this tomorrow (I can't run .rar either, got rid of the program I used for it as I never used it lol ).
16. ### van21691Member
55
0
May 25, 2007
Anaheim, CA
WCA:
2007RODR02
Is it possible to alter some PLLs?
I'm going to use some algos that are in this forum.
17. ### tegalogicMember
34
0
Dec 8, 2007
Great tool!
I think that an OLL trainer would be a good idea too
Linux/Wine: Download MSVBVM60.DLL (the Visual Basic DLL) and put it in the ~/.wine/drive_c/windows/system32 directory, then using the terminal, navigate to where you extracted the trainer, then type
Code:
wine PLL_Trainer.exe
because running it the normal way gives (for me) Run-time error '53'
18. ### van21691Member
55
0
May 25, 2007
Anaheim, CA
WCA:
2007RODR02
is there an image with the algorithm included so I know how to solve it. I don't know every single PLL, but I know some.
That is how I learn my PLL
or
maybe an option whether you like the algorithm shown or not
Last edited: Dec 25, 2007
837
5
Aug 29, 2007
Dene: I now have a zip version as well. Link in first post
van: what do you mean alter the PLL's ? Like rotate them? Just rotate the pictures that come with the program in the folder. The program just loads them all as they are in that folder. Basic Windows Image Viewer can do this. And have algorithms for the PLL's ? I don't know... Possibly a good idea, I'll consider it
20. ### van21691Member
55
0
May 25, 2007
Anaheim, CA
WCA:
2007RODR02
nevermind the altering part.
I think it is a good idea if you put an option whether to have the algorithm shown or not
|
### LZD_DeViL0902's blog
By LZD_DeViL0902, history, 4 weeks ago, ,
x is an element of array 1 and y is an element of array 2. Find the number of pairs (x, y) across the two arrays where x^y > y^x (NOTE: ^ denotes power here, not XOR).
The first line gives the number of test cases; the second line gives the number of elements in each of the two upcoming arrays; the third line gives the first array; the fourth line gives the second array.
EXAMPLE:
Sample input:
1
3 2
2 1 6
1 5
Sample output:
3
Here is my code (just a brute force!). Is there any way to solve it in less time than this?
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int t;
    cin >> t;
    while (t--)
    {
        int m, n;
        cin >> m >> n;
        vector<int> vm(m), vn(n);
        for (auto &it : vm) cin >> it;
        for (auto &it : vn) cin >> it;

        // Brute force: compare every pair (x, y). pow() works in doubles,
        // so it may lose precision for large inputs.
        long long counts = 0;
        for (int x : vm)
            for (int y : vn)
                if (pow(x, y) > pow(y, x))
                    counts++;

        cout << counts << endl;
    }
}
• -1
» 4 weeks ago, # | 0 Auto comment: topic has been updated by LZD_DeViL0902 (previous revision, new revision, compare).
» 4 weeks ago, # | 0 I think if the elements are positive integers then pow(x,y) > pow(y,x) can be rewritten as y·log(x) > x·log(y), which implies log(x)/x > log(y)/y. So for each integer in the two arrays calculate log(x)/x, sort the values, and then the count can be calculated in O(n+m) after sorting.
• » » 4 weeks ago, # ^ | 0 Can you explain how will it be O(n+m) ?
» 4 weeks ago, # | 0 Auto comment: topic has been updated by LZD_DeViL0902 (previous revision, new revision, compare).
» 4 weeks ago, # | ← Rev. 3 → 0 Sort the array first. If the number is 1, solve manually in O(log(n)): check for integers that are greater than 1 and count them using upper_bound. If the number is 2, again solve by brute force as I couldn't figure out a mathematical way/expression for the integer 2. Now if the number is 3 and beyond, you can check the following mathematical expression: if both x and y are positive then you can just check log(x)/x > log(y)/y, and if both x and y are greater than e≈2.7183 then you can just check x < y. So the overall complexity can reduce to O(max(N·log(M), M·log(N))). The log term comes from binary search (upper_bound for searching a number greater than another number). I have provided the maths. Hope you can code it out!! All the best!
• » » 4 weeks ago, # ^ | 0 Thanks You!!
» 4 weeks ago, # | +11 Assume that you have $a^b$ and $b^a$. Taking logarithms, $a^b > b^a$ exactly when $\frac{\ln a}{a} > \frac{\ln b}{b}$; since $\frac{\ln t}{t}$ increases up to $t = e$ and decreases afterwards, values closer to $e$ tend to win. Your code does $O(nm)$ calls to pow, and floating-point pow can lose precision for large inputs. We can do better. Store $\frac{\ln x}{x}$ for array 1 and $\frac{\ln y}{y}$ for array 2 in sorted containers (a sorted vector or multiset, so duplicates are kept). Then iterate through all elements of array 1 and use binary search to count how many stored values from array 2 are strictly smaller than the current $\frac{\ln x}{x}$, and add that count to the answer. Total complexity: $O((n+m)\log{m})$.
• » » 4 weeks ago, # ^ | 0 Thank you so much, Helped a lot.
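A minimal sketch of that approach in Python (illustration only; it assumes all inputs are positive integers, and exact ties such as 2 and 4 may need extra care because of floating point):

import bisect
import math

def count_pairs(a, b):
    """Count pairs (x, y), x from a and y from b, with x**y > y**x.

    Uses the fact that for positive x, y:  x**y > y**x  iff  ln(x)/x > ln(y)/y.
    """
    keys_b = sorted(math.log(y) / y for y in b)   # precompute and sort keys for array 2
    total = 0
    for x in a:
        kx = math.log(x) / x
        total += bisect.bisect_left(keys_b, kx)   # number of y with ln(y)/y strictly smaller
    return total

print(count_pairs([2, 1, 6], [1, 5]))   # 3, matching the sample above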
|
# how to find the perimeter of a right triangle
A right triangle has one 90° angle, and its perimeter is simply the sum of its three sides. Because the two legs a and b determine the hypotenuse through the Pythagorean theorem, the perimeter can be written as

P = a + b + √(a² + b²)

For example, a right triangle with sides of 3 inches, 4 inches and 5 inches has a perimeter of 3 + 4 + 5 = 12 inches. The perimeter is always expressed in the same linear unit as the sides.

The area uses the two legs as base and height: A = (1/2)·a·b. If all three sides are known, Heron's formula A = √(s(s − a)(s − b)(s − c)) with semi-perimeter s = (a + b + c)/2 gives the same result, and if only two sides and the included angle are known, the Law of Cosines recovers the missing side first.

A typical exercise gives the perimeter and the hypotenuse and asks for the remaining sides or the area — for instance, "the perimeter of a right triangle is 40 cm and its hypotenuse measures 17 cm", or "the perimeter of a right triangle is 60 cm and its hypotenuse is 25 cm". In such cases a + b = P − c and a² + b² = c², so 2ab = (a + b)² − c² and the area ab/2 follows directly. A short program to calculate the perimeter and area (the original article refers to a C++ version) is sketched below.
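A minimal sketch of the computation (in Python rather than the C++ version the article refers to; the function names are my own), including the worked examples with perimeter 40 cm / hypotenuse 17 cm and perimeter 60 cm / hypotenuse 25 cm:

import math

def perimeter_and_area_from_legs(a, b):
    """Perimeter and area of a right triangle given its two legs."""
    c = math.hypot(a, b)            # hypotenuse from the Pythagorean theorem
    return a + b + c, 0.5 * a * b

def area_from_perimeter_and_hypotenuse(p, c):
    """Area when only the perimeter and the hypotenuse are known.

    a + b = p - c and a^2 + b^2 = c^2, so 2ab = (a + b)^2 - c^2 and area = ab/2.
    """
    s = p - c
    return (s * s - c * c) / 4

print(perimeter_and_area_from_legs(3, 4))           # (12.0, 6.0)
print(area_from_perimeter_and_hypotenuse(40, 17))   # 60.0  (the legs turn out to be 8 and 15)
print(area_from_perimeter_and_hypotenuse(60, 25))   # 150.0 (the legs turn out to be 15 and 20)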
|
## Existence of rotating-periodic solutions for nonlinear systems via upper and lower solutions. (English) Zbl 1385.34032
In this paper, the authors study the following system $x'= f(t, x),\quad '=\frac{d}{dt}\tag{S1}$ where $$f: \mathbb{R}^1\times \mathbb{R}^n\rightarrow\mathbb{R}^n$$ is continuous and satisfies the assumption $f(t+T,x)=Qf(t,Q^{-1}x)\text{ for all }t\in\mathbb{R}^1,\quad x\in \mathbb{R}^n,$ where $$Q\in O(n)$$, i.e. $$Q$$ is an orthogonal matrix. By using Brouwer’s fixed point theorem, they present a Massera-type criterion on affine-periodic solutions. Combining Massera’s criterion with the topological degree theory, they prove the existence of affine-periodic solutions for systems (S1). Moreover, some applications are given.
### MSC:
34C25 Periodic solutions to ordinary differential equations
34C14 Symmetries, invariants of ordinary differential equations
47N20 Applications of operator theory to differential and integral equations
|
# Can You Defeat the Detroit Lines?
This week’s Riddler Express features a challenge drawn from the world of football analytics: In the Riddler Football League, you are coaching the Arizona Ordinals against your opponent, the Detroit Lines, and your team is down by 14 points. You can assume that you have exactly two remaining possessions (i.e., opportunities to score), and that Detroit will score no more points. For those unfamiliar with American football, a touchdown is worth 6 points.
# More Menorah Math!
This week’s Riddler Express deals with some menorah math that provides a good application for combinatorics: Tonight marks the sixth night of Hanukkah, which means it’s time for some more Menorah Math! I have a most peculiar menorah. Like most menorahs, it has nine total candles — a central candle, called the shamash, four to the left of the shamash and another four to the right. But unlike most menorahs, the eight candles on either side of the shamash are numbered.
|
Phyllotaxis pattern in Python?
What is phyllotaxis pattern?
Going back to our botany classes, in the plant world phyllotaxis is the arrangement of flowers, leaves or seeds on a plant stem, often similar to the pattern found in a Fibonacci spiral. The Fibonacci spiral is based on the Fibonacci sequence, a set of numbers that also appears in the diagonals of Pascal's triangle. The Fibonacci sequence runs 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, and so on: each Fibonacci number is the sum of the two numbers before it.
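As a quick aside (not part of the original tutorial), the sum rule can be sketched in a few lines of Python:

# Generate the first n Fibonacci numbers; each term is the sum of the two before it.
def fibonacci(n):
    terms = [1, 1]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return terms[:n]

print(fibonacci(12))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]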
Fibonacci spirals
We generally look for symmetry and patterns to understand the objects around us. Without realizing it our eyes are seeing the Fibonacci sequences, or in the case of a sunflower head, a Fibonacci spiral.
Sunflower spiral
Example Code
import math
import turtle

def PhyllotacticPattern( t, petalstart, angle = 137.508, size = 2, cspread = 4 ):
    """Print a pattern of circles using spiral phyllotactic data."""
    # initialize the pen; size is accepted but not used below
    turtle.pen(outline=1, pencolor="black", fillcolor="orange")
    phi = angle * ( math.pi / 180.0 )
    xcenter = 0.0
    ycenter = 0.0
    # place one mark for each n, spiralling outward from the center
    for n in range (0, t):
        r = cspread * math.sqrt(n)
        theta = n * phi
        x = r * math.cos(theta) + xcenter
        y = r * math.sin(theta) + ycenter
        # move the turtle to that position and draw
        turtle.up()
        turtle.setpos(x, y)
        turtle.down()
        # orient the turtle correctly
        turtle.setheading(n * angle)
        if n > petalstart - 1:
            drawPetal(x, y)
        else:
            turtle.stamp()

def drawPetal( x, y ):
    turtle.up()
    turtle.setpos(x, y)
    turtle.down()
    turtle.begin_fill()
    turtle.pen(outline=1, pencolor="black", fillcolor="yellow")
    turtle.right(25)
    turtle.forward(100)
    turtle.left(45)
    turtle.forward(100)
    turtle.left(140)
    turtle.forward(100)
    turtle.left(45)
    turtle.forward(100)
    turtle.up()
    turtle.end_fill()  # this is needed to complete the last petal

turtle.shape("turtle")
turtle.speed(0)  # make the turtle go as fast as possible
PhyllotacticPattern( 200, 160, 137.508, 4, 10 )
turtle.exitonclick()  # lets you x out of the window when outside of IDLE
Solution
With a small change to the above program (for example, a custom color and some altered values), your result could be displayed a little differently:
|
## Precalculus (6th Edition) Blitzer
$a_{2}=\dfrac{(-1)^{2}}{4^{2}-1}=\dfrac{1}{15}$.
Put $n=2$ to get the second term of the given sequence; to get the third term, put $n=3$, and so on. The general form of the sequence is $a_{n}=\frac{(-1)^{n}}{4^{n}-1}$. So to find the second term, put $n=2$: \begin{align} a_{2} &=\frac{(-1)^{2}}{4^{2}-1} \\ &=\frac{1}{16-1} \\ &=\frac{1}{15} \end{align}
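As a quick numerical sanity check (a small sketch, not part of the textbook solution), the same terms can be generated with exact fractions:

# Evaluate a_n = (-1)^n / (4^n - 1) and confirm a_2 = 1/15.
from fractions import Fraction

def a(n):
    return Fraction((-1) ** n, 4 ** n - 1)

print(a(2))                          # 1/15
print([a(n) for n in range(1, 5)])   # [Fraction(-1, 3), Fraction(1, 15), Fraction(-1, 63), Fraction(1, 255)]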
|
# Impulse Response Calculation from Samples on Digital Circuit
#### chjmartin2
Joined Feb 7, 2012
2
I'd really appreciate some help. So, I have a black box which we will assume is linear and time invariant. The inputs to the system can either be +.5V or -.5V. The output of the system is continuous, varying from +.5V to -.5V. I can sample the output of the signal at a frequency that matches the input signal rate. So, the input waveform is a square wave, and the output is continuous.
How do I take my sampled output and determine the impulse response of the system? The impulse response seems to be 63 samples long (that is how long it takes for the decaying waveform to settle back at the midpoint.)
Once I have that impulse response (in any usable form) how can I use it to determine the ideal input signal to generate an optimal output signal based on a defined target signal. I hope I said that right.
In short, I have a black box, I can only send 1's or 0's to it, but it outputs a continuous signal from 0 to 1 and I want to create the series of 1's and 0's to recreate the best approximation of a target signal at the output.
Thoughts?
#### chjmartin2
Joined Feb 7, 2012
2
Attached to this message is a graph of the input signal and the resulting output signal. What I'd like to do is to determine the optimal input signal to match a desired output signal.
Any ideas?
#### Georacer
Joined Nov 25, 2009
5,182
In general, the following holds:
$$y(t)=\int_0^t x(\tau)\, h(t-\tau)\, d\tau$$
where $x$ is your input and $h$ is the impulse response.
But I don't think a simple derivation would cut it. Maybe someone else can shed some light.
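To make the discrete version concrete, here is a rough sketch (my own, with made-up variable names, assuming numpy is available) of how the impulse response could be estimated from the sampled input/output records by least squares, given the LTI assumption and a response that dies out within about 63 samples:

import numpy as np

def estimate_impulse_response(x, y, L=63):
    """Estimate h (length L) such that y is approximately the convolution of x with h."""
    N = len(x)
    X = np.zeros((N, L))
    for k in range(L):
        X[k:, k] = x[:N - k]          # X[n, k] = x[n - k], so (X @ h)[n] = sum_k h[k] x[n - k]
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h

Once h is known, choosing the best sequence of ±0.5 inputs for a given target output is a separate (binary) optimization problem; over short horizons a greedy or exhaustive search over bit patterns is one crude way to attack it.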
|
Waiting for answer This question has not been answered yet. You can hire a professional tutor to get the answer.
QUESTION
What is the molar mass of methane, CH_4?
16 g/mol
Simply add the molar masses of each atom that makes up the molecule, obtaining each element's molar mass from the periodic table.
In this case, we get
$M_r(\text{C}) + 4M_r(\text{H}) = 12 + (4 \times 1) = 16$ g/mol.
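The same arithmetic in a couple of lines of Python (rounded atomic masses, as in the answer above):

# Sum the rounded atomic masses for CH4.
atomic_mass = {"C": 12, "H": 1}                  # g/mol, rounded
print(atomic_mass["C"] + 4 * atomic_mass["H"])   # 16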
|
## US Navy demos recovery of CO2 and production of H2 from seawater, with conversion to liquid fuel; “Fuel from Seawater”
##### 08 April 2014
Researchers at the US Naval Research Laboratory (NRL), Materials Science and Technology Division have demonstrated novel NRL technologies developed for the recovery of CO2 and hydrogen from seawater and their subsequent conversion to liquid fuels. Flying a radio-controlled replica of the historic WWII P-51 Mustang red-tail aircraft (of the legendary Tuskegee Airmen), NRL researchers Dr. Jeffrey Baldwin, Dr. Dennis Hardy, Dr. Heather Willauer, and Dr. David Drab used a novel liquid hydrocarbon fuel to power the aircraft’s unmodified two-stroke internal combustion engine.
The test provides a proof-of-concept for an NRL-developed process to extract CO2 and produce hydrogen gas from seawater, subsequently catalytically converting the CO2 and H2 into fuel by a gas-to-liquids process. The potential longer term payoff for the Navy is the ability to produce fuel at or near the point of use when it is needed, thereby reducing the logistics tail on fuel delivery, enhancing combat capabilities, and providing greater energy security by fixing fuel cost and its availability.
From an environmental perspective, such a combination of integrated NRL-developed technologies could be considered CO2 neutral. The carbon dioxide, produced from combustion of the synthetic fuel, is returned to the atmosphere where it re-equilibrates with the ocean to complete the natural carbon cycle.
Using an innovative and proprietary NRL electrolytic cation exchange module (E-CEM), both dissolved and bound CO2 are removed from seawater at 92% efficiency by re-equilibrating carbonate and bicarbonate to CO2 and simultaneously producing H2. The gases are then converted to liquid hydrocarbons by a metal catalyst in a reactor system.
The energy required to obtain these feedstocks from the ocean is primarily for the production of hydrogen; the carbon dioxide is a “free” byproduct. The process of both recovering CO2 and concurrently producing H2 gas eliminates the need for additional large and expensive electrolysis units.
In close collaboration with the Office of Naval Research P38 Naval Reserve program, NRL has developed a game-changing technology for extracting, simultaneously, CO2 and H2 from seawater. This is the first time technology of this nature has been demonstrated with the potential for transition, from the laboratory, to full-scale commercial implementation.
—Dr. Heather Willauer, NRL research chemist
CO2 in the air and in seawater is an abundant carbon resource, but the concentration in the ocean (100 milligrams per liter [mg/L]) is about 140 times greater than that in air, and 1/3 the concentration of CO2 from a stack gas (296 mg/L). Two to three percent of the CO2 in seawater is dissolved CO2 gas in the form of carbonic acid, one percent is carbonate, and the remaining 96 to 97% is bound in bicarbonate.
NRL has made significant advances in the development of a gas-to-liquids (GTL) synthesis process to convert CO2 and H2 from seawater to a fuel-like fraction of C9-C16 molecules.
In the first patented step, an iron-based catalyst has been developed that can achieve CO2 conversion levels up to 60% and decrease unwanted methane production in favor of longer-chain unsaturated hydrocarbons (olefins). These value-added hydrocarbons from this process serve as building blocks for the production of industrial chemicals and designer fuels.
E-CEM Carbon Capture Skid. The E-CEM was mounted onto a portable skid along with a reverse osmosis unit, power supply, pump, proprietary carbon dioxide recovery system, and hydrogen stripper to form a carbon capture system [dimensions of 63" x 36" x 60"]. (Photo: US Naval Research Laboratory) Click to enlarge.
In the second step, using a solid acid catalyst reaction, these olefins can be oligomerized (a chemical process that converts monomers, molecules of low molecular weight, into compounds of higher molecular weight using controlled polymerization). The resulting liquid contains hydrocarbon molecules in the carbon range C9-C16, suitable for use as a possible renewable replacement for petroleum-based jet fuel.
The predicted cost of jet fuel using these technologies is in the range of $3-$6 per gallon, and with sufficient funding and partnerships, this approach could be commercially viable within the next seven to ten years, the Navy researchers suggested. Pursuing remote land-based options would be the first step towards a future sea-based solution.
The minimum modular carbon capture and fuel synthesis unit is envisioned to be scaled up by the addition of individual E-CEM modules and reactor tubes to meet fuel demands.
NRL operates a lab-scale fixed-bed catalytic reactor system and the outputs of this prototype unit have confirmed the presence of the required C9-C16 molecules in the liquid. This lab-scale system is the first step towards transitioning the NRL technology into commercial modular reactor units that may be scaled-up by increasing the length and number of reactors.
The process efficiencies and the capability to simultaneously produce large quantities of H2, and process the seawater without the need for additional chemicals or pollutants, has made these technologies far superior to previously developed and tested membrane and ion exchange technologies for recovery of CO2 from seawater or air, according to the team.
$3-$6/gallon production cost presumably assumes that the input energy is free. This is effectively true on a nuclear aircraft carrier (which is rarely steaming at full speed and can put excess power to other purposes), but questionable elsewhere.
Exactly. Energy balance is shown in the paper here:
Even then the machinery to produce the fuel would not be trivial. Believe it or don't, even on the mighty Vinson, packaging this fuel factory is no mean feat.
Electrofuels are at a very early stage. If it can be done @ $6.00/gallon now it is a great success. It could become a way to make intermittent (Solar, Wind etc) energy sources 24/7.

This is the Navy at work, creating another incredibly expensive target for relatively cheap missiles to destroy.

Many thanks for the link Herman. The calculations seem highly speculative to me. The report is full of phrases like: 'the theoretical amount of seawater needed is....' OTEC as a power source is wholly speculative, and the costings at this stage of the development of the technology are no more than WAGS. On page 9 they give the capital cost of Navy nuclear power at $1,200kw!
At that rate then never mind jet fuel, simply build barges, install Navy nuclear reactors on them and sell the power to the grid!
Either that is a massive underestimate, as it is way, way below the cost of civil nuclear power in the US, or the Navy can produce nuclear reactors at a fraction of the civil cost, even less than the Chinese, presumably due to not having to comply with civil nuclear regulation and regulatory delays etc.
Does the Navy really build reactors so cheaply?
@Engineer-Poet, this is true but $3-6/gal is nearly an order of magnitude cheaper than current fuel costs. Further, being able to produce your own fuel at multiple mobile forward points of operations has tremendous value. This seems like a no-brainer decision to me for the military... which is why they'll probably drag their heels, lol. @Davemart: I have a friend at Burns and Roe who has told me that the USN's capital costs for nuclear power are in the$1,000-1,500/kw range so that certainly doesn't seem out of line. No interest payments to make and no regulatory bodies to stop them.
At those costs, wire them up to the grid in floating barges and low carbon energy is solved.
Are you aware of the fortune we paid to ship Persian Gulf petroleum products to Afghanistan, thanks largely to the mafia-like bottleneck of Pakistan? And the runup in gasoline prices this caused? How Iraqi petroleum failed to come online to sustain Operation Desert Freedom, and the continued dependence we had on Gulf petroleum? On the other hand, our nuclear carriers are remarkably long-lived and now at least can deliver an ancillary return on capital (aviation fuel) which nuclear warheads cannot, unless you count the loss-inducing reprocessing of HEU and plutonium.
I'd say the mere act of determination in war counts as much as warmaking itself. Even Sheik Yahmani of OPEC quaked that alternative energy would put Big Oil out of business. Far from true, but it did put the fear of Allah in certain adversaries.
On page 9 they give the capital cost of Navy nuclear power at $1,200kw! ... Either that is a massive underestimate, as it is way, way below the cost of civil nuclear power in the US, or the Navy can produce nuclear reactors at a fraction of the civil cost, even less than the Chinese Naval reactors are small, modular, and built on an assembly line. They are built in a factory rather than on-site, and they've got quite a bit of experience with turning them out consistently and on schedule. Plus, there are no lawyers involved. This resembles the state of affairs under the AEC rules, when nuclear plants were actually cheaper to build than coal and were expected to completely replace coal by the turn of the 21st century. There are any number of nuclear technologies designed to be factory-built in modular units and delivered intact to the installation site (mPower, Hyperion, NuScale) and achieve similar economies. Our need for carbon-free power could support many times the production rate of naval reactors. It's ironic that the biggest fear of the Greens isn't that nuclear power cannot replace fossil fuels and provide prosperity AND clean energy, but that it WILL. @E-P, I had no doubt that SMR can be cost effective and safe, and would be ideal as baseload electricity provider for the grid to replace coal. However, despite the cost-effectiveness and relative safety of SMR as demonstrated in the USN fleet, no commercial shipping vessel has adopted this energy source for propulsion, due to liability risk? or due to anti-nuclear-prohibiting regulations? Perhaps with modern computers, advanced sensors and monitoring system and networking, and better fail-safe design, these type of SMR will be able to replace all coal-fired power plants in existence with the same safety records and cost-effectiveness as the USN nuclear powered fleet. @Roger Pham I believe the reason nuclear cargo vessels have not been adopted is because, at this time, any build would be a one-off and the refueling facilities and staff would be specialized/one-offs. As such, there is disadvantage to being a first mover. IMO, SMRs could work now but utilities are capital constrained as they have milked their equity to pay shareholders and pensioners. GreenPlease, The companies who are building Navy SMR can also supply the commercial shipping fleet using already proven design(s), or to re-power existing coal-fired power plants, if such a market exists. Those ex-Navy-trained nuclear engineer /technicians can find a second career aboard commercial vessels or to work in re-powered existing coal-fired plants, if such a job exist. Let's not forget that the$1,250/kW for a Navy SMR does not include the much larger decommissioning costs later at the end of the life of the reactor, perhaps many folds higher per kW, because almost everything in it is radioactive and must be disposed of properly to keep the workers and the environment safe...including the nuclear waste. It takes a loooong time to decommission a nuclear power plant!
On the contrary, solar and wind energy collectors contain no radioactivity and the materials can be recycled upon decommissioning, thus negating any associated tear-down costs.
Rod Adams says that the Navy keeps its nuclear expertise under lock and key. Naval propulsion reactors are not for sale to the public; Adams himself advanced a very different nuclear technology for ship propulsion.
The actual cost of Naval Nuclear Propulsion is not easily compared to that of equipment used for utility power.
First of all, when the Navy expresses reactor output, they do so in terms of Mw(thermal). With only a couple of exceptions (the Tullibee and Lipscomb are the only ones I know), most of the energy produced by a naval reactor is used to drive steam turbines that connect mechanically to the propulsion shaft(s). Therefore, the 220Mw output of the S6W submarine reactor refers to the gross thermal energy output -- to translate that to a typical land-based reactor electrical output you would multiply by 0.3 or so. So the $1200/kw in this conversation goes to about$3500/kw on an apples-to-apples basis.
Moreover, I'm almost certain the $x/kw "capital cost" does not include the core itself. I'd bet my next quarter's billings that is separately costed as an element of the DoE budget authority. Naval reactor cores are wildly different from their civilian counterparts, with fuel configured not in separate rods but in a single unified structure of plates in a quite unique arrangement. Enrichment is stunningly high: over 90% compared to 2-3% in a civilian core. No, they cannot become "bombs" because of dispersion and configuration of fuel and cladding, but they are far more expensive than the already eye-wateringly high cost of civilian fuel. But the biggest reason you will not see a US Navy unit in ANY civilian application is the very different approach to operation and safety. Navy reactors have very few automated shutdown protections. Instead they are dependent on (1) an extremely robust negative temperature coefficient of reactivity throughout the range of power output in all phases of core life, (2) an inviolable and conservative safe operating envelope with significant structural and thermal margins, (3) zero tolerance for fission product daughters in the primary coolant loop under ANY conditions, and (4) a very highly trained operating team. They do not possess the exceptional strength of containment we expect from utility reactors (owing quite obviously to mass and volumetric limitations of the warship itself). They must also be able to very quickly recover from inadvertent shutdowns in order to maintain vessel propulsion in combat conditions (translation: fast recovery startup rates that would make a utility operator soil his trousers). THIS IS NOT TO SAY THEY ARE "UNSAFE" IN ANY WAY, but they simply are not the same thing as, say, an AP1000. Finally, as many mentioned here, the overall lifetime support (including essentially limitless years of safe storage of decommissioned waste materials) is in no way included in the "sticker price". A while back I heard of a SMR design that wouldn't have any decommissioning costs at end of the life, nor any need for "ex-Navy-trained nuclear engineer/technicians" to operate it. The idea was to just bury the reactor a thousand feet underground and it would run without tending because it was designed with passive controls, and it would have no decommissioning costs because once its fuel was used up you would just cut the output cable, leave the reactor in place and dig a new hole for a fresh SMR. The Hyperion E-P mentioned fits that description; http://phys.org/news145561984.html but I sure there must be others. I do admit to proposing that we site reactors in mines below cities, to achieve further defense in depth against radioisotope releases while making it feasible to use spent steam for space heating. Decommissioning such reactors would probably require removal of the fuel (residual heat output would otherwise be a problem) and would certainly require sealing coolant passages going to it. Other than that, entombing the empty reactor in concrete should pretty much do it. What appeals to me about SMRs is that they are small. The utilities are talking about facing a death spiral and building big power plants of any kind just runs the risk of emplacing another future stranded asset. Better to build lots of small power: Small, fast, flexible, distributed, and smart. Thanks, ai vin for the info. Also, the Hyperion reactor is about the size of the reactor in the Los Angeles-class of Navy nuclear-powered destroyer. 
A few of these could power a container cargo ship for 7-10 years at a time without refueling. Container ship needs steady power on 24-7 basis, so this would be ideal. While at port, the electricity power from the ship's nuclear plant can be plugged into the local gird to power the local grid in order for not having to throttle down the reactor. One issue of concern for nuclear powered container ships could be piracy. Seaborne piracy against transport vessels remains a significant issue (with estimated worldwide losses of US$13 to \$16 billion per year), particularly in the waters between the Red Sea and Indian Ocean, off the Somali coast, and also in the Strait of Malacca and Singapore, which are used by over 50,000 commercial ships a year. In recent years, shipping companies claimed that their vessels suffer from regular pirate attacks on the Serbian and Romanian stretches of the international Danube river, i.e. inside the European Union's territory, starting from at least 2011.
Modern pirates favor small boats and taking advantage of the small number of crew members on modern cargo vessels. They also use large vessels to supply the smaller attack/boarding vessels. Modern pirates can be successful because a large amount of international commerce occurs via shipping. Major shipping routes take cargo ships through narrow bodies of water such as the Gulf of Aden and the Strait of Malacca making them vulnerable to be overtaken and boarded by small motorboats. Other active areas include the South China Sea and the Niger Delta. As usage increases, many of these ships have to lower cruising speeds to allow for navigation and traffic control, making them prime targets for piracy.
A nuclear-powered cargo vessel would have no reason to operate at less than full speed/power in open seas; saving fuel would not be a consideration. Given the superior speed, pirate-ridden bottlenecks like the Strait of Malacca could simply be bypassed. The ship might even be able to make better time overall.
Roger: not to be picky but the Los Angeles class are Fast Attack Submarines (SSN-688 lead vessel), not destroyers.
E-P, this is true, and many ships do take the longer but safer route around the Cape of Good Hope. But hull speed is still hull speed so I wouldn't expect these ships to automatically make better time just because they are nukes.
However, given that only ~8 % of the world seaborne trade passes through the Suez Canal, it would be worthwhile to see which routes could be changed. The routes that save the most time by going through the canal start at either Ras Tanura or Jeddah, both are major oil ports. But of course we want to stop using oil so maybe we could cut that traffic altogether. ;)
http://www.suezcanal.gov.eg/sc.aspx?show=11
@Herman:
Many thanks for the very knowledgeable insights.
Al:
With no worries about fuel consumption in nuclear vessels, ships, even massive container ships I would imagine providing the materials strengths are up to it, could simply plane.
No pirate is going to catch a ship moving at 35 knots or so!
If that it practical, they would be a heck of a sight at sea whizzing along! :-)
Davemart: The simplest way of increasing the speed of a container ship is to just make it longer. As a bonus it could carry more cargo. If built to the "New Panamax" standard such a ship could be 1200 ft long and travel at 47 knots. However most ships travel at nowhere near their top speed in open waters for reasons other than fuel use. The violent slamming motion of a ship in a rough seaway is a major reducer of speed.
The comments to this entry are closed.
|
My bibliography Save this paper
# How big is too big? Critical Shocks for Systemic Failure Cascades
## Author
Listed:
• Claudio J. Tessone
()
• Antonios Garas
• Beniamino Guerra
• Frank Schweitzer
## Abstract
External or internal shocks may lead to the collapse of a system consisting of many agents. If the shock hits only one agent initially and causes it to fail, this can induce a cascade of failures among neighboring agents. Several critical constellations determine whether this cascade remains finite or reaches the size of the system, i.e. leads to systemic risk. We investigate the critical parameters for such cascades in a simple model, where agents are characterized by an individual threshold $\theta_{i}$ determining their capacity to handle a load $\alpha\theta_{i}$ with $1-\alpha$ being their safety margin. If agents fail, they redistribute their load equally to $K$ neighboring agents in a regular network. For three different threshold distributions $P(\theta)$, we derive analytical results for the size of the cascade, $X(t)$, which is regarded as a measure of systemic risk, and the time when it stops. We focus on two different regimes, (i) \emph{EEE}, an external extreme event where the size of the shock is of the order of the total capacity of the network, and (ii) \emph{RIE}, a random internal event where the size of the shock is of the order of the capacity of an agent. We find that even for large extreme events that exceed the capacity of the network finite cascades are still possible, if a power-law threshold distribution is assumed. On the other hand, even small random fluctuations may lead to full cascades if critical conditions are met. Most importantly, we demonstrate that the size of the "big" shock is not the problem, as the systemic risk only varies slightly for changes of 10 to 50 percent of the external shock. Systemic risk depends much more on ingredients such as the network topology, the safety margin and the threshold distribution, which gives hints on how to reduce systemic risk.
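As a rough illustration only (not the authors' code, and with several simplifying choices of my own, such as a ring lattice and a uniform initial load fraction), the cascade mechanism described above can be simulated in a few lines:

import numpy as np

def cascade_fraction(theta, K, shock, alpha=0.8):
    """Fraction of failed agents after a shock hits agent 0 on a ring with K neighbors."""
    n = len(theta)
    load = alpha * theta.copy()                 # each agent carries a fraction alpha of its capacity
    load[0] += shock                            # external shock on a single agent
    failed = np.zeros(n, dtype=bool)
    queue = [0] if load[0] > theta[0] else []
    while queue:
        i = queue.pop()
        if failed[i]:
            continue
        failed[i] = True
        half = K // 2
        neighbors = [(i + d) % n for d in range(1, half + 1)] + [(i - d) % n for d in range(1, half + 1)]
        alive = [j for j in neighbors if not failed[j]]
        if not alive:
            continue
        share = load[i] / len(alive)            # equal redistribution of the failed agent's load
        for j in alive:
            load[j] += share
            if load[j] > theta[j]:
                queue.append(j)
    return failed.mean()

rng = np.random.default_rng(0)
theta = rng.uniform(0.5, 1.5, size=1000)        # one choice of threshold distribution P(theta)
print(cascade_fraction(theta, K=4, shock=5.0))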
## Suggested Citation
• Claudio J. Tessone & Antonios Garas & Beniamino Guerra & Frank Schweitzer, "undated". "How big is too big? Critical Shocks for Systemic Failure Cascades," Working Papers ETH-RC-12-015, ETH Zurich, Chair of Systems Design.
• Handle: RePEc:stz:wpaper:eth-rc-12-015
File URL: ftp://web.sg.ethz.ch/RePEc/stz/wpaper/pdf/ETH-RC-12-015.pdf
## References listed on IDEAS
1. Gai, Prasanna & Kapadia, Sujit, 2010. "Contagion in financial networks," Bank of England working papers 383, Bank of England.
2. J. Lorenz & S. Battiston & F. Schweitzer, 2009. "Systemic risk in a unifying framework for cascading processes on networks," The European Physical Journal B: Condensed Matter and Complex Systems, Springer;EDP Sciences, vol. 71(4), pages 441-460, October.
3. Battiston, Stefano & Delli Gatti, Domenico & Gallegati, Mauro & Greenwald, Bruce & Stiglitz, Joseph E., 2012. "Liaisons dangereuses: Increasing connectivity, risk sharing, and systemic risk," Journal of Economic Dynamics and Control, Elsevier, vol. 36(8), pages 1121-1141.
4. Claudio J. Tessone & Markus M. Geipel & F. Schweitzer, "undated". "Sustainable growth in complex networks," Working Papers CCSS-10-008, ETH Zurich, Chair of Systems Design.
Full references (including those not matched with items on IDEAS)
## Citations
Citations are extracted by the CitEc Project, subscribe to its RSS feed for this item.
Cited by:
1. Ellinas, Christos & Allan, Neil & Johansson, Anders, 2016. "Project systemic risk: Application examples of a network model," International Journal of Production Economics, Elsevier, vol. 182(C), pages 50-62.
2. Rebekka Burkholz & Matt V. Leduc & Antonios Garas & Frank Schweitzer, 2015. "Systemic risk in multiplex networks with asymmetric coupling and threshold feedback," Papers 1506.06664, arXiv.org.
3. Wang, Jianwei & Cai, Lin & Xu, Bo & Li, Peng & Sun, Enhui & Zhu, Zhiguo, 2016. "Out of control: Fluctuation of cascading dynamics in networks," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 462(C), pages 1231-1243.
4. Rebekka Burkholz & Hans J. Herrmann & Frank Schweitzer, 2018. "Explicit size distributions of failure cascades redefine systemic risk on finite networks," Papers 1802.03286, arXiv.org.
|
# Tag Info
4
This is a formal notation for the following general thing: $$F(f+\delta f) = F(f) + \int A(x) \delta f(x)$$ Where $\delta f$ is the infinitesimal change in f, and it is a smooth test function, and then on the right hand side, $A(x)$ is just a linear operator on the space of functions. The notation for the $A(x)$ is then $$A(x) = {\delta F\over \delta ...
3
Whenever I have troubles with functional derivative things, I just do the replacement of a continuous variable x into a discrete index i. If I'm not mistaken this is what they call a "DeWitt notation". The hand waiving idea is that you can think of a functional $F[f(x)]$ as of a "ordinary function" of many variables ...
3
You aren't doing anything wrong, the paper made a mistake. It probably doesn't affect the result at all, since it is only a complex conjugation difference. But you are working a little too hard. First note: $$ {\delta \Lambda(k) \over \delta \Lambda(x) } = {\delta\over\delta\Lambda(x)} \int e^{-ikx'} \Lambda(x') dx' = e^{-ikx} $$ you could say by ...
3
One way to see that considering the dependence of $\dot{x}$ on $x$ is problematic is as follows: $x(t)$ maps a real number $t$ to another real number $x$. So $\dot{x}=dx/dt$ is the derivative of that map, meaning we take $$\lim_{\Delta t \to 0} \frac{x(t+\Delta t) - x(t)}{\Delta t}$$ So we can see that $dx/dt$ is itself another map from a real number $t$ ...
2
The physicist's derivative notation denotes the components of a Frechet derivative in the direction of the delta-function supported at y. This is one of those places where the habit of denoting the function f by its value f(x) gets confusing. It's somewhat clearer if you write $\delta_y$ for the delta function at y, and $$ \frac{\delta F}{\delta ...
2
The least error-prone way for computing the functional derivative $df(M)/dM(x)$ by hand is the use of the formula $\int dx \frac{\partial f(M)}{\partial M(x)} N(x) = \frac{d}{dt} f(M+tN) |_{t=0}$, where $N$ is of the same type as $M$ (but c-valued if $M$ is an operator). The right hand side is easy to work out, and the result is a linear functional in ...
2
Have a look first at several chapters in Stone and Goldbart, "Mathematics for Physics" (the free preprint is here) before entering into more specific books. I think you may want to see chapters 1, and parts of 2 and 9. You may find some parts of what you want in classic books of the "Comprehensive Mathematical Methods for Physics" type, but they don't ...
1
1) Let us write the Wilson-line of a simple open curve $\gamma: [s_i,s_f]\to \mathbb{R}^4$ as $$\tag{1} U(s_f,s_i) ~=~ \mathcal{P}\exp \left[ i\int_{\gamma} A_{\mu}~ dx^{\mu} \right].$$ 2) The path-ordering $\mathcal{P}$ becomes important if the gauge potential $$\tag{2}A_{\mu}~=~A^a_{\mu} T_a$$ is non-abelian. Here $T_a$ are the generators of the ...
1
The standard encyclopedic treatise of nonlinear functional analysis is the 5 volume opus of Eberhard Zeidler, "Nonlinear Functional Analysis and Its Applications". It covers a lot of material about variational calculus, for example, in volume III "Variational Methods and Optimization". The applications are usually applications from physics. If that is too ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
## How to report a security vulnerability?
#### Responsible Disclosure
Whether you are a security researcher or an amateur hacker, if you find a security vulnerability in an application or website, you are often faced with the question: What do I do now?
When we conduct a penetration test, the answer is usually relatively simple: we document the vulnerability and send the report to our client after the project is finished. But what do we do when we find a security vulnerability that’s not related to a customer project?
When dealing with vulnerabilities, there are several interests that should be taken into account:
• The vendor might suffer reputational damage as a result of the vulnerability.
• The developers need time to reproduce and fix the vulnerability.
• Users of the software may need time to apply updates.
• People affected by the vulnerability have an interest in the vulnerability being fixed and being informed about the risk they have been exposed to.
And of course the finder of the vulnerability has interests of his or her own. You know your personal motivations best, but often security researchers are driven by money, recognition or the feeling that they have made the internet safer through their work.
Some interests may weigh more heavily than others, depending on the situation. The Responsible Disclosure (or Coordinated Vulnerability Disclosure) approach has proven to be a good way to balance the interests.
## During testing
It is completely understandable that you are curious when you discover a vulnerability and want to explore the implications. But be careful! Exploiting a vulnerability can quickly lead to legal and moral problems. It is therefore worthwhile to pause for a moment after you first suspect a vulnerability and consider how to responsibly assess its impact.
Is there a bug bounty programme for the affected application? The terms and conditions usually state what is considered acceptable behaviour and what is not. If the vulnerability has been discovered in a piece of software, it is advisable to set up a test installation in which you can safely explore the vulnerability. This is usually more difficult with web applications, but even then there are ways to minimise negative impacts. Avoid any actions that you expect to cause damage. Accessing other users’ data is off-limits in any case! Even if it can be tempting to have a working proof of concept or even an exploit, when in doubt, a reasonable suspicion is often enough to report a vulnerability.
It is important that you document what you did. This not only helps to fix the vulnerability, but also serves as proof that you did not act in bad faith. It also allows your contacts to understand your steps.
## Reporting of a vulnerability
After you have documented the vulnerability, the question is how to report it. For this, you need to do a little research. Some companies and organisations have central points of contact to which you can send reports about their products. Examples are Microsoft and Apache. Other ways to find the appropriate contacts are a SECURITY.md in the repository of open source projects or a security.txt for websites or web services.
Important note: If the affected software or website is covered by a bug bounty programme and the vulnerability is reported through it, the terms of the bug bounty programme apply. In most cases, this means Private Disclosure, i.e. the disclosure of information about the vulnerability is entirely up to the software vendor.
If nothing of this helps, there are other options: e.g. the contact channels in the website’s imprint or tickets in the project’s bug tracker. Note, however, that such channels are usually not confidential. Therefore, a report via one of these channels should not contain details about the discovered vulnerability. A simple note that you found a vulnerability and would like to get in touch with the responsible persons is sufficient.
At this point at the latest, it is also the right time to think about whether you want to report the vulnerability under your real name or under a pseudonym. Usually, the contact persons are grateful for your report, but in some cases, especially in the case of an escalation, a vendor may try to shoot (figuratively speaking) the messenger (you). If you initially appear under your real name, you cannot change your mind later.
## The wait
After you have reported a vulnerability, the first thing to do is to wait. If there is no response, you can ask again a few days later. Maybe you got the wrong contact or they are on holiday. In this case, you can try again through another contact person.
Hopefully someone will get back to you eventually. If there are questions about the reported vulnerability, there may be follow-up questions. Otherwise, you should hopefully receive feedback that the vulnerability has been closed or about why this was not possible. Once contact has been made, it is also the right moment to talk about the disclosure of the vulnerability and when this should take place.
## Escalation
But what can you do if no one answers? First of all, you should make sure that you have found the right contact person. If that doesn’t help, you can set a deadline after which you unilaterally disclose the vulnerability to increase the pressure.
The disclosure should not be about humiliating the provider. Although the publication should of course build up a certain pressure to act, it primarily enables the users of the affected application to make their own risk assessment.
It is difficult to establish a general deadline for the disclosure of a vulnerability. A good reference value is, for example, 90 days. Depending on how much time has passed since the first report, this can be shortened. In any case, however, the provider should have sufficient time to react to the vulnerability. In some cases, for example, if a vulnerability is already being actively exploited, a significantly shorter period may also be appropriate.
A few important points to keep in mind:
• Unfortunately, it happens from time to time that instead of fixing the vulnerability, a vendor threatens to or even takes legal action. This is not a desired outcome, but it does happen. Again, before you escalate, make sure you have documented your actions.
• Do not publish any information about the vulnerability before the set deadline has passed, not even teasers. There is a risk that the vulnerability will be blown out of proportion.
• Give the vendor enough time to respond to your message.
• Avoid unnecessarily inflating the vulnerability. From the outside, it is often difficult to give a realistic assessment of the vulnerability, and it makes the collaboration more difficult if the vendor or developers feel unjustly attacked.
## Disclosure
At some point, the vulnerability should hopefully be fixed and you want to publish the information about the vulnerability.
As the name Coordinated Vulnerability Disclosure implies, disclosure should ideally be done in coordination with the vendor of the software. Adhere to agreements made, but this does not mean that you have to accept any terms.
If you have set the vendor a deadline, which has now expired, you must decide on an appropriate approach for full disclosure. Keep in mind that your credibility will suffer if you are driven by frustration or resentment because no one has responded to your messages. Try to be as professional as possible in your disclosure. And next time, devote your research to products from more cooperative vendors.
How you publish the vulnerability is up to you. You can publish it as a blog post, bug ticket or on a mailing list. If you have taken care to remain anonymous so far, you should also take care to do so when publishing. If you do not want to publish the vulnerability yourself, you can also use an intermediary. Possible intermediaries are journalists or other IT security experts, such as us. In that case, however, you should involve the intermediary at an early stage.
The information you publish about the vulnerability should be detailed enough so that a reader can assess the credibility of the report and possible implications. If there is no patch yet, you should be careful with working or almost working exploits. The goal of publishing should always be to protect the users.
If the vulnerability occurs in a piece of software, it can be assigned a CVE ID (Common Vulnerabilities and Exposures ID) to make it easier to talk about. Some vendors, such as Apache, will request the CVE ID if they accept your report. If not, you can request a CVE ID yourself.
December 20, 2022
|
# Largest jumps of a spectrally positive $\alpha$-stable process
Let $X(.)$ be a (strictly) $\alpha$-stable process (with $\alpha \in (1,2)$). Assume also that $X(.)$ is spectrally positive (its Lévy measure is concentrated in $[0,+\infty)$).
I am looking for a result that qualitatively says that the set of jump heights of $X(.)$ is unbounded. More formally, define $J_t(x(.)) := \sup_{0\leq s \leq t} \{\vert x(s) - x(s^-)\vert\}$. Is it true that
$$J_t(X(.)) \stackrel{t\rightarrow\infty}{\longrightarrow} + \infty\qquad \mathrm {a.s.}$$ or any other suitable convergence?
Denote by $N$ the jump measure of the Lévy process, i.e. $$N_t(B) := N([0,t] \times B) := \sharp \{s \in [0,t]; \Delta X_s := X_s-X_{s-} \in B\},$$ and by $\nu$ its Lévy measure. It is widely known that $(N_t(B))_{t \geq 0}$ is a Poisson process with intensity $\nu(B)$. In particular, we have
$$\mathbb{P}(N_t(B) >0) = 1- \mathbb{P}(N_t(B)=0)= 1-e^{-\nu(B) t}.$$
For any set $B$ such that $0<\nu(B)<\infty$ this implies
$$\mathbb{P}(\exists s \in [0,t]: \Delta X_s \in B) = \mathbb{P}(N_t(B) >0) \stackrel{t \to \infty}{\to} 1.$$
Applying this for $B = [n,n+1)$, we get
$$\mathbb{P}(\exists t \geq 0: \Delta X_t \in [n,n+1)) = 1.$$
Hence,
$$\mathbb{P}(\forall N \geq 1 \exists t \geq 0: \Delta X_t \geq N) = 1.$$
This shows that the jump heights are (almost surely) unbounded.
Remark: The proof applies to any Lévy process with unbounded Lévy measure.
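As a purely numerical illustration of the argument above (my own sketch, not part of the answer): the jumps of size at least $c$ on $[0,t]$ form a Poisson process with rate $\nu([c,\infty)) \propto c^{-\alpha}$, and their sizes follow the normalized tail of $\nu$, so the largest observed jump can be sampled directly and is seen to grow without bound as $t$ increases.

import numpy as np

rng = np.random.default_rng(1)
alpha = 1.5

def largest_jump(t, cutoff=1.0):
    """Largest jump of size >= cutoff on [0, t], taking nu([x, inf)) = x**(-alpha)."""
    rate = cutoff ** (-alpha)                  # expected number of such jumps per unit time
    n_jumps = rng.poisson(rate * t)
    if n_jumps == 0:
        return 0.0
    u = rng.uniform(size=n_jumps)
    jumps = cutoff * u ** (-1.0 / alpha)       # inverse-CDF sampling from the tail of nu
    return jumps.max()

for t in [10, 1_000, 100_000]:
    print(t, largest_jump(t))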
• This is extremely elegant, thanks. I suspect it would be easy to obtain a modified version of the very last statement like $\mathbb P (\forall N \geq 1 \forall T\geq0\exists t\geq T:\Delta X_t \geq N) = 1$, for example by taking a slightly different definition for the jump measure, counting the jumps in $[T,t]$ instead of $[0,t]$. Am I correct? – Indigo Aug 3 '15 at 19:52
• @Indigo No need to do so. Just note that the restarted process $\tilde{X}_t := X_{t+T}-X_T$ is again a Lévy process with Levy measure $\nu$ for any fixed $T>0$. – saz Aug 3 '15 at 19:57
• Yes, yes of course. That was very insightful and helpful, thanks! – Indigo Aug 4 '15 at 7:57
• @Indigo You are welcome. – saz Aug 4 '15 at 8:11
|
# Discretizing a parabolic PDE with finite volume method
I want to discretize the following parabolic PDE:
$$u_t = \nabla\cdot(\alpha(x)\nabla u)- \beta u\\ x\in\Omega \subset \mathbb{R}^2\\ \partial_n u = 0\\ u(0,x) = u_0(x)\ge 0, \quad \alpha(x)>0$$
Given the Neumann boundary condition above, one can integrate this PDE:
\begin{alignat}{2} \frac{\mathrm d}{\mathrm dt}\int_\Omega \! u\,\mathrm dx&=\phantom{-}\int_\Omega \! \nabla\cdot (a(x)\nabla u)\,\mathrm dx \:\:&-\int_\Omega \! \beta(x) u \,\mathrm dx \\ &=\phantom{-}\int_{\partial\Omega}\! a(x)\nabla u\cdot n \,\mathrm ds&-\int_\Omega \! \beta(x) u \, \mathrm dx \\ &=\phantom{-}\int_{\partial\Omega} a(x)\partial_n u \,\mathrm ds&-\int_\Omega \! \beta(x) u \, \mathrm dx \\ &=-\int_\Omega \! \beta(x) u \, \mathrm dx \end{alignat}
The form of the discretized linear system I want to get is
$$\frac{d\vec{u}}{dt}=A\vec{u}-\vec{c}$$
Unfortunately, I don't think this is exactly what I'm obtaining, despite I do think I use a correct discretization technique. To this end, please consider my steps below.
I discretize the two integrals using the finite volume method:
$$\frac{d}{dt}\sum\limits_{j\in n_i} u_j l_{ij}=-\sum\limits_{j\in n_i} \beta_{ij} u_j l_{ij},$$
where $$n_i$$ is the set of neighboring cells to the cell $$V_j\subset \Omega$$ and where $$l_{ij}$$ is the length of the boundary of $$V_j$$ between $$u_i$$ and $$u_j$$.
Now one can discretize it in time to obtain:
$$\sum\limits_{j\in n_i} \frac{u_j^{n+1}-u_j^n}{\Delta t}l_{ij}=-\sum\limits_{j\in n_i} \beta_{ij} u_j^n l_{ij}$$
So we obtain the linear system $$L\frac{d\vec{u}}{dt} = -A\vec{u},$$
where $$L$$ is the matrix containing $$l_{ij}$$ and each entry of $$A$$ contains $$-\beta_{ij}l_{ij}$$.
Am I correct? I think I'm quite confused as I don't see where each term of the matrix should go, and where the vector $$\vec{c}$$ is supposed to be. Moreover, what does one do with the $$l_{ij}$$ on the LHS?
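Not part of the question, but for comparison, here is a minimal sketch (my own names, on a 1D uniform grid, which is a deliberate simplification of the 2D setting above) of the semi-discrete system the finite volume method usually produces, with cell-centered unknowns, $\alpha$ evaluated at cell faces and zero-flux boundaries:

import numpy as np

def semi_discrete_rhs(u, alpha_face, beta, h):
    """du/dt for u_t = (alpha u_x)_x - beta u on a uniform 1D grid with zero-flux boundaries."""
    n = len(u)
    dudt = np.zeros(n)
    # diffusive flux through the interior faces: q_{i+1/2} = -alpha_{i+1/2} (u_{i+1} - u_i) / h
    q = -alpha_face * (u[1:] - u[:-1]) / h     # alpha_face has length n - 1
    dudt[:-1] -= q / h                         # outflow through the right face of cell i
    dudt[1:] += q / h                          # inflow through the left face of cell i+1
    # the zero-flux (Neumann) boundary faces contribute nothing
    return dudt - beta * u

In this form the right-hand side is exactly $A\vec{u}$ for a tridiagonal matrix $A$ (including the diagonal $-\beta$ contribution); a nonzero boundary flux or a source term is what would supply the constant vector $\vec{c}$.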
|
# A triangle has sides A, B, and C. Sides A and B are of lengths 1 and 7, respectively, and the angle between A and B is (5pi)/6 . What is the length of side C?
Feb 20, 2017
You can utilize the law of cosines to solve this problem.
#### Explanation:
Given some triangle with two sides and their included angle defined, we can solve for the third side. The notation I am following is that where $a , b , c$ denote the sides, while $\angle A , \angle B , \angle C$ denote the angles opposite those sides. The formula is given as follows:
${c}^{2} = {a}^{2} + {b}^{2} - 2 a b \cos \left(\angle C\right)$
One can consider this the extended or general form of the Pythagorean theorem, in which we have a compensation term to account for non-right triangles. To study this a bit, focus on the angle C. Should it be greater than 90 degrees, the cosine is negative, so the last term makes an extra positive contribution and c must be longer, as you would expect for the side opposite an obtuse angle. If C is exactly 90 degrees, the contribution reduces to 0; we have an ordinary right triangle and are simply finding the hypotenuse. Given an acute angle, the third side must necessarily be smaller.
So how do we use this formula? Plug in values, and since we are solving for c, square root at the end.
We have the following:
$a = 1$
$b = 7$
$\angle C = \frac{5 \pi}{6}$
Simply plug them in, and evaluate, and this should yield a final answer that is close to ~8. Try it out!
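A quick numerical check (a small sketch, not part of the original answer):

import math

a, b = 1, 7
C = 5 * math.pi / 6
c = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(C))
print(c)   # ≈ 7.88, i.e. close to ~8 as stated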
|
# Comparing Decimals
Decimals are compared in nearly same way as integers.
Steps for comparing decimals:
1. Find number of digits in integer part of each number. If number of digits is not equal, add required number of LEADING zeros (just like in case of integers).
2. Find number of digits in fractional (decimal) part of each number. If number of digits is not equal, add required number of TRAILING zeros.
3. Start comparing digits from left to right (just like in case of integers), ignoring decimal point.
Example 1. Compare 23.99 and 48.05.
Here number of digits in both integer and decimal parts of both numbers is equal, so we do step 3.
The leftmost digits are 2 and 4. Since 2<4 then 23.99<48.05.
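The three steps above can be mirrored directly in code. Here is a small sketch (mine, not part of the lesson, and for non-negative numbers only) that pads with leading and trailing zeros and then compares digit by digit, reproducing Example 1:

def compare_decimals(a, b):
    """Return '<', '>' or '=' for two non-negative decimal strings, following the steps above."""
    ai, _, af = a.partition(".")
    bi, _, bf = b.partition(".")
    width_int = max(len(ai), len(bi))
    width_frac = max(len(af), len(bf))
    da = ai.zfill(width_int) + af.ljust(width_frac, "0")   # steps 1 and 2: add leading/trailing zeros
    db = bi.zfill(width_int) + bf.ljust(width_frac, "0")
    return "<" if da < db else ">" if da > db else "="     # step 3: compare digits left to right

print(compare_decimals("23.99", "48.05"))   # <
print(compare_decimals("0.009", "45.7"))    # <  (the same result Example 2 below finds)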
Next example.
Example 2. Compare 0.009 and 45.7.
Integer part of the first number contains only one digit, while integer part of the second contains two digits. Thus, we add one leading zero to the first number.
Fractional part of the second number contains only one digit, while fractional part of the first contains three digits. Thus, we add two trailing zeros to second number.
First number: 00.009.
Second number: 45.700.
Now, compare the leftmost digits 0 and 4. Since 0<4 then 0.009<45.7.
Next example.
Example 3. Compare 12.567 and 12.789.
As can be seen number of digits in both parts is equal, so we begin to compare digits.
First number: 12.567.
Second number: 12.789.
The leftmost digits are equal (both equal 1).
Move to the right: again digits are equal (both equal 2).
Move to the right: since 5<7 then 12.567<12.789.
Next example.
Example 4. Compare -23.1 and 15.68.
As always negative number is less than positive, so -23.1<15.68.
Last example.
Example 5. Compare -1.015 and -1.05.
First compare numbers without sign: 1.015 and 1.05.
Second number has 2 digits in decimal part while first has three digits, so we need to add one trailing zero to the second number.
First number: 1.015.
Second number: 1.050.
Now, compare digits.
The leftmost digits are equal (both equal 1), so move to the right.
Next digits are also equal (both equal 0), so move to the right.
Since 1<5 then 1.015<1.05.
This means that -1.015> -1.05.
Now, practice a bit.
Exercise 1. Compare 2.45 and 7.8.
Answer: 2.45<7.8.
Next exercise.
Exercise 2. Compare 2.67 and -17.5.
Answer: 2.67> -17.5.
Next exercise.
Exercise 3. Compare 35 and 7.89.
Answer: 35>7.89. Hint: decimal part of integer is 0 and don't forget to add zeros (35.00 and 07.89).
Next exercise.
Exercise 4. Compare 12.5 and 12.005.
Answer: 12.5>12.005. Hint: add trailing zeros.
Last exercise.
Exercise 5. Compare -543.209 and -543.20757.
Answer: -543.209<-543.20757. Hint: add trailing zeros, ignore signs and then change direction of inequality.
|
Integrate the function e^(x^2)
1. Mar 19, 2007
lost_math
1. The problem statement, all variables and given/known data
Integrate the function e^(-x^2) with definite integrals -infinity to X
2. Relevant equations
3. The attempt at a solution
I know that the indefinite integral of this reduces to sqrt(pi), but dont know what to do with the definite integral. Is this a known result that I can simply plug in and use?What kind of substitution can I try? FYI- this is a variation of the CDF for a normally distributed function...
2. Mar 19, 2007
Gib Z
Impossible in terms of elementary functions. Why not look up the error function though?
EDIT: is X independent of x? I am unclear with your notation.
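Following up on the error-function pointer (my own sketch, not part of the thread): the integral of e^(-x^2) from -infinity to X equals (sqrt(pi)/2)*(1 + erf(X)), which the standard library can evaluate directly:

import math

def integral_to(X):
    """Integral of exp(-x**2) from -infinity to X, via the error function."""
    return math.sqrt(math.pi) / 2 * (1 + math.erf(X))

print(integral_to(0.0))             # 0.8862... = sqrt(pi)/2
print(integral_to(float("inf")))    # 1.7724... = sqrt(pi)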
|
The Interaction of Analysis and Geometry
Edited by: V. I. Burenkov Cardiff University, Cardiff, United Kingdom
T. Iwaniec Syracuse University, Syracuse, NY
S. K. Vodopyanov Sobolev Institute of Mathematics, Novosibirsk, Russia
Available Formats:
Electronic ISBN: 978-0-8218-8103-3
Product Code: CONM/424.E
List Price: $105.00 MAA Member Price:$94.50
AMS Member Price: $84.00
• Book Details
Contemporary Mathematics
Volume: 424; 2007; 344 pp
MSC: Primary 26; 28; 30; 35; 46; 49; 53; 57; 58;
The papers in this volume are based on talks given at the International Conference on Analysis and Geometry in honor of the 75th birthday of Yuriĭ Reshetnyak (Novosibirsk, 2004). The topics include geometry of spaces with bounded curvature in the sense of Alexandrov, quasiconformal mappings and mappings with bounded distortion (quasiregular mappings), nonlinear potential theory, Sobolev spaces, spaces with fractional and generalized smoothness, variational problems, and other modern trends in these areas. Most articles are related to Reshetnyak's original works and demonstrate the vitality of his fundamental contribution in some important fields of mathematics such as the geometry in the "large", quasiconformal analysis, Sobolev spaces, potential theory and variational calculus.
Graduate students and research mathematicians interested in relations between analysis and differential geometry.
• Articles
• I. D. Berg and I. G. Nikolaev - On an extremal property of quadrilaterals in an Aleksandrov space of curvature $\leq K$ [ MR 2316328 ]
• V. I. Burenkov, H. V. Guliyev and V. S. Guliyev - On boundedness of the fractional maximal operator from complementary Morrey-type spaces to Morrey-type spaces [ MR 2316329 ]
• V. N. Dubinin and D. B. Karp - Generalized condensers and distortion theorems for conformal mappings of planar domains [ MR 2316330 ]
• M. L. Goldman - Rearrangement invariant envelopes of generalized Besov, Sobolev, and Calderon spaces [ MR 2316331 ]
• Tadeusz Iwaniec - Null Lagrangians, the art of integration by parts [ MR 2316332 ]
• Maria Karmanova - Geometric measure theory formulas on rectifiable metric spaces [ MR 2316333 ]
• A. P. Kopylov - Stability and regularity of solutions to elliptic systems of partial differential equations [ MR 2316334 ]
• V. M. Miklyukov - Removable singularities of differential forms and $A$-solutions [ MR 2316335 ]
• Hitoshi Murakami - Various generalizations of the volume conjecture [ MR 2316336 ]
• P. Pedregal - Gradient Young measures and applications to optimal design [ MR 2316337 ]
• H. M. Reimann - Wavelets for the cochlea [ MR 2316338 ]
• Yu. G. Reshetnyak - Sobolev-type classes of mappings with values in metric spaces [ MR 2316339 ]
• László Székelyhidi, Jr. - Counterexamples to elliptic regularity and convex integration [ MR 2316340 ]
• S. K. Vodopyanov - Geometry of Carnot-Carathéodory spaces and differentiability of mappings [ MR 2316341 ]
• S. K. Vodopyanov - Foundations of the theory of mappings with bounded distortion on Carnot groups [ MR 2316342 ]
|
# Why two symbols for the Golden Ratio?
Why is it that both $\phi$ and $\tau$ are used to designate the Golden Ratio $\frac{1+\sqrt5}2$?
• I have never heard of $\tau$ denoting the Golden Ratio. Can you provide an example? – pseudoeuclidean Jan 3 '17 at 14:23
• I too have only seen $\phi$ used for this – MPW Jan 3 '17 at 14:24
• It is just a symbol, who cares? I can use the symbol $U:=\frac{1+\sqrt 5}2$. – Masacroso Jan 3 '17 at 14:27
• What is $\tau$ ? Is it the reciprocal of $\phi$ ? – Peter Jan 3 '17 at 14:31
• In some contexts, I have seen $\tau = 2 \pi$ – pseudoeuclidean Jan 3 '17 at 14:40
The Golden Ratio or Golden Cut is the number $$\frac{1+\sqrt{5}}{2}$$ which is usually denoted by phi ($\phi$ or $\varphi$), but also sometimes by tau ($\tau$).
Why $\phi$ : Phidias (Greek: Φειδίας) was a Greek sculptor, painter, and architect. So $\phi$ is the first letter of his name.
The symbol $\phi$ ("phi") was apparently first used by Mark Barr at the beginning of the 20th century in commemoration of the Greek sculptor Phidias (ca. 490-430 BC), who a number of art historians claim made extensive use of the golden ratio in his works (Livio 2002, pp. 5-6).
Why $\tau$ : The golden ratio or golden cut is sometimes named after the Greek word τομή, meaning "cut" or "section", so again the first letter is taken: $\tau$.
• Thank you. Could Mr. Livio have been pulling our legs? Given the constant's intimate relation to the (F)ibonacci series, my choice has to be $\phi.$ – Senex Ægypti Parvi Jan 4 '17 at 17:32
|
# Diff 2^x and 3^x
## Recommended Posts
I need help with differentiating functions of the form a^x.
For example, how do I work out the derivative of 2^x and 3^x?
##### Share on other sites
Use the fact that [imath]a^x = e^{x\log a}[/imath]. This is easy to differentiate.
##### Share on other sites
I don't get how that helps, the problem of the x being in the exponent is still there.
##### Share on other sites
The point is that you know (presumably from elementary analysis) what the derivative of ex is.
##### Share on other sites
Ok, I am new here so I do not know what standard convention is, and if this is an inappropriate response feel free to delete it; but in answer to the original question, to differentiate a^x you can use implicit differentiation. (The other idea posted is probably somewhat easier, but someone seemed not to understand it, so this is another way to look at it.) That is, if
y = a^x
ln(y) = ln(a^x)
ln(y) = x ln(a) (now differentiate implicitly)
y'/y = ln(a)
y' = y ln(a) (substitute the original equation for y)
y' = (a^x)ln(a)
so hopefully that makes sense.
Incidentally, forgive the new guy here, but how are people writing their mathematical stuff so nicely as I do not see the proper tools here in the reply box, and if I try to paste in from somewhere else it does not work?
Edit: I also now notice that this is a very old post. Sorry....
##### Share on other sites
Incidentally, forgive the new guy here, but how are people writing their mathematical stuff so nicely as I do not see the proper tools here in the reply box, and if I try to paste in from somewhere else it does not work?
##### Share on other sites
$y=a^x$
$a^x=e^{x\ln{a}}$
differentiating with the chain rule (the exponent is $x\ln{a}$, and its derivative with respect to $x$ is $\ln{a}$)
$\frac{dy}{dx}=e^{x\ln{a}}\,\ln{a}$
and we know
$e^{x\ln{a}}=a^x$
so that
$\frac{dy}{dx}=a^x\ln{a}$
##### Share on other sites
Use the fact that [imath]a^x = e^{x\log a}[/imath]. This is easy to differentiate.
shouldn't that log be ln?
##### Share on other sites
thanks for catching that.
##### Share on other sites
$a^x=e^{x{\ln{a}}}$
$\frac{d(e^u)}{dx}={e^u}{\frac{du}{dx}}$
so, $\frac{d(a^x)}{dx}={a^x}{\ln{a}}$
if you felt like it, you could do it logarithmically.
$\ln{y}=x\ln{a}$
$\frac{d(\ln{u})}{dx}={\frac{1}{u}}{\frac{du}{dx}}$
so, $\frac{d(a^x)}{dx}={a^x}{\ln{a}}$
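If you want to convince yourself numerically, here is a quick finite-difference check (a small Python sketch, not part of the derivation above; the step size h and the sample values of a and x are arbitrary illustrative choices):

import math

def numerical_derivative(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for a in (2.0, 3.0):
    for x in (-1.0, 0.0, 2.5):
        approx = numerical_derivative(lambda t: a**t, x)
        exact = (a**x) * math.log(a)  # the result d/dx a^x = a^x ln(a)
        print(a, x, round(approx, 6), round(exact, 6))

The two columns agree to the accuracy of the finite-difference step, which is exactly what $\frac{d(a^x)}{dx}=a^x\ln a$ predicts.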
##### Share on other sites
Don't mind me, I am just trying to write my previous stuff in this LaTex and see if I can make it work.
$y = a^x$
$\ln{y} = \ln{a^x}$
$\ln{y} = x\ln{a}$
now, differentiate implicitly
$\tfrac{dy}{dx}(\tfrac{1}{y}) = \ln{a}$
$\tfrac{dy}{dx} = y\ln{a}$
now, substitute for y
$\tfrac{dy}{dx} = a^x\ln{a}$
so,
$\tfrac{d}{dx}a^x = a^x\ln{a}$
Alright, I think I got it. Thanks!
##### Share on other sites
shouldn't that log be ln?
Only if you are unfortunate enough to be trapped in a first year calculus class.
"log" denotes the natural logarithm in higher maths. Sometimes it will mean "base-2" if you're trapped in a computer science class (sometimes "lg" is used here). Universal meaning in higher maths: "log" without an explicitly mentioned base means "whatever base makes this statement true" if it matters and "whatever the readers favorite base is" if it doesn't.
##### Share on other sites
i've always been told log without a specified base is base 10....and my calculator seems to agree.
##### Share on other sites
I've also been told that at higher level maths people use log to denote ln. Apparently at that level who needs base 10 when you have base e?
##### Share on other sites
i've always been told log without a specified base is base 10....and my calculator seems to agree.
Now you know better. Go to the library and browse through some advanced math texts and see for yourself (anything beyond elementary calculus). If you like, you can write the authors and let them know your calculator says their notation is wrong. Bring a lot of stamps though, you'll need them.
##### Share on other sites
Yes, but in the context of the original post, which did appear to be in a calculus class, it probably was a good idea to clarify it. I mean, just by the question being asked it was fairly apparent that this was not a discussion in higher math. It was a calculus discussion. So I think it much better to denote $log_e(x)$ by $ln(x)$
##### Share on other sites
I would write (and expect) $\log$ for base-e in particle physics theory papers.
##### Share on other sites
then why teach "log with no specified base" as base 10 at lower levels? why use the ln notation at all?
##### Share on other sites
A hangover from the old days, laziness, the fact you don't want to explain what e is, easier to comprehend, inaccuracy, the fact that logs to different bases only differ by a constant multiple anyway so it doesn't actually matter arithmetically which one you use (i.e., they all behave the same arithmetically), the fact that base 10 is what people know and only know at that stage, necessity. Take your pick or invent your own reason.
##### Share on other sites
how far in maths do you have to go until log goes from log10 to loge?
##### Share on other sites
Anything beyond intro calculus texts and "ln" has largely vanished, though even in these texts it's not a given that you'll see "ln".
##### Share on other sites
There also is the whole thing of context. If there is an exponential in the expression, you can pretty much assume that "log" is base e. Even at lower levels. Like in the expression that started this whole thing.
[imath]
a^x = e^{x\log a}
[/imath]
It is pretty obvious what is meant. The clarification is pretty much unnecessary, but why bother confusing a Calc 1 student over it? Incidentally, in my opinion most of the reason that math books don't bother writing "ln" is that in straight math you will hardly ever see any log but base e after you get into calculus. In the few astronomy classes that I took, though, they used both base 10 (written as log) and base e (written as ln).
## Create an account
Register a new account
|
# Need help in proving this Moment of Inertia equation
1. Jul 11, 2014
### null void
1. The problem statement, all variables and given/known data
Suppose I have a rod with a known length, L, and 2 point-like masses with known mass, m, each fixed at one end of the rod. The rod has an axle connected at its midpoint, and the axle is connected to a circular spring.
The rod (with the 2 masses) is rotated 180 degrees (∏ in radians) and is released to oscillate. How do I find the period, T, of the oscillation?
The equation I want to prove is:
T² = 8m∏²r²/τ + T₀²
m is the mass of each point-like mass,
r is the distance of the point-like masses from the midpoint of the rod,
T₀ is contributed by the rod... (based on my guess)
2. Relevant equations
3. The attempt at a solution
My attempt at a solution is in the attachment. The final expression is quite different from the equation I want to prove... Can anyone help me check where I went wrong?
#### Attached Files:
• ###### delete.doc.docx
File size:
14.4 KB
Views:
73
2. Jul 12, 2014
### Simon Bridge
Please do not post proprietary format attachments - not everyone can read them.
doc and docx formats can be particularly troublesome even if you have MSOffice or use google-docs.
you can post in pdf format, or use the LaTeX functions.
I think I've managed to open the docx though.
You write:
Total energy from potential energy of circular spring: $$\frac{1}{2}\tau\pi = \text{KE}+\text{PE} =\frac{1}{2}I\omega^2+\frac{1}{2}\tau\pi\left( 1-\frac{\theta}{\pi} \right)$$
$\tau\theta = I\omega^2$
$$\omega = \frac{d\theta}{dt}=\sqrt{\frac{\theta\tau}{I}}\\ \frac{d\theta}{\sqrt{\theta}}=\sqrt{\frac{\tau}{I}}dt$$
By integrating (I consider all of the parameters constant except $\theta$ and $\omega^2$, which are changing.)
$$2\sqrt{\theta} = t\sqrt{\frac{\tau}{I}}\\ t=2\sqrt{\frac{I\theta}{\tau}}$$
After 1/4 of the T time pass from the moment the rod is released $\theta = \pi$
$$T=8\sqrt{\frac{I\pi}{\tau}}$$
Squaring both sides an inserting the moment of point-like mass and rod into the equation:
$$T^2=64\frac{\pi}{\tau}\left( 2mr^2+\frac{m_{rod}L^2}{12} \right)$$
... is all that correct?
I take it that this system is basically a dumbbell rotating back-and-forth about its center and the spring is linear - so the restoring torque is proportional to the angle of deflection from equilibrium $\tau=-k\theta$?
You appear to have not yet finished - there is no "$r$" or "$\tau$" in the problem statement for example.
If that $\tau$ in your equation is, indeed, torque, then it is not a constant.
You should already know what sort of equation to expect for $\theta(t)$ in terms of the period T ... why not just plug that into Newton's second law?
Last edited: Jul 12, 2014
3. Jul 13, 2014
### null void
I hope I get it right... Since the oscillation swings through at most 180° (∏ in radians), my equation for the angular displacement is:
Θ(t) = ∏ sin(2∏ft)
then i differentiate the equation to get the angular velocity and acceleration:
ω(t) = 2∏²f cos(2∏ft)
α(t) = -4∏³f² sin(2∏ft)
Then from Newton's 2nd law,
τ=Iα
τ = -4I∏³f² sin(2∏ft)
The maximum τ happens when the sin(...) = 1
Then I can be decomposed into mr² and MR²/12, and f is the inverse of T,
.....well i am not very sure if i can simply ignore the negative sign.....
T² = 8mr²∏³/τ + M_rod L_rod²∏³/(3τ)
The problem is still the ∏... I must have done something wrong again... The equation I want is:
T² = 8m∏²r²/τ + T₀²
The T₀² should be something contributed by the rod.
Last edited by a moderator: Jul 13, 2014
4. Jul 13, 2014
### Simon Bridge
You still have a $\tau$ and an $r$ term. Why don't you express these in terms of $\theta$ and $L$? (Why did you make the equations so big?!)
Newtons second law for rotation is: $\tau = I\ddot\theta$
If the spring is linear with spring constant k, then $\tau = -k\theta$
(I think this second equation is the one you are missing.)
You suggested $\theta(t)=\pi\sin (2\pi f t)$
... what is the angular displacement at t=0 in this equation?
... what is the angular displacement at t=0 in the problem statement?
Since you want to find the period, T, then why not express the $\theta(t)$ equation in terms of T instead of f?
5. Jul 13, 2014
### null void
Sorry, I didn't notice I mixed up the terms τ-instantaneous and τ-maximum in the previous post.
I started to doubt if my concept is right or not. My current idea is :
the rod is rotated 180° before being released to oscillate, so the maximum amplitude of Θ(t) is ∏
Θ(t) = ∏ sin(2∏t/T)
angular acceleration = α(t) = (-4∏³/T²) sin(2∏t/T)
....i guess the Θ with 2 dots means double differentiation of Θ...............
from the Newton's second Law,
I α(t) = -k Θ(t)
I ((-4∏³/T²) sin(2∏t/T)) = -k (∏ sin(2∏t/T))
T² = 4I∏³/(k∏)
and k∏ is the maximum torque generated by the restoring spring.
T² = 4I∏³/τmax
is this correct?
Last edited by a moderator: Jul 13, 2014
6. Jul 13, 2014
### Simon Bridge
According to that equation, Θ(0)=0, but according to the problem statement, Θ(0)=∏
It does not make any difference for the period calculation, but it can cost you marks.
If you use: $\Theta(t)=\pi\cos 2\pi t/T$, then $$\ddot\Theta = -\frac{4\pi^3}{T^2}\cos 2\pi t/T = -\frac{4\pi^2}{T^2}\Theta(t)$$ Note: You are correct- $\ddot y \implies \frac{d^2}{dt^2}y$ while $y'' \implies \frac{d^2}{dx^2}y$ ... it just lets you write out the derivatives on one line. If y=f(x,t) then the implied derivatives are partials.
You end up with: $$T^2=\frac{4\pi^2}{k}I=\frac{4\pi^3}{\tau_{max}}I$$ ... well done.
The next step is to get the correct expression for the moment of inertia.
All you need to finish that is to find an expression for r in terms of L.
Some pointers:
∏ is the cap-pi and refers to a product.
The ratio of the circumference to the diameter is a lower-case pi: π
It looks a bit like an "n" I know. That's why I use LaTeX rather than the quick-symbols.
7. Jul 14, 2014
### null void
yeah the Torque and the period symbol almost look the same too......
In my lab manual, the period calculation is:
$T^2 = \frac{8m\pi^2r^2}{D} + T^2_0$
where the $T_0$ is the period of oscillation when the mass is removed from the rod.
And the $D$ is stated in my manual that it is the restoring torque, i suppose it means the maximum torque generated by the 'circular spring'.
But no matter how I think and try, I never get the equation with $\pi^2$. Do you think there is a mistake in my manual? I think the $D$ should be the spring constant.
$r = \frac{1}{2}L$
so should I use the $2mr^2$ and $\frac{M_{rod} L^2_{rod}}{12}$ ?
if so i will end up getting this equation:
$T^2 = \frac{4\pi^3}{\tau}\left(2m\left(\frac{L_{rod}}{2}\right)^2 + \frac{M_{rod}L^2_{rod}}{12}\right)$
or if I keep the r,
$T^2 = \frac{4\pi^3}{\tau}(2m(r^2) + \frac{M_{rod}L^2_{rod}}{12})$
Last edited: Jul 14, 2014
8. Jul 14, 2014
### Simon Bridge
You need to check the definitions of the variables in the model equation.
I see now why you didn't want to lose the "r" ... so you have to put all lengths in terms of r.
What is this "r" as it is used in your book?
It may well be a typo of course.
9. Jul 14, 2014
### null void
$r$ is the distance of the point-like mass to the center of the axis and $L_{rod}$ is the length of the rod. The point-like masses can be moved closer to the center, so $r$ can be changed, but $L_{rod}$ is always fixed.
10. Jul 14, 2014
### Simon Bridge
Ah that makes sense - you left that off the description before.
The more I think about it the more it looks like a typo in your book.
Devil is in the details - go through the book carefully.
There are not many places to misplace a pi.
11. Jul 14, 2014
### null void
12. Jul 14, 2014
### Simon Bridge
For the mass on a spring:
$m\ddot x = -kx$
$x=A\cos 2\pi t/T$
... if we say that D = the max restoring force then D=kA
then: $$m\frac{4\pi^2}{T^2} = k \implies T^2=4\pi^2\frac{m}{k}= 4\pi^2A\frac{m}{D}$$
... you can see that this has to be correct.
...you can see that if $A=\pi$ then you get a $\pi^3$ term.
I think that D, for your version, has to be understood as a spring constant (dimensions of torque per radian)... especially considering the form of eq(1) in the leaflet.
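For what it's worth, here is a small numerical cross-check of the two forms of the period formula, T² = 4π²I/D and T² = 8mπ²r²/D + T₀² (a Python sketch; all of the mass, length and spring-constant values below are made-up illustrative numbers, not values from the lab manual):

import math

m = 0.2      # each point mass in kg (assumed)
M_rod = 0.1  # rod mass in kg (assumed)
L = 0.6      # rod length in m (assumed)
r = 0.25     # distance of each point mass from the axle in m (assumed)
D = 0.05     # torsion spring constant in N*m per radian (assumed)

# moment of inertia: two point masses plus a uniform rod about its centre
I = 2 * m * r**2 + M_rod * L**2 / 12

# period from T^2 = 4*pi^2*I/D
T = 2 * math.pi * math.sqrt(I / D)

# manual's form: T0 is the period with the point masses removed
T0 = 2 * math.pi * math.sqrt((M_rod * L**2 / 12) / D)
T_manual = math.sqrt(8 * m * math.pi**2 * r**2 / D + T0**2)

print(T, T_manual)  # the two numbers should coincide

The agreement simply reflects that 4π²(2mr²)/D = 8π²mr²/D and that the rod's contribution is exactly T₀².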
13. Jul 26, 2014
### null void
Sorry for the late reply. I think everything is fine now. Thank you very much, Simon, for guiding me to the right equation.
14. Jul 27, 2014
Well done.
|
# Error while writing equation
\documentclass[12pt,a4paper]{article}
\usepackage[a4paper, total={6in, 8in}]{geometry}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{fourier}
\usepackage{mathtools}
\usepackage{amsmath}
\usepackage[english]{babel}
\usepackage{graphicx}
\usepackage{marginnote}
\begin{document}
\underline {Chandrashekhar EOS (1935)} : \\
\begin{align*}
\overline \rho &= $K$ ~\Big( $sinh ~t$ ~- ~$t$\Big)\\
\overline P &= \frac {1}{3} $K$ ~ \Big($sinh~t$ - 8 $sinh$ \frac {1}{2} t + $3t$\Big)\\
\end {align*}
\end {document}
Can you tell me why I get errors on the right-hand side of the equations?
• Don't use \$ inside align, as the material is already in math mode. Use \sinh for the hyperbolic sine and \Bigl in front of the opening parentheses, \Bigr with the closing ones (but in those formulas you don't need larger parentheses). Don't use ~ in math mode. – egreg Aug 7 '14 at 10:17
In addition to @egreg's comments, I would suggest replacing \overline here (too large) with \widebar, borrowed from the mathx font (mathabx package), and using medium-sized fractions (\mfrac, defined in the nccmath package) for the fraction coefficients. Also, don't load amsmath, since mathtools does it for you. You can replace some of the ~ with a thin space (\,).
\documentclass[12pt,a4paper]{article}
\usepackage[a4paper, total={6in, 8in}]{geometry}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{fourier}
\usepackage{mathtools}
\usepackage[english]{babel}
\usepackage{graphicx}
\usepackage{nccmath}
\DeclareFontFamily{U}{mathx}{\hyphenchar\font45}
\DeclareFontShape{U}{mathx}{m}{n}{ <-> mathx10 }{}
\DeclareSymbolFont{mathx}{U}{mathx}{m}{n}
\DeclareFontSubstitution{U}{mathx}{m}{n}
\DeclareMathAccent{\widebar}{0}{mathx}{"73}
\usepackage{marginnote}
\begin{document}
\underline {Chandrashekhar EOS (1935)} :
\begin{align*}
\widebar{\rho} &= K \,\bigl( \sinh t - t \bigr)\\
\widebar{P} &= \mfrac {1}{3}K\, \Bigl(\sinh t - 8 \sinh \mfrac {1}{2} t + 3t \Bigr)
\end {align*}
\end {document}
• \bigl and \bigr, if you really want larger delimiters. Since you're defining the font shape, use <-> instead of fixed sizes, as mathx10 is available in Type1 format. – egreg Aug 7 '14 at 10:46
• You're right. I'll change it at once. – Bernard Aug 7 '14 at 10:47
|
# If n is a positive integer and p = 3.021 10n, what is the value of n?
Author Message
TAGS:
Senior Manager
Joined: 24 Mar 2011
Posts: 465
Location: Texas
Followers: 4
Kudos [?]: 78 [0], given: 20
If n is a positive integer and p = 3.021 10n, what is the [#permalink] 24 Apr 2011, 08:22
If n is a positive integer and p = 3.021 × 10n, what is the value of n?
1. 3,021 < p < 302,100
2. 103 < p < 105
If n is a positive integer and $$p = 3.021 * (10)^n$$, what is the value of n?
1. $$3,021 < p < 302,100$$
2. $$(10)^3 < p < (10)^5$$
Last edited by fluke on 28 Apr 2011, 02:18, edited 2 times in total.
Modifying the question to be correct
Manager
Joined: 03 Apr 2011
Posts: 106
Location: United States
GMAT 1: 760 Q50 V42
GPA: 3.54
WE: Military Officer (Military & Defense)
Followers: 4
Kudos [?]: 23 [0], given: 0
Re: Data sufficiency - Inequalities and basic equation [#permalink] 26 Apr 2011, 11:32
If n is a positive integer and p = 3.021 × 10n, what is the value of n?
1. 3,021 < p < 302,100
2. 103 < p < 105
Are you asking for how to solve this? Here is how I would approach it
Look at it as p = 30.21*n
1. I would immediately see that setting n = 100 would make p = 3021, and setting n = 10000 makes p = 302100.
Thus 100 < n < 10000. NOT SUFFICIENT
2. If we had n = 3 it would be ~90 , if we had n = 4 it would be ~120. Every other n would not work.
NOT SUFFICIENT
IMPORTANT:
This is not a valid DS question for the GMAT. The two options will never contradict each other. Option 2 clearly contradicts option 1. Option 1 allows for a giant range of values while option 2 implies there is no answer at all
Manager
Status: It's "Go" Time.......
Affiliations: N.C.C.
Joined: 22 Feb 2011
Posts: 180
Location: India
Followers: 6
Kudos [?]: 31 [0], given: 2
Re: Data sufficiency - Inequalities and basic equation [#permalink] 26 Apr 2011, 12:58
This isn't a GMAT question for sure!!
agdimple333 wrote:
If n is a positive integer and p = 3.021 × 10n, what is the value of n?
1. 3,021 < p < 302,100
2. 103 < p < 105
_________________
We are twice armed if we fight with faith.
He who knows when he can fight & when He can't will be victorious.
Math Forum Moderator
Joined: 20 Dec 2010
Posts: 2031
Followers: 137
Kudos [?]: 1129 [0], given: 376
Re: Data sufficiency - Inequalities and basic equation [#permalink] 27 Apr 2011, 12:54
agdimple333 wrote:
If n is a positive integer and p = 3.021 × 10n, what is the value of n?
1. 3,021 < p < 302,100
2. 103 < p < 105
agdimple333:
Is the actual question as follows?
If n is a positive integer and $$p = 3.021 * (10)^n$$, what is the value of n?
1. $$3,021 < p < 302,100$$
2. $$(10)^3 < p < (10)^5$$
_________________
Senior Manager
Joined: 24 Mar 2011
Posts: 465
Location: Texas
Followers: 4
Kudos [?]: 78 [0], given: 20
Re: Data sufficiency - Inequalities and basic equation [#permalink] 27 Apr 2011, 14:05
I think you are right. And now I think that I didn't get the answer because it was printed incorrectly in the PDF file that I was reading.
VP
Status: There is always something new !!
Affiliations: PMI,QAI Global,eXampleCG
Joined: 08 May 2009
Posts: 1359
Followers: 13
Kudos [?]: 164 [0], given: 10
Re: Data sufficiency - Inequalities and basic equation [#permalink] 28 Apr 2011, 00:29
fluke wrote:
agdimple333 wrote:
If n is a positive integer and p = 3.021 × 10n, what is the value of n?
1. 3,021 < p < 302,100
2. 103 < p < 105
agdimple333:
Is the actual question as follows?
If n is a positive integer and $$p = 3.021 * (10)^n$$, what is the value of n?
1. $$3,021 < p < 302,100$$
2. $$(10)^3 < p < (10)^5$$
combining options 1 and 2, n can be either 3 or 4.
hence E.
_________________
Visit -- http://www.sustainable-sphere.com/
Promote Green Business,Sustainable Living and Green Earth !!
Math Forum Moderator
Joined: 20 Dec 2010
Posts: 2031
Followers: 137
Kudos [?]: 1129 [0], given: 376
Re: Data sufficiency - Inequalities and basic equation [#permalink] 28 Apr 2011, 02:04
If n is a positive integer and $$p = 3.021 * (10)^n$$, what is the value of n?
1. $$3,021 < p < 302,100$$
2. $$(10)^3 < p < (10)^5$$
1.
$$3021=3.021*10^3$$
$$302100=3.021*10^5$$
$$3.021*10^3 < p < 3.021*10^5$$
$$3.021*10^3 < 3.021*10^4 < 3.021*10^5$$
n=4
Sufficient.
2. $$(10)^3 < p < (10)^5$$
$$1000 < p < 100000$$
p can be 3021 or 30210 making n=3 or n=4 respectively.
Not Sufficient.
Ans: "A"
_________________
Similar topics Replies Last post
Similar
Topics:
If n is a positive integer and k=5.1*10^n, what is the value 4 21 Jan 2012, 08:01
3 If n is a positive integer and k=5.1*10^n, what is the value 9 21 May 2011, 09:08
7 If n is a positive integer and k = 5.1 x 10^n , what is the 3 09 Mar 2011, 06:52
if n is a positive integer and k=5.1*10^n, what is the value 3 17 Feb 2008, 17:41
If n is a positive integer and k=5.1*10^n , what is the 4 03 Jun 2005, 11:31
|
# Infinite Degree Algebraic Field Extensions
In I. Martin Isaacs Algebra: A Graduate Course, Isaacs uses the field of algebraic numbers $$\mathbb{A}=\{\alpha \in \mathbb{C} \; | \; \alpha \; \text{algebraic over} \; \mathbb{Q}\}$$ as an example of an infinite degree algebraic field extension. I have done a cursory google search and thought about it for a little while, but I cannot come up with a less contrived example.
My question is
What are some other examples of infinite degree algebraic field extensions?
-
One example which is quite natural is the field given by adjoining all roots of unity or all radicals to $\mathbb Q$. But I must say, I don't like this question: It is not that hard to give you any number of infinite algebraic field extensions (take a suitable collection of polynomials and consider the splitting field), but it seems like there is very little to be learned from such an exercise in thinking up any examples. – Sam May 30 '12 at 13:50
@MTurgeon $\pi$ is transcendental over $\mathbb{Q}$, hence $\mathbb{Q}(\pi)/\mathbb{Q}$ is not algebraic. – Holdsworth88 May 30 '12 at 14:07
How are the algebraic numbers a contrived example? They are the largest algebraic extension of $\mathbb{Q}$! Any infinite algebraic extension lies in them! – Qiaochu Yuan May 30 '12 at 14:08
@Holdsworth88 I misread the requirements for this infinite degree extension. – M Turgeon May 30 '12 at 14:28
– lhf Nov 14 '13 at 11:44
Another simple example is the extension obtained by adjoining all roots of unity.
Since adjoining a primitive $n$-th root of unity gives you an extension of degree $\varphi(n)$ and $\varphi(n)=n-1$ when $n$ is prime, you get algebraic numbers of arbitrarily large degree when you adjoin all roots of unity.
-
And, as is well-known, this is the largest abelian algebraic extension – M Turgeon May 30 '12 at 14:30
The field of algebraic numbers is important, as is the field of real algebraic numbers. There are plenty of other examples of the same nature. The field of Euclidean constructible numbers is an extension field of the rationals, of infinite degree over the rationals, that comes up "naturally."
-
$\mathbb Q[\sqrt 2, \sqrt 3, \sqrt 5, \cdots]$, obtained by adjoining the square roots of the primes, is an example because if you use just $n$ primes, you get an extension of degree $2^n$.
-
This is far from obvious, though (see qchu.wordpress.com/2009/07/02/… for a proof and a link to other proofs). If you want to be lazy, take $\mathbb{Q}[2^{1/2}, 2^{1/4}, 2^{1/8}, ...]$. – Qiaochu Yuan May 30 '12 at 14:10
For a proof, see math.stackexchange.com/questions/30687/… – lhf May 30 '12 at 14:12
So I suppose $\mathbb{Q}(\sqrt{m_1},\sqrt{m_2},...)$, where the $m_i$ are square-free coprime integers would also be an example then. – Holdsworth88 May 30 '12 at 14:13
@QiaochuYuan: Dear Qiaochu, While you are right that this result is not obvious, it is not that hard; it follows from the most basic algebraic number theory (more precisely, computations of discriminants for quadratic extensions). Regards, – Matt E Jun 1 '12 at 18:35
How about the following example: for any field $k$, consider the field extension $\cup_{n\geq 1} k(t^{2^{-n}})$ of the field $k(t)$ of rational functions. This extension is algebraic and of infinite dimension. The idea behind it is quite simple, but I admit it requires some work to define the extension rigorously.
-
Let $\{n_1,n_2,...\}$ be pairwise coprime, nonsquare positive integers. Then $\mathbb{Q}(\sqrt{n_1},\sqrt{n_2},...)$ is an algebraic extension of infinite degree.
-
|
Clock Angle Calculator: a calculator for the angle between the hands on the face of a clock.
Since there are 12 hour marks on the clock face, each hour mark is 360/12 = 30 degrees apart. At 5 o'clock the hour hand points at 5 and the minute hand points at 12, so the hands are five hour-marks apart and the angle between them is 5 × 30 = 150 degrees (an obtuse angle); the reflex angle on the other side is 360 − 150 = 210 degrees.
More generally, the minute hand moves 6 degrees per minute and the hour hand moves 0.5 degrees per minute (30 degrees per hour), so at H hours and M minutes the angle between the hands is |30H − 5.5M|, taking the smaller of this value and 360 minus it. For example, at 5:30 the hour hand is at 165 degrees and the minute hand at 180 degrees, giving an angle of 15 degrees.
|
lilypond-user
## Re: Notation Reference 1.8 "Text" : ready for review
From: Valentin Villenave Subject: Re: Notation Reference 1.8 "Text" : ready for review Date: Sun, 5 Oct 2008 17:46:45 +0200
2008/10/5 Graham Percival <address@hidden>:
> Do you honestly consider LaTeX to be a "word processor"?
Well, at least my personal source of knowledge and wisdom says so:
http://en.wikipedia.org/wiki/LaTeX
"LaTeX (pronounced /ˈleɪtɛx/ or /ˈleɪtɛk/) is a document markup
language and document preparation system"
click on "document preparation system": it redirects to
http://en.wikipedia.org/wiki/Word_processor
That being said, if you do not regard LaTeX as a word processor, I can
understand while you don't want me to call LilyPond a word processor.
> {
> \quoteARandomInstrument \RestABit
> \putCoolSoloHere
> }
> (ok, now I actually want to write a piece like that. This is the
> first time I've ever been tempted to write a Cage-like piece. :)
Really? I thought you had already been working on Strasheela :-)
> That said, I don't think you need a multi-page example here. Just
> dump the example currently in "Multi-page markup" in here.
1- ... As a duplicate of the existing example?
2- ... While people can access the already existing example and the
already existing explanations though the already existing obvious
> You don't need to quote stuff that you've done. I know that I'm
> right. I really don't need the ego boost of having you tell me. :)
We all need ego boosts every now and then: look, even I always get
plenty from you :-)
> characters, as mentioned in @ref{New dynamic marks} and
> @ref{Manual repeat marks}.
>
> Where's this magical comma after the first @ref{} ?!
I thought you meant the @seealso list. AFAIC(H)R, the "commas after
each @ref{}" discussion was always about @seealso lists, not plain
sentences.
> I'd actually rather kill \larger. \smaller \bigger sounds better
> than \smaller \larger. But yeah; either way, we should remove one
> of them before 2.12. Take it up on -devel.
\smaller \bigger may sound better, but we already have
\teeny \tiny \small \large \huge markup commands, not
\teeny \tiny \small \big \huge.
> - markup as variables: why the complicated example? And why the
> lack of [relative]?
Ever tried to define a variable inside a \relative block?
> * Selecting font and font size
> - second example: why the first note so much higher than the rest?
Because someone (can't remember who) told us to use relative=2 :-)
I've removed it now.
> It look a bit weird, as does the a,^\markup. Also, what do
> the final two bars add to the example?
They make the line long enough to make sure the markup won't go outside.
> * Text alignment
> - first example: WTM are you using fifths here? You shouldn't
> need any ', in these examples.
Because of line-length synchronization (the first line has an "a1", so I needed two-character pitches :-)
I needed two-characters pitches :-)
I've replaced the fifths with thirds now. So long for synchronization.
> * Graphic notation inside markup
> - another "using specific markup commands". Could you do a search
> for any "specific" inside text.itely and fix it? Other than
> very specific [sic] instances, "specific" is a useless fluff word.
OK, I'll use "particular" instead :-)
OK, let me see. Oh, indeed, I must like this word: I used it about 20
times in text.itely alone. I find it reassuring for the user; it means
"we know this syntax seems exotic to you, we know you won't understand
nor remember it at first, but it's normal: it's *specific*".
I replaced every "specific commands" with "markup commands" now.
As for the word-processor thingy, I replaced
Using a specific syntax, text blocks can be spread over multiple
pages, making it possible to print text documents or books (and
therefore to use LilyPond as a word processor). This syntax is
described in Multi-page markup.
with
Separate text blocks can be spread over multiple pages,
making it possible to print text documents or books entirely
within LilyPond. This feature, and the specific syntax it
requires, are described in @ref{Multi-page markup}.
> ok, I'm bored again.
Did I ever mention you're easily bored? :-)
Cheers,
Valentin
|
# For a free group G with finite presentation, show there exists a pointed space whose fundamental group is G
Suppose $G=\langle g_1,...,g_n|r_1,...r_m\rangle$. Show there is a pointed space $(X_G,x_0)$ with $\pi_1(X_G,x_0)=G$.
Hint: Use Van Kampen's theorem.
My attempt: First note that $G$ is the amalgamation of $G_i:=\langle g_i|r_1,...,r_m\rangle$, i.e., $G= G_1\ast...\ast G_n$. Then since $G_i\cap G_j=\{e\}$, the identity, for $i\neq j$, the pairwise intersections are trivially path connected. If we let the base point be $e$, then all I need to do is show that the space can somehow be represented as a union of the $G_i$, but I don't know how.
• Take a peek at "bouquet" or wedge sum of circles, perhaps also under van Kampen. – DonAntonio Feb 20 '17 at 17:32
• What does $G_i$ mean if, say, $G =\langle x,y| x^2y^2, x^{-1}y x\rangle$? More specifically, if the relations $r_i$ all involve all the generators what is $G_i$? – Jason DeVito Feb 20 '17 at 17:46
• Would it then make more sense to define the $G_i=\langle g_i\rangle$ and then $G=G_1\ast...\ast G_n /\{r_1,...,r_n\}$? – George Feb 20 '17 at 22:39
|
# Neutron electric dipole moment and $T$ symmetry violation
Our textbook (and other sources I have found) says that a non-zero electric dipole moment of the neutron would violate $T$ symmetry. They prove this statement by first assuming $\boldsymbol{D}=\beta\boldsymbol{J}$, where $\boldsymbol{D}$ is the dipole moment, $\boldsymbol{J}$ is the angular momentum, and $\beta$ is a constant.
But why? Why is $\boldsymbol{D}$ proportional to $\boldsymbol{J}$? Why is $\boldsymbol{D}$ related to $\boldsymbol{J}$ at all? And why can't this argument be applied to other composite particles such as atoms and molecules, thereby breaking T symmetry for most of the world?
-
In classical mechanics, we have this identity for spinning bodies of charge: $\frac{\mu}{L}=\frac{q}{2m}$, $\mu$ is dipole moment, $L$ is angular momentum. I dunno how this translates to particle physics, but it may help.. – Manishearth Mar 8 '12 at 16:19
@Manishearth: I'm talking about electric dipole moment, where as $\mu$ is magnetic dipole moment. – Siyuan Ren Mar 9 '12 at 4:26
Aah, my bad. Didn't see the 'electric' in the question and i'm not familiar with your usage of symbols (I use p for electric dipole) – Manishearth Mar 9 '12 at 4:46
As the neutron is not point-like, consider it as having a continuous distribution of charge $\rho(\mathbf{r})$ confined in a volume $\Omega$. The electric dipole moment is then given by
$\mathbf{D}=\int_\Omega \rho(\mathbf{r}')\,\mathbf{r}'\,d^3r'$
where the coordinates are measured from the centre of mass of the distribution. For a charged particle, this definition implies that for $\mathbf{D} \neq\mathbf{0}$ the "centre of charge" is displaced from the centre of mass of the distribution. For a distribution which has no net charge, that is
$Q=\int_\Omega \rho(\mathbf{r}) d^3r=0$
this definition implies that there is a greater positive charge on one side of your distribution and a correspondingly greater negative charge on the other side.
Consider now that your particle has angular momentum $\mathbf{J}$ and that its orientation is given by $m$ (the eigenvalue of the $\hat{J}_z$ operator) relative to the $\hat{\mathbf{z}}$ axis. Notice that the only way to know the orientation of your charge distribution ("particle") is by the orientation of the angular momentum.
As a consequence, both $\mathbf{J}$ and $\mathbf{D}$ must transform equally under parity $P$ and time reversal $T$ if $\mathbf{D} \neq \mathbf{0}$ and if there is $P$ and $T$ symmetries. But $\mathbf{D}$ changes its sign under $P$ whereas $\mathbf{J}$ does not so $\mathbf{D}$ must vanish if there is $P$ symmetry. In a similar way, $\mathbf{D}$ does not change sign under $T$ but $\mathbf{J}$ does, so $\mathbf{D}$ has to vanish if there is $T$ symmetry. Hence if the neutron electric dipole is not zero we will have a violation of $PT$ symmetry.
Remark: This argument only applies to particles with non-zero dipole moment.
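To make the assumed proportionality $\mathbf{D}=\beta\mathbf{J}$ quantitative, one can invoke the projection theorem for vector operators (a standard quantum-mechanical result, stated here as a supplement rather than as part of the answer above): within a multiplet of fixed angular momentum $j$,
$$\langle j,m'|\,\mathbf{D}\,|j,m\rangle=\frac{\langle \mathbf{D}\cdot\mathbf{J}\rangle}{\hbar^2\,j(j+1)}\,\langle j,m'|\,\mathbf{J}\,|j,m\rangle ,$$
so every matrix element of $\mathbf{D}$ within the neutron's spin multiplet is proportional to the corresponding matrix element of $\mathbf{J}$, with $\beta=\langle\mathbf{D}\cdot\mathbf{J}\rangle/\hbar^2 j(j+1)$.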
Experimental searches of the neutron electric dipole moment can be found:
• Smith et al. Phys. Rev. 108, 120 (1957) [link to paper].
• Baker et al. Phys. Rev. Lett. 97, 131801 (2006) [link to paper].
The upper bound in the last one for $|\mathbf{D}|$ is $2.9 \cdot 10^{-26}$ e cm.
D.
EDIT: As David said below, there is no $CPT$ violation in the hypothetical case of having $P$ and $T$ violation [= existence of a non-zero electric dipole moment].
-
The neutron is composed of $\mathrm{udd}$ valence quarks so charge conjugation would switch it to $\mathrm{\bar u\bar d\bar d}$ - it's not the identity operation. So a neutron electric dipole moment wouldn't automatically violate $CPT$ symmetry. (But other than that, good answer!) – David Z Mar 8 '12 at 19:52
Well, thank you! I tried to do my best, even though I am not an expert in nuclear/high energy physics. – DaniH Mar 8 '12 at 21:49
"Notice that the only way to know the orientation of your charge distribution ("particle") is by the orientation of the angular momentum." This is exactly the part I don't understand. Electric dipole moment requires only uneven charge distribution, but it does not require those charges to move. Also why are non-zero EDM of atoms not violation of T symmetry? – Siyuan Ren Mar 9 '12 at 4:27
Concerning the orientation of the charge distribution, perhaps it is more precise to say "orientation of the electric dipole moment". The electric dipole moment is a vector and to apply, for instance, a parity transformation you need to place it in a reference frame. The $\hat{\mathbf{z}}$ direction in this reference frame is given by the z- component of the angular momentum, $\mathbf{J}=\mathbf{L}+\mathbf{S}$. – DaniH Mar 9 '12 at 9:05
About non-zero EDM of a molecule: it is indeed interesting to think why this not violates $T$ symmetry. This is because the molecule/atoms have non-zero ground states that are invariant under parity so that $T$ needs not to be broken to give non-zero $\mathbf{D}$. – DaniH Mar 9 '12 at 9:16
is almost sure that the pt symmetry be broken to the speed of light appear as constant to all inertial frames in relative motions.then left-right handed rotational invariance is not conserved,locally to smooth topologcal 4-dimensional manifolds.then the antiparticles are deformations of the spacetime curvatures that do appear symmetrics the dirac-einstein relativistics equations.therefore antimater doesn't exist in the nature neither early universe and the antiparticles are bundleled locally energy or "holes generated by the asymmetry of space and time conjugated by spacetime continuos to compensate the differences of speed of light in the topological4-dimensional manifolds that contain exotics structures coupled with the constancy of the speed of light so as the torsion fields with asymmetries to left-right handed frames generated in the topological 4-dimension manifolds with metrics defined non-hermitician hamiltonian matrices with spectral operators to complex coolections that are the fundamental entities in the universe that generates the spacetime continuos.theirs has non -commutative properties and pseudo-associative as split quaternions.
the time is plitted by two opposed orientations that if conjute deforming the space as fundamental to the smooth structures of the 4-dimension manifolds.then the spacetime curvatures continuos are pseudo constant but are stable to the kahler-einstein metrics
-
|
# Stress Testing
After completing this reading, you should be able to:
• Describe the rationale for the use of stress testing as a risk management tool.
• Explain key considerations and challenges related to stress testing, including choice of scenarios, regulatory specifications, model building, and reverse stress testing.
• Describe the relationship between stress testing and other risk measures, particularly in enterprise-wide stress testing.
• Describe stressed VaR and stressed ES, including their advantages and disadvantages, and compare the process of determining stressed VaR and ES to that of traditional VaR and ES.
• Describe the responsibilities of the board of directors, senior management, and the internal audit function in stress testing governance.
• Describe the role of policies and procedures, validation, and independent review in stress testing governance.
• Describe the Basel stress testing principles for banks regarding the implementation of stress testing.
Stress testing is a risk management tool that involves analyzing the impact of extreme scenarios that are unlikely but plausible. The main question for financial institutions is whether they have adequate capital and liquid assets to survive stressful times. Stress testing is done for regulatory purposes or for internal risk management by financial institutions. Stress testing can be combined with risk measures such as Value-at-Risk (VaR) and Expected Shortfall (ES) to give a more detailed picture of the risks facing a financial institution.
This chapter deals with the internally generated stress testing scenarios, regulatory requirements of stress testing, governance issues of stress testing, and the Basel stress testing principles.
## Rationale for the Use of Stress Testing as a Risk Management Tool
• Stress testing serves to warn a firm’s management of potential adverse events arising from the firm’s risk exposure and goes further to give estimates of the amount of capital needed to absorb losses that may result from such events.
• Stress tests help to avoid any form of complacency that may creep in after an extended period of stability and profitability. It serves to remind management that losses could still occur, and adequate plans have to be put in place in readiness for every eventuality. This way, a firm is able to avoid issues like underpricing of products, something that could prove financially fatal.
• Stress testing is a key risk management tool during periods of expansion when a firm introduces new products into the market. There may be very limited loss data or none at all, for such products, and hypothetical stress testing helps to come up with reliable loss estimates.
• Under pillar 1 of Basel II, stress testing is a requirement of all banks using the Internal Models Approach (IMA) to model market risk and the internal ratings-based approach to model credit risk. These banks have to employ stress testing to determine the level of capital they are required to have.
• Stress testing supplements other risk management tools, helping banks to mitigate risks through measures such as hedging and insurance. By itself, stress testing cannot address all risk management weaknesses, nor can it provide a one-stop solution.
## Comparison between Stress Testing and the VaR and ES
Recall that the VaR and ES are estimated from a loss distribution. VaR enables a financial institution to conclude with X% likelihood that the losses will not exceed the VaR level during time T. On the other hand, ES tells a financial institution that if the losses do exceed the VaR level during time T, the expected loss will be the ES amount.
VaR and ES are backward-looking. That is, they assume that the future and the past are the same. This is actually one disadvantage of VaR and ES. On the other hand, stress testing is forward-looking. It asks the question, “what if?”.
While stress testing largely does not involve probabilities, VaR and ES models are founded on probability theory. For example, a 99.9% VaR can be viewed as a 1-in-1,000 event.
The backward-looking ES and VaR consider a wide range of scenarios that are potentially good or bad to the organization. However, stress testing considers a relatively small number of scenarios that are all bad for the organization.
Specifically, for the market risk, VaR/ES analysis often takes a short period of time, such as a day, while stress testing takes relatively long periods, such as a decade.
The primary objective of stress testing is to capture the enterprise view of the risks impacting a financial institution. The scenarios used in the stress testing are often defined based on the macroeconomic variables such as the unemployment rates and GDP growth rates. The effect of these variables should be considered in all parts of an institution while considering interactions between diverse areas of an institution.
### Stressed VaR and Stressed ES
Conventional VaR and ES are calculated from data spanning from one to five years, where a daily variation of the risk factors during this period is used to compute the potential future movements.
However, in the case of the stressed VaR and stressed ES, the data is obtained from specifically stressed periods (a 12-month stressed period applied to current portfolios, according to Basel rules). In other words, stressed VaR and stressed ES generate conditional distributions and conditional risk measures. As such, they are conditioned on a recurrence of a given stressed period and can thus be taken as a form of historical stress testing.
Though stressed VaR/ES and stress testing might appear similar in objective, they are different. Typically, the time horizon for stressed VaR/ES is short (one to ten days), while stress testing considers relatively longer periods.
For instance, assume that the stressed period is the year 2007. The stressed VaR would conclude that if there were a repeat of 2007, then there is an X% likelihood that losses over a period of T days will not surpass the stressed VaR level. On the other hand, stressed ES would conclude that if the losses over T days do exceed the stressed VaR level, then the expected loss is the stressed ES.
However, stress testing would ask the question, “if the following year (2008) is the same as 2007, will the financial institution survive?” Alternatively, if the conditions of the next year are twice as adverse as those of 2007, will the financial institution survive? Therefore, stress testing does not consider only the occurrence of the worst days of the stressed period but rather the impact of the whole year.
There is also a difference between conventional VaR and the stressed VaR. Conventional VaR can be back-tested while stressed VaR cannot. That is, if we can compute one-day VaR with 95% confidence, we can go back and determine how effective it would have worked in the past. We are not able to back-test the stressed VaR output and its results because it only considers the adverse conditions which are generally infrequent.
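To illustrate the mechanics discussed in this subsection, here is a minimal Python sketch (not a regulatory procedure) that computes historical-simulation VaR/ES from a recent window and stressed VaR/ES from a chosen stressed window of daily losses. The data, window boundaries, and confidence level are all placeholders:

```python
import numpy as np

def var_es(losses, confidence=0.99):
    """Historical-simulation VaR and ES from an array of daily losses (positive = loss)."""
    var = np.quantile(losses, confidence)
    es = losses[losses >= var].mean()  # average loss beyond the VaR level
    return var, es

# Placeholder data: daily portfolio losses over roughly six years.
rng = np.random.default_rng(0)
all_losses = rng.normal(0.0, 1.0, 1500)

recent = all_losses[-500:]        # conventional VaR/ES: the most recent ~2 years
stressed = all_losses[200:450]    # stressed VaR/ES: a chosen 12-month stressed window

print("conventional VaR/ES:", var_es(recent))
print("stressed     VaR/ES:", var_es(stressed))
```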
### Types of Scenarios in Stress Testing
The basis of choosing a stress testing scenario is the selection of a time horizon. The time horizon should be long enough to accommodate the full analysis of the impacts of scenarios. Long time horizons are required in some situations. One-day to one-week scenarios can be considered, but three months to two-year scenarios are typically preferred.
The regulators recommend some scenarios, but in this section, we will discuss internally chosen scenarios. They include using historical scenarios, stressing key variables, and developing ad hoc scenarios that capture the current conditions of the business.
#### Historical Scenarios
Historical scenarios are generated from historical data, with all relevant variables assumed to behave in the same manner as in the past. For instance, variables such as interest rates and credit spreads are known to repeat past changes. As such, actual changes observed in the stressed period are assumed to repeat themselves for some variables, while proportional variations are assumed for others. A good example of a historical scenario is the 2007-2008 US housing recession, which affected a lot of financial institutions.
In some cases, a moderately adverse scenario is made more severe by multiplying the variations of all risk factors by a certain amount. For instance, we could take what happened in a loss-making one-month period and multiply the movements of all relevant risk factors by ten. As a result, the scenario becomes more severe for financial institutions. However, this approach assumes linear relationships between the movements in risk factors, which is not always the case because of correlations between the risk factors.
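A minimal sketch of this scaling idea, with made-up risk-factor changes; in practice, non-linear positions and shifting correlations mean the scaled scenario is only an approximation:

```python
# Observed risk-factor changes over a loss-making month (placeholder values).
base_scenario = {"equity_index": -0.08, "credit_spread_bp": 40, "fx_usd_eur": -0.03}

severity = 10  # multiply every movement to make the scenario more severe
severe_scenario = {factor: change * severity for factor, change in base_scenario.items()}
print(severe_scenario)
```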
Other historical scenarios are based on one-day or one-week occurrences of all market risk factors. Such events include terrorist attacks (such as 9/11 terrorist attacks) and one-day massive movement of interest rates (such as on April 10, 1992, when ten-year bond yields changed by 8.7 standard deviations).
#### Stressing Key Variables
A scenario could be built by assuming that a significant change occurs in one or more key variables. Such changes include:
• A 2% decline in the GDP
• A 25% decrease in equity prices
• A 100% increase in all volatilities
• A 4% increase in the unemployment rate
• A 200-basis point increase in all interest rates
Some other significant variations could occur in factors such as money exchange rates, prices of commodities, and default rates.
In the case of market risk, small changes are measured using the Greek letters (such as delta and gamma). The Greek letters cannot be used in stress testing because the changes are usually large. Moreover, Greeks measure the risk from a unit change in a market variable over a short period of time, while stress testing incorporates the interaction of different market variables over a long period of time.
#### Ad Hoc Scenarios
The stress-testing scenarios discussed above are performed regularly, after which the results are used to test the stability of a financial institution's financial structure under extreme conditions. However, financial institutions also need to develop ad hoc scenarios that capture the current economic conditions and the specific exposures facing the firm, and that provide an updated analysis of potential future extreme events. Firms either generate new scenarios or modify existing scenarios based on previous data.
Examples of events that will prompt firms to develop an ad hoc scenario include a change in government policy on an important aspect that impacts financial institutions, or a change in Basel regulation that requires capital to be increased within a short period of time.
The Board, senior management, and economic experts use their knowledge of markets, global politics, and current global instabilities to come up with adverse scenarios. Senior management carries out a brainstorming session, after which they recommend the actions necessary to avoid risks the firm could not absorb.
#### Using the Stress Testing Results
For stress testing to be taken seriously and used for decision making, it is vital to involve senior management. The stress-testing results are not only used to answer the “what if” question; the Board and management should also analyze the results and decide whether a certain class of risk mitigation is necessary. Stress testing ensures that senior management and the Board do not base their decision-making only on what is most likely to happen, but also consider less likely alternatives that could have a dramatic effect on the firm.
### Model Building
When a scenario is built from a stressed period, it is possible to see how the majority of the relevant risk factors behaved, after which the impact of the scenario on the firm can be analyzed in an almost direct manner. However, scenarios generated by stressing key variables and ad hoc scenarios capture the variations of only a few key risk factors or economic variables. Therefore, to complete such scenarios, it is necessary to build a model that determines how the “left out” variables are expected to behave in a stressed market. The variables stated explicitly in the stress-test scenario are termed core variables, while the remaining variables are termed peripheral variables.
One method is to perform an analysis, such as a regression analysis, that relates the peripheral variables to the core variables. Note that the relationship should reflect stressed economic conditions, so using data from past stressed periods is the most effective way of determining appropriate relationships.
For example, in the case of credit risk losses, data from the rating agencies, such as default rates, can be linked to an economic variable such as the GDP growth rate. Afterward, the general default rates expected in various stressed periods are determined. The results can be modified (scaled up or down) to determine the default rate for different loans or financial institutions. Note that the same analysis can be done for recovery rates to determine loss rates.
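A minimal sketch of such a regression, with placeholder data from past stressed periods; a simple linear fit (np.polyfit) of default rates on GDP growth stands in for whatever model a firm would actually use:

```python
import numpy as np

# Placeholder data from past stressed periods: GDP growth (%) and observed default rates (%).
gdp_growth   = np.array([-2.5, -1.0, 0.5, 1.5, 3.0])
default_rate = np.array([ 6.0,  4.5, 3.0, 2.2, 1.5])

# Fit the peripheral variable (default rate) as a linear function of the core variable (GDP growth).
slope, intercept = np.polyfit(gdp_growth, default_rate, 1)

# Predict the default rate under a stressed scenario of -4% GDP growth.
stressed_gdp = -4.0
predicted_default = slope * stressed_gdp + intercept
print(f"predicted default rate at {stressed_gdp}% GDP growth: {predicted_default:.2f}%")
```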
#### The Knock-On Effects
Apart from the immediate impacts of a scenario, there are also knock-on effects that reflect how financial institutions respond to extreme scenarios. In its response, a financial institution can make decisions that can further worsen already extreme conditions.
For instance, during the 2005-2006 US housing price bubble, banks were concerned with the credit quality of other banks and were not ready to engage in interbank lending, which made funding costs for banks rise.
### Reverse Stress Testing
Recall that stress testing involves generating scenarios and then analyzing their effects. Reverse stress testing, as the name suggests, takes the opposite direction by trying to identify combinations of circumstances that might lead financial institutions to fail.
By using historical scenarios, a financial institution identifies past extreme conditions. Then, the bank determines how much worse than the historical observation a scenario has to be to cause the financial institution to fail. For instance, a financial institution might conclude that a scenario twice as severe as the 2005-2006 US housing bubble would make it fail. However, this kind of reverse stress testing is an approximation. Typically, a financial institution will use more complicated models that take into consideration the correlations between different variables when making the market conditions more stressed.
Finding an appropriate combination of risk factors that leads the financial institution to fail is a challenging task. However, an effective method is to identify some of the critical factors, such as the GDP growth rate, unemployment rate, and interest rate variations, and then build a model that relates all other appropriate variables to these key variables. After that, possible factor combinations that can lead to failure are searched for iteratively.
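A toy sketch of that iterative search: a made-up loss model over three key factors is scanned over a grid, and the mildest combinations that breach the capital level are reported. The loss function, factor ranges, and capital threshold are pure placeholders:

```python
import itertools

capital = 100.0  # loss level at which the institution is assumed to fail (placeholder)

def portfolio_loss(gdp_drop, unemployment_rise, rate_shift_bp):
    # Placeholder loss model: losses grow with each adverse factor move.
    return 18.0 * gdp_drop + 9.0 * unemployment_rise + 0.15 * rate_shift_bp

failures = []
for gdp_drop, unemp, rates in itertools.product(range(0, 7), range(0, 7), range(0, 401, 100)):
    loss = portfolio_loss(gdp_drop, unemp, rates)
    if loss >= capital:
        failures.append((loss, gdp_drop, unemp, rates))

# Report the least severe combinations that still cause failure.
for loss, gdp_drop, unemp, rates in sorted(failures)[:5]:
    print(f"fail: GDP -{gdp_drop}%, unemployment +{unemp}%, rates +{rates}bp -> loss {loss:.0f}")
```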
### Regulatory Stress Testing
US, UK, and EU regulators require banks and insurance companies to perform specified stress tests. In the United States, the Federal Reserve performs stress tests of all the banks whose consolidated assets are over USD 50 billion. This type of stress test is termed the Comprehensive Capital Analysis and Review (CCAR). Under CCAR, the banks are required to consider four scenarios:
1. Baseline Scenario
2. Adverse Scenario
3. Severely Adverse Scenario
4. An Internal Scenario
The baseline scenario is based on the average projections from surveys of economic forecasters, but it does not represent the projection of the Federal Reserve.
The adverse and the severely adverse scenarios describe hypothetical sets of events which are structured to test the strength and resilience of banking organizations. Each of the above scenarios consists of 28 variables (such as the unemployment rate, stock market prices, and interest rates) that capture domestic and international economic activity, accompanied by the Board's explanation of the overall economic conditions and of how the scenarios differ from those of the previous year.
Banks are required to submit a capital plan, a justification of the models used, and the outcomes of their stress testing. If a bank fails the stress test because of insufficient capital, it is required to raise more capital while restricting dividend payments until that capital has been raised.
Banks with consolidated assets between USD 10 billion and USD 50 billion are subject to the Dodd-Frank Act Stress Test (DFAST). The scenarios in the DFAST are similar to those in the CCAR. However, in the DFAST, banks are not required to produce a capital plan.
Therefore, through stress tests, regulators can consistently evaluate banks to determine their ability to withstand extreme economic conditions. However, regulators recommend that banks also develop their own scenarios.
## Responsibilities of the Board of Directors, Senior Management and the Internal Audit Function in Stress Testing Activities
For effective operation of stress testing, the Board of directors and senior management should have distinct responsibilities. What’s more, there should be some shared responsibilities, although a few roles can be set aside exclusively for one of the two groups.
### Responsibilities of the Board of Directors
1. The buck stops with the Board: The Board of directors is “ultimately” responsible for a firm’s stress tests. Even if board members do not immerse themselves in the technical details of stress tests, they should ensure that they stay sufficiently knowledgeable about stress-testing procedures and interpretation of results.
2. Continuous involvement: Board members should regularly receive summary information on stress tests, including results from every scenario. Members should then evaluate these results to ensure they take into account the firm’s risk appetite and overall strategy.
3. Continuous review: Board members should regularly review stress-testing reports with a view to not only critiquing key assumptions but also supplementing the information with their own views that better reflect the overall goals of the firm.
4. Integrating stress testing results in decision making: The Board should make key decisions on investment, capital, and liquidity based on stress test results along with other information. While doing this, the Board should proceed with a certain level of caution in cognizance of the fact that stress tests are subject to assumptions and a host of limitations.
5. Formulating stress-testing guidelines: It’s the responsibility of the Board to come up with guidelines on stress testing, such as the risk tolerance level (risk appetite).
### Responsibilities of Senior Management
1. Implementation oversight: Senior management has the mandate to ensure that stress testing guidelines authorized by the Board are implemented to the letter. This involves establishing policies and procedures that help to implement the Board’s guidelines.
2. Regularly reporting to the Board: Senior management should keep the Board up-to-date on all matters to do with stress testing, including test designs, emerging issues, and compliance with stress-testing policies.
3. Coordinating and Integrating stress testing across the firm: Members of senior management are responsible for propagating widespread knowledge on stress tests across the firm, making sure that all departments understand its importance.
4. Identifying grey areas: Senior management should seek to identify inconsistencies, contradictions, and possible gaps in stress tests to make improvements to the whole process.
5. Ensuring stress tests have a sufficient range: In consultations with the Board of directors, senior management has to ensure that stress testing activities are sufficiently severe to gauge the firm’s preparation for all possible scenarios, including low-frequency high-impact events.
6. Using stress tests to assess the effectiveness of risk mitigation strategies: Stress tests should help the management to assess just how effective risk mitigation strategies are. If such strategies are effective, significantly severe events will not cause significant financial strain. If the tests predict significant financial turmoil, it could be that the hedging strategies adopted are ineffective.
7. Updating stress tests to reflect emerging risks: As time goes, an institution will gradually gain exposure to new risks, either as a result of market-wide trends or its investment activities. It is the responsibility of senior management to develop new stress-testing techniques that reflect the institution’s new risk profile.
### Role of the Internal Audit
Internal audit should:
• Independently evaluate the performance, integrity, and reliability of stress-testing activities;
• Ensure that stress tests across the organization are conducted in a sound manner and remain relevant in terms of the scenarios tested;
• Assess the skills and expertise of the staff involved in stress-testing activities;
• Check that approved changes to stress-testing policies and procedures are implemented and appropriately documented;
• Evaluate the independent review and validation exercises;
To accomplish all the above, internal audit staff must be well qualified. They should be well-grounded in stress-testing techniques and have the technical expertise to differentiate between excellent and inappropriate practices.
## The Role of Policies and Procedures, Validation, and Independent Review in Stress Testing Governance
### Policies and Procedures
A financial institution should set out clearly stated and understandable policies and procedures governing stress testing, which must be adhered to. These policies and procedures ensure that stress testing carried out in different parts of the financial institution is consistent.
The policies and procedures should be able to:
• Explain the purpose of stress testing;
• Describe the procedures of stress testing;
• State the frequency at which the stress testing can be done;
• Describe the roles and responsibilities of the parties involved in stress testing;
• Provide an explanation of the procedures to be followed while choosing the scenarios;
• Describe how the independent reviews of the stress testing will be done;
• Give clear documentation on stress testing to third parties (e.g., regulators, external auditors, and rating agencies);
• Explain how the results of the stress testing will be used and by whom;
• Be amended as stress-testing practices change with market conditions;
• Accommodate tracking of the stress test results as they change through time; and
• Document the activities of models and the software acquired from the vendors or other third parties.
### Validation and Independent Review Governance
The stress testing governance covers the independent review procedures, which are expected to be unbiased and provide assurance to the board that stress testing is carried out while following the firm’s policies and procedures. Financial institutions use diverse models that are subject to independent review to make sure that they serve the intended purpose.
Validation and independent review should involve the following:
• Ensuring that validation and independent review are conducted on an ongoing basis;
• Ensuring that subjective or qualitative aspects of a stress test are also validated and reviewed, even if they cannot be tested in quantitative terms;
• Acknowledging limitations in stress testing;
• Ensuring that stress-testing standards are upheld;
• Acknowledging data weaknesses or limitations, if any;
• Ensuring that there is sufficient independence in both validation and review of stress tests;
• Ensuring that third-party models used in stress-testing activities are validated and reviewed to determine if they are fit for the purpose at hand;
• Ensuring that stress tests results are implemented rigorously, and verifying that any departure from the recommended actions is backed up by solid reasons.
## Basel Stress-Testing Principles
The Basel Committee emphasizes that stress testing is a crucial aspect of risk management by requiring that market risk calculations based on internal VaR and Expected Shortfall (ES) models be accompanied by “rigorous and comprehensive” stress testing. Moreover, banks that use the internal ratings-based approach of Basel II to calculate credit risk capital should perform a stress test to evaluate the strength of their assumptions.
Influenced by the 2007-2008 financial crisis, the Basel Committee published the principles of stress-testing for the banks and corresponding supervisors. The overarching emphasis of the Basel committee was the importance of stress testing in determining the amount of capital that will cushion banks against losses due to large shocks.
Therefore, the Basel committee recognized the importance of stress testing in:
• Giving a forward-looking perspective on the evaluation of risk;
• Overcoming the limitations of models and historical data;
• Facilitating the development of risk mitigation, or any other plans to reduce risks in different stressed conditions;
• Assisting internal and external communications;
• Supporting the capital and liquidity planning procedures; and
• Informing the setting of risk tolerance.
When the Basel committee considered the stress tests done before 2007-2008, they concluded that:
• It is crucial to involve the Board and senior management in stress testing. The Board and senior management should be involved in aspects such as choosing scenarios, setting stress-testing objectives, analyzing the stress-testing results, determining potential actions, and strategic decision making. During the crisis, banks whose senior management took an interest in developing stress tests and let the results inform their decision-making performed fairly well.
• The stress-testing approaches did not allow for the aggregation of different exposures across different parts of a bank. That is, experts from different parts of the bank did not cooperate to produce an enterprise-wide risk view.
• The scenarios chosen in the stress tests were too moderate and were based on short periods of time. Possible correlations between different risk types, products, and markets were ignored. As such, the stress tests relied on historical scenarios and left out risks from new products and positions taken by the banks.
• Some risks were not considered comprehensively in the chosen scenarios. For example, counterparty credit risk, risks related to structured products, and products awaiting securitization were only partially considered. Moreover, the effect of the stressed scenario on liquidity was underrated.
### Basel Committee Stress Testing Principles
According to the Basel Committee on Banking Supervision’s “Stress Testing Principles” published in December 2017:
1. ##### Stress testing frameworks should incorporate an effective governance structure.
The stress testing frameworks should involve a governance structure that is clear, documented, and comprehensive. The roles and responsibilities of senior management, oversight bodies, and those concerned with stress testing operations should be clearly stated.
The stress testing framework should incorporate a collaboration of all required stakeholders and the appropriate communication to stakeholders of the stress testing methodologies, assumptions, scenarios, and results.
2. ##### Stress testing frameworks should have clearly articulated and formally adopted objectives.
The stress testing frameworks should satisfy the objectives that are documented and approved by the Board of an organization or any other senior governance. The objective should be able to meet the requirements and expectations of the framework of the bank and its general governance structure. The staff mandated to carry out stress testing should know the stress testing framework’s objectives.
3. ##### Stress testing frameworks should capture material and relevant risks and apply sufficiently severe stresses.
Stress testing should reflect the material and relevant risks determined by a robust risk identification process, and the key variables within each scenario should be internally consistent. A narrative should be developed explaining how the scenario captures those risks, and any risks excluded from the scenario should be clearly described and well documented.
4. ##### Stress testing should be utilized as a risk management tool and to convey business decisions.
Stress testing is typically a forward-looking risk management tool that potentially helps a bank in identifying and monitoring risk. Therefore, stress testing plays a role in the formulation and implementation of strategic and policy objectives. When using stress testing results, banks and authorities should comprehend crucial assumptions and limitations such as the relevance of the scenario, model risks, and risk coverage. Lastly, stress testing as a risk management tool should be done regularly in accordance with a well-developed schedule (except ad hoc stress tests). The frequency of a stress test depends on:
• The objective of the stress testing framework;
• The size and complexity of the financial institution; and
• Changes in the macroeconomic environment.
5. ##### Resources and organizational structures should be adequate to meet the objectives of the stress testing framework.
Stress testing frameworks should have adequate organizational structures that meet the objectives of the stress test. The governance processes should ensure that the resources for stress testing are adequate, such that these resources have relevant skill sets to implement the framework.
6. ##### Stress tests should be supported by accurate and sufficiently granular data and robust IT systems.
Stress tests identify risks and produce reliable results only if the data used is accurate, complete, available at a sufficiently granular level, and available on time. Banks and authorities should establish a sound data infrastructure capable of retrieving, processing, and reporting the information used in stress tests. The data infrastructure should be able to provide information of adequate quality to satisfy the objectives of the stress testing framework. Moreover, structures should be put in place to cover any material information deficiencies.
7. ##### Models and methodologies to assess the impacts of scenarios and sensitivities should be fit for purpose.
The models and methodologies utilized in stress testing should serve the intended purpose. Therefore,
• There should be an adequate definition of the coverage, segmentation, and granularity of the data and of the types of risks, based on the objectives of the stress-test framework; all of this is determined at the modeling stage;
• The complexity of the models should be relevant to both the objectives of the stress testing and target portfolios being assessed using the models; and
• The models and the methodologies in a stress test should be adequately justified and documented.
The model building should be a collaborative task between the different experts. As such, the model builders engage with stakeholders to gain knowledge on the type of risks being modeled and understand the business goals, business catalysts, risk factors, and other business information relevant to the objectives of the stress testing framework.
8. ##### Stress testing models, results, and frameworks should be subject to challenge and regular review.
Periodic review and challenge of stress testing for the financial institutions and the authorities is important in improving the reliability of the stress testing results, understanding of results’ limitations, identifying the areas that need improvement and ensuring that the results are utilized in accordance with the objectives of the stress testing framework.
9. ##### Stress testing practices and findings should be communicated within and across jurisdictions.
Communicating the stress testing results to appropriate internal and external stakeholders provides essential perspectives on risks that would otherwise be unavailable to an individual institution or authority. Furthermore, disclosure of stress test results by banks or authorities improves market discipline and encourages the banking sector to build resilience to the identified stresses.
Banks and authorities that choose to disclose stress testing results should ensure that the method of delivery makes the results understandable and includes the limitations and assumptions on which the stress test is based. Clear communication of stress test results prevents inappropriate conclusions from being drawn about the resilience of banks whose results differ.
### Question 1
Hardik and Simriti compare and contrast stress testing with economic capital and value at risk measures. Which of the following statements regarding differences between the two types of risk measures is most accurate?
A. Stress tests tend to calculate losses from the perspective of the market, while EC/VaR methods compute losses based on an accounting point of view
B. While stress tests focus on unconditional scenarios, EC/VaR methods focus on conditional scenarios
C. While stress tests examine a long period, typically spanning several years, EC models focus on losses at a given point in time, say, the loss in value at the end of year $$t$$.
D. Stress tests tend to use cardinal probabilities while EC/VaR methods use ordinal arrangements
The correct answer is C.
Option A is inaccurate: Stress tests tend to calculate losses from the perspective of accounting, while EC/VaR methods compute losses based on a market point of view.
Option B is inaccurate: While stress tests focus on conditional scenarios, EC/VaR methods focus on unconditional scenarios.
Option D is also inaccurate: Stress tests do not focus on probabilities. Instead, they focus on ordinal arrangements like “severe,” “more severe,” and “extremely severe.” EC/VaR methods, on the other hand, focus on cardinal probabilities. For instance, a 95% VaR loss could be interpreted as 5-in-100 events.
### Question 2
One of the approaches used to incorporate stress testing in VaR involves the use of stressed inputs. Which of the following statements most accurately represents a genuine disadvantage of relying on risk metrics that incorporate stressed inputs?
A. The metrics are usually more conservative (less aggressive)
B. The metrics are usually less conservative (more aggressive)
C. The capital set aside, as informed by the risk metrics, is likely to be insufficient
D. The risk metrics primarily depend on portfolio composition and are not responsive to emerging risks or current market conditions.
The correct answer is D.
The most common disadvantage of using stressed risk metrics is that they do not respond to current issues in the market. As such, significant shocks in the market can “catch the firm unaware” and result in extensive financial turmoil.
### Question 3
Sarah Wayne, FRM, works at Capital Bank, based in the U.S. The bank owns a portfolio of corporate bonds and also has significant equity stakes in several medium-size companies across the United States. She was recently requested to head a risk management department subcommittee tasked with stress testing. The aim is to establish how well prepared the bank is for destabilizing events. Which of the following scenario analysis options would be the best for the purpose at hand?
A. Hypothetical scenario analysis
B. Historical scenario analysis
C. Forward-looking hypothetical scenario analysis and historical scenario analysis
D. Cannot tell based on the given information
The correct answer is C.
Scenario analyses should be dynamic and forward-looking. This implies that historical scenario analysis and forward-looking hypothetical scenario analysis should be combined. Pure historical scenarios can give valuable insights into impact but can underestimate the confluence of events that are yet to occur. What's more, historical scenario analyses are backward-looking and hence neglect recent developments (risk exposures) and current vulnerabilities of an institution. As such, scenario design should take into account both specific and systematic changes in the present and near future.
### Question 4
Senior management should be responsible for which of the following tasks?
1. Ensuring that stress testing policies and procedures are followed to the letter
2. Assessing the skills and expertise of the staff involved in stress-testing activities
3. Evaluating the independent review and validation exercises
4. Making key decisions on investment, capital, and liquidity based on stress test results along with any other information available.
5. Propagating widespread knowledge on stress tests across the firm, and making sure that all departments understand its importance
A. I, II, and IV
B. I and V
C. III and IV
D. V only
The correct answer is B.
Roles II and III belong to internal audit. Role IV belongs to the board of directors.
|
# PCI Express Throughput
## Recommended Posts
Here (https://en.wikipedia.org/wiki/PCI_Express) is a nice table outlining speeds for PCI Express. I have a GF 660 GTX with a motherboard with PCI Express 3.0 x16. I made a test by writing a simple D3D11 app that downloads a 1920x1080x32 (8 MB) image from GPU to CPU. The whole operation takes 8 ms. Over a second this sums up to around 1 GB of data, which corresponds exactly to PCI Express 3.0 x1. Is this how it is supposed to work? Is it like all CopyResource/Map data goes through one of the 16 lanes?
##### Share on other sites
I've never heard this described in detail, but I would imagine that while the interface may have 1, 2, 4, 8 or 16 lanes, the card and driver determine how the data is transmitted. I would assume that if the data can fit within a single lane, a single lane would be used.
Edited by MarkS
##### Share on other sites
The question is what "if the data can fit" means. I would like to copy data back from the GPU to the CPU as fast as possible, and since no other data goes that way except for my one texture download, I would ideally like to utilize all 16 lanes. If that's possible, of course.
##### Share on other sites
The question is what "if the data can fit" means. I would like to copy data back from the GPU to the CPU as fast as possible, and since no other data goes that way except for my one texture download, I would ideally like to utilize all 16 lanes. If that's possible, of course.
You are not streaming data to the monitor. You are telling the card, through the driver, how much data is to be transferred and the card and driver make the appropriate decisions as to how that happens.
You have to understand that you have absolutely no control over what the graphics card and driver does in this matter. I'm not 100% convinced that the driver has control over this, and if not, the user never will.
Out of curiosity, why is this important to you? Have you found yourself bottle-necked by the number of lanes used, or are you looking at potential issues?
Edited by MarkS
##### Share on other sites
I'm just looking at potential uses. I'm aware the GPU->CPU traffic should be avoided as much as possible but for some tests I needed to do this and to make those tests reliable I wanted to utilize full transfer potential.
On a side note, uploading data (CPU -> GPU) takes 3-5 ms (around twice as fast as the other way around).
##### Share on other sites
Is this how it is supposed to work? Is it like all CopyResource/Map data goes through one of the 16 lanes?
No thats not how its supposed to work... if it is setup for Pcie3.0 x16 then it should have all 16 lanes transferring at the same time. Maybe your videocard isn't in the x16 slot or maybe its misconfigured.
##### Share on other sites
In addition, just because your motherboard supports PCI-E 3.0 doesn't mean that your graphics card supports PCI-E 3.0, and even though the graphics card specification states 3.0 support I would still be wary. The GPU may fall back to a lower speed if certain conditions are not met, so unless you have the full low-level specification for the GPU in question, all we are dealing with is the advertised specification.
##### Share on other sites
I'm now testing my work computer which is brand new with GeForce 1080 GTX. See detailed spec in this picture: https://postimg.org/image/hwhuntpn5/
PCI-E is bidirectional, and all sources I've found claim the transfer rate in both directions should be identical, which is not true in my case.
##### Share on other sites
Were you doing anything with the GPU at the same time as the transfer?
##### Share on other sites
uint64 bef = TickCount();
// Copy the render target into a CPU-readable staging texture.
deviceContext->CopyResource(stagingCopy.texture, gbufferDiffuseRT.texture);
// Map the staging texture so the CPU can read its contents.
D3D11_MAPPED_SUBRESOURCE mappedSubresource;
deviceContext->Map(stagingCopy.texture, 0, D3D11_MAP_READ, 0, &mappedSubresource);
memcpy(mydata, mappedSubresource.pData, sizeof(mydata));
deviceContext->Unmap(stagingCopy.texture, 0);
uint64 aft = TickCount();
cout << aft - bef << endl;
As for my home GeForce 660 GTX I've just checked in HWINFO app that it's plugged into PCI-E 2.0, hence the slower speed than at my work computer.
Nevertheless I presume the 8 GB/s and 3 GB/s should be bigger. And identical.
##### Share on other sites
I can think of two things in regards to the uneven transfer bandwidth.
1. the texture might be in morton order or tiled in some fashion and might have to be untiled first before being transfered.
2. there is some sort of arbitrator that deprioritizes read accesses from the CPU to video memory. But since you aren't doing anything else at the time why would it limit bandwidth?
##### Share on other sites
The benchmark you posted is flawed and will stall. Period.
You need to give time between the calls to CopyResource & your Map. I'd suggest using 3 StagingBuffers: call CopyResource( stagingBuffer[frameCount % 3], and then call Map( stagingBuffer[(frameCount + 5) % 3] );
that is, you will be mapping this frame the texture you started copying 2 frames ago.
What you are measuring right now is how long it takes for the CPU to ask the GPU to begin the copy transfer + the tasks that the GPU has pending before the copy + the time it takes for the GPU to transfer the data to CPU (your CopyResource call) + the time it takes for the CPU to copy from CPU to another region in CPU (your memcpy)
Edited by Matias Goldberg
##### Share on other sites
Here (https://en.wikipedia.org/wiki/PCI_Express) is a nice table outlining speeds for PCI Express. I have GF 660 GTX with motherboard with PCI Express 3.0 x16. I made a test by writing a simple D3D11 app that download 1920x1080x32 (8 MB) image from GPU to CPU. The whole operation takes 8 ms. In second this sums up to around 1 GB of data, which corresponds exactly to PCI Express 3.0 x1. Is this how it is supposed to work? Is it like all CopyResource/Map data goes through one of the 16 lanes?
The bus is not the limiting factor, not even remotely close. First of all, there are maximum bandwidths of the CPU, GPU, and RAM. Second, there's the question of who is actually doing the transfer and when. Is it a DMA operation? Is the driver buffering or doing prep work? That sort of thing. Third, 8 MB is a very small copy size to try and benchmark that bus, so I would not consider your timing to be valid in the first place. Fourth, you're using CPU times before initiating and after completing the transfer, you're capturing extra work happening inside the driver that deals with correcting data formats and layouts. Fifth, who said the driver wants to give you maximum bandwidth in the first place? It has other things going on, including the entire WDDM to manage.
That you got a number comparable to one lane is pure coincidence.
Again, the bus has jack all to do with these speeds. What are the maximum bandwidths of the respective CPU and GPU memories? Both are DMA transfers, and the GPU may have much more capable DMA hardware than CPU, especially since graphics memory bandwidth is so much higher than system memory. Not to mention you're also capturing internal data format conversions.
Edited by Promit
##### Share on other sites
The bus is not the limiting factor, not even remotely close. First of all, there are maximum bandwidths of the CPU, GPU, and RAM. Second, there's the question of who is actually doing the transfer and when.
This!
Doing a transfer over PCIe is very much like reading data from disk. Once it actually happens, even a slow disk delivers over 100MB/s, but it takes some 8-10 milliseconds before the head has even moved to the correct track and the platter has spun far enough for the sector to be read.
Very similarly, the actual PCIe transfer happens with stunning speed, once it happens. But it may be an eternity before the GPU is switched from "render" to "transfer". Some GPUs can do both at the same time, but not all, and some have two controllers for simultaneous up/down transfers. Nvidia in particular did not support transfer during render prior to -- I believe -- Maxwell (could be wrong, could be Kepler?).
Note that PCIe uses the same lanes for data and control on the physical layer, so while a transfer (which is uninterruptible) is going on, it is even impossible to switch the GPU to something different. Plus, there is a non-trivial control-flow and transaction-control protocol in place. Which, of course, adds some latency.
So, it is very possible that a transfer operation does "nothing" for quite some time, and then suddenly happens blazingly fast, with a speed almost rivalling memcpy.
In addition to that, using GetTickCount for something in the single-digit (or less) millisecond range is somewhat bound to fail anyway.
##### Share on other sites
Posted (edited)
Just wanted to let you know that I made a test with CUDA to measure memory transfer rate and it peaked at around ~ 12 GB/s.
Also, measuring CopyResource time with D3D11 queries result in very similar throughput.
Edited by maxest
##### Share on other sites
Potentially relevant to this topic:
|
# Expected Value of a Stopping Time for a Poisson Process
For a Poisson process with rate $\lambda$, the interarrival times are independent and exponentially distributed with density $\lambda e^{-\lambda t}$ for $t > 0$ and expected value $1/\lambda$. The Poisson distribution itself is useful for modeling counts of events that occur randomly over a fixed period of time or in a fixed region of space, such as the number of jumps in a stock price in a given time interval or the number of telephone calls to a busy switchboard in one hour.
Question: let $T_1, T_2, \ldots$ be the arrival times of a Poisson process and define
$$N = \inf\{k > 1 : T_k - T_{k-1} > T_1\}.$$
Find $E(N)$.
The standard solution is first to calculate $P(N \geq n) = \frac{1}{n-1}$ for $n \geq 2$ and then sum these tail probabilities. Since that sum diverges, $E(N) = \infty$; an infinite expectation can seem wrong at first, but it is the correct answer.
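The tail formula $P(N \geq n) = \frac{1}{n-1}$ reflects the fact that $N \geq n$ exactly when $T_1$ is the largest of the first $n-1$ interarrival times. A minimal Monte Carlo sketch in Python to check this and the diverging mean; the rate is taken as 1 since the distribution of $N$ does not depend on it, and the function name `sample_N` is made up for illustration:

```python
import random

def sample_N(lam=1.0):
    """Draw one value of N = inf{k > 1 : T_k - T_{k-1} > T_1} for a Poisson process."""
    t1 = random.expovariate(lam)       # first interarrival time T_1
    k = 1
    while True:
        k += 1
        gap = random.expovariate(lam)  # T_k - T_{k-1}
        if gap > t1:
            return k

trials = 100_000
samples = [sample_N() for _ in range(trials)]

# Empirical P(N >= n) should be close to 1/(n-1) for n >= 2.
for n in range(2, 7):
    emp = sum(s >= n for s in samples) / trials
    print(f"P(N >= {n}) ~ {emp:.4f}   theory: {1 / (n - 1):.4f}")

# The sample mean keeps drifting upward as the number of trials grows,
# consistent with E(N) being infinite.
print("sample mean:", sum(samples) / trials)
```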
|
Prime numbers containing 2016
Each of the following prime numbers contains 2016
$20161 \; = \; 2016 \; || \; 1$
$120167 \; = \; 1 \; || \; 2016 \; || \; 7$
$201611 \; = \; 2016 \; || \; 11$
$201623 \; = \; 2016 \; || \; 23$
$201629 \; = \; 2016 \; || \; 29$
$201653 \; = \; 2016 \; || \; 53$
$201661 \; = \; 2016 \; || \; 61$
$201667 \; = \; 2016 \; || \; 67$
$201673 \; = \; 2016 \; || \; 73$
$201683 \; = \; 2016 \; || \; 83$
$420163 \; = \; 4 \; || \; 2016 \; || \; 3$
$620161 \; = \; 6 \; || \; 2016 \; || \; 1$
$820163 \; = \; 8 \; || \; 2016 \; || \; 3$
$920167 \; = \; 9 \; || \; 2016 \; || \; 7$
Find a few more
math grad - Interest: Number theory
2 Responses to Prime numbers containing 2016
1. paul says:
Here are all the 7 digit ones
1020163
1201601
1201633
1201637
1201643
1201687
1201691
1201699
1320161
1420169
1620161
1720163
1920161
2016017
2016029
2016031
2016059
2016101
2016107
2016127
2016137
2016139
2016181
2016193
2016197
2016199
2016211
2016239
2016247
2016269
2016277
2016281
2016293
2016323
2016331
2016349
2016359
2016361
2016367
2016373
2016397
2016401
2016403
2016407
2016409
2016419
2016439
2016449
2016461
2016493
2016517
2016529
2016541
2016551
2016559
2016577
2016583
2016587
2016593
2016607
2016653
2016671
2016673
2016691
2016697
2016733
2016739
2016787
2016821
2016823
2016841
2016851
2016853
2016857
2016877
2016881
2016919
2016923
2016943
2016967
2016977
2016997
2201603
2201623
2201627
2201669
2201671
2201677
2201681
2320169
2420167
2520169
2920163
3020161
3020167
3201617
3201619
3201641
3201643
3201689
3201697
3220163
3220169
3420161
3520163
3620161
3720163
3720169
4020167
4120163
4201621
4201633
4201649
4201699
4820161
4920163
5201621
5201627
5201633
5201639
5201641
5201659
5201663
5201671
5201681
5201683
5201689
5201699
5220167
5320169
5520169
5620169
5920169
6020167
6120167
6201641
6201647
6201649
6201683
6201691
6220169
6820169
7020161
7120163
7201619
7201631
7201633
7201637
7201639
7201651
7201673
7201687
7201697
7201699
7220167
7320169
7420169
7620161
7620167
7620169
7920163
8201603
8201621
8201647
8201663
8201689
8220169
8520163
8720161
8820169
8920169
9201601
9201611
9201623
9201631
9201653
9201677
9220163
9320167
9620161
9620167
9720163
9820163
9920161
9920167
There are also 2150 x 8 digit ones
Paul.
|
# How to crack hash: MD5(MD5(SHA1(SHA1(MD5($pass)))))
I have a hash that I know is MD5(MD5(SHA1(SHA1(MD5($pass))))), which I want to decrypt. However, I have yet to find any tools which can decrypt this.
Right now my best idea is to write a Python script which checks the MD5 hash of all possible MD5 hashes against MD5(MD5(SHA1(SHA1(MD5($pass))))) to find MD5(SHA1(SHA1(MD5($pass)))), and repeat, peeling away the layers. I feel like this will take a long time though; is there a better way?
• Just compose them and write one function def _hash(password): MD5(MD5(SHA1(SHA1(MD5(password))))); then, run the entries from your password dictionary through this composed function; When the composed function outputs your target hash, you found the password. – Ella Rose Jul 7 '17 at 18:46
• Where are such hashes used ? I ve never seen anything like this – Richard R. Matthews Jul 8 '17 at 15:00
Right now my best idea is to write a python script which check the MD5 hash of all possible MD5 hashes
I feel like this will take a long time though
Your feeling is right. In fact, it is very possible that the universe would die before you finished.
Today most password hashes are broken by doing dictionary attacks and throwing in common substitutions. In other words, we exploit the fact that humans are in the loop and are the weakest link. So your best bet, since you know the algorithm, is to write a program that takes a dictionary file as input, runs it through the algorithm, then compare with a target hash. After going all the way through the dictionary you could concatenate words together, try common substitutions, etc. Take a look at what a password cracker like John does (or even see if you can write a custom module for John that implements your algorithm).
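A minimal sketch of that dictionary approach in Python (my own illustration: the wordlist filename and target digest are placeholders, and it assumes each layer hashes the lowercase hex digest of the previous layer, which is a common convention for stacked hashes but is not stated in the question):

import hashlib

def chained_hash(candidate: bytes) -> str:
    # MD5(MD5(SHA1(SHA1(MD5(pass))))), feeding each hex digest into the next layer
    h = hashlib.md5(candidate).hexdigest()
    h = hashlib.sha1(h.encode()).hexdigest()
    h = hashlib.sha1(h.encode()).hexdigest()
    h = hashlib.md5(h.encode()).hexdigest()
    h = hashlib.md5(h.encode()).hexdigest()
    return h

target = "0123456789abcdef0123456789abcdef"   # placeholder target hash
with open("wordlist.txt", "rb") as wordlist:   # placeholder dictionary file
    for line in wordlist:
        word = line.strip()
        if chained_hash(word) == target:
            print("found:", word.decode(errors="replace"))
            break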
• The other notable hash-cracker would be hashcat. – SEJPM Jul 7 '17 at 18:51
What you have is a hash, not an encryption, so you cannot decrypt it. What you are looking for is a preimage: a value which would produce the desired result. It is a near certainty that there are infinitely many values which result in your desired hash; however, finding even one of them may be extremely difficult without knowing something about the original value you are looking for. If you know something about the value, for example that it is a short alphanumeric password or a dictionary word, you can enumerate all possible values for the input and calculate the full hash (all layers). You do not need to peel the layers one by one, nor would that be even remotely computationally feasible.
If you can't limit significantly the search space for original value I'm afraid you are out of luck it can't be done with computing resources available today or in the foreseeable future.
Writing your own Python script is likely a very inefficient method of conducting a brute force search. Both MD5 and SHA-1 are efficient hash algorithms with some very fast implementations, including some built specifically for brute forcing which work on batches. Obviously you should prefer basing your code on one of those.
|
# Law of Multiple Proportions
• Page ID
1334
• John Dalton (1803) stated, "When two elements combine with each other to form two or more compounds, the ratios of the masses of one element that combine with a fixed mass of the other are simple whole numbers."
Example
• Carbon monoxide ($$CO$$): 12 parts by mass of carbon combines with 16 parts by mass of oxygen.
• Carbon dioxide ($$CO_2$$): 12 parts by mass of carbon combines with 32 parts by mass of oxygen.
Ratio of the masses of oxygen that combines with a fixed mass of carbon (12 parts): 16:32 or 1:2
Hydrogen and oxygen are known to form 2 compounds. The hydrogen content in one is 5.93%, and that of the other is 11.2%. Show that this data illustrates the law of multiple proportions.
SOLUTION
In the first compound: hydrogen = 5.93%
Oxygen = (100 -5.93) = 94.07%
In the second compound: hydrogen = 11.2%
Oxygen = (100 - 11.2) = 88.8%
Mass of oxygen per unit mass of hydrogen: 94.07/5.93 = 15.86 in the first compound and 88.8/11.2 = 7.93 in the second. The ratio of the masses of oxygen that combine with a fixed mass of hydrogen is therefore 15.86 : 7.93, or 2 : 1. This is consistent with the law of multiple proportions.
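A quick arithmetic check of the above (a minimal Python sketch using the percentages from the problem):

h1, h2 = 5.93, 11.2           # percent hydrogen in each compound
o1, o2 = 100 - h1, 100 - h2   # percent oxygen in each compound
r1 = o1 / h1                  # ~15.86 g of O per g of H in the first compound
r2 = o2 / h2                  # ~7.93 g of O per g of H in the second compound
print(r1, r2, r1 / r2)        # the ratio r1/r2 comes out ~2.0, a simple whole number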
### Contributors
Binod Shrestha (University of Lorraine)
|
# The gas inside of a container exerts 16 Pa of pressure and is at a temperature of 430 ^o K. If the temperature of the gas changes to 150 ^oC with no change in the container's volume, what is the new pressure of the gas?
Nov 22, 2017
$\approx \text{15.74 Pa}$
#### Explanation:
Gay-Lussac's law: For a given mass and constant volume of an ideal gas, the pressure exerted on the sides of its container is directly proportional to its absolute temperature.
Mathematically,
P ∝ T
${P}_{1} / {P}_{2} = {T}_{1} / {T}_{2}$
P_2 = P_1 × T_2/T_1
${P}_{2} = \text{16 Pa} × \frac{\text{(150 + 273) K}}{\text{430 K}} ≈ \text{15.74 Pa}$
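A quick numeric check of the result (my own Python sketch, not part of the original answer):

P1 = 16.0            # initial pressure, Pa
T1 = 430.0           # initial temperature, K
T2 = 150.0 + 273.0   # final temperature, converted from Celsius to kelvin
P2 = P1 * T2 / T1    # Gay-Lussac's law at constant volume
print(round(P2, 2))  # -> 15.74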
|
# This is somewhat of a question -- calculations about my car accident
Tags:
1. Oct 27, 2014
### squilliam
Now, I had a collision approximately two weeks or so ago, however, I was the one blamed for the collision due to a "witness" but I need some help with the correct calculation I need.
You see, her car weighed 2602 pounds, whereas my car weighed 3359 pounds, but she hit me hard enough on the back right wheel not only to dent my car a fair bit (several things were bent, in fact), but also to spin me approximately 300°, while her car slid about 50 ft after she hit me.
The only thing I can figure out is that she must have been going way too fast. I mean, I could be wrong, but I would very much like to know.
2. Oct 27, 2014
### Simon Bridge
Welcome to PF;
Your calculations are unlikely to be acceptable as proof of anything, neither will mine or anyone's you meet here - you need to find a specialist in investigating accidents.
To work out this sort of thing, even back of envelope, I'd need more detail on the location of the impact and the nature of the surface the cars were sitting on. But spinning someone right around is not unusual in even quite slow collisions.
3. Oct 27, 2014
### Staff: Mentor
Welcome to PF!
You could start to solve this by making a drawing that shows before and after the collision.
So were you going through an intersection and she hit you in the back passenger tire?
Were you running a RED light? or was it a four way stop?
Its hard to say without knowing more about the accident.
4. Oct 27, 2014
### squilliam
Simon
It's not that I even really want to prove it to my insurance company, I just want it for me really; I doubt walking in there with an equation will turn the claim around like that. But I find it annoying that the person who hit my car even told me that she thought, and I quote, "I hope I don't hit him", and she even said to me that she "closed her eyes".
The place of the collision was a two-way intersection through a stretch of road, all normal asphalt; she collided with the back 2 ft or so of my car with the front end of her Dodge Neon. It just seems to me, though, that in order for her car to slide an extra 50 or so feet she would have had to be going more than the speed limit of 35 mph.
5. Oct 27, 2014
### squilliam
Here are some pictures that I took
#### Attached Files:
File size:
156.9 KB
Views:
108
• ###### IMG_20141013_090512.jpg
File size:
84.9 KB
Views:
107
6. Oct 27, 2014
### Staff: Mentor
These pictures don't really help with an analysis of the collision and as Simon said there's nothing that we can say that would support your case. If you
hire a lawyer, then they may be able to have an accident reconstruction specialist do the numbers and be an expert witness for you.
The sad part about accidents is that people will say one thing to you and another to the police. Similarly for witnesses, they hear the bang and usually see the accident after it happened, drawing conclusions from that to report to the police. The end result though is that you have a car you must get fixed, papers to file, possible increased insurance rates and sometimes medical bills to pay.
So get it processed and get your car fixed and do something nice for yourself.
7. Oct 27, 2014
### squilliam
Well, see, that's just it: my car is now drivable, it just doesn't look pretty, and since I only have liability coverage, any claim money would only go to the other person anyway. I'm just trying to figure out what equation I should use to work out whether she was in fact going the speed limit or not, really only to prove to myself whether she was at fault.
I mean they already gave her money for a new car and everything, so it's settled as far as claims go, I just want it for some peace of mind, you know?
8. Oct 27, 2014
### Staff: Mentor
A simpler thought is to do an experiment. Get two toy cars, make one twice the weight of the other, crash them in the same way, and look at the outcome. It's not accurate, but it may give you a better understanding of what happened. My feeling is your intuition is correct, but of course she would never admit that she was speeding.
I had a similar accident on a rainy day coming up to an intersection. I had to stop because I saw a police car with lights and siren on about to come through. I stopped, and a few moments (and I mean a few moments) later an SUV driver crashed into my truck. The officer stopped to document the accident instead of going to where he was going. My feeling was the driver was on a cell phone and didn't pay attention to me stopping because in general no one stops at that intersection (a Texas highway access road) if they have the right of way.
It was blamed on the rainy conditions...
9. Oct 27, 2014
### jack action
Crude approximation, but here we go:
I assume the car is sliding and the tire friction coefficient for sliding can be assumed to be 0.75. So if you see 50 ft of skid marks (using $v = \sqrt{2\mu gd}$), that means that she was going 33.5 mph when she started to skid. Thus, using $E = \frac{1}{2}mv^2$, the car had 132.34 kJ of kinetic energy.
For the energy required to spin your car, let's assume you have a wheelbase $L$ of 105" and your rear tires (and half the weight of your car sitting on them) slide with the same CoF of 0.75 for 0.833 turn (= 300°/360°). That requires 78.27 kJ of energy (using $E = \mu mgd$ and $d = 0.833\times 2 \pi L$).
Adding both (210.61 kJ) and using $v = \sqrt{\frac{2E}{m}}$ means that the car was going at least 42.3 mph before hitting your car.
Values in every equation must be in SI units (I did all the conversion).
Again, crude approximation. It could be lower or higher, but I doubt it would be less than 35 mph.
ref.:
https://en.wikipedia.org/wiki/Braking_distance
http://hpwizard.com/tire-friction-coefficient.html
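For anyone who wants to reproduce the rough numbers in post #9, here is a Python transcription of the same steps (my own sketch; all coefficients and distances are the assumptions stated above, not measured values):

import math

g = 9.81
mu = 0.75                              # assumed sliding friction coefficient
m_her = 2602 * 0.4536                  # her car, kg
m_your = 3359 * 0.4536                 # your car, kg
d_slide = 50 * 0.3048                  # her car's slide after impact, m
L = 105 * 0.0254                       # assumed wheelbase of your car, m

v_skid = math.sqrt(2 * mu * g * d_slide)     # ~15 m/s (~33.5 mph) at the start of the skid
E_skid = 0.5 * m_her * v_skid**2             # ~132 kJ

d_spin = (300 / 360) * 2 * math.pi * L       # path of the rear wheels, ~14 m
E_spin = mu * (m_your / 2) * g * d_spin      # ~78 kJ, half the weight on the sliding rear tires

E_total = E_skid + E_spin                    # ~211 kJ
v_before = math.sqrt(2 * E_total / m_her)    # ~18.9 m/s
print(round(v_before / 0.44704, 1))          # -> ~42.3 mph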
10. Oct 27, 2014
### Staff: Mentor
Good analysis, I would conclude she was going at 42 because she was clearly thinking about the meaning of life while she closed her eyes as the display of 42 flashed across them and then boom. Douglas Adams would be so proud.
11. Oct 27, 2014
### Simon Bridge
regular tyres on dry asphalt: sliding friction 0.9, rolling resistance 0.011
http://hpwizard.com/tire-friction-coefficient.html (and elsewhere)
s=15m sliding from the collision to rest, suggests
$\mu mgs$ went into the ground while sliding.
That puts the speed after the collision at: $v=\sqrt{2\mu gs} = 11.5m/s$
... so she was, easily, going faster than 26mph before the collision.
Energy that went into your car is trickier - the collision is inelastic, so kinetic energy is not conserved.
From a somewhat messier back-of-envelope that accounts for the engine being at the front of the car (so the center of mass is more forward, making it easier to rotate by pushing on the back), plus some notes about accident studies, I get about 60 km/h for the ballpark speed before the collision.
60-70kmph sounds like about the right range ... this is a common type of accident.
There are quite a few examples of collisions at this speed-range online.
Many people think that 35-40mph is slow ... and then get surprised by the damage.
12. Oct 28, 2014
### Danger
Squilliam, go back to Dave's first comment. You pretty much have to hire a professional accident reconstructionist to have any traction (pardon the expression) in court. An investigating police officer would be a good witness, and you wouldn't have to pay him, but that would be supplemental to the expert who can say under oath that things happened in a particular manner. It is also essential that you obtain copies of the investigation photos to show where everything ended up. Also, get copies of the first responders' notes (cops, EMT's, etc.).
Last edited: Oct 28, 2014
|
Minimum Angle
Classical Mechanics Level pending
A ladder is leaning against the wall. The coefficient of static friction $$μ_{sw}$$ between the ladder and wall is $$0.3,$$ and the coefficient of static friction $$μ_{sf}$$ between the ladder and floor is $$0.4.$$ The center of mass of the ladder lies exactly in the middle of it.
Find the minimum angle that the ladder can form with the floor without slipping.
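One standard way to attack this (a sketch of the usual statics argument, not part of the problem statement): at the minimum angle both friction forces are at their limiting values, so with wall normal $N_w$ and floor normal $N_f$,
$$N_w = \mu_{sf} N_f, \qquad N_f + \mu_{sw} N_w = mg \;\Rightarrow\; N_f = \frac{mg}{1+\mu_{sw}\mu_{sf}},$$
and taking torques about the floor contact for a ladder of length $\ell$,
$$mg\,\frac{\ell}{2}\cos\theta = N_w \ell \sin\theta + \mu_{sw} N_w \ell \cos\theta \;\Rightarrow\; \tan\theta_{\min} = \frac{1-\mu_{sw}\mu_{sf}}{2\mu_{sf}} = \frac{1 - 0.12}{0.8} = 1.1,$$
giving $\theta_{\min} = \arctan(1.1) \approx 47.7^{\circ}$.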
|
# Tag Info
110
Using your definition of "falling," heavier objects do fall faster, and here's one way to justify it: consider the situation in the frame of reference of the center of mass of the two-body system (CM of the Earth and whatever you're dropping on it, for example). Each object exerts a force on the other of $$F = \frac{G m_1 m_2}{r^2}$$ where $r = x_2 - x_1$ ...
50
I am sorry to say, but your colleague is right. Of course, air friction acts in the same way. However, the friction is, in good approximation, proportional to the square of the velocity, $F=kv^2$. At terminal velocity, this force balances gravity, $$m g = k v^2$$ And thus $$v=\sqrt{\frac{mg}{k}}$$ So, the terminal velocity of a ball 10 times as ...
46
Before telling you why an observer in free fall does not feel any force acting on him, there are a couple of results that should be introduced to you. Newton's second law is only valid in inertial frames of reference: To measure quantities like the position, velocity, and acceleration of an object, you need a coordinate system $(x,y,z,t)$. Now the ...
42
No. All parachutes, whether they are drag-only (round) or airfoil (rectangular) will sink. Some airflow is needed to stay inflated, and that airflow comes from the steady descent. Whether your net descent rate is positive or negative is a different question. It is quite easy to be under a parachute and end up rising (I have done it myself), you just need an ...
39
The bus experiences considerable drag, and will therefore fall more slowly than a person inside the bus. The scenario is possible in principle - but after carefully viewing the clip and doing some calculations, I believe that the details are inaccurate. Assume the bus has a mass of 5000 kg (pretty light for a bus), and is 3 m wide by 3 m tall - so the ...
37
The answer is Yes and your thinking is correct. You try to distinguish between impact and sliding on a curve. In fact the impact is just a sudden large force, while a curved (i.e. circular) motion similarly applies a force, just much smaller but also over a longer period of time. The key in surviving any fall is to reduce the force on your body at "impact". A ...
37
As other answers say, if someone just jumps off of the international space station(ISS), they would still be in orbit around the earth since the ISS is traveling at 17,000 miles per hour (at an altitude of 258 miles). Instead of just jumping, imagine the astronaut had a jet pack that could cancel that speed of 17,000 miles per hour in a very short time ...
33
As a very rude guess, fresh snow (see page vi) can have a density of $0.3 g/cm^3$ and be compressed all the way to about the density of ice, $0.9 g/cm^3$. Under perfect conditions you could see a 13 feet uniform deceleration when landing in 20 feet of snow, or about 4 meters. Going from $30 m/s$ to $0m/s$ (as @Sean suggested in comments), you'd have ...
29
Let's make life easy for ourselves by assuming that the slide is an arc of a circle: We also assume the slide is made out of something with a very low friction, so the skydiver maintains a constant speed $v$ all the way round. The reason that using an arc of a circle makes life easy is that the acceleration felt by the skydiver is simply: $$a = ...$$
28
No. The answer is clearly no. This building is 800 meter high. Some comparison: Skydivers are falling more kilometers in free fall. They experience absolutely no damage from the pressure increase. Scuba divers moving fast upwardly or downwardly also don't get any wounds, although 10 meter deep water has the same pressure as there is between the sea level ...
26
It is incorrect to link the feeling of being accelerated to being accelerated itself. You can be under constant velocity or be continuously accelerated, yet you need not feel anything at all. Let me explain. The reason you feel compressed or stretched when you are accelerated in a lift is because of the presence of the normal force from the ground on you. ...
25
It would be possible in theory, but only in a very side-thinking way: if you make a parachute so large it encapsulates the whole Earth, it will in effect act as a balloon and not fall down, due to the internal pressure of the atmosphere. This wouldn't work in practice for obvious reasons, but maybe in Kerbal you might be able to do something like it..
25
While everyone agrees that jumping in a falling elevator doesn't help much, I think it is very instructive to do the calculation. General Remarks The general nature of the problem is the following: while jumping, the human injects muscle energy into the system. Of course, the human doesn't want to gain even more energy himself, instead he hopes to transfer ...
23
If the bus was in a vacuum (both inside and outside), then the passenger would float. However, the effects of air resistance on the two objects (passenger and bus) are probably not negligible in such an instance. The bus will be moving relative to the outside air, and so will be accelerating towards the ground at a rate less than g. If we then released ...
20
Ball 1 will drop faster in air, but both balls will drop at the same speed in vacuum. In vacuum, there is only the gravitational force on each ball. That force is proportional to mass. The acceleration of an object due to a force is inversely proportional to its mass, so the mass cancels out. Each ball will accelerate the same, which is the acceleration ...
18
@Señor O gives a very good answer, but he assumes an ideal deceleration. Based on a viewing of the scene, Anna sinks a little under a meter, while Kristoff doesn't sink more than half a meter. Since they fell about 200 feet (about 60 m), my initial estimate for their impact velocity is (assuming no air resistance): $v = \sqrt{2gh} = \sqrt{2*60*9.8}$ ...
17
This is another chance to use one of my favorite approximations ever! I first offered it as an answer to a question about how deep a platform diver will go into the water. Now is the chance to use it again! Isaac Newton developed an expression for the ballistic impact depth of a body into a material. The original idea was expressed for materials of ...
15
I have slid down a much smaller version of this at Burning Man. Paha'oha'o was a 30 foot tall volcano art piece which you climbed and then "sacrificed" yourself by dropping into a pit featuring a slide just like you mention. The drop features a 10 foot free-fall, just enough to take your breath away, after which the careful curve of the slide gently catches ...
15
The paradox appears because the "rest frame" of the Earth is not an inertial reference frame, it is accelerating. Keep yourself in the CM reference frame and, at least for two bodies, there is no paradox. Given an Earth of mass M, a body of mass m_i will fall towards the center of mass $x_{CM}=(M x_M + m_i x_i)/(M+m_i)$ with an acceleration ...
13
It depends on how you define the problem. Humans have re-entered the atmosphere from the International Space Station many times, by riding in either a Space Shuttle or a Soyuz capsule. Someone re-entering without a spacecraft of some sort would obviously have to wear some kind of pressure suit (as Felix Baumgartner did in his jump). How elaborate is the ...
13
A parachute is a device specifically designed to create viscous friction. Viscous friction generates a force that: is oriented opposite to the velocity; is proportional to (a certain power of [*]) the velocity. So the falling velocity will increase until the drag force (pointing upwards) becomes equal to the weight of the falling object (pointing ...
12
You will die. Terminal velocity is a bit more than 50 m/s. The bottom of your ramp appears to have a radius less than 2m. That means you'll be exposed to more than 125g as you zip around the bottom. Nice knowing you.
12
It is because the force at work here (gravity) is also dependent on the mass. Gravity acts on a body with mass m with $$F = mg$$. You plug this into $$F=ma$$ and you get $$ma = mg$$, so $$a = g$$, and this is true for all bodies no matter what the mass is. Since they are accelerated the same and start with the same initial conditions (at rest and ...
12
Analyzing the acceleration of the center of mass of the system might be the easiest way to go since we could avoid worrying about internal interactions. Let's use Newton's second law: $\sum F=N-Mg=Ma_\text{cm}$, where $M$ is the total mass of the hourglass enclosure and sand, $N$ is what you read on the scale (normal force), and $a_\text{cm}$ is the center ...
11
Other answers & comments cover the difference in acceleration due to drag, which will be the largest effect, but don't forget that if you are in an atmosphere there will also be buoyancy to consider. The buoyancy provides an additional upward force on the balls that is equal to the weight of the displaced air. As it is the same force on each ball, the ...
11
While the stone is still travelling on the elevator, there are two forces acting on it, the force from the elevator to the stone, as well as the weight due to gravity. The moment the stone leaves the elevator, it becomes a free falling object. The elevator stops giving a force to the stone, and the only force remaining is its weight due to gravity. ...
11
As an addition to already posted answers and while realising that experiments on Mythbusters don't really have the required rigour of physics experiments, the Mythbusters have tested this theory and concluded that: The jumping power of a human being cannot cancel out the falling velocity of the elevator. The best speculative advice from an elevator ...
11
He "only" flew at the maximum speed of 370 m/s or so which is much less than the speed of the meteoroids – the latter hit the Earth by speeds between 11,000 and 70,000 m/s. So he was about 2 orders of magnitude slower. The friction is correspondingly lower for Baumgartner. Note that even if he jumped from "infinity", he would only reach the escape velocity ...
11
Nice theoretical answers (I can certainly appreciate them, I'm a mathematician). But why delve into theory when experiment is available? In this video you can see a skier jump from more than 200 feet and get head first into the snow, without a helmet. The video starts with the aftermath, if you want to see the jump right away fast forward to about 1 ...
10
The reason that jumping can make a relatively large difference is that the kinetic energy is proportional to the square of the velocity. Thus relatively small changes to the velocity can result in relatively large changes to the kinetic energy. In addition, the velocity which a human can achieve in jumping is a substantial percentage of the velocity of fatal ...
Only top voted, non community-wiki answers of a minimum length are eligible
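As a small numeric illustration of the $v=\sqrt{mg/k}$ terminal-velocity scaling quoted in one of the answers above (my own Python sketch; the drag constant k below is an arbitrary placeholder, not a value from any answer):

import math

g = 9.81                       # gravitational acceleration, m/s^2
k = 0.5                        # placeholder drag constant in F_drag = k*v^2, kg/m
for m in (1.0, 10.0):          # a ball and one ten times as heavy
    v_t = math.sqrt(m * g / k) # terminal velocity where k*v^2 balances m*g
    print(m, round(v_t, 2))    # the heavier ball's v_t is sqrt(10) ~ 3.16 times larger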
|
My Craftsman snowblower's drive shaft for the auger turns fast. However, the auger turns very slowly. The tension cord is very tight. What is the problem?
|
# Cubical block on a cylinder
1. Nov 17, 2013
### Pranav-Arora
1. The problem statement, all variables and given/known data
A cubical block of side L rests on a fixed cylindrical drum of radius R. Find the largest value of L for which the block is stable.
2. Relevant equations
3. The attempt at a solution
There is only one contact point between the cube and cylinder. There are three forces acting on the cube. The normal reaction from the cylinder, the friction and weight. The normal reaction and the friction pass through the contact point. Now how do I form the equations here?
Any help is appreciated. Thanks!
#### Attached Files:
• ###### kleppner 6.35.png
File size:
1.8 KB
Views:
175
2. Nov 17, 2013
### voko
It might be easier to find the potential energy due to gravity as a function of the block's tilt angle and then look for its minima.
3. Nov 17, 2013
### Pranav-Arora
I calculate potential energy from the line passing through the centre of cylinder.
$$U(\theta)=mg\left(R+\frac{L}{2}\right)(1-\cos \theta)$$
where $\theta$ is the angle rotated by the cylinder.
The minimum is at $\theta=0$ and for this $\theta$, $U''(\theta)>0$, but solving gives me a negative answer, which is certainly incorrect.
Last edited: Nov 17, 2013
4. Nov 17, 2013
### voko
You seem to assume that the block will slide, so that its center of mass is on the line passing through center of the drum and the point of contact. But the block will roll over the drum, and the point of contact will shift.
5. Nov 17, 2013
### Staff: Mentor
Consider the following drawing:
Does that suggest an approach? Hint: Consider the difference between the displacement of the point of contact on the face of the block to where the vertical through the center of mass passes through the same face (the line segment named Δ on the enlargement of the right).
#### Attached Files:
• ###### Fig1.gif
File size:
4.8 KB
Views:
905
6. Nov 17, 2013
### Pranav-Arora
Nice drawing gneill!
Is $\Delta=R\theta-(L/2)\tan\theta$?
How do I continue with voko's approach of finding potential energy? I need the distance of CM of cube from centre of cylinder but I don't see how to apply the geometry here.
7. Nov 17, 2013
### Staff: Mentor
Yes indeed. The answer should be obvious from that...
In my drawing, extend the radius on the right by L/2 making it a total of R + L/2 in length. Then draw a perpendicular line from its end to the center of mass position. Treat the two as vectors to locate the center of mass.
8. Nov 17, 2013
### Pranav-Arora
How?
Do I have to apply the condition that $\Delta \geq 0$?
Please look at the attachment. Are you talking about the vectors shown? I still don't see how can I find the distance of CM from them. :(
#### Attached Files:
• ###### Fig1.png
File size:
20.5 KB
Views:
243
9. Nov 17, 2013
### Staff: Mentor
That's the idea. Consider the direction of the torque provided by the weight of the block about its contact point. Under what conditions is it a restoring force? When is it going to make the angle of tilt increase? When will it be exactly neutral?
You have the lengths of both, and they are perpendicular. You've even drawn in the hypotenuse!
10. Nov 17, 2013
### BruceW
I Love these kinds of problems. Problems related to something that you might just be absent-mindedly thinking about, after trying to balance some cereal on a football. Pranav, you always bring nice problems :)
11. Nov 17, 2013
### WannabeNewton
They're all from Kleppner and Kolenkow "An Introduction to Mechanics" (this particular one is from the chapter on rotational motion). Go get the book before I mail it to you myself!
12. Nov 17, 2013
### haruspex
Yes, bearing in mind that this is for arbitrarily small theta. You will need an approximation to get θ and tan θ comparable.
13. Nov 17, 2013
### BruceW
cool, I might check it out. It seems I've missed out on a real gem here :)
14. Nov 17, 2013
### haruspex
There is an ambiguity in the question. The interpretation of 'stable' so far is (I think) that it doesn't tilt at all, but it could be read as not rolling right off (assuming adequate friction).
15. Nov 17, 2013
### Staff: Mentor
I think the interpretation is that it's "stable" in the same sense that Lagrange Points are stable; A small perturbation induces a small orbit around the point rather than an escape trajectory.
So long as the cubical block is less than a certain size a small perturbation will result in oscillation rather than it tipping off of the cylinder. Larger than that critical size, any small perturbation will result in the angle increasing continuously -- i.e. falling off.
16. Nov 17, 2013
### haruspex
Yes, I agree, that's probably what's intended.
There's yet a third question one could ask: what's the largest L for which it is possible to place the block on the cylinder stably? What a rich source of puzzles.
17. Nov 18, 2013
### Pranav-Arora
Okay, I understand but looking over the sketch again, how did you find that angle between the perpendicular of length L/2 and vertical is $\theta$?
Just a guess, is it $R\theta$?
Thanks BruceW!
As WBN stated, some problems are from the book "Introduction to Mechanics by David Kleppner" and the other, which is much more interesting (sorry WBN, I prefer Irodov than Kleppner :P ) is this: https://www.amazon.com/Problems-General-Physics-I-Irodov/dp/8183552153
The above book is quite popular in India. You must check this and have a look at reviews on Amazon. :)
Last edited by a moderator: May 6, 2017
18. Nov 18, 2013
### Staff: Mentor
There are numerous right-angles joining the line segments, so anything tilted will make angle θ with respect to either the horizontal or the vertical. The trick is to determine which (w.r.t. horizontal or vertical). Fortunately the shape of the triangles gives a big hint, but even so you could follow the trail of perpendicular lines from the tilted radius vector and label the angles w.r.t. horizontal and vertical as you go. I've added a few annotations to the diagram -- see the thumbnail.
Yes, it must be because the two L/2 length line segments are parallel and the line you want is parallel to the bottom of the cube. The resulting figure is a rectangle.
#### Attached Files:
• ###### Fig2.gif
File size:
2.6 KB
Views:
261
19. Nov 18, 2013
### voko
The cube is tangent to the drum at all times.
Your guess is right. Now try to explain it.
20. Nov 18, 2013
### Pranav-Arora
Nicely explained, thanks gneill!
Since $\Delta \geq 0$ and $\theta$ is very small (so $\tan\theta \approx \theta$), this gives $R \geq L/2$, i.e. $L\leq 2R$.
Umm... I think I did not require it. I need the distance from the reference line to continue with voko's approach. How do I find that?
What I get is $(L/2)/\cos(\theta)+\Delta \sin\theta+R-(x+\Delta \cos(\theta))$, is this correct?
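For reference, here is a compact version of the energy argument this thread is building towards (my own summary, using the rolling-contact geometry discussed above). With the block rolled through a small angle $\theta$, the height of its centre of mass above the drum's centre is
$$h(\theta) = \left(R + \frac{L}{2}\right)\cos\theta + R\,\theta\sin\theta, \qquad U(\theta) = mgh(\theta),$$
$$U'(\theta) = mg\left(R\,\theta\cos\theta - \frac{L}{2}\sin\theta\right), \qquad U''(0) = mg\left(R - \frac{L}{2}\right),$$
so the equilibrium at $\theta = 0$ is stable when $U''(0) > 0$, i.e. $R > L/2$; the largest block for which it is stable therefore has $L = 2R$.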
|
# Learning How to write a 2D UFO game using the Orx Portable Game Engine - Part 1
General and Gameplay Programming
# Overview
Welcome to the 2D UFO game guide using the Orx Portable Game Engine. My aim for this tutorial is to take you through all the steps to build a UFO game from scratch. The aim of our game is to allow the player to control a UFO by applying physical forces to move it around. The player must collect pickups to increase their score to win. I should openly acknowledge that this series is cheekily inspired by the 2D UFO tutorial written for Unity. It makes an excellent comparison of the approaches between Orx and Unity. It is also a perfect way to highlight one of the major parts that makes Orx unique among other game engines, its Data Driven Configuration System. You'll get very familiar with this system very soon. It's at the very heart of just about every game written using Orx. If you are very new to game development, don't worry. We'll take it nice and slow and try to explain everything in very simple terms. The only knowledge you will need is some simple C++. I'd like to say a huge thank you to FullyBugged for providing the graphics for this series of articles.
# What are we making?
Visit the video below to see the look and gameplay of the final game:
# Getting Orx
The latest up to date version of Orx can be cloned from github and set up with:
git clone https://github.com/orx/orx.git
Once cloning has completed, the setup script in the root of the files will start automatically for you. This script creates an $ORX environment variable for your system. The variable will point to the code subfolder where you cloned Orx.
Why? I'll get to that in a moment, but it'll make your life easier.
The setup script also creates several projects for various IDEs and operating systems: Visual Studio, Codelite, Code::Blocks, and gmake. You can pick one of these projects to build the Orx library.
# Building the Orx Library
While the Orx headers are provided, you need to compile the Orx library so that your own games can link to it. Because the setup script has already created a suitable project for you (using premake), you can simply open one for your chosen OS/IDE and compile the Orx library yourself.
There are three configurations to compile: Debug, Profile and Release. You will need to compile all three.
For more details on compiling the Orx library, see http://orx-project.org/wiki/en/tutorials/cloning_orx_from_github at the Orx learning wiki.
# The $ORX Environment Variable
I promised I would explain what this is for. Once you have compiled all three orx library files, you will find them in the code/lib/dynamic folder:
• orx.dll
• orxd.dll
• orxp.dll
Also, link libraries will be available in the same folder:
• orx.lib
• orxd.lib
• orxp.lib
When it comes time to create our own game project, we would normally be forced to copy these library files and includes into every project. A better way is to have our projects point to the libraries and includes located at the folder that the $ORX environment variable points to (for example: C:\Dev\orx\code).
This means that your projects will always know where to find the Orx library. And should you ever clone and re-compile a new version of Orx, your game projects can make immediate use of the newer version.
# Setting up a 2D UFO Project
Now that you have the Orx libraries cloned and compiled, you will need a blank project for your game. Supported options are: Visual Studio, CodeLite, Code::Blocks, XCode or gmake, depending on your operating system.
Once you have a game project, you can use it to work through the steps in this tutorial.
Orx provides a very nice system for auto creating game projects for you. In the root of the Orx repo, you will find either the init.bat (for Windows) or init.sh (Mac/Linux) command.
Create a project for our 2D game from the command line by going to the Orx folder and running:
init c:\temp\ufo
or
init.sh ~/ufo
8 hours ago, trsh said:
..\..\..\lib;$(ORX)\lib\dynamic; doesn't exsist. Some some steps setting up new project are missing. Hi Trsh, thanks for reporting. Under "Getting Orx" there is a part that mentions that git close auto-setup creates a project so that Orx can be used in your own projects. On reflection it's not 100% clear that you need to compile the projects. But I did include the link on compiling which takes you to the Orx wiki. Let me know if that doesn't solve the issue. However, once the orx libraries (debug, profile and release) are all compiled, the "dynamic" folder should then exist. Also check that after git clone step (the post setup step) managed to create an$ORX variable in your environment variables.
On 3/15/2018 at 11:45 PM, trsh said:
..\..\..\lib;\$(ORX)\lib\dynamic; doesn't exsist. Some some steps setting up new project are missing.
I recently made the instructions for building the Orx lib clearer. Should be less of a barrier for newcomers.
• ### Similar Content
Hiya! I'm Jason, and I want to reach everyone here who's excited about making games! As the title says, I'm looking for either GMS2 programmers or Godot programmers.
I want to invite you to my Game Creation Of Imagistory, A Beautiful 2D RPG game with Plot twists, Corky Characters and an amazing story.
Here is some spoliers:
Long ago, A mystical Comet Flew throughout the universe.
Legends say that one day this comet will create a brand-new galaxy.
A little scientist named Brown tried to fly to space to see the comet in action.
His Friends, the Wizards, went with him to see the explosion.
Until…
A Vortex Pulled them into a strange portal. They all scattered in different locations.
What will Brown Do?
And Here is Some Art:
I'm Ciao Gelato #7986 on Discord and my email is [email protected], if you want to contact me there.
I'm looking for an open source PBR rendering engine that I can use. Basic requirements are Windows (C/C++) and a free-to-use type of license.
The first two hits I get on Google are:
Filament
LuxCoreRender
https://luxcorerender.org/
Does anybody have any experience using any of these, or do you recommend something else that's better?
Thanks.
Pluses: Active development, easy to compile, zero dependencies.
• By Novakin
Looking for a C++ programmer to help us on a Viking battle game. We are using Unreal Engine 4, so knowledge of Blueprint would be handy. The project is intended to sell commercially, so you will receive revenue shares. For more info on the project please contact me. Thank you
• By Sneikyz
Hello,
I'm an amateur digital artist looking for a beginner-level project. I'd like to be a part of a team that wants to learn to create games. It might be a simple idle, mini, simulation game or a visual novel.
I'm still a beginner at backgrounds and I usually draw females but I'm willing to learn.
Time invested to the game creation might differ every week mostly because of my work and will be discussed separately.
Here's my DA : https://www.deviantart.com/sneikyz
• By Josheir
This is a follow up to a previous post. MrHallows had asked me to post the project, so I am going to with a new fresh thread so that I can get the most needed help.
I have put the class in the main .cpp to simplify for your debugging purposes. My error is :
I tried adding: #define GLFW_INCLUDE_NONE, and tried adding this as a preprocessor definition too. I also tried to change the #ifdef - #endif, except I just couldn't get it working. The code repository URL is:
https://github.com/Joshei/GolfProjectRepo/tree/combine_sources/GOLFPROJ
The branch is : combine_sources
The Commit ID is: a4eaf31
glad1.cpp was also in my project, I removed it to try to solve this problem.
Here is the description of the problem at hand:
Except for glcolor3f and glRasterPos2i(10,10); the code works without glew.h. When glew is added there is only a runtime error (that is shown above.)
I could really use some exact help. You know like, "remove the include for gl.h on lines 50, 65, and 80. Then delete the code at line 80 that states..."
I hope that this is not too much to ask for; I really want to win at OpenGL. If I can't get help, I could use a much larger file to display the test values, or maybe it's possible to write to an open file and view the written data as it's outputted.
|
# Question
The math portion of the ACT test consists of 60 multiple-choice questions, each with five possible answers (a, b, c, d, e), one of which is correct. Assume that you guess the answers to the first three questions.
a. Use the multiplication rule to find the probability that the first two guesses are wrong and the third is correct. That is, find P(WWC), where C denotes a correct answer and W denotes a wrong answer.
b. Beginning with WWC, make a complete list of the different possible arrangements of two wrong answers and one correct answer, then find the probability for each entry in the list.
c. Based on the preceding results, what is the probability of getting exactly one correct answer when three guesses are made?
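A quick check of the arithmetic (my own Python sketch, not part of the textbook exercise):

p_c, p_w = 1/5, 4/5                  # probability of a correct / wrong guess

p_wwc = p_w * p_w * p_c              # part a: P(WWC) = 16/125 = 0.128
arrangements = ["WWC", "WCW", "CWW"] # part b: each arrangement has the same probability, 0.128
p_exactly_one = len(arrangements) * p_wwc
print(p_wwc, p_exactly_one)          # part c: 3 * 0.128 = 0.384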
|
# Chapter 1 - Whole Numbers - 1.6 Long Division - 1.6 Exercises: 48
When dividing a number by a power of 10 (10, 100, 1000, ...), move the decimal point to the left the same number of places as there are zeroes in the divisor.
#### Work Step by Step
$32000\div10=3200.\underleftarrow0$ $32000\div100=320.\underleftarrow{00}$ $32000\div1000=32.\underleftarrow{000}$
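A one-line check of the pattern (a hypothetical Python snippet, not part of the textbook answer):

print(32000 // 10, 32000 // 100, 32000 // 1000)   # -> 3200 320 32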
|
‘The individual who is not himself a combatant – and so a cog in the gigantic machine of war – feels bewildered in his orientation, and inhibited in his powers and activities.’ (Freud 1985: 62)
‘During the war, l lived, both as a civilian and as a writer with every pore open… We all lived in a state of lucid abnormality.’ (Bowen 1945)
‘World War One (and the years following it) appear as a laboratory for testing and honing the functional mechanisms and apparatuses of the state of exception as a paradigm of government.’ (Agamben 2005: 7)
Elizabeth Bowen wrote her wartime collection of short stories – The Demon Lover and Other Stories (1945) – under the stress of living through the London Blitz. As an Anglo-Irish writer, employed to spy in Ireland by the British Government’s Ministry of Information, she was an active participant in the state’s increased involvement in the lives of its citizens, and those beyond its borders (Lee, 1999: 150). Despite this, Lassner (1991: 157) could still write that ‘interest in Bowen’s historical and political concerns [has] been limited.’ Since then, more attention has been paid to Bowen as a political writer, including by Lassner herself, who produced the first full-length study of Bowen’s short fiction, considering the ‘psychological experience’ so often focused on in Bowen’s writing, but within the context of ‘social, economic and political forces.’ (1991: 3) Her focus, along with many subsequent critics, was on Bowen as politically informed by ‘her Anglo-Irish history,’ where ‘the horrors of Ireland’s endless strife’ inform her story writing (1991: 6).1 More recently, Janice Ho (2015: 88) has explored Bowen’s relationship to the British state in her chapter on The Heat of the Day, demonstrating that, ‘The question of what ought to be the proper relationship between the state and the citizen…lies at the heart of Bowen’s novel.’2 In this article, I consider Bowen’s engagement with the changing political and legal relationship between the British state and citizens during World War II, whilst she herself was actively engaged in work for the state.
In that time, as Bowen wrote in 1945 in a magazine article titled ‘The Short Story,’ she found that ‘the discontinuities of life in wartime make such life a difficult subject for the novelist,’ but ‘the short story is the ideal prose medium for wartime creative writing’ (2008: 314–315). As such, it was as she was living ‘as a civilian, and as a writer,’ and as a government agent (albeit in a minor way), ‘with every pore open,’ that she produced her wartime short stories (Bowen, 1945). The first time she read them through as a collection in 1945, she was struck by what she termed the ‘rising tide of hallucination’, as the exceptional state of ‘lucid abnormality’ created by the war permeated the stories (1945). This heightened psychic state is exactly why her stories have been read from a psychological and psychoanalytical critical viewpoint,3 often with a particular focus on her ghost stories as being in an Irish Gothic tradition.4 Bowen’s writing, including these comments on her own stories, reflects the fact that ‘Freudian psychoanalysis’ had become ‘both familiar and influential in the interwar years’ (McKibbin, 1998: 299). In this article, I consider how Freud’s theories on the wartime behaviour of the state and its impact on citizens illuminates Bowen’s writing. In his 1915 essay, ‘Thoughts for the Times on War and Death,’ written while his sons were fighting in World War I, Freud writes about state legitimisation of the illicit treatment of its citizens, including violence, and the psychic creation of ghosts in the face of death (Freud, 1985). It is these two areas, and how they interact, that I focus on in Bowen’s well-documented story, ‘The Demon Lover,’ published in 1941, and her less examined piece, ‘Green Holly,’ published later in the war, in 1944. This will build on previous criticism and extend the analysis into considering the political elements of these two ghost stories.
In addition to being written during different stages of the war, the stories, between them, span the period from World War I (1914–1918) to World War II (1939–1945). In ‘The Demon Lover,’ the protagonist, Kathleen Drover, visits her evacuated home in post-Blitz London, only to find a letter from her former lover, a soldier reported to have died in the First World War (Bowen, 1980a). The impact of that earlier conflict is incorporated into the story by a flashback to their last meeting, and also through the threat of the soldier’s ghostly return. In the later story, the characters have been working in a remote house as government intelligence workers (not unlike Bowen) since the beginning of the war (Bowen, 1980b). Moved to a new property at the government’s insistence, they are haunted by ghosts of former occupants of the house.
In the period between Freud writing about the inhibitions placed on non-combatant’s ‘powers and activities’ (1985: 62), and Bowen’s short stories, the ability of the state to intrude on the lives of citizens had, in fact, seen an unprecedented expansion. My opening quotation, from legal philosopher Giorgio Agamben’s book The State of Exception, expresses his concern that this period enabled governments, including ‘England…and Germany’, to further develop the ‘systematic expansion of executive powers’ which had increased ‘during World War I, when a state of siege was declared…in many of the warring states.’ (2005: 7) Agamben writes of this expansion as establishing a state of exception, which we might initially understand as similar to a legal state of emergency which provides governments with extensive powers (Agamben’s theorisation of this concept is explained further below). Although he is well-known for his, sometimes controversial, comments on our contemporary political climate, as a philosopher who started out his academic life studying law but was converted to philosophy by Heidegger himself, having, metaphorically, at ‘his side a second guide: Walter Benjamin,’ and a subsequent focus on the writings of Carl Schmitt, Agamben has been engaged with the inter-war period for his whole career (de la Durantaye, 2009: 363). As such, his writings on the state of exception provide a relevant and helpful theoretical context for the situation in which Bowen was writing. She too was increasingly concerned about the ‘inflation of state power’; while for Bowen, it might be ‘nostalgia for the minimalist state of classical liberalism’, Agamben is far more radical in his approach (Ho, 2015: 88). Both stand opposed to the encroachment of the state into the lives of citizens.
Before exploring how Bowen’s depiction of the exceptional states of citizens resonates with Agamben’s concerns about the growing reach of the state of exception, the complexity of his theory demands consideration. Having introduced this in some detail, this essay will go on to compare the evocation of the state’s presence and treatment of its citizens in ‘The Demon Lover’ and ‘Green Holly,’ incorporating Freud’s writings on the topic. The argument then moves to focus on the particular socio-legal predicament women were placed in, giving a detailed analysis of the ‘unnatural promise’ (Bowen, 1980a: 746) by Kathleen to her soldier fiancé in ‘The Demon Lover,’ through consideration of the literary antecedents to the story, the history of state surveillance of war widows, and Bowen’s short radio play, ‘A Year I Remember – 1918’ (2010: 63–76). The article culminates in investigating the most extreme hallucinatory, exceptional state in the stories, the ghosts, considering the extent to which they represent ‘all the dead,’ who, ‘[u]ncounted…continued to move in shoals through the city day,’ (Bowen, 1948: 90), or whether they act as an expression of the existential anxiety experienced not only because of the threat of death during war, but also, the threat to individuality, rights and legal status created by the expansion of the state.
In publishing The State of Exception (Stato di eccezione) in 2003 (translated into English in 2005), the contemporary Italian philosopher Agamben reinitiated a strand of political thought that had lain dormant since the 1920s and 1930s. Originating in the writings of Carl Schmitt, the phrase, to cite a footnote by Schwab (Schmitt’s translator), ‘includes any kind of severe economic or political disturbance that requires the application of extraordinary measures’ (2005: 5). As is clear from the first sentence of Agamben’s text, he engages directly with Schmitt’s philosophy, referring to his Politische Theologie, or Political Theology, published in 1922, and Schmitt’s ‘famous definition of the sovereign as “he who decides on the state of exception”’ (Agamben, 2005: 1). Agamben acknowledges that this definition of the sovereign ‘has been widely commented on and discussed’; however, what had not been considered was a ‘theory of the state of exception in public law,’ with it being treated rather ‘as a quaestio facti than as a genuine juridical problem’ (2005: 1). It is this problematic juridical nature of the state of exception that Agamben explores in his text, and which has consequently become a much-debated topic across many academic disciplines.
Although there has been a rise in interest in Schmitt’s philosophy in recent decades, given his historic involvement with the Nazi party, engagement with him is seen as controversial. Agamben is no exception, and his relationship to Schmitt’s philosophy is subject to the usual fears, accusations and defences, as to how far his thinking is intertwined with that of the ‘conservative revolutionary’ (de la Durantaye, 2009: 363). However, Agamben takes issue with Schmitt’s conception of the legal basis and status of the state of exception. In his second chapter, Agamben sums up Schmitt’s theory of the state of exception, whose goal he identifies as being ‘the inscription of the state of exception within a juridical context’ (Agamben, 2005: 32). Schmitt posits that in the state of exception, when emergency powers are invoked, ‘the juridical order is preserved even when the law itself is suspended’ (Humphreys, 2006: 682). As Agamben points out, this is ‘a paradoxical articulation,’ and to achieve it ‘what must be inscribed within the law is something that is essentially exterior to it, that is, nothing less than the suspension of the juridical order itself’ (2005: 33). Agamben goes on to sum up the ways in which Schmitt, across both his earlier works, Dictatorship (1921) and Political Theology, identifies mechanisms which he posits legitimise the state of exception; Agamben opposes these attempts to re-inscribe the state of exception within the remit and bounds of the law (2005: 50).
For Agamben, ‘the state of exception is neither internal nor external to the juridical order, and the problem of defining it concerns precisely a threshold…where inside and outside do not exclude each other but rather blur with one another’ (2005: 23). Agamben goes on to further refine what this ‘ambiguous zone’ (2005: 2) might involve, stating that ‘the state of exception is a space devoid of law, a ‘zone of anomie,’’ (2005: 50) where the usual moral and social norms are lacking. Earlier on in his argument he illustrates the legal impact on individuals subject to this state of exception, citing the extreme example of Guantanamo and Nazi concentration camps. He argues that ‘the “military order” issued by the President of the United States on November 13, 2001’ went further than the Patriots Act because it ‘radically erases any legal status of the individual, thus producing a legally unnameable and unclassifiable being’ (2005: 3). He finds that the ‘only thing to which it could possibly be compared is the legal situation of the Jews…who, along with their citizenship, had lost every legal identity’ (2005: 4). Even when writing about the modern day, Agamben’s deep seated engagement with the first half of the twentieth century is apparent. He goes on to include the English legal system in his ‘brief history of the state of exception’ (2005: 13):
World War One played a decisive role in the generalization of exceptional executive [governamentali] apparatuses in England as well. Indeed, immediately after war was declared, the government asked parliament to approve a series of emergency measures that had been prepared by the relevant ministers, and they were passed virtually without discussion. The most important of these acts was the Defence of the Realm Act of August 4, 1914, known as DORA, which not only granted the government quite vast powers to regulate the wartime economy, but also provided for serious limitations on the fundamental rights of the citizens (in particular, granting military tribunals jurisdiction over civilians) (2005: 19).
This example further illustrates the significance of Agamben’s insistence that the state of exception cannot be treated as linked to a legal, juridical order. The passing of DORA involved a legal process, which served to suspend the normal functioning of the juridical system and rights. The ambiguity created is also clear – it is not simply that all law has been suspended, yet, nor can it be said that the law is still operating fairly. Leland puts it well in his comprehensive Critical Introduction to Agamben, when he writes that for Agamben ‘the state of exception is the point at which the law provides for its own suspension; it is the legal suspension of the distinction between legality and illegality’ (2009: 338).
Bowen, from writing ‘The Demon Lover’ in 1941 to ‘Green Holly’ in 1945 and on to The Heat of the Day in 1949, increasingly depicts individuals as citizens who are living in a world of legal ambiguity. This was the case, from the beginning of World War II through to the post-war world; that Agamben’s analysis of the government’s powers in 1914 is directly applicable to 1939 and beyond is made clear in an article published in the Daily Telegraph, dated September 8th, 1939. The journalist writes of the ‘new laws that affect every citizen,’ opening by stating that ‘during the last few days as many regulations have become law as were enacted in the first year of 1914–18’ (Stannard, 1939). He goes on to list some of the highlights, giving government agencies’ extensive powers over land and persons, including ‘drastic precautions for the security of the State.’ Agamben was certainly not wrong when he wrote that after World War I ‘the principle of the state of exception had been firmly introduced into English law’ (2005: 19). It could be objected that the government did so out of necessity, to defend its citizens from a nation committing worse breaches of laws and rights. Yet this does not change the legal stance taken up, and the actual historical events make such clear-cut distinctions between nations far less comfortable. Although in ‘1939 it was not the original intention of the British authorities to embark on a policy of mass alien internment as had been carried out in the First World War…by mid-1940 the situation had changed’ (Brinson, 2008: 288). As a result, ‘by the summer of 1940, around 25,000 men and perhaps 4,000 women found themselves in internment camps on the Isle of Man and elsewhere.’ (Brinson, 2008: 288) Although some were ‘pro-Nazi’ sympathisers, ‘the vast majority’ were ‘refugees from Nazi oppression’ (Brinson, 2008: 288). Of course, this is not to suggest that there is any parity between such internment camps and the Nazi concentration camps. Nonetheless, it does point to the ambiguity of claims by the British government to be acting wholly differently to her enemies. This is an ambiguity Bowen eventually expressed in her doubling up of the two Roberts in The Heat of the Day, with one being a British spy and the other a Nazi sympathiser.
The difficulty that surrounds attempts to define what is licit or illicit behaviour in the state of exception, from another angle, is also a question of violence. This is a question that Agamben considers through what reads as an exciting discovery of the supposed ‘covert engagement’ beyond the ‘known – scanty – relations’ between Walter Benjamin and Schmitt, on the legal status of political violence (de la Durantaye, 2009: 342). Agamben argues that Schmitt’s attempts to claim the state of exception as part of the legal system look to ‘re-inscribe violence within a juridical context’ (2005: 59). And it is this move to legitimise all forms of violence, for use by the state, that Agamben notes Benjamin’s position counters, but in a potentially unexpected way. Instead of arguing that some violence is legal and some is not, Benjamin posits the category of ‘pure violence,’ by which he means violence that has its total ‘existence outside of the law’ (2005: 59). This beyond-law violence could sound very alarming, and almost exactly like the kind of state of exception abuses Agamben appears to be arguing against. Arguably the situation is not helped by the fact that neither Benjamin, subsequent commentators, nor Agamben provides a concrete illustration of what this violence might look like. However, this is perhaps because its significance lies in its power as a theory, and not in practice, in that it looks to theorise a violence beyond the reach of the law, while the law wants to claim it back. Or, as de la Durantaye puts it:
For Agamben, Benjamin is not referring to actual acts of physical violence that he wishes to isolate, glorify, or purify but is instead playing a conceptual game with theorist[s] of the state who instrumentalize the use of violence. His surprising recourse to the term is, for Agamben, a subtle and unexpected move that allows him to surprise…his conservative opponents. (2009: 344)
Having introduced the conceptual legal framework for this paper, we can now consider Bowen’s war-time stories ‘The Demon Lover’ and ‘Green Holly’ alongside Freud’s 1915 paper.
‘The Demon Lover,’ written in late 1941, tells of Mrs Drover’s return to her London house to pick up a few things to take back to her family, who are safely living in the countryside. Bowen herself stayed in London throughout the War, working as an Air Raid warden, including during the extended bombing lasting from September 1940 to May 1941. The story is set in the following August and London is eerily quiet and depopulated, as ‘no human eye watched Mrs Drover’s return’ and ‘her once familiar street’ has ‘an unfamiliar queerness’ (1980a: 742). The uncanniness carries a subtle sense of threat or paranoia, as on opening her house, ‘Dead air came out to meet her’ (1980a: 743). On arriving, there is a letter on the hall table addressed to her. Bowen, in a passage reminiscent of Henry James, gives another turn of the screw to the disturbing atmosphere, as Mrs Drover tries to logically explain how the letter came to be there, initially thinking that ‘the caretaker must be back,’ only to acknowledge that he did ‘not know she was due in London today’ (1980a: 744). She assumes he has been negligent in sending it on to her, is briefly annoyed, and picks up the letter without looking at it. The narrator informs us that ‘Her reluctance to look again at the letter came from the fact that she felt intruded upon – and by someone contemptuous of her ways’ (1980a: 744). Writing of Bowen’s later novel, The Heat of the Day, Janice Ho observes that ‘individual behaviour was increasingly subject to regulation by the wartime security state’ (2015: 105). In a similar vein, while concern with the state’s expansion is also evident in this earlier short story, it is more apparent in ‘Green Holly,’ written in 1944, by which time the impact and effect of state intrusion on daily life was felt more intensely (Bowen, 1980b: 811).
In her article published in Vogue in 1945, ‘Opening up the House,’ Bowen writes about people ‘going home’, focusing particularly on, ‘Houses that had to be left when war came, and which were thereupon occupied by unknown people,’ describing them as confronting, ‘their returning owners with their own complex mystery’ (2008: 132). Bowen interprets the changes to the houses themselves as subtle, but lasting:
Those unnumbered human beings who came and went – kept it in motion by the clockwork of wartime… – have left something behind them, something that will not evaporate so quickly as the smell of unfamiliar cigarettes. These now departed dwellers in one’s house cannot fail to be seen as either enigmas or enemies; one must try to dwell on them as enigmas (2008: 133).
Despite her advice, the unsettling sense of the wartime occupants of the house as the enemy remains, undermining simple distinctions between the British state and her opponents, while also problematising the legality of governmental action during wartime. The equivalents of such supposed enemy dwellers in ‘Green Holly’, who are to be treated as an ‘enigma’ (ironically, given they are supposed to be doing intelligence work), also suffer from increasingly attenuated identity and rights, to the point of becoming ghostly. And indeed, in her magazine article, Bowen refers to the ‘ghostly indentations of someone’s doodling’ which can be ‘found on the left-behind telephone pad’ (Hepburn, 2008: 133). The expansion of the state into private lives and property leaves a haunting, ghostly, mysterious air in its wake: an exceptional state, more fully explored in Bowen’s short stories than in this article aimed at readers of Vogue, which, perhaps unsurprisingly, looks to end on a more hopeful note, with a returning occupant declaring: ‘“I felt for a moment, just now, as though I had never been away!”’ (2008: 135). The tone of the endings of the two short stories will, however, prove to be far from straightforward.
Although Bowen’s engagement with the nation’s use of the state of exception to intrude into private property and private lives is explicit in her writings towards the end of the war, it is still developed in ‘The Demon Lover,’ beyond the opening air of paranoia. More subtly, in the earlier short story, the narrative reveals through a flashback to August 1916, that the letter which has mysteriously appeared on the hallway table is from a soldier who was Mrs Drover’s fiancé. Less explicitly than a government agent on a mission, he necessarily carries with him the shadowy presence of the nation state, and, to use Agamben’s term, the expansion of the state of exception. Their last meeting is almost, but not quite, the clichéd romantic parting of lovers in wartime; saying good-bye in the dark she ‘had not ever completely seen his face’ so that ‘from not seeing him at this intense moment’ she felt ‘as though she had never seen him at all’ (1980a: 745). In the same instant ‘she verified his presence…by putting out a hand, which he each time pressed, without very much kindness, and painfully, on to one of the breast buttons of his uniform. That cut of the button on the palm of her hand was, principally, what she was to carry away.’ (1980a: 745) It is as if she has been branded by the official imprint of the buttons of his uniform, so that years later, ‘she instinctively looked for the weal left by the button on the palm of her hand’ (1980a: 748). There is something disturbing about this act of intimacy, which is painful, unwanted and imposed. As in other literature from this period, the violence of World War II is experienced as a return of the violence from the First.5 For Mrs Drover, the repressed memory of that farewell meeting with the soldier contains a physical and psychic trauma so powerful that she expects to see it embodied on her hand.
Freud’s essay is illuminating here, particularly his argument that the ‘sense of disillusionment’ caused by World War I was in part because of the ‘low morality shown externally by states which in their internal relations pose as the guardians of moral standards’ (1985: 67). Freud identifies a central hypocrisy in the behaviour of nations, which especially came to light during the exceptional state of being at war:
Within each of these nations high norms of moral conduct were laid down for the individual, to which his manner of life was bound to conform if he desires to take part in a civilized community. These ordinances…demanded a great deal of him – much self-restraint, much renunciation of instinctual satisfaction (1985: 63).
As such, ‘It was to be assumed, therefore, that the state itself would respect them, and would not think of undertaking anything against them which would contradict the basis of its own existence’ (1985: 63). Whereas of course, wartime reveals that this is exactly what the state does; it ‘permits itself every such misdeed, every such act of violence, as would disgrace the individual’ (1985: 66). The revelation of this extraordinary, paradoxical behaviour creates, for Freud, the potential for intense psychological pressure on citizens. This is a point echoed by Janice Ho, who argues that the ‘“psychological impact”’ of wartime legislation in Britain:
‘“was considerable” as Britons were faced with the paradoxical championing of a liberal form of citizenship in which individual rights and liberties were ostensibly sacrosanct and the concurrent subordination of “[e]very private interest” for the national community’ (2015: 85–86).
This is taken to an extreme in ‘Green Holly,’ where the occupants of Mopsam Grange have lost almost every individual liberty. Nonetheless, their situation is rather mocked, and trivialised, compared to what we see enacted in ‘The Demon Lover,’ as the legitimised force that the soldier is permitted to perform against people of other nations is directed not just at a fellow citizen, but against his lover. In a move that Agamben would arguably approve of, Bowen destabilises the boundaries between legal and non-legal violence, suggesting the cruelty of the soldier and state, and depicting the concurrent impact on Mrs Drover’s psyche.
However, it is not just the violence itself, but also the accompanying demand that the soldier makes of Kathleen, which contributes to this instability; the wonderful ambiguity of Bowen’s prose demands that it be quoted at length.
‘You’re going such a long way.’
‘Not so far as you think.’
‘I don’t understand.’
‘You don’t have to,’ he said. ‘You will. You know what we said.’
‘But that was – suppose you – I mean, suppose.’
‘I shall be with you,’ he said, ‘sooner or later. You won’t forget that. You need do nothing but wait.’
Only a little more than a minute later she was free to run up the silent lawn…she already felt that unnatural promise drive down between her and the rest of all human kind… She could not have plighted a more sinister troth (1980a: 746).
The soldier’s demand, that she wait even if he is killed – or she hears that he is killed – requires the subordination of her private interests to the potential demands of his role as a soldier. She can hardly assert her right to be freed of the obligation if he dies, while he is ostensibly fighting to protect her. In some ways, it is a question that he should never have asked; it is an ‘unnatural promise.’ Yet, it could be argued, it is the kind of sentimental, romantic promise that abounds: particularly in wartime, or indeed, in literature. In fact, as Neil Corcoran reveals in his chapter on Bowen’s wartime short stories, the promise has its origins in the title of the story: ‘The Demon Lover’ is the name of a Scottish ballad sometimes also called ‘The Carpenter’s Wife’ (2004: 4). There are of course variants, but the main elements of the original narrative are worth comparing to Bowen’s story. In it, a woman who was betrothed to a sailor, who she thought had drowned at sea, is now married to another man (usually a carpenter) and has a family. The story consists of a spirit or devil in the form of the former fiancé returning to lure the woman away with the promises of riches, only for her to realise she has been tricked, and then she either dies or descends to hell. The woman is condemned for her unfaithfulness – for breaking her promise to wait for the return of her betrothed, even though he has died. This traditional tale would suggest that there is nothing unusual, or untoward, in the promise extracted from Kathleen.
However, there are a few crucial distinctions. As discussed in further detail later in this article, Mrs Drover does not choose to flee with her former lover, but is trying to escape his threatened return. The overall effect of Bowen’s story is not to morally judge the woman’s behaviour. In fact, the narrative voice instead points out that the soldier’s demand is socially isolating and aberrant. Although Bowen’s choice of the soldier can be likened to the sailor, not least in the likelihood of them dying, the authority with which he speaks is different. The soldier is depicted as exploiting the wartime situation to make a demand that extends beyond the grave. He does so through an oath – a form that should epitomise legal, normative bonds between people, but Bowen has located Kathleen in a twilight world, where she seems unable to avail herself of whatever socio-legal conventions underpin courtship (not that such conventions are at all straightforward). While the ballad of ‘The Demon Lover’ provides a framework for the genre expectations of the story, it is far more difficult to establish what standard practices were between men and women in the earlier half of the twentieth-century. So much so that, ‘despite the difficulties historians face in accessing intimate physical experiences, we seem to know far more about English sexual lives than about how men and women contracted, negotiated, and maintained emotional intimacies prior to marriage’ (Langhamer, 2007: 174). Nonetheless, we do know there was an expectation that ‘the war widow…was expected to remain faithful to her fallen husband’ (Bette, 2015), and a ‘war widow who did not behave in an exemplary manner risked reprobation’ (Lomas, 2000: 137), as well as loss of her widow’s pension. The ambiguity of Kathleen’s position is heightened because she was not married to the soldier; if she were, one aspect of the state’s expectation would have been for her to remain faithful to him. In fact, an extensive system of surveillance was established by the government to ensure that widows’ behaviour was of the appropriate moral standard (Lomas, 2000: 131–132). However, as Kathleen would not qualify for such investigation, it is as if the soldier has taken it upon himself to carry out the state surveillance, writing in the letter, ‘I was sorry to see you leave London, but was satisfied that you would be back in time’ (Bowen, 1980a: 744). Her anxiety at remarrying was such that, on living her married life as Mrs Drover, we are told she had to actively dismiss ‘any idea that [her actions] were still watched’ (1980a: 746). Nonetheless, it could be argued that the soldier’s expectations do not differ significantly from societal expectations. However, a radio play that Bowen wrote after World War II demonstrates her appreciation of the equivocal legal status occupied by women like Kathleen.
In the play, ‘A Year I Remember – 1918,’ aired on 10th March, 1949, the narrator recalls working in a hospital in Ireland, set up in a ‘gimcrack house in the country, overlooking a river’ for ‘men wounded…in the mind. Shell-shock cases’ (Bowen, 2010: 66). The piece involves broadcasts and recordings from the earlier period, along with snatches of conversations between the nurses. A gramophone is heard ‘playing “Widows are Wonderful”’ which is followed by ‘the subsequent conversation’:
FIRST YOUNG GIRL’S VOICE:
Wonder if they are wonderful.
SECOND YOUNG GIRL:
Who?
FIRST YOUNG GIRL:
Widows.
SECOND YOUNG GIRL:
Oh. (Pause) Someone said, that’s what we are.
FIRST YOUNG GIRL:
What, wonderful?
SECOND YOUNG GIRL:
No, widows. Without being wives. (Pause) There may not be anyone left for us (Bowen, 2010: 67).
The combination of wartime loss of life and governmental policy leaves the women in a social and legal no-man’s land; this is in effect where Kathleen becomes located, or dislocated to, as the ‘unnatural promise’ he extracts, which drives ‘down between her and the rest of all human kind’, asks her to behave like a widow without being married (Bowen, 1980a: 746). The extremity she was in causes her, years later, to ask herself, ‘What did he do to make me promise like that?’ claiming that she ‘can’t remember’ only for the narrator to add, ‘But she found that she could’ (Bowen, 1980a: 748). Yet she avoids revealing why, instead describing how well: ‘She remembered not only all that he said and did but the complete suspension of her existence during that August week’ (Bowen, 1980a: 748). The equivocal legal status of her unnerving betrothal left her in a state of psychological limbo, even after he ‘was reported missing, presumed killed’ (Bowen, 1980a: 748). She is not that upset, in fact, ‘her trouble, behind just a little grief, was a complete dislocation from everything’ (1980a: 746). In remaining subject to the socially aberrant promise, she occupies a liminal state where she is neither betrothed nor not betrothed. Kathleen, and to an extent the women in the play, have been placed in Agamben’s zone of anomie, created during the wartime state of exception, and it leaves her in a dislocated, exceptional state. This state is suffered as trauma in ‘The Demon Lover,’ and it is perhaps not a coincidence that the girls’ conversation in the play is interrupted by the arrival of Sergeant Rose, with whom there is something the matter, so they ask him, ‘You been seeing the ghost?’ (Bowen, 2010: 67). Alongside their legally liminal status, as neither widows nor not-widows, it is ‘said the house was haunted’ (2010: 67).
In Bowen, the ambiguous presence of ghosts becomes one of the extreme symptoms of the psychic stress experienced by civilians during wartime. In ‘The Demon Lover,’ Bowen delays the revelation that the soldier was presumed dead, allowing the uncanny to develop into a suggestion of ‘the supernatural side of the letter’s entrance’ (1980a: 746). Kathleen refuses to dwell on this, ruminating that ‘As things were – dead or living the letter-writer sent her only a threat’ (Bowen, 1980a: 746). She is haunted by the spectre of her former lover, whether he is a ghost or not, so that she hopes ‘she had imagined the letter’ (Bowen, 1980a: 746). By having her character, almost playfully, consciously rule out the hallucinatory, Bowen keeps open the possibility that Mrs Drover is sufficiently in charge of her mind for the demon’s appearance at the end of the story to be real. Indeed, this is how Lassner interprets the story, so that on Mrs Drover’s return, ‘The empty house is now haunted…by the presence of a mysterious letter from her fiancé, who perished in World War I’ (1991: 64). Certainly, the presence of the letter in the house is hard to explain otherwise, although Bryant-Jordan argues the opposite, claiming that ‘the piquancy of memory compels her to imagine that this man has written a letter to her’ (1992: 133). Significantly, Bowen does not allow the reader to be so sure of this, maintaining the ambiguity around the ghost; as Corcoran (2004: 158), in keeping with Ellmann (2002: 176), sums it up, the ‘revenant is one of the ‘missing’ of World War I, ‘presumed dead’ but never actually found—so that his return may, just about, be susceptible to rational explanation.’ This ever-oscillating uncertainty increases the uncanny tension in ‘The Demon Lover,’ especially when compared to ‘Green Holly,’ where Bowen almost seems to be parodying the clichés of ambiguity surrounding the presence of ghosts. This parody runs from the description of the ‘Gothic porch and gables’ (Bowen, 1980b: 812) to the wry narrative comment, ‘And not, you could think, by chance did the electric light choose this moment for one of its brown fade-outs,’ during the apparition of the ghost, so that ‘the scene…faded under this fog-dark but glass-clear veil of hallucination’ (Bowen, 1980b: 818). The ironic tone seems to imply Mr Winterslow’s vision of the ghost at the top of the stairs is a product of psychological strain. However, as will be discussed a little later in the essay, the ghost in ‘Green Holly’ is crucially different from Bowen’s other haunting spectres: some of the story being from her perspective, albeit through free-indirect discourse, lends far greater veracity to her existence.
In ‘The Demon Lover,’ Bowen adds to the central ambiguity by continuing to create an atmosphere ever more psychologically charged:
The desuetude of her former bedroom, her married London home’s whole air of being a cracked cup from which memory…had either evaporated or leaked away, made a crisis – and at just this crisis the letter-writer had…struck (1980a: 747).
And, just as in Auden’s poem ‘As I Walked Out One Evening,’ written in 1937, ‘the crack in the tea-cup opens/A lane to the land of the dead’ (1991: 134). The crisis is created by the war, the state of exception that has returned and brought with it the ghosts of the past. It is here that Freud’s claims about the creation of ghosts as a means of coping with death have resonance (1985: 82). In his paper, he writes of primeval man that, ‘It was beside the dead body of someone he loved that he invented spirits, and his sense of guilt at the satisfaction mingled with his sorrow turned these new-born spirits into evil demons that had to be dreaded.’ That such illusory spirits abound during a time of war is implicit in the psychological pressures that give rise to them. Freud writes that, ‘Just as for primeval man, so also for our unconscious…the two opposing attitudes towards death, the one which acknowledges it as the annihilation of life and the other which denies it as unreal, collide and come into conflict’ (1985: 87). As such, ghosts arise out of conflicting, contradictory drives in mankind: they are borne out of ambivalence (Freud, 1985: 82). The belief in our own immortality conflicts with the reality of death, and during ‘war…[d]eath will no longer be denied; we are forced to believe in it’ (Freud, 1985: 79), while there is a mixture of love, relief and guilt directed towards those that die. Crucially, for modern man, all of this conflict is located in the unconscious, rather than in the presence of literal ghosts.
Without recourse to the primitive coping mechanism of unambiguous ghosts that can be exorcised, London and Mrs Drover both become haunted by the threatened presence of an equivocal ghost. In the ‘dead air’ of her house, surrounded by the empty, watching streets of wartime London, death becomes both undeniable and insupportable (Bowen, 1980a: 743). Here ‘Green Holly’ adds an interesting element to this concern, as it is the ghost itself that experiences the ‘two opposing attitudes towards death’ (Freud, 1985: 87). Her ‘visibleness’ is dependent ‘on having fallen in love again’, this time with the unpromisingly named Mr Winterslow, but ‘because of her years of death, there cut an extreme anxiety: it was not merely a matter of, how was she? but of, was she – tonight – at all?’ (Bowen, 1980b: 815). As Lassner argues, the ghost herself feels existential anxiety, which is not caused by the war, but by her lack of existence (1991: 56). However, Lassner’s argument that ‘The isolation of the group’ of intelligence officers ‘is matched by the acute loneliness of the ghost’ suggests parallels between them (1991: 56). The fact that ‘Death had left [the ghost] to be her own mirror’ causes her to try and verify her existence in being seen by a man, which is what she puts all of her energy into achieving, as ‘She gathered about her, with a gesture not less proud for being tormentedly uncertain, the total of her visibility’ (Bowen, 1980b: 816–817). Yet her attempt fails bathetically, as not only does he just want her to let him past her on the stairs to get his ‘spectacles’ but he also then can’t really see her, asking, ‘Where are you?’ (Bowen, 1980b: 818). In this, the ghost reflects the position of the other women in the house. Earlier in the story, we are told that ‘Miss Bates had been engaged to Mr Winterslow; before that, she had been extremely friendly with Mr Rankstock,’ and that ‘Mr Rankstock’s deviation towards one Carla…had been totally uninteresting to everyone’ (Bowen, 1980b: 812). In their anonymity, as Lassner argues, no matter how ‘significant their war work, the threat of annihilation and the ambiguity of their intelligence work leave them in limbo. The ghost thus signifies the terrifying possibility that, they too, might not exist’ (1991: 56). Ironically, whereas ghosts for Freud’s primitive peoples carried a terrifying reassurance of our immortality, in Bowen they express fear of annihilation.
This threat of non-existence emerges in a more terrifying sense in the climax of ‘The Demon Lover’ as Kathleen takes refuge in a taxi, where:
Through the aperture driver and passenger…remained for an eternity eye to eye. Mrs Drover’s mouth hung open for some seconds before she could issue her first scream. After that she continued to scream freely…as the taxi…made off with her into the hinterland of deserted streets. (1980a: 749)
Coming face-to-face with her fear, whether that is the ghost of her former lover or an hysterical projection, involves being whisked away to join the dead for ‘eternity.’ As in ‘Green Holly,’ parallels between the citizen living through the strain of wartime and the ghost emerge. Notably, the letter left in the house is signed ‘K,’ her own initial, (Bowen, 1980a: 744) and given that she admits that ‘under no conditions could she remember his face,’ it is not clear if she recognises the driver (Bowen, 1980a: 748). The ghostly presences summoned in response to the pressures of wartime are strongly identified with by the living, are persistent, and resist being exorcised. The ending of ‘Green Holly,’ depicts this even more firmly, with Miss Bates revealing that she also saw the ghost of the dead man at the foot of the stairs, which the ghost of the femme fatale had referred to earlier in the story, which leads her to exclaim, ‘But who was she…? – I could be fatal’ (1980b: 819–20).
As we have seen, for Freud, ghosts operate as a primitive way of coping with the belief in our own immortality when faced with death, whereas Bowen’s ghosts reflect, and in some cases create, existential anxiety. Freud distinguishes between the practices of primeval peoples and contemporary European society. Although often guilty of primitivism, Freud actually commends practices of the so-called less civilised as they are connected with coping with violence and death, and in particular, for their practice of atoning ‘for the murders they committed in war by penances’ before they could even ‘set foot in their village’ (1985: 84). This acts as an acknowledgement that violence committed against others, even in war, is still deserving of guilt, something that Freud believes modern humanity does not allow for (Freud, 1985: 84). Interestingly, this atoning does not rid us of the primitive ‘fear of the avenging spirits of the slain’ (Freud, 1985: 84). Yet, in a surprising move, Freud asks at the very end of his essay whether we would be better admitting that ‘in our civilized attitude towards death we are once again living psychologically beyond our means’ and whether it would be better ‘to give a little more prominence to the unconscious attitude towards death which we have hitherto so carefully suppressed’ (Freud, 1985: 89). The unconscious attitude is the same as the primeval viewpoint, suggesting that for Freud, we would be psychologically healthier for acknowledging our guilt, and believing in ghosts.
Not so for modern humans, as rituals associated with mourning the dead, and in particular those who died in war, were actively reduced by the British state in the inter-war period. This began during World War I, where:
Bodies of dead combatants and the funerary rituals associated with their disposal became the property and duty of the state, the Imperial War Graves Commission burying the dead, when conditions allowed, close to the site of battle, in cemeteries that emphasized their commonality with their comrades, rather than their civilian identity (Noakes, 2015: 75).
Of course, this did not lessen the psychological need for the bereaved to process their grief, as is suggested by, ‘The numbers who queued to visit the Tomb of the Unknown Warrior and the Cenotaph in 1920’ (Noakes, 2015: 75). Rather, it is another instance of the expansion of state powers into the private lives of citizens as a result of the state of exception caused by war. The extensive and deliberate nature of such steps is evidenced by the fact that when in the 1930s the ‘Home Office and the Ministry of Health became concerned about the number of corpses aerial warfare was expected to create’ they looked to amend World War I policy, which had been ‘to inscribe the deaths of civilians in the limited air raids of that conflict with sacrificial meaning’ (Noakes, 2015: 77). As a result,
New guidelines were issued, discouraging the use of horses to draw hearses, and putting an end to the tradition of undertakers and mourners walking ahead of the cortege…and the government and local authorities planned for mass civilian casualties by stockpiling cardboard coffins and shrouds (Noakes, 2015: 77).
In addition to the huge numbers of deaths, this state control of mourning practices is in part responsible for the rise in the ‘popularity of spiritualism in interwar Britain,’ as ‘the need of many of the bereaved to make contact with the dead continued’ (Noakes, 2015: 75). This need to be in contact with the dead is palpable in the almost séance-like atmosphere of ‘Green Holly,’ and crucially, of course, Kathleen’s soldier was ‘reported missing, presumed killed,’ and so, other than at the Tomb of the Unknown Warrior, could not be ritually mourned (Bowen, 1980a: 746). It is likely that the officially sanctioned ‘silencing of grief may well have made the process of bereavement, or even witnessing death, harder to bear,’ leading to a proliferation of ghosts, literary and otherwise, refusing to be laid to rest (Noakes, 2015: 83). Additionally, the extension of the state’s jurisdiction over mourning for the mass civilian casualties caused by the total war of the 1940s is felt in Bowen’s characters’ identification with the ghosts in ‘The Demon Lover’ and ‘Green Holly.’
In her ‘Preface’ to The Second Ghost Book Bowen questions why ‘ghosts should today be so ubiquitous’ (1952, quoted in Lassner, 1991: 137). If ‘Tradition connects with the scenes of violence,’ Bowen asks whether this means that ‘any and every place is, has been or may be a scene of violence?’ (Bowen, 1952, quoted in Lassner, 1991: 137). In response to her own question, she acknowledges that, ‘Our interpretation of violence is wider than once it was,’ and includes, ‘Inflictions and endurances, exactions, injustices, infidelities’ (Bowen, 1952, quoted in Lassner, 1991: 137). During World War II, the civilian population was subtly, but increasingly, subject to such inflictions and injustices by the operation of the nation state. The legal philosophy of Agamben and psychoanalysis of Freud demonstrate the fraught nature of the complexities that arose as a consequence for citizens, a complexity that is reflected in Bowen’s stories which provide ‘snapshots taken from close up…in the middle of the mêlée of a battle’ of the exceptional states experienced as a consequence of living through the wartime state of exception (Bowen, 1945).
## Notes
1. Bowen’s writings are often included in critical works on Ireland and Irishness. See, for example, D. Kiberd (1995), Inventing Ireland: The Literature of the Modern Nation, London: Jonathan Cape; D. Hand (2011), A History of the Irish Novel, Cambridge: Cambridge University Press; and N. Pearson (2015), Irish Cosmopolitanism: Location and Dislocation in James Joyce, Elizabeth Bowen, and Samuel Beckett, Florida: University Press of Florida. [^]
2. See also Stonebridge (2011) on human rights and the social contract in Bowen’s post-war writings. [^]
3. See Ellman (2002), Corcoran (2004), Thurston (2012) and Gildersleeve (2014). [^]
4. For example, see Lassner (1991: 10) and Bryant-Jordan (1992: 130). [^]
5. As early as 1930, Waugh’s Vile Bodies culminates in a war that repeats World War I; see also Marina Mackay’s ‘Modernism and the Second World War’ in Modernism, War, and Violence for comprehensive consideration of how ‘literature of the Second World War…explicitly recalls the traumas of the First World War.’ (2017: 105) [^]
## Competing Interests
The author has no competing interests to declare.
## References
Agamben, G 2005 [2003] The State of Exception (trans. K. Attell). Chicago: Chicago University Press. DOI: http://doi.org/10.7208/chicago/9780226009261.001.0001
Auden, W 1991 Collected Poems. London: Faber & Faber.
Bette, P 2015 War Widows. Available at https://encyclopedia.1914-1918-online.net/pdf/1914-1918-Online-war_widows-2015-12-18.pdf [Last accessed 26th October 2019].
Bowen, E 1945 Postscript by the Author. Available at http://www.ricorso.net/rx/library/criticism/classic/Anglo_I/Bowen_E/Demon_L.htm [Last accessed 16th June 2019].
Bowen, E 1980a ‘The Demon Lover,’ In Collected Stories. London: Vintage. pp. 743–749.
Bowen, E 1980b ‘Green Holly,’ In Collected Stories. London: Vintage. pp. 811–820.
Bowen, E 2008 People, Place, Things: Essays by Elizabeth Bowen. Hepburn, A (ed.). Edinburgh: Edinburgh University Press.
Bowen, E 2010 Listening In: Broadcasts, Speeches, and Interviews by Elizabeth Bowen. Edinburgh: Edinburgh University Press.
Brinson, C 2008 ‘‘Please Tell the Bishop of Chichester’: George Bell and the Internment Crisis of 1940.’ Kirchliche Zeitgeschichte, 21(2): 287–299. DOI: http://doi.org/10.13109/kize.2008.21.2.287
Bryant Jordan, H 1992 How Will the Heart Endure: Elizabeth Bowen and the Landscape of War. Michigan: The University of Michigan Press.
Corcoran, N 2004 The Enforced Return. New York: Oxford University Press.
de la Durantaye, L 2009 Giorgio Agamben: A Critical Introduction. California: Stanford University Press.
Ellman, M 2002 The Shadow Across the Page. Edinburgh: Edinburgh University Press.
Freud, S 1985 [1915] ‘Thoughts for the Times on War and Death,’ In Freud, S Civilization, Society and Religion (trans. J. Strachey), Dickson, A (ed.). London: Penguin.
Gildersleeve, J 2014 Elizabeth Bowen and the Writing of Trauma: The Ethics of Survival. New York: Rodopi. DOI: http://doi.org/10.1163/9789401210478
Ho, J 2015 Nation and Citizenship in the Twentieth-Century. Cambridge: Cambridge University Press. DOI: http://doi.org/10.1017/CBO9781316026748
Humphreys, S 2006 ‘Legalizing Lawlessness: On Giorgio Agamben’s State of Exception.’ The European Journal of International Law, 17(3): 677–687. DOI: http://doi.org/10.1093/ejil/chl020
Langhamer, C 2007 ‘Love and Courtship in Mid-Twentieth-Century England.’ The Historical Journal, 50(1): 173–196. DOI: http://doi.org/10.1017/S0018246X06005966
Lassner, P 1991 Elizabeth Bowen: A Study of the Short Fiction. New York: Macmillan Publishing Company.
Lomas, J 2000 ‘‘Delicate duties’: issues of class and respectability in government policy towards the wives and widows of British soldiers in the era of the great war.’ Women’s History Review, 9(1): 123–147. DOI: http://doi.org/10.1080/09612020000200233
Mackay, M 2017 Modernism, War, and Violence. London: Bloomsbury Publishing.
McKibbin, R 1998 Classes and Cultures: England 1918–1951. Oxford: Oxford University Press. DOI: http://doi.org/10.1093/acprof:oso/9780198206729.001.0001
Noakes, L 2015 ‘Gender, Grief, and Bereavement in Second World War Britain.’ Journal of War & Culture Studies, 8(1): 72–85. DOI: http://doi.org/10.1179/1752628014Y.0000000016
Schmitt, C 2005 [1922] Political Theology, Four Chapters on the Concept of Sovereignty, George Schwab (ed.), (trans. G. Schwab). Chicago: Chicago University Press. DOI: http://doi.org/10.7208/chicago/9780226738901.001.0001
Stannard, R 1939 ‘World War 2: New laws that affect every citizen.’ Available at https://www.telegraph.co.uk/news/newstopics/6155078/World-War-2-New-laws-that-affect-every-citizen.html [Last accessed 24th August 2019].
Stonebridge, L 2011 ‘‘Creatures of an Impossible Time’: Late Modernism, Human Right and Elizabeth Bowen.’ The Judicial Imagination. Edinburgh: Edinburgh University Press. pp. 118–140. DOI: http://doi.org/10.3366/edinburgh/9780748642359.003.0005
Thurston, L 2012 ‘Double-Crossing: Elizabeth Bowen.’ Literary Ghosts from the Victorians to Modernism: The Haunting Interval. New York: Routledge. pp. 145–163. DOI: http://doi.org/10.4324/9780203112496
|
# Tools¶
## Convex hull construction¶
class icet.tools.convex_hull.ConvexHull(concentrations, energies)
This class provides functionality for extracting the convex hull of the (free) energy of mixing. It is based on the convex hull calculator in SciPy.
Parameters:
• concentrations (list of floats / list of lists of floats) – concentrations for each structure listed as [[c1, c2], [c1, c2], ...]; for binaries, in which case there is only one independent concentration, the format [c1, c2, c3, ...] works as well
• energies (list of floats) – energy (or energy of mixing) for each structure
concentrations
NumPy array (N, dimensions) – concentrations of the N structures on the convex hull
energies
NumPy array – energies of the N structures on the convex hull
dimensions
int – number of independent concentrations needed to specify a point in concentration space (1 for binaries, 2 for ternaries etc.)
structures
list of int – indices of structures that constitute the convex hull (indices are defined by the order in which the concentrations and energies were fed when initializing the ConvexHull object)
Examples
A ConvexHull object is easily initialized by providing lists of concentrations and energies:
hull = ConvexHull(data['concentration'], data['mixing_energy'])
after which one can for example plot the data (assuming a matplotlib axis object ax):
ax.plot(hull.concentrations, hull.energies)
or extract structures at or close to the convex hull:
low_energy_structures = hull.extract_low_energy_structures(
data['concentration'], data['mixing_energy'],
energy_tolerance=0.005, structures=list_of_structures)
A complete example can be found in the basic tutorial.
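As a minimal, self-contained sketch (the concentration and energy values below are invented purely for illustration, and the import path icet.tools is assumed):

import matplotlib.pyplot as plt
from icet.tools import ConvexHull

# hypothetical mixing energies (eV/atom) for a binary system
concentrations = [0.0, 0.25, 0.5, 0.75, 1.0]
mixing_energies = [0.0, -0.02, -0.05, -0.01, 0.0]

hull = ConvexHull(concentrations, mixing_energies)

# plot all data points together with the convex hull itself
fig, ax = plt.subplots()
ax.scatter(concentrations, mixing_energies, label='structures')
ax.plot(hull.concentrations, hull.energies, '-o', label='convex hull')
ax.set_xlabel('concentration')
ax.set_ylabel('mixing energy (eV/atom)')
ax.legend()
plt.show()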
extract_low_energy_structures(concentrations, energies, energy_tolerance, structures=None)
Returns structures that lie within a certain tolerance of the convex hull.
Parameters:
• concentrations (Union[List[float], List[List[float]]]) – concentrations of candidate structures. If there is one independent concentration, a list of floats is sufficient. Otherwise, the concentrations must be provided as a list of lists, such as [[0.1, 0.2], [0.3, 0.1], ...].
• energies (List[float]) – energies of candidate structures
• energy_tolerance (float) – include structures with an energy that is at most this far from the convex hull
• structures (Optional[list]) – list of candidate structures, e.g., ASE Atoms objects, corresponding to concentrations and energies. The list will be returned, but with the objects too far from the convex hull removed. If None, a list of indices is returned instead.
get_energy_at_convex_hull(target_concentrations)
Returns the energy of the convex hull at specified concentrations. If any concentration is outside the allowed range, NaN is returned.
Parameters:
• target_concentrations (Union[List[float], List[List[float]]]) – concentrations at target points. If there is one independent concentration, a list of floats is sufficient. Otherwise, the concentrations ought to be provided as a list of lists, such as [[0.1, 0.2], [0.3, 0.1], ...].

Return type: ndarray
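For instance, continuing the hypothetical hull constructed above, the hull energy can be queried at a few target concentrations (values again invented for illustration):

target = [0.1, 0.3, 0.6]
hull_energies = hull.get_energy_at_convex_hull(target)
# concentrations outside the allowed range would come back as NaN
for c, e in zip(target, hull_energies):
    print('c = {:.2f}: E_hull = {:.4f} eV/atom'.format(c, e))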
## Mapping structures¶
icet.tools.structure_mapping.map_structure_to_reference(input_structure, reference_structure, tolerance_mapping, vacancy_type=None, inert_species=None, tolerance_cell=0.05, tolerance_positions=0.01)
Maps a relaxed structure onto a reference structure. The function returns a tuple comprising
• the ideal supercell most closely matching the input structure,
• the largest deviation of any input coordinate from its ideal coordinate, and
• the average deviation of the input coordinates from the ideal coordinates.
Parameters:
• input_structure (Atoms) – relaxed input structure
• reference_structure (Atoms) – reference structure, which can but need not represent the primitive cell
• tolerance_mapping (float) – maximum allowed displacement for mapping an atom in the relaxed (but rescaled) structure to the reference supercell. Note: A reasonable choice is up to 20-30% of the first nearest neighbor distance (r1). A value above 50% of r1 will most likely lead to atoms being multiply assigned, whereby the mapping fails.
• vacancy_type (Optional[str]) – If this parameter is set to a non-zero string unassigned sites in the reference structure will be assigned to this type. Note 1: By default (None) the method will fail if there are any unassigned sites in the reference structure. Note 2: vacancy_type must be a valid element type as enforced by the ase.Atoms class.
• inert_species (Optional[List[str]]) – List of chemical symbols (e.g., ['Au', 'Pd']) that are never substituted for a vacancy. Used to make an initial rescale of the cell and thus increases the probability for a successful mapping. Need not be specified if vacancy_type is None.
• tolerance_cell (float) – tolerance factor applied when computing permutation matrix to generate supercell
• tolerance_positions (float) – tolerance factor applied when scanning for overlapping positions in Angstrom (forwarded to ase.build.cut())
Example
The following code snippet illustrates the general usage. It first creates a primitive FCC cell, which is later used as the reference structure. To emulate a relaxed structure obtained from, e.g., a density functional theory calculation, the code then creates a 4x4x4 conventional FCC supercell, which is populated with two different atom types, has distorted cell vectors, and random displacements to the atoms. Finally, the present function is used to map the structure back onto the ideal lattice:
from ase.build import bulk
reference = bulk('Au', a=4.09)
atoms = bulk('Au', cubic=True, a=4.09).repeat(4)
atoms.set_chemical_symbols(10 * ['Ag'] + (len(atoms) - 10) * ['Au'])
atoms.set_cell(atoms.cell * 1.02, scale_atoms=True)
atoms.rattle(0.1)
mapped_atoms = map_structure_to_reference(atoms, reference, 1.0)
Return type: Tuple[Atoms, float, float]
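Since the function returns a tuple, in practice one would typically unpack all three values; a short sketch continuing the example above (the keyword form of the tolerance argument is assumed to be accepted):

ideal_supercell, max_deviation, avg_deviation = map_structure_to_reference(
    atoms, reference, tolerance_mapping=1.0)
# the two deviations give a quick quality measure for the mapping
print('largest deviation: {:.3f} A'.format(max_deviation))
print('average deviation: {:.3f} A'.format(avg_deviation))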
## Structure enumeration¶
icet.tools.structure_enumeration.enumerate_structures(atoms, sizes, species, concentration_restrictions=None, niggli_reduce=None)
Yields a sequence of enumerated structures. The function generates all inequivalent structures that are permissible given a certain lattice. Using the species and concentration_restrictions keyword arguments it is possible to specify which species are to be included on which site and in which concentration range.
The function is sensitive to the boundary conditions of the input structure. An enumeration of, for example, a surface can thus be performed by setting atoms.pbc = [True, True, False].
The algorithm implemented here was developed by Gus L. W. Hart and Rodney W. Forcade in Phys. Rev. B 77, 224115 (2008) [HarFor08] and Phys. Rev. B 80, 014120 (2009) [HarFor09].
Parameters:
• atoms (Atoms) – primitive structure from which derivative superstructures should be generated
• sizes (List[int]) – number of sites (included in the enumeration)
• species (list) – species with which to decorate the structure, e.g., ['Au', 'Ag']; see below for more examples
• concentration_restrictions (Optional[dict]) – allowed concentration range for one or more elements in species, e.g., {'Au': (0, 0.2)} will only enumerate structures in which the Au content is between 0 and 20 %; here, concentration is always defined as the number of atoms of the specified kind divided by the number of all atoms
• niggli_reduce – if True perform a Niggli reduction with spglib for each structure; the default is True if atoms is periodic in all directions, False otherwise
Examples
The following code snippet illustrates how to enumerate structures with up to 6 atoms in the unit cell for a binary alloy without any constraints:
from ase.build import bulk
prim = bulk('Ag')
enumerate_structures(atoms=prim, sizes=range(1, 7),
species=['Ag', 'Au'])
To limit the concentration range to 10 to 40% Au the code should be modified as follows:
enumerate_structures(atoms=prim, sizes=range(1, 7),
species=['Ag', 'Au'],
concentration_restrictions={'Au': (0.1, 0.4)})
Often one would like to consider mixing on only one sublattice. This can be achieved as illustrated for a Ga(1-x)Al(x)As alloy as follows:
prim = bulk('GaAs', crystalstructure='zincblende', a=5.65)
enumerate_structures(atoms=prim, sizes=range(1, 9),
species=[['Ga', 'Al'], ['As']])
Return type: Atoms
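Because the function yields structures one at a time, a common pattern is simply to loop over the generator; a minimal sketch (the import path icet.tools is assumed):

from ase.build import bulk
from icet.tools import enumerate_structures

prim = bulk('Ag')
structures = []
for structure in enumerate_structures(atoms=prim, sizes=range(1, 7),
                                      species=['Ag', 'Au']):
    structures.append(structure)
print('number of enumerated structures:', len(structures))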
icet.tools.structure_enumeration.get_symmetry_operations(atoms, tolerance=0.001)
Returns symmetry operations permissible for a given structure as obtained via spglib. The symmetry operations consist of three parts: rotation, translation and basis shifts. The latter define the way that sublattices shift upon rotation (corresponds to d_Nd in [HarFor09]).
Parameters: atoms (Atoms) – structure for which symmetry operations are sought tolerance (float) – numerical tolerance imposed during symmetry analysis
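A short usage sketch (the exact layout of the returned symmetry data is not specified here, so the snippet simply inspects it):

from ase.build import bulk
from icet.tools.structure_enumeration import get_symmetry_operations

prim = bulk('Ag')
symmetry_operations = get_symmetry_operations(prim, tolerance=0.001)
# inspect the rotations, translations and basis shifts obtained from the symmetry analysis
print(symmetry_operations)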
|
# Homework Help: Conic Equation using a Quadratic Form
1. Aug 23, 2011
### Apple&Orange
1. The problem statement, all variables and given/known data
$x_1^2 + x_1 x_2 + 2x_2^2 = 8$
a) Write the equation using a quadratic form, i.e. $\underline{X}^T A \underline{X} = 8$
b) Find the matrix Q such that the transformation $\underline{X} = Q\underline{Y}$ diagonalises A and reduces the quadratic form to standard form in terms of coordinates $(y_1, y_2)$
2. Relevant equations
$\underline{X} = Q\underline{Y}$
$\underline{X}^T A \underline{X} = 8$
3. The attempt at a solution
For question b), I got the A matrix as [1 1;0 2] or [1 0.5;0.5 2] *sorry, don't know how to use the matrix operator so I've written it matlab style*.
I used the first matrix to give a better looking eigenvalues, which resulted in 2 and 1. From the values, I got a vector of [1;0] and [1;1]
Using the vectors, I got a Q matrix of [1 0; 1/sqrt(2) 1/sqrt(2)]
and using $\underline{X} = Q\underline{Y}$, I got
$2.707y_1^2 + 2.707y_1y_2 + y_2^2$, which I'm not even sure is right.
Could someone please assist me in tackling this question?
Thanks!
2. Aug 24, 2011
### HallsofIvy
Yes, the matrix here is
$$\begin{bmatrix}1 & 0.5 \\ 0.5 & 2\end{bmatrix}$$
The first matrix you give is wrong. In order to be certain of getting real eigenvalues and an orthonormal set of eigenvectors (that is, an orthogonal Q that diagonalises A), you must use the symmetric matrix. You don't want "better looking" eigenvalues, you want the right eigenvalues!
Finally, the problem asked you to "Find the matrix Q such that the transformation X=QY diagonalises A and reduces the quadratic form to standard form in terms of coordinates $(y_1, y_2)$", but your final result is NOT in standard form.
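A minimal NumPy sketch of this computation, using the symmetric matrix and an orthogonal diagonalisation (numerical values rounded):

import numpy as np

# symmetric matrix of the quadratic form x1^2 + x1*x2 + 2*x2^2
A = np.array([[1.0, 0.5],
              [0.5, 2.0]])

# eigh returns eigenvalues in ascending order and orthonormal eigenvectors as columns of Q
eigvals, Q = np.linalg.eigh(A)
print(eigvals)          # approximately [0.793, 2.207]
print(Q.T @ A @ Q)      # diagonal matrix with the eigenvalues

# with X = Q Y the quadratic form reduces to the standard form
# 0.793*y1^2 + 2.207*y2^2 = 8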
3. Aug 25, 2011
### Apple&Orange
Got it! Thanks!
|
## Franklin's Notes
### Singular value decomposition
The singular value decomposition or SVD of an $m\times n$ matrix $A$ is a factorization of the form $A = U\Sigma V^\ast$, where $U,V$ are unitary matrices in $\mathbb{C}^{m\times m}$ and $\mathbb{C}^{n\times n}$ respectively and $\Sigma$ is a diagonal matrix in $\mathbb R^{m\times n}$. Geometrically, the SVD is motivated by the fact that the image of a unit sphere under a matrix transformation is always a hyperellipse. Thus, every matrix transformation can be represented by composing a rotation with a stretching along some orthonormal set of axes. The factors by which these axes are stretched are called the singular values of $A$ and are denoted $\sigma_1,...,\sigma_n$. They are conventionally listed in descending order $\sigma_1\geq ...\geq \sigma_n\geq 0$. The left singular vectors $u_1,...,u_n$ are the direction vectors of the largest principal semiaxes of the image of the unit sphere under $A$, so that $\sigma_i u_i$ is the $i$th largest principal semiaxis of this hyperellipse. The right singular vectors $v_1,...,v_n$ are the preimages of these semiaxes, so that $Av_i = \sigma_i u_i$ for each $i$. These are the columns of the matrices $U$ and $V$ respectively, so that $AV = U\Sigma$, which is equivalent to $A = U\Sigma V^\ast$.
Theorem 1. Every matrix $A\in\mathbb C^{m\times n}$ has an SVD. The singular values $\sigma_i$ are uniquely determined, and if $A$ is a square matrix, then the left and right singular vectors are uniquely determined up to multiplication by a complex unit.
This can be proven by induction on the dimensions of $A$, starting by isolating the largest semiaxis of the image of the unit hypersphere transformed by $A$, and thereby recursively constructing an SVD that depends on the existence of the SVD for a space with one dimension fewer. It happens that if $A$ is a real-valued matrix, it also has an SVD decomposition into real-valued matrices.
The following theorem gives some examples of useful information provided by the SVD:
Theorem 2. For some $A\in\mathbb C^{m\times n}$, let $r$ be the number of nonzero singular values of $A$. Then the following statements hold:
1. The rank of $A$ is equal to $r$.
2. The range of $A$ is $\text{span}\{u_1,...,u_r\}$ and the null space of $A$ is $\text{span}\{v_{r+1},...,v_n\}$.
3. If $A$ is square, then $|\det(A)|=\sigma_1...\sigma_m$.
4. $\lVert A\rVert_2 = \sigma_1$ and $\lVert A\rVert_F = \sqrt{\sigma_1^2 + ... + \sigma_r^2}$.
5. The nonzero singular values of $A$ are the square roots of the nonzero eigenvalues of $A^\ast A$ or $AA^\ast$.
Proof. If $x\in\mathbb C^n$, then we may decompose it in the form $x = \alpha_1 v_1 + \cdots + \alpha_n v_n$, so that we have $Ax = \sigma_1\alpha_1 u_1 + \cdots + \sigma_r\alpha_r u_r$. Thus, we have that the range of $A$ is spanned by $u_1,...,u_r$. From this we can also see that $Ax=0$ iff $\alpha_1=...=\alpha_r=0$, so that the null space of $A$ is spanned by $v_{r+1},...,v_n$, as claimed. Thus, we have claim $(2)$, from which claim $(1)$ follows.
If $A$ is a square matrix, then $A=U\Sigma V^\ast$ where $U,\Sigma,V$ are also square matrices of the same dimension. Recall that unitary matrices have a determinant of unit modulus, meaning that $|\det(A)| = |\det(U)|\,|\det(\Sigma)|\,|\det(V^\ast)| = |\det(\Sigma)|$, and since $\Sigma$ is diagonal with the singular values along its diagonal, we have $|\det(A)| = \sigma_1\cdots\sigma_m$ as claimed.
The equality $\lVert A\rVert_2=\sigma_1$ follows from the definition of the induced matrix 2-norm, since $\sigma_1$ is defined as the length of the longest principal semiaxis of the image of the unit hypersphere under $A$. Since multiplication by unitary matrices does not alter the Frobenius norm, we have that $\lVert A\rVert_\mathrm{F} = \lVert \Sigma\rVert_\mathrm{F}$. The Frobenius norm of a diagonal matrix equals the square root of the sum of the squares of its (nonzero) diagonal entries, so we have $\lVert A\rVert_\mathrm{F} = \sqrt{\sigma_1^2 + ... + \sigma_r^2}$.
If $A=U\Sigma V^\ast$, then $AA^\ast = U\Sigma \Sigma^\ast U^\ast$, which is an eigenvalue decomposition of $AA^\ast$, with eigenvalues given by $\sigma_i^2$ along the diagonal of $\Sigma^2$. Similar reasoning applies for $A^\ast A=V\Sigma^\ast \Sigma V^\ast$, proving claim $(5)$. $\blacksquare$
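For a quick numerical sanity check of several of these claims, one can compare against NumPy's SVD routine; the following minimal sketch (illustrative only, with a randomly generated matrix) checks claims $(1)$, $(4)$ and $(5)$:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
U, s, Vh = np.linalg.svd(A)    # singular values come back in descending order

# claim (1): the rank equals the number of nonzero singular values
print(np.linalg.matrix_rank(A), np.sum(s > 1e-12))

# claim (4): the 2-norm is sigma_1 and the Frobenius norm is sqrt(sigma_1^2 + ... + sigma_r^2)
print(np.linalg.norm(A, 2), s[0])
print(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(s**2)))

# claim (5): the nonzero eigenvalues of A*A are the squares of the singular values
print(np.sort(np.linalg.eigvalsh(A.T @ A))[::-1], s**2)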
Exercise 1. (Trefethen and Bau, 5.4) Suppose $A\in \mathbb C^{m\times m}$ has an SVD $A=U\Sigma V^\ast$. Find an eigenvalue decomposition of the $2m\times 2m$ block hermitian matrix $\begin{bmatrix} 0 & A^\ast \\ A & 0 \end{bmatrix}$.
Solution. We may take $\Lambda = \begin{bmatrix} \Sigma & 0 \\ 0 & -\Sigma \end{bmatrix}$ and $X = \frac{1}{\sqrt{2}}\begin{bmatrix} V & V \\ U & -U \end{bmatrix}$, so that $X$ is unitary and $X\Lambda X^\ast = \begin{bmatrix} 0 & V\Sigma U^\ast \\ U\Sigma V^\ast & 0 \end{bmatrix} = \begin{bmatrix} 0 & A^\ast \\ A & 0 \end{bmatrix}$, as desired.
|
# randomUniformHyperGraph -- returns a random uniform hypergraph
## Synopsis
• Usage:
H = randomUniformHyperGraph(R,c,d)
• Inputs:
• R, a polynomial ring, which gives the vertex set of H
• c, an integer, the cardinality of the edge sets
• d, an integer, the number of edges in H
• Outputs:
• H, a hypergraph with d edges of cardinality c on vertex set determined by R
## Description
This function allows one to create a uniform hypergraph on an underlying vertex set with a given number of randomly chosen edges of given cardinality.
i1 : R = QQ[x_1..x_9];

i2 : randomUniformHyperGraph(R,3,4)

o2 = HyperGraph{edges => {{x_5, x_6, x_9}, {x_2, x_6, x_8}, {x_4, x_7, x_8}, {x_3, x_5, x_7}},
                ring => R,
                vertices => {x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9}}

o2 : HyperGraph

i3 : randomUniformHyperGraph(R,4,2)

o3 = HyperGraph{edges => {{x_2, x_3, x_4, x_9}, {x_2, x_3, x_4, x_8}},
                ring => R,
                vertices => {x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9}}

o3 : HyperGraph
## Ways to use randomUniformHyperGraph :
• "randomUniformHyperGraph(PolynomialRing,ZZ,ZZ)"
## For the programmer
The object randomUniformHyperGraph is a method function.
|
## Stević, Stevo
Author ID: stevic.stevo Published as: Stević, Stevo; Stevic, Stevo; Stević, S. External Links: MGP · Math-Net.Ru · dblp · GND
Documents Indexed: 472 Publications since 1996 Co-Authors: 61 Co-Authors with 179 Joint Publications 2,369 Co-Co-Authors
### Co-Authors
281 single-authored 43 Iričanin, Bratislav D. 36 Li, Songxiao 32 Šmarda, Zdeněk 16 Kosmala, Witold A. J. 14 Berenhaut, Kenneth S. 12 Diblík, Josef 10 Alghamdi, Mohammed Ali 9 Alotaibi, Abdullah M. 9 Shahzad, Naseer 8 Berg, Lothar 7 Foley, John David 7 Karakostas, George L. 7 Ueki, Sei-ichiro 6 Chang, Der-Chen E. 5 Jiang, Zhijie 4 Avetisyan, Karen L. 4 Bohner, Martin J. 4 Elsayed, Elsayed Mohammed 4 Kent, Candace M. 4 Liu, Wanping 3 Yang, Xiaofan 2 Bhat, Ambika 2 Brzdęk, Janusz 2 Çinar, Cengiz 2 Li, Wan-Tong 2 Maturi, Dalal A. 2 Sehba, Benoit Florent 2 Tollu, Durhasan Turgut 2 Yalcinkaya, Ibrahim 1 Agarwal, Ravi P. 1 Ahmed, Ahmed El-Sayed 1 Alghamdi, A. Mohammed 1 Alotaibi, Alotaibi 1 Balibrea, Francisco 1 Chan, David M. 1 Chen, Renyu 1 Clahane, Dana D. 1 Dice, Jennifer E. 1 Fu, Xiaohong 1 Galindo, Pablo 1 Gilbert, Robert Pertsch 1 Goedhart, Eva G. 1 Gutnik, Leonid A. 1 Hatori, Osamu 1 Hu, Linxia 1 Iida, Yasuo 1 Kocic, Vlajko Lj. 1 Krantz, Steven George 1 Krishan, Ram 1 Lindström, Mikael 1 Linero Bas, Antonio 1 Liu, Xinzhi 1 Radin, Michael A. 1 Ranković, Dragana 1 Sharma, Ajay Y. 1 Soler López, Gabriel 1 Su, Youhui 1 Warth, Howard 1 Wolf, Elke 1 Zhang, Guang 1 Zhou, Zehua
### Serials
109 Applied Mathematics and Computation 28 Abstract and Applied Analysis 25 Advances in Difference Equations 24 Journal of Difference Equations and Applications 20 Discrete Dynamics in Nature and Society 16 Journal of Inequalities and Applications 16 Electronic Journal of Qualitative Theory of Differential Equations 13 Mathematical Methods in the Applied Sciences 11 Journal of Computational Analysis and Applications 10 Journal of Mathematical Analysis and Applications 10 Applied Mathematics Letters 9 Taiwanese Journal of Mathematics 8 Ars Combinatoria 8 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 8 Utilitas Mathematica 8 Electronic Journal of Differential Equations (EJDE) 7 Indian Journal of Mathematics 5 Sibirskiĭ Matematicheskiĭ Zhurnal 5 Zeitschrift für Analysis und ihre Anwendungen 5 Journal of Applied Mathematics and Computing 4 Houston Journal of Mathematics 4 Indian Journal of Pure & Applied Mathematics 4 Demonstratio Mathematica 4 Journal of the Mathematical Society of Japan 4 Filomat 4 The ANZIAM Journal 4 Dynamics of Continuous, Discrete & Impulsive Systems. Series A. Mathematical Analysis 4 Symmetry 3 Applicable Analysis 3 Chaos, Solitons and Fractals 3 The Journal of the Indian Mathematical Society. New Series 3 Rostocker Mathematisches Kolloquium 3 Studia Scientiarum Mathematicarum Hungarica 3 Bulletin of the Greek Mathematical Society 3 Complex Variables. Theory and Application 3 Bulletin of the Institute of Mathematics. Academia Sinica 3 Bulletin of the Belgian Mathematical Society - Simon Stevin 3 Integral Transforms and Special Functions 3 Complex Variables and Elliptic Equations 2 Ukraïns’kyĭ Matematychnyĭ Zhurnal 2 The Australian Mathematical Society Gazette 2 International Journal of Mathematics and Mathematical Sciences 2 Journal of the Korean Mathematical Society 2 Mathematische Nachrichten 2 Numerical Functional Analysis and Optimization 2 Proceedings of the American Mathematical Society 2 Aequationes Mathematicae 2 Sbornik: Mathematics 2 Advances in Nonlinear Analysis 1 Computers & Mathematics with Applications 1 Mathematical Notes 1 Acta Scientiarum Mathematicarum 1 Annales Polonici Mathematici 1 Bulletin of the Calcutta Mathematical Society 1 Colloquium Mathematicum 1 Fasciculi Mathematici 1 Functiones et Approximatio. Commentarii Mathematici 1 Gaṇita 1 Glasgow Mathematical Journal 1 Matematički Vesnik 1 Nagoya Mathematical Journal 1 Publicationes Mathematicae 1 Results in Mathematics 1 Siberian Mathematical Journal 1 Yokohama Mathematical Journal 1 Panamerican Mathematical Journal 1 International Journal of Computer Mathematics 1 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 1 Bulletin des Sciences Mathématiques 1 Analele Științifice ale Universității “Ovidius” Constanța. Seria: Matematică 1 Communications on Applied Nonlinear Analysis 1 Mathematical Inequalities & Applications 1 Ultra Scientist of Physical Sciences 1 Annales Mathematicae Silesianae 1 Journal of Nonlinear and Convex Analysis 1 Nonlinear Functional Analysis and Applications 1 Applied Mathematics E-Notes 1 Acta Mathematica Scientia. Series B. (English Edition) 1 Bulletin of the Brazilian Mathematical Society. New Series 1 Mediterranean Journal of Mathematics 1 International Journal of Mathematical Sciences 1 Journal of Concrete and Applicable Mathematics 1 Bulletin of the Institute of Mathematics. Academia Sinica. New Series 1 Journal of Applied Functional Analysis 1 Scientia. Series A: Mathematical Sciences. 
New Series 1 Journal of Mathematical Inequalities 1 Journal of Nonlinear Functional Analysis and Differential Equations 1 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM
### Fields
258 Difference and functional equations (39-XX) 119 Operator theory (47-XX) 71 Several complex variables and analytic spaces (32-XX) 69 Functions of a complex variable (30-XX) 59 Functional analysis (46-XX) 19 Potential theory (31-XX) 18 Ordinary differential equations (34-XX) 8 Real functions (26-XX) 7 Sequences, series, summability (40-XX) 6 Harmonic analysis on Euclidean spaces (42-XX) 6 Integral equations (45-XX) 6 Biology and other natural sciences (92-XX) 4 Number theory (11-XX) 2 Combinatorics (05-XX) 2 Partial differential equations (35-XX) 2 Dynamical systems and ergodic theory (37-XX) 2 Numerical analysis (65-XX) 1 Approximations and expansions (41-XX)
### Citations contained in zbMATH Open
404 Publications have been cited 7,438 times in 1,085 Documents
Generalized composition operators on Zygmund spaces and Bloch type spaces. Zbl 1135.47021
Li, Songxiao; Stević, Stevo
2008
Products of Volterra type operator and composition operator from $$H^\infty$$and Bloch spaces to Zygmund spaces. Zbl 1145.47022
Li, Songxiao; Stević, Stevo
2008
On a new integral-type operator from the Bloch space to Bloch-type spaces on the unit ball. Zbl 1171.47028
Stević, Stevo
2009
Norm of weighted composition operators from Bloch space to $$H_{\mu }^{\infty }$$ on the unit ball. Zbl 1224.30195
Stević, Stevo
2008
Existence of nontrivial solutions of a rational difference equation. Zbl 1131.39009
Stević, Stevo
2007
Weighted differentiation composition operators from $$H^{\infty }$$ and Bloch spaces to $$n$$th weighted-type spaces on the unit disk. Zbl 1195.30073
Stević, Stevo
2010
On the recursive sequence $$x_{n+1} = \max \left\{ c, \frac {x_n^p}{x_{n-1}^p} \right\}$$. Zbl 1152.39012
Stević, Stevo
2008
On positive solutions of a ($$k+1$$)th order difference equation. Zbl 1095.39010
Stević, Stevo
2006
On some solvable systems of difference equations. Zbl 1253.39011
Stević, Stevo
2012
Global stability and asymptotics of some classes of rational difference equations. Zbl 1090.39009
Stević, Stevo
2006
Weighted composition operators from Zygmund spaces into Bloch spaces. Zbl 1215.47022
Li, Songxiao; Stević, Stevo
2008
On an integral operator from the Zygmund space to the Bloch-type space on the unit ball. Zbl 1176.47029
Stević, Stevo
2009
On a system of difference equations. Zbl 1242.39017
Stević, Stevo
2011
On an integral operator on the unit ball in $$\mathbb{C}^n$$. Zbl 1074.47013
Stević, Stevo
2005
Norm and essential norm of composition followed by differentiation from $$\alpha$$-Bloch spaces to $$H_\mu^\infty$$. Zbl 1157.47026
Stević, Stevo
2009
On a nonlinear generalized max-type difference equation. Zbl 1208.39014
Stević, Stevo
2011
On some systems of difference equations. Zbl 1243.39009
Berg, Lothar; Stević, Stevo
2011
Products of integral-type operators and composition operators between Bloch-type spaces. Zbl 1155.47036
Li, Songxiao; Stević, Stevo
2009
Periodicity of a class of nonautonomous max-type difference equations. Zbl 1225.39018
Stević, Stevo
2011
Eventually constant solutions of a rational difference equation. Zbl 1178.39012
Iričanin, Bratislav; Stević, Stevo
2009
Composition followed by differentiation between Bloch type spaces. Zbl 1132.47026
Li, Songxiao; Stević, Stevo
2007
A short proof of the Cushing-Henson conjecture. Zbl 1149.39300
Stević, Stevo
2006
Global stability of a max-type difference equation. Zbl 1193.39009
Stević, Stevo
2010
On the iterated logarithmic Bloch space on the unit ball. Zbl 1221.47056
Krantz, Steven G.; Stević, Stevo
2009
On the difference equation $$x_{n} = x_{n - 2}/(b_{n} + c_{n}x_{n - 1}x_{n - 2})$$. Zbl 1256.39009
Stević, Stevo
2011
Global stability of a difference equation with maximum. Zbl 1167.39007
Stević, Stevo
2009
Essential norms of weighted composition operators from the $$\alpha$$-Bloch space to a weighted-type space on the unit ball. Zbl 1160.32011
Stević, Stevo
2008
On a new operator from the logarithmic Bloch space to the Bloch-type space on the unit ball. Zbl 1162.47029
Stević, Stevo
2008
Boundedness character of a class of difference equations. Zbl 1162.39011
Stević, Stevo
2009
Riemann-Stieltjes-type integral operators on the unit ball in $$\mathbb C^n$$. Zbl 1124.47022
Li, Songxiao; Stević, Stevo
2007
Products of composition and integral type operators from $$H^{\infty}$$ to the Bloch space. Zbl 1159.47019
Li, Songxiao; Stević, Stevo
2008
Products of composition and differentiation operators from Zygmund spaces to Bloch spaces and Bers spaces. Zbl 1204.30046
Li, Songxiao; Stević, Stevo
2010
On a third-order system of difference equations. Zbl 1243.39011
Stević, Stevo
2012
On an integral-type operator from logarithmic Bloch-type and mixed-norm spaces to Bloch-type spaces. Zbl 1186.47033
Stević, Stevo
2009
Norm equivalence and composition operators between Bloch/Lipschitz spaces of the ball. Zbl 1131.47018
Clahane, Dana D.; Stević, Stevo
2006
On the recursive sequence $$x_{n+1}=\alpha+\frac{x^p_{n-1}}{x_n^p}$$. Zbl 1078.39013
Stević, Stevo
2005
Products of composition and differentiation operators on the weighted Bergman space. Zbl 1181.30031
Stević, Stevo
2009
On a generalized max-type difference equation from automatic control theory. Zbl 1194.39007
Stević, Stevo
2010
On the asymptotics of the difference equation $$y_{n}(1 + y_{n - 1} \cdots y_{n - k + 1}) = y_{n - k}$$. Zbl 1220.39011
Berg, Lothar; Stević, Stevo
2011
Products of multiplication composition and differentiation operators on weighted Bergman spaces. Zbl 1218.30152
Stević, Stevo; Sharma, Ajay K.; Bhat, Ambika
2011
On a new operator from $$H^\infty$$ to the Bloch-type space on the unit ball. Zbl 1175.47034
Stević, Stevo
2008
On the difference equation $$x_{n}=x_{n-k} / (b+cx_{n-q}+\dots{ }+x_{n-k})$$. Zbl 1246.39010
Stević, Stevo
2012
On the recursive sequence $$x_{n+1}=x_{n-1}/g(x_n)$$. Zbl 1019.39010
Stević, Stevo
2002
Composition operators between $$H^\infty$$ and $$a$$-Bloch spaces on the polydisc. Zbl 1118.47015
Stević, Stevo
2006
Asymptotic behavior of a nonlinear difference equation. Zbl 1049.39012
Stević, Stevo
2003
Weighted differentiation composition operators from mixed-norm spaces to weighted-type spaces. Zbl 1165.30029
Stević, Stevo
2009
Weighted composition operators between $$H^\infty$$ and $$\alpha$$-Bloch spaces in the unit ball. Zbl 1177.47032
Li, Songxiao; Stević, Stevo
2008
Composition followed by differentiation from mixed-norm spaces to $$\alpha$$-Bloch spaces. Zbl 1169.47025
Li, Songxiao; Stević, S.
2008
Some systems of nonlinear difference equations of higher order with periodic solutions. Zbl 1098.39003
Iričanin, Bratislav; Stević, Stevo
2006
More on a rational recurrence relation. Zbl 1069.39024
Stević, Stevo
2004
Composition followed by differentiation between $$H^\infty$$ and $$\alpha$$-Bloch spaces. Zbl 1166.47034
Li, Songxiao; Stević, Stevo
2009
Asymptotic behavior of a sequence defined by iteration with applications. Zbl 1029.39006
Stević, Stevo
2002
Asymptotics of some classes of higher-order difference equations. Zbl 1180.39009
Stević, Stevo
2007
Norms of some operators from Bergman spaces to weighted and Bloch-type spaces. Zbl 1160.47027
Stević, Stevo
2008
Weighted composition operators from $$\alpha$$-Bloch space to $$H^{\infty}$$ on the polydisc. Zbl 1130.47015
Li, Songxiao; Stević, Stevo
2007
The behaviour of the positive solutions of the difference equation $$x_n = A + (\frac{x_{n-2}}{x_{n-1}})^p$$. Zbl 1111.39003
Berenhaut, Kenneth S.; Stević, Stevo
2006
On an integral operator between Bloch-type spaces on the unit ball. Zbl 1189.47032
Stević, Stevo
2010
Solutions of a max-type system of difference equations. Zbl 1252.39009
Stević, Stevo
2012
Weighted composition operators from Bergman-type spaces into Bloch spaces. Zbl 1130.47016
Li, Songxiao; Stević, Stevo
2007
Generalized composition operators from logarithmic Bloch spaces to mixed-norm spaces. Zbl 1175.47033
Stević, Stevo
2008
On some solvable difference equations and systems of difference equations. Zbl 1253.39001
Stević, Stevo; Diblík, Josef; Iričanin, Bratislav; Šmarda, Zdeněk
2012
On operator $$P_\varphi^g$$ from the logarithmic Bloch-type space to the mixed-norm space on the unit ball. Zbl 1205.45014
Stević, Stevo
2010
The global attractivity of the rational difference equation $$y_{n}=1+\frac{y_{n-k}}{y_{n-m}}$$. Zbl 1109.39004
Berenhaut, Kenneth S.; Foley, John D.; Stevic, Stevo
2007
Periodicity of some classes of holomorphic difference equations. Zbl 1103.39004
Berg, Lothar; Stević, Stevo
2006
Boundedness character of positive solutions of a max difference equation. Zbl 1116.39001
Berenhaut, Kenneth S.; Foley, John D.; Stević, Stevo
2006
Boundedness and compactness of an integral operator on a weighted space on the polydisc. Zbl 1121.47032
Stević, Stevo
2006
Composition followed by differentiation from $$H^{\infty }$$ and the Bloch space to $$n$$th weighted-type spaces on the unit disk. Zbl 1195.30070
Stević, Stevo
2010
On some periodic systems of max-type difference equations. Zbl 1280.39012
Stević, Stevo
2012
Periodicity of max difference equations. Zbl 1236.39018
Stević, Stevo
2010
Weighted differentiation composition operators from the mixed-norm space to the $$n$$th weigthed-type space on the unit disk. Zbl 1198.30014
Stević, Stevo
2010
On a new integral-type operator from the weighted Bergman space to the Bloch-type space on the unit ball. Zbl 1155.32002
Stević, Stevo
2008
Integral type operators from mixed-norm spaces to $$\alpha$$-Bloch spaces. Zbl 1131.47031
Li, Songxiao; Stević, Stevo
2007
Weighted composition operators from weighted Bergman spaces to weighted-type spaces on the unit ball. Zbl 1186.47020
Stević, Stevo
2009
Volterra-type operators on Zygmund spaces. Zbl 1146.30303
Li, Songxiao; Stević, Stevo
2007
On positive solutions of a reciprocal difference equation with minimum. Zbl 1074.39002
Çinar, Cengiz; Stević, Stevo; Yalçinkaya, Ibrahim
2005
A global convergence result with applications to periodic solutions. Zbl 1002.39004
Stević, Stevo
2002
Essential norm of products of multiplication composition and differentiation operators on weighted Bergman spaces. Zbl 1244.30080
Stević, Stevo; Sharma, Ajay K.; Bhat, Ambika
2011
Compactness of Riemann–Stieltjes operators between $$F(p,q,s)$$ spaces and $$\alpha$$-Bloch spaces. Zbl 1164.47040
Li, Songxiao; Stević, Stevo
2008
Integral-type operators acting between weighted-type spaces on the unit ball. Zbl 1197.47061
Stević, Stevo; Ueki, Sei-Ichiro
2009
The global attractivity of the rational difference equation $$y_n = \frac{y_{n-k}+y_{n-m}}{1+y_{n-k}y_{n-m}}$$. Zbl 1131.39006
Berenhaut, Kenneth S.; Foley, John D.; Stević, Stevo
2007
Weighted composition operators between mixed norm spaces and $$H_{\alpha }^{\infty }$$ spaces in the unit ball. Zbl 1138.47019
Stević, Stevo
2007
On some integral operators on the unit polydisk and the unit ball. Zbl 1149.47026
Chang, Der-Chen; Li, Songxiao; Stević, Stevo
2007
Representation of solutions of bilinear difference equations in terms of generalized Fibonacci sequences. Zbl 1324.39004
Stević, Stevo
2014
Behavior of the positive solutions of the generalized Beddington-Holt equation. Zbl 1039.39005
Stević, Stevo
2000
Nontrivial solutions of a higher-order rational difference equation. Zbl 1219.39007
Stević, S.
2008
Riemann–Stieltjes operators on Hardy spaces in the unit ball of $$\mathbb C^ n$$. Zbl 1136.47023
Li, Songxiao; Stević, Stevo
2007
Weighted composition operators from $$H^{\infty }$$ to the Bloch space on the polydisc. Zbl 1152.47016
Li, Songxiao; Stević, Stevo
2007
Integral-type operators from Bloch-type spaces to Zygmund-type spaces. Zbl 1179.45022
Li, Songxiao; Stević, Stevo
2009
On the recursive sequence $$x_{n+1}=\alpha_n+\frac{x_{n-1}}{x_n}$$. II. Zbl 1051.39012
Stević, Stevo
2003
Riemann–Stieltjes operators between different weighted Bergman spaces. Zbl 1169.47026
Li, Songxiao; Stević, Stevo
2008
Cesàro-type operators on some spaces of analytic functions on the unit ball. Zbl 1166.45009
Li, Songxiao; Stević, Stevo
2009
On the difference equation $$X_{n+1} = \alpha + \frac{x_{n-1}}{x_n}$$. Zbl 1155.39305
Stević, Stevo
2008
Boundedness and compactness of an integral operator in a mixed norm space on the polydisk. Zbl 1164.47331
Stević, Stevo
2007
Norm of weighted composition operators from $$\alpha$$-Bloch spaces to weighted-type spaces. Zbl 1181.32011
Stević, Stevo
2009
On the system of difference equations $$x_n=c_ny_{n-3}/(a_n+b_ny_{n-1}x_{n-2}y_{n-3})$$,$$y_n=\gamma_nx_{n-3}/(\alpha_n+\beta_nx_{n-1}y_{n-2}x_{n-3})$$. Zbl 1386.39027
Stević, Stevo
2013
On a system of difference equations with period two coefficients. Zbl 1256.39008
Stević, Stevo
2011
On some product-type operators from Hardy-Orlicz and Bergman-Orlicz spaces to weighted-type spaces. Zbl 1334.42052
Sehba, Benoît; Stević, Stevo
2014
On a product-type system of difference equations of second order solvable in closed form. Zbl 1333.39006
Stević, Stevo; Iričanin, Bratislav; Šmarda, Zdeněk
2015
The global attractivity of a higher order rational difference equation. Zbl 1112.39002
Berenhaut, Kenneth S.; Stević, Stevo
2007
A note on periodic character of a difference equation. Zbl 1057.39005
Stević, Stevo
2004
Weighted iterated radial composition operators from weighted Bergman-Orlicz spaces to weighted-type spaces on the unit ball. Zbl 07377115
Stević, Stevo; Jiang, Zhi-Jie
2021
Note on a difference equation and some of its relatives. Zbl 1473.39005
Stević, Stevo; Ahmed, Ahmed El-Sayed; Kosmala, Witold; Šmarda, Zdeněk
2021
New class of practically solvable systems of difference equations of hyperbolic-cotangent-type. Zbl 1474.39007
Stevic, Stevo
2020
On a symmetric bilinear system of difference equations. Zbl 1409.39002
Stević, Stevo; Iričanin, Bratislav; Šmarda, Zdeněk
2019
Solvability of a product-type system of difference equations with six parameters. Zbl 1426.39008
Stević, Stevo
2019
More on a hyperbolic-cotangent class of difference equations. Zbl 1416.39009
Stević, Stevo; Iričanin, Bratislav; Kosmala, Witold
2019
Solvability and semi-cycle analysis of a class of nonlinear systems of difference equations. Zbl 1418.39008
Stević, Stevo; Tollu, Durhasan T.
2019
Solvability of eight classes of nonlinear systems of difference equations. Zbl 1423.39018
Stević, Stevo; Tollu, Durhasan T.
2019
Solving a class of nonautonomous difference equations by generalized invariants. Zbl 1430.39002
Stević, Stevo
2019
Solvability of a one-parameter class of nonlinear second-order difference equations by invariants. Zbl 1459.39022
Stević, Stevo
2019
Solvability of a general class of two-dimensional hyperbolic-cotangent-type systems of difference equations. Zbl 1485.39019
Stević, Stevo
2019
General solutions to four classes of nonlinear difference equations and some of their representations. Zbl 1449.39001
Stevic, Stevo
2019
On some classes of solvable systems of difference equations. Zbl 1458.39003
Stević, Stevo; Iričanin, Bratislav; Kosmala, Witold; Šmarda, Zdeněk
2019
Representations of general solutions to some classes of nonlinear difference equations. Zbl 1458.39001
Stević, Stevo; Iričanin, Bratislav; Kosmala, Witold
2019
Note on the bilinear difference equation with a delay. Zbl 1404.39001
Stević, Stevo; Iričanin, Bratislav; Kosmala, Witold; Šmarda, Zdeněk
2018
Representations of solutions to linear and bilinear difference equations and systems of bilinear difference equations. Zbl 1448.39001
Stević, Stevo
2018
Representation of solutions of a solvable nonlinear difference equation of second order. Zbl 1424.39022
Stevic, Stevo; Iricanin, Bratislav; Kosmala, Witold; Smarda, Zdenek
2018
On a product-type operator between Hardy and $$\alpha$$-Bloch spaces of the upper half-plane. Zbl 07445989
Stević, Stevo; Sharma, Ajay K.
2018
A four-dimensional solvable system of difference equations in the complex domain. Zbl 1401.39018
Stević, Stevo
2018
On a two-dimensional solvable system of difference equations. Zbl 1424.39021
Stevic, Stevo
2018
Bounded and periodic solutions to the linear first-order difference equation on the integer domain. Zbl 1422.39003
Stević, Stevo
2017
Essential norm of some extensions of the generalized composition operators between $$k$$th weighted-type spaces. Zbl 06775985
Stević, Stevo
2017
Existence of a unique bounded solution to a linear second-order difference equation and the linear first-order difference equation. Zbl 1422.39002
Stević, Stevo
2017
Solvable product-type system of difference equations whose associated polynomial is of the fourth order. Zbl 1413.39018
Stevic, Stevo
2017
Bounded solutions to nonhomogeneous linear second-order difference equations. Zbl 1423.39002
Stević, Stevo
2017
New class of solvable systems of difference equations. Zbl 1348.39005
Stević, Stevo
2017
Product-type system of difference equations with a complex structure of solutions. Zbl 1444.39005
Stević, Stevo
2017
Solvability of boundary-value problems for a linear partial difference equation. Zbl 1356.39002
Stevic, Stevo
2017
Note on bounded solutions to nonhomogenous linear difference equations. Zbl 1386.39004
Stević, Stevo; Iričanin, Bratislav; Šmarda, Zdenék
2017
On a class of solvable higher-order difference equations. Zbl 07380070
Stević, Stevo; Alghamdi, Mohammed A.; Alotaibi, Abdullah; Elsayed, Elsayed M.
2017
On an extension of a recurrent relation from combinatorics. Zbl 1413.39014
Stevic, Stevo
2017
Solvable product-type system of difference equations with two dependent variables. Zbl 1422.39016
Stević, Stevo
2017
Solvability of the class of two-dimensional product-type systems of difference equations of delay-type $$(1,3,1,1)$$. Zbl 1423.39017
Stević, Stevo
2017
Solution to the solvability problem for a class of product-type systems of difference equations. Zbl 1444.39009
Stević, Stevo
2017
Solvable subclasses of a class of nonlinear second-order difference equations. Zbl 1338.39011
Stević, Stevo
2016
Solvability of a close to symmetric system of difference equations. Zbl 1344.39004
Stevic, Stevo; Iricanin, Bratislav; Smarda, Zdenek
2016
Two-dimensional product-type system of difference equations solvable in closed form. Zbl 1419.39012
Stević, Stevo; Iričanin, Bratislav; Šmarda, Zdeněk
2016
On a practically solvable product-type system of difference equations of second order. Zbl 1363.39005
Stevic, Stevo; Rankovic, Dragana
2016
Solvability of boundary value problems for a class of partial difference equations on the combinatorial domain. Zbl 1419.39022
Stević, Stevo
2016
Boundedness and compactness of a new product-type operator from a general space to Bloch-type spaces. Zbl 1353.47065
Stević, Stevo; Sharma, Ajay K.; Krishan, Ram
2016
Weighted differentiation composition operators from the logarithmic Bloch space to the weighted-type space. Zbl 1389.47091
Li, Songxiao; Stević, Stevo
2016
Boundedness and persistence of some cyclic-type systems of difference equations. Zbl 1334.39036
Stević, Stevo
2016
On a fifth-order difference equation. Zbl 1339.39016
Stević, Stevo; Diblík, Josef; Iričanin, Bratislav; Šmarda, Zdeněk
2016
New solvable class of product-type systems of difference equations on the complex domain and a new method for proving the solvability. Zbl 1399.39015
Stevic, Stevo
2016
On periodic solutions of a class of $$k$$-dimensional systems of Max-type difference equations. Zbl 1419.39032
Stević, Stevo
2016
Relations between two classes of real functions and applications to boundedness and compactness of operators between analytic function spaces. Zbl 06554479
Sehba, Benoît F.; Stević, Stevo
2016
Third-order product-type systems of difference equations solvable in closed form. Zbl 1353.39014
Stević, Stevo
2016
On a product-type system of difference equations of second order solvable in closed form. Zbl 1333.39006
Stević, Stevo; Iričanin, Bratislav; Šmarda, Zdeněk
2015
Product-type system of difference equations of second-order solvable in closed form. Zbl 1349.39017
Stevic, Stevo
2015
Solvable product-type system of difference equations of second order. Zbl 1321.39014
Stević, Stevo; Alghamdi, Mohammed A.; Alotaibi, Abdullah; Elsayed, Elsayed M.
2015
First-order product-type systems of difference equations solvable in closed form. Zbl 1329.39014
Stević, Stevo
2015
Generalized weighted composition operators from $$\alpha$$-Bloch spaces into weighted-type spaces. Zbl 1338.47018
Li, Songxiao; Stević, Stevo
2015
Note on the binomial partial difference equation. Zbl 1349.39012
Stevic, Stevo
2015
On a close to symmetric system of difference equations of second order. Zbl 1422.39018
Stević, Stevo; Iričanin, Bratislav; Šmarda, Zdeněk
2015
Weighted composition operators from weighted Bergman spaces with Békollé weights to Bloch-type spaces. Zbl 1338.47020
Stević, Stevo; Sharma, Ajay K.
2015
Boundedness character of a fourth-order system of difference equations. Zbl 1422.39017
Stević, Stevo; Iričanin, Bratislav; Šmarda, Zdeněk
2015
Boundedness character of the recursive sequence $$x_n=\alpha + \Pi_{j = 1}^k x_{n - j}^{a_j}$$. Zbl 1330.39020
Stević, Stevo; Alghamdi, Mohammed A.; Alotaibi, Abdullah
2015
Representation of solutions of bilinear difference equations in terms of generalized Fibonacci sequences. Zbl 1324.39004
Stević, Stevo
2014
On some product-type operators from Hardy-Orlicz and Bergman-Orlicz spaces to weighted-type spaces. Zbl 1334.42052
Sehba, Benoît; Stević, Stevo
2014
On a solvable system of rational difference equations. Zbl 1298.39013
Stević, Stevo; Diblík, Josef; Iričanin, Bratislav; Šmarda, Zdeněk
2014
Boundedness character of a max-type system of difference equations of second order. Zbl 1324.39012
Stevic, Stevo; Alghamdi, A. Mohammed; Alotaibi, Alotaibi; Shahzad, Naseer
2014
Solvability of nonlinear difference equations of fourth order. Zbl 1314.39013
Stević, Stevo; Diblík, Josef; Iričanin, Bratislav; Šmarda, Zdeněk
2014
Eventual periodicity of some systems of MAX-type difference equations. Zbl 1334.39042
Stević, Stevo; Alghamdi, Mohammed A.; Alotaibi, Abdullah; Shahzad, Naseer
2014
Long-term behavior of positive solutions of a system of max-type difference equations. Zbl 1334.39037
Stević, Stevo; Alghamdi, Mohammed A.; Alotaibi, Abdullah; Shahzad, Naseer
2014
On a cyclic system of difference equations. Zbl 1298.39012
Stević, Stevo
2014
On positive solutions of a system of max-type difference equations. Zbl 1293.39007
Stević, Stevo; Alotaibi, Abdullah; Shahzad, Naseer; Alghamdi, Mohammed A.
2014
On positive solutions of some classes of max-type systems of difference equations. Zbl 1410.39017
Stević, Stevo
2014
Note on the existence of periodic solutions of a class of systems of differential-difference equations. Zbl 1410.34198
Diblík, Josef; Iričanin, Bratislav; Stević, Stevo; Šmarda, Zdeněk
2014
On periodic and solutions converging to zero of some systems of differential-difference equations. Zbl 1364.34098
Stević, Stevo; Diblík, Josef; Šmarda, Zdeněk
2014
On the periodicity of some classes of systems of nonlinear difference equations. Zbl 1474.39031
Stević, Stevo; Alghamdi, Mohammed A.; Maturi, Dalal A.; Shahzad, Naseer
2014
Existence of bounded solutions of a class of neutral systems of functional differential equations. Zbl 1410.34197
Stević, Stevo
2014
Part-metric and its applications in discrete systems. Zbl 1364.39014
Liu, Wanping; Yang, Xiaofan; Liu, Xinzhi; Stević, Stevo
2014
On the system of difference equations $$x_n=c_ny_{n-3}/(a_n+b_ny_{n-1}x_{n-2}y_{n-3})$$,$$y_n=\gamma_nx_{n-3}/(\alpha_n+\beta_nx_{n-1}y_{n-2}x_{n-3})$$. Zbl 1386.39027
Stević, Stevo
2013
Domains of undefinable solutions of some equations and systems of difference equations. Zbl 1304.39007
Stević, Stevo
2013
On a symmetric system of max-type difference equations. Zbl 1291.39029
Stević, Stevo
2013
On the system $$x_{n+1}=y_nx_{n-k}/(y_{n-k+1}(a_n+b_ny_nx_{n-k}))$$, $$y_{n+1}=x_ny_{n-k}/(x_{n-k+1}(c_n+d_nx_ny_{n-k}))$$. Zbl 1386.39026
Stević, Stevo
2013
On a solvable system of difference equations of $$k$$th order. Zbl 1291.39027
Stević, Stevo
2013
On a system of difference equations which can be solved in closed form. Zbl 1291.39030
Stević, Stevo
2013
Existence of solutions bounded together with their first derivatives of some systems of nonlinear functional differential equations. Zbl 1286.34104
Stević, Stevo
2013
Asymptotically convergent solutions of a system of nonlinear functional differential equations of neutral type with iterated deviating arguments. Zbl 1278.34085
Stević, Stevo
2013
On some symmetric systems of difference equations. Zbl 1383.39020
Diblík, Josef; Iričanin, Bratislav; Stević, Stevo; Šmarda, Zdeněk
2013
A note on stability of polynomial equations. Zbl 1277.39037
Brzdȩk, Janusz; Stević, Stevo
2013
Global attractivity of the max-type difference equation $$x_n=\max\{c,x^p_{n-1}/\prod^k_{j=2}x^{p_j}_{n-j}\}$$. Zbl 1294.39004
Iričanin, Bratislav; Stević, Stevo
2013
Behaviour of solutions of some linear functional equations at infinity. Zbl 1273.39027
Stević, Stevo
2013
On a class of solvable difference equations. Zbl 1297.39005
Stević, Stevo; Alghamdi, Mohammed A.; Shahzad, Naseer; Maturi, Dalal A.
2013
On a higher-order system of difference equations. Zbl 1340.39018
Stevic, Stevo; Alghamdi, M.; Alotaibi, A.; Shahzad, N.
2013
On a solvable system of difference equations of fourth order. Zbl 1285.39006
Stević, Stevo
2013
On bounded continuously differentiable solutions with Lipschitz first derivatives of a system of functional differential equations. Zbl 1278.34073
Stević, Stevo
2013
On a system of difference equations of odd order solvable in closed form. Zbl 1291.39028
Stević, Stevo
2013
On a nonlinear second order system of difference equations. Zbl 1304.39017
Stević, Stevo; Alghamdi, Mohammed A.; Alotaibi, Abdullah; Shahzad, Naseer
2013
On $$q$$-difference asymptotic solutions of a system of nonlinear functional differential equations. Zbl 1294.34070
Stević, Stevo
2013
On solutions of a class of systems of nonlinear functional equations in a neighborhood of zero satisfying Lipschitz condition. Zbl 1290.39012
Stević, Stevo
2013
On some solvable systems of difference equations. Zbl 1253.39011
Stević, Stevo
2012
On a third-order system of difference equations. Zbl 1243.39011
Stević, Stevo
2012
On the difference equation $$x_{n}=x_{n-k} / (b+cx_{n-q}+\dots{ }+x_{n-k})$$. Zbl 1246.39010
Stević, Stevo
2012
Solutions of a max-type system of difference equations. Zbl 1252.39009
Stević, Stevo
2012
On some solvable difference equations and systems of difference equations. Zbl 1253.39001
Stević, Stevo; Diblík, Josef; Iričanin, Bratislav; Šmarda, Zdeněk
2012
On some periodic systems of max-type difference equations. Zbl 1280.39012
Stević, Stevo
2012
On a third-order system of difference equations with variable coefficients. Zbl 1242.39011
Stević, Stevo; Diblík, Josef; Iričanin, Bratislav; Šmarda, Zdeněk
2012
Unique existence of bounded continuous solutions on the real line of a class of nonlinear functional equations with complicated deviations. Zbl 1243.39018
Stević, Stevo
2012
...and 304 more Documents
### Cited by 737 Authors
275 Stević, Stevo 55 Li, Songxiao 41 Iričanin, Bratislav D. 31 Zhu, Xiangling 28 Šmarda, Zdeněk 26 Diblík, Josef 24 Sun, Taixiang 23 Liu, Yongmin 20 Zhou, Zehua 18 Liang, Yuxia 18 Papaschinopoulos, Garyfalos 18 Xi, Hongjian 18 Yu, Yanyan 17 Berenhaut, Kenneth S. 17 Ueki, Sei-ichiro 15 Liu, Wanping 15 Yang, Xiaofan 14 Colonna, Flavia 14 Schinas, Christos J. 13 de la Sen, Manuel 12 Jiang, Zhijie 12 Wang, Maofa 12 Yazlik, Yasin 11 Çinar, Cengiz 11 Hu, Linxia 11 Kosmala, Witold A. J. 11 Yalcinkaya, Ibrahim 10 Ladas, Gerasimos E. 10 Tollu, Durhasan Turgut 10 Touafek, Nouressadat 9 Avetisyan, Karen L. 9 Mengestie, Tesfa Y. 9 Qian, Ruishen 8 Abbasi, Ebrahim 8 Abo-Zeid, Raafat 8 Alghamdi, Mohammed Ali 8 Alonso Quesada, Santiago 8 Camouzis, Elias 8 Guo, Xin 8 Han, Caihong 8 Hu, Qinghua 8 Li, Wan-Tong 8 Vaezi, Hamid 8 Zhang, Xuejun 7 Berg, Lothar 7 Bohner, Martin J. 7 Brzdęk, Janusz 7 Du, Juntao 7 Elsayed, Elsayed Mohammed 7 Foley, John David 7 Kara, Merve 7 Ramos-Fernández, Julio C. 7 Růžičková, Miroslava 7 Sehba, Benoit Florent 7 Yang, Weifeng 6 Ahmed, Ahmed El-Sayed 6 Alotaibi, Abdullah M. 6 Fang, Zhongshan 6 Gelişken, Ali 6 Guo, Zhitao 6 Jia, Xiumei 6 Liao, Maoxin 6 Shahzad, Naseer 6 Simsek, Dağıstan 6 Subhadarsini, Elina 5 Aloqeili, Marwan 5 Chen, Cui 5 Doubtsov, Evgueni Sergeevich 5 El-Moneam, M. A. 5 Fu, Xiaohong 5 He, Qiuli 5 Hu, Bingyang 5 Kent, Candace M. 5 Krishan, Ram 5 Kurbanli, Abdullah Selçuk 5 Li, Haiying 5 Lindström, Mikael 5 Migda, Janusz 5 Psarros, Nikolaos 5 Qin, Bin 5 Radin, Michael A. 5 Su, Guangwang 5 Tang, Xianhua 5 Wang, Changyou 5 Ye, Shanli 5 Zayed, Elsayed M. E. 4 Abdullayev, Fahreddin G. 4 Agarwal, Ravi P. 4 Berezansky, Leonid M. 4 Braverman, Elena 4 Chang, Der-Chen E. 4 Chen, Shaolin 4 Chen, Yongzhuo 4 El-Dessoky, Mohamed M. 4 Gümüs, Mehmet 4 Karakostas, George L. 4 Khan, Abdul Qadeer 4 Kúdelčíková, Mária 4 Li, Shenlian 4 Li, Xianyi ...and 637 more Authors
### Cited in 170 Serials
170 Applied Mathematics and Computation 78 Abstract and Applied Analysis 71 Discrete Dynamics in Nature and Society 70 Advances in Difference Equations 58 Journal of Difference Equations and Applications 46 Journal of Inequalities and Applications 41 Journal of Mathematical Analysis and Applications 27 Complex Variables and Elliptic Equations 25 Complex Analysis and Operator Theory 21 Journal of Function Spaces 19 Journal of Applied Mathematics and Computing 18 Mathematical Inequalities & Applications 17 Applied Mathematics Letters 15 Computers & Mathematics with Applications 12 Integral Equations and Operator Theory 12 Filomat 10 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 9 Journal of Contemporary Mathematical Analysis. Armenian Academy of Sciences 8 Mathematical Methods in the Applied Sciences 7 Journal of Mathematical Sciences (New York) 7 Integral Transforms and Special Functions 7 Acta Mathematica Sinica. English Series 7 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 6 Mathematica Slovaca 6 Numerical Functional Analysis and Optimization 6 Computational Methods and Function Theory 6 Mediterranean Journal of Mathematics 6 Operators and Matrices 6 Symmetry 6 Journal of Mathematics 5 Rocky Mountain Journal of Mathematics 5 Zeitschrift für Analysis und ihre Anwendungen 5 Turkish Journal of Mathematics 5 Journal of Function Spaces and Applications 5 Banach Journal of Mathematical Analysis 4 Bulletin of the Australian Mathematical Society 4 Collectanea Mathematica 4 Czechoslovak Mathematical Journal 4 International Journal of Mathematics and Mathematical Sciences 4 Journal of Functional Analysis 4 Monatshefte für Mathematik 4 Proceedings of the American Mathematical Society 4 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 4 Opuscula Mathematica 4 Dynamics of Continuous, Discrete & Impulsive Systems. Series B. Applications & Algorithms 4 Boundary Value Problems 4 Tbilisi Mathematical Journal 4 Journal of Nonlinear Science and Applications 4 Advances in Operator Theory 3 Mathematical Notes 3 Chaos, Solitons and Fractals 3 Annales Polonici Mathematici 3 Archiv der Mathematik 3 Journal of Computational and Applied Mathematics 3 Mathematische Nachrichten 3 Rendiconti del Circolo Matemàtico di Palermo. Serie II 3 Results in Mathematics 3 Chinese Annals of Mathematics. Series B 3 Bulletin of the Iranian Mathematical Society 3 Journal of Integral Equations and Applications 3 Aequationes Mathematicae 3 International Journal of Computer Mathematics 3 Potential Analysis 3 Journal of the Egyptian Mathematical Society 3 Bulletin des Sciences Mathématiques 3 Taiwanese Journal of Mathematics 3 Communications of the Korean Mathematical Society 3 Journal of Mathematical Inequalities 3 Asian-European Journal of Mathematics 3 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 3 Sahand Communications in Mathematical Analysis 2 Applicable Analysis 2 Periodica Mathematica Hungarica 2 Ukrainian Mathematical Journal 2 Acta Mathematica Vietnamica 2 Canadian Mathematical Bulletin 2 Demonstratio Mathematica 2 Functiones et Approximatio. Commentarii Mathematici 2 Glasgow Mathematical Journal 2 Quaestiones Mathematicae 2 Siberian Mathematical Journal 2 Bulletin of the Korean Mathematical Society 2 Mathematical and Computer Modelling 2 The Journal of Geometric Analysis 2 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 2 Indagationes Mathematicae. 
New Series 2 The Journal of Analysis 2 Georgian Mathematical Journal 2 Boletín de la Sociedad Matemática Mexicana. Third Series 2 Positivity 2 Qualitative Theory of Dynamical Systems 2 Central European Journal of Mathematics 2 Hacettepe Journal of Mathematics and Statistics 2 Thai Journal of Mathematics 2 Journal of Biological Dynamics 2 Journal of Fixed Point Theory and Applications 2 International Journal of Biomathematics 2 ISRN Mathematical Analysis 2 Analysis and Mathematical Physics 2 Mathematical Sciences ...and 70 more Serials
### Cited in 33 Fields
517 Difference and functional equations (39-XX) 395 Operator theory (47-XX) 262 Functions of a complex variable (30-XX) 161 Functional analysis (46-XX) 138 Several complex variables and analytic spaces (32-XX) 59 Ordinary differential equations (34-XX) 34 Biology and other natural sciences (92-XX) 23 Integral equations (45-XX) 22 Potential theory (31-XX) 22 Dynamical systems and ergodic theory (37-XX) 14 Partial differential equations (35-XX) 11 Numerical analysis (65-XX) 10 Harmonic analysis on Euclidean spaces (42-XX) 9 Number theory (11-XX) 9 Sequences, series, summability (40-XX) 8 Real functions (26-XX) 8 Systems theory; control (93-XX) 4 General topology (54-XX) 3 Combinatorics (05-XX) 3 Approximations and expansions (41-XX) 3 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Calculus of variations and optimal control; optimization (49-XX) 1 General and overarching topics; collections (00-XX) 1 History and biography (01-XX) 1 Mathematical logic and foundations (03-XX) 1 Field theory and polynomials (12-XX) 1 Special functions (33-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Mechanics of particles and systems (70-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Quantum theory (81-XX) 1 Relativity and gravitational theory (83-XX)
|
# Cauchy-Schwarz Inequality (Probability) and the Fundamental Bridge
In a section on the Cauchy-Schwarz inequality (a marginal bound on a joint expectation), my textbook, Introduction to Probability, Second Edition, by Blitzstein and Hwang, presents the following example:
Example 10.1.3 (Second moment method). Let $$X$$ be a nonnegative r.v., and suppose that we want an upper bound on $$P(X = 0)$$. For example, $$X$$ could be the number of questions that Fred gets wrong on an exam (then $$P(X = 0)$$ is the probability of Fred getting a perfect score), or $$X$$ could be the number of pairs of people at a party with the same birthday (then $$P(X = 0)$$ is the probability of no birthday matches). Note that
$$X = XI(X > 0),$$
where $$I(X > 0)$$ is the indicator of $$X > 0$$. This is true since if $$X = 0$$, then both sides are $$0$$, while if $$X > 0$$ then both sides are $$X$$. By Cauchy-Schwarz,
$$E(X) = E(XI(X > 0)) \le \sqrt{E(X^2) E(I(X > 0))}.$$
Rearranging this and using the fundamental bridge, we have
$$P(X > 0) \ge \dfrac{(EX)^2}{E(X^2)},$$
or equivalently,
$$P(X = 0) \le \dfrac{Var(X)}{E(X^2)}.$$
The fundamental bridge (between probability and expectation) is the fact that there is a one-to-one correspondence between events and indicator r.v.s, and the probability of an event $$A$$ is the expected value of its indicator r.v. $$I_A$$:
$$P(A) = E(I_A).$$
Given this, we can get the first result as follows:
\begin{align} &E(X) \le \sqrt{E(X^2)E(I(X > 0))} \\ &\Rightarrow E(X)^2 \le E(X^2)E(I(X > 0)) = E(X^2)P(X > 0) \\ &\Rightarrow \dfrac{E(X)^2}{E(X^2)} \le P(X > 0) \end{align}
But how does one derive the second result? In particular, I'm confused as to where the $$Var(X)$$ came from? I would greatly appreciate it if people would please take the time to clarify this.
$$P(X=0)=1-P(X>0) \leq 1-\frac {(EX)^{2}} {EX^{2}}=\frac {EX^{2}-(EX)^{2}} {EX^{2}} =\frac {var (X)} {EX^{2}}$$.
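For a quick sanity check, take $$X$$ with $$P(X=0)=P(X=2)=\frac{1}{2}$$: then $$EX=1$$, $$E(X^2)=2$$ and $$Var(X)=1$$, so both bounds give exactly $$\frac{1}{2}$$, and $$P(X>0)\ge\frac{1}{2}$$ and $$P(X=0)\le\frac{1}{2}$$ hold with equality.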
• Oh, I see: We have on the left-hand side that $$E(XI(X > 0)) = P(X > 0) = 1 - P(X = 0) \Rightarrow 1 - P(X > 0) = P(X = 0).$$ Dec 30 '19 at 23:59
• Yes,but you wrote $EXI(X>0)$ instead of $EI(X>0)$. Dec 31 '19 at 0:00
• Hmm? But it is $E(XI(X > 0))$? Dec 31 '19 at 0:02
• And for the right-hand side, we have that $$E(X^2) E(I(X > 0)) = E(X^2) P(X > 0) = E(X^2) [1 - P(X = 0)],$$ by the same reasoning as for the LHS. Dec 31 '19 at 0:07
• If $X$ takes the values $1$ and $2$ with probability $\frac 1 2$ each then $EXI(X>0)=(1)(\frac 1 2)+(2)(\frac 1 2)=1.5$ but $P(X>0)=1$. Dec 31 '19 at 0:09
|
max planck institut
informatik
# MPI-I-94-147
## Efficient collision detection for moving polyhedra
### Schömer, Elmar and Thiel, Christian
MPI-I-94-147. September 1994, 24 pages. Status: available - back from printing.
Abstract in LaTeX format:
In this paper we consider the following problem: given two general polyhedra
of complexity $n$, one of which is moving translationally or rotating about a fixed axis, determine the first collision (if any) between them. We present an
algorithm with running time $O(n^{8/5 + \epsilon})$ for the case of
translational movements and running time $O(n^{5/3 + \epsilon})$ for
rotational movements, where $\epsilon$ is an arbitrary positive constant.
This is the first known algorithm with sub-quadratic running time.
MPI-I-94-147.pdf: 88 KBytes; 245 KBytes
URL to this document: http://domino.mpi-inf.mpg.de/internet/reports.nsf/NumberView/1994-147
BibTeX
@TECHREPORT{SchoernerThiel94,
AUTHOR = {Sch{\"o}mer, Elmar and Thiel, Christian},
TITLE = {Efficient collision detection for moving polyhedra},
TYPE = {Research Report},
INSTITUTION = {Max-Planck-Institut f{\"u}r Informatik},
NUMBER = {MPI-I-94-147},
MONTH = {September},
YEAR = {1994},
}
|
Search Results: 1 - 10 of 100 matches
Physics , 2009, DOI: 10.1143/JPSJ.78.084702 Abstract: In order to help detect superfluidity, we theoretically investigate p-wave pairing superfluids in neutral Fermion atom gases confined by a three-dimensional (3D) harmonic potential. The Ginzburg-Landau framework, which is generic for p-wave superfluids, is used to describe the order parameter spatial structure, or texture characterized by the l-vector, both at rest and under rotation. The l-vector configuration is strongly constrained by the boundary condition due to a trap. It is found that the ground state textures exhibit spontaneous supercurrent at rest in both cigar and pancake shape traps. The current direction depends on the trapping shape. Under rotation a pair of half-quantum vortices with half-winding number enters the system and is stabilized for both trap geometries. We give a detailed explanation of their 3D structure. The deformations of the condensate shape are seen with increasing rotation speed, which is tightly connected with the underlying vortex formation where the condensates are depressed in the vortex core.
Physics , 1999, DOI: 10.1103/PhysRevA.60.2319 Abstract: We study the ground state of a system of Bose hard-spheres trapped in an isotropic harmonic potential to investigate the effect of the interatomic correlations and the accuracy of the Gross-Pitaevskii equation. We compare a local density approximation, based on the energy functional derived from the low density expansion of the energy of the uniform hard sphere gas, and a correlated wave function approach which explicitly introduces the correlations induced by the potential. Both higher order terms in the low density expansion, beyond Gross-Pitaevskii, and explicit dynamical correlations have effects of the order of percent when the number of trapped particles becomes similar to that attained in recent experiments.
Physics , 2009, DOI: 10.1143/JPSJ.79.034301 Abstract: The stability conditions for the singular vortex which accompanies Majorana zero modes at the core are investigated for p-wave resonant superfluids of atomic Fermi gases. Within the Ginzburg-Landau framework we determine the stable conditions in the parameter space for the external rotation frequency and the harmonic trap frequency. There exists the narrow stable region in this parameter space for quasi-two-dimensional condensates. We also describe the detailed characterizations of the spatial structure of the order parameter in the chiral p-wave superfluids under rotation.
Physics , 2008, DOI: 10.1103/PhysRevA.80.035601 Abstract: It is found theoretically, based on the Ginzburg-Landau framework, that p-wave superfluids of neutral atom gases in three-dimensional harmonic traps exhibit spontaneous mass current at rest, whose direction depends on trap geometry. Under rotation various types of the order parameter textures are stabilized, including Mermin-Ho and Anderson-Toulouse-Chechetkin vortices. In a cigar shape trap spontaneous current flows longitudinal to the rotation axis and thus perpendicular to the ordinary rotational current. These features, spontaneous mass current at rest and texture formation, can be used as diagnoses for p-wave superfluidity.
Physics , 2014, DOI: 10.1088/0953-4075/47/5/055301 Abstract: We present a study of the hydrodynamics of compressible superfluids in confined geometries. We use a perturbative procedure in terms of the dimensionless expansion parameter $(v/v_s)^2$ where $v$ is the typical speed of the flow and $v_s$ the speed of sound. A zero value of this parameter corresponds to the incompressible limit. We apply the procedure to two specific problems: the case of a trapped superfluid with a gaussian profile of the local density, and that of a superfluid confined in a rotating obstructed cylinder. We find that the corrections due to finite compressibility which are, as expected, negligible for liquid He, are important but amenable to the perturbative treatment for typical ultracold atomic systems.
Physics , 2003, DOI: 10.1103/PhysRevB.69.134517 Abstract: We study the effects of single-impurity scattering on the local density of states in the high-$T_c$ cuprates. We compare the quasiparticle interference patterns in three different ordered states: d-wave superconductor (DSC), d-density wave (DDW), and coexisting DSC and DDW (DSC-DDW). In the coexisting state, at energies below the DSC gap, the patterns are almost identical to those in the pure DSC state with the same DSC gap. However, they are significantly different for energies greater than or equal to the DSC gap. This transition at an energy around the DSC gap can be used to test the nature of the superconducting state of the underdoped cuprates by scanning tunneling microscopy. Furthermore, we note that in the DDW state the effect of the coherence factors is stronger than in the DSC state. The new features arising due to DDW ordering are discussed.
Physics , 1996, DOI: 10.1103/PhysRevA.56.1046 Abstract: The Bose gas in an external potential is studied by means of the local density approximation. An analytical result is derived for the dependence of the critical temperature of Bose-Einstein condensation on the mutual interaction in a generic power-law potential.
Physics , 2010, DOI: 10.1103/PhysRevA.82.063609 Abstract: A two-species superfluid Fermi gas is investigated on the BCS side up to the Feshbach resonance. Using the Green's function technique, gradient corrections are calculated to the generalized Thomas-Fermi theory including Cooper pairing. Their relative magnitude is found to be measured by the small parameter $(d/R_{TF})^4$, where $d$ is the oscillator length of the trap potential and $R_{TF}$ is the radial extension of the density $n$ in the Thomas-Fermi approximation. In particular, at the Feshbach resonance corrections to the local density approximation are calculated and a universal prefactor $\kappa_W=7/27$ is derived for the von Weizsäcker type correction $\kappa_W(\hbar^2/2m)(\nabla^2 n^{1/2}/n^{1/2})$.
Physics , 2010, DOI: 10.1103/PhysRevA.82.013627 Abstract: We theoretically investigate the itinerant ferromagnetic transition of a spherically trapped ultracold Fermi gas with spin imbalance under strongly repulsive interatomic interactions. Our study is based on a self-consistent solution of the Hartree-Fock mean-field equations beyond the widely used local density approximation. We demonstrate that, while the local density approximation holds in the paramagnetic phase, after the ferromagnetic transition it leads to a quantitative discrepancy in various thermodynamic quantities even with large atom numbers. We determine the position of the phase transition by monitoring the shape change of the free energy curve with increasing the polarization at various interaction strengths.
Physics , 2001, DOI: 10.1103/PhysRevB.65.115113 Abstract: In the previous paper, it was shown that holes doped into BaBiO3 self-trap as small polarons and bipolarons. These point defects are energetically favorable partly because they undo locally the strain in the charge-density-wave (Peierls insulator) ground state. In this paper the neutral excitations of the same model are discussed. The lowest electronic excitation is predicted to be a self-trapped exciton, consisting of an electron and a hole located on adjacent Bi atoms. This excitation has been seen experimentally (but not identified as such) via the Urbach tail in optical absorption, and the multi-phonon spectrum of the "breathing mode" seen in Raman scattering. These two phenomena occur because of the Franck-Condon effect associated with oxygen displacement in the excited state.
|
# Object look at target direction?
How to make an object look at the target direction, not at the target itself?
public Transform target;
void Update() {
transform.LookAt(target);
float step = 2 * Time.deltaTime;
transform.position = Vector3.MoveTowards(transform.position, target.position, step);
}
• What do you mean by "target direction"? A specific point in space? The direction that target is looking at (how do you define that?)? Mar 28 '17 at 7:02
• I mean "The direction that target is looking at" Mar 28 '17 at 7:04
• You can't really "look at a direction" - you need to pick a point on the direction vector to look at. (Maybe your question is how to calculate a direction vector and pick a point on it? In that case you should update the question accordingly) Mar 28 '17 at 7:31
If you want one object to look parallel to the sight-direction of another object, you can simply turn it into the same rotation:
this.transform.rotation = target.transform.rotation;
But if you want one object to look at whatever thing another object is looking at, you first need to figure out where that other thing is, because the direction would differ depending on if it's something directly in front of the other object or something at the horizon.
One option is to do a raycast from the other object to find the point where its line of sight is broken and then use LookAt for that point.
RaycastHit hit;
if (Physics.Raycast(target.transform.position, target.transform.forward, out hit)) { // cast along the target's forward (look) direction
this.transform.LookAt(hit.point);
}
Note that Physics.Raycast is only broken by objects with colliders.
|
I'm new to arduino and RF modules so I bought a cheap set (XY-MK-5V and FS1000A) to transfer data between two arduinos. Transmitting the data looks fine to me, but I can't receive any data so far.
This is my Receiver source. It's connected to pin 11. When I start the receiving board, the TX-LED won't turn off, in case this is important somehow. The RX-LED is off, the TX-LED of my sending arduino is turning on and off when I send a message.
I set my buffer size to VW_MAX_MESSAGE_LEN (30). I just want to transfer "hello" for the first time. Is this the correct setup?
/*
This sketch displays text strings received using VirtualWire
Connect the Receiver data pin to Arduino pin 11
*/
#include <VirtualWire.h>
byte message[VW_MAX_MESSAGE_LEN]; // a buffer to hold the incoming messages
byte msgLength = VW_MAX_MESSAGE_LEN; // the size of the message
void setup()
{
Serial.begin(9600);
// Initialize the IO and ISR
vw_setup(2000); // Bits per sec
}
void loop()
{
if (vw_get_message(message, &msgLength)) // Non-blocking
{
Serial.print("Got: ");
for (int i = 0; i < msgLength; i++)
{
Serial.write(message[i]);
}
Serial.println();
}
}
Transmitting Arduino :
/*
SimpleSend
This sketch transmits a short text message using the VirtualWire library
connect the Transmitter data pin to Arduino pin 12
*/
#include <VirtualWire.h>
void setup()
{
// Initialize the IO and ISR
vw_setup(2000); // Bits per sec
Serial.begin(9600);
}
void loop()
{
send("hello");
Serial.println("transmitted");
delay(100);
}
void send (char *message)
{
vw_send((uint8_t *)message, strlen(message));
vw_wait_tx(); // Wait until the whole message is gone
}
Is it possible that the receiving module is not working?
• How is the RX led on the receiver connected? I'm assuming maybe a LED connected to pin 11 (although I'm not familiar with that platform). If so I'd try on the off-chance disconnected the LED, some of those receiver modules can have "interesting" output configurations. – PeterJ Oct 3 '13 at 12:23
• This is handled by the VirtualWire lib I think. Pin 11 is the default RX-Pin so when something comes in it would blink. – Wolfen Oct 3 '13 at 12:26
• That "vw get message" call - is this always returning a non-zero value? – Danny Staple Oct 3 '13 at 12:30
• Do you mean the serial TX LED on the receiving Arduino? – Danny Staple Oct 3 '13 at 12:32
• yes. the TX(!) LED on the receiving arduino is always on. I added this code: Serial.println(vw_get_message(message, &msgLength)); It only returns 0 values. – Wolfen Oct 3 '13 at 12:33
First off it is advisable to make sure to add an antenna to both the transmitter and receiver. This can be just a simple piece of insulated stranded wire attached to the TX and RX module's ANT terminals. The wire length follows from the frequency. I've used wire about 6.8 inches in length for 433 MHz modules.
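That length is just a quarter wavelength for 433 MHz: lambda/4 = c/(4f) = (3 x 10^8 m/s)/(4 x 433 x 10^6 Hz), which is about 0.173 m, or roughly 6.8 inches of wire.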
To get reliable communications with these low cost RF modules it is necessary to understand that the receivers often detect so much random RF noise from the environment that they are constantly emitting signal transitions at the receiver output pin. Thus the receiver needs to see a good strong signal to lock onto that will override the stray and random noise from the nearby environment. The receiver modules also require some "capture time" to be able to settle into seeing the transmit signal.
I have found these low cost modules to be nearly useless for reliable communications when just trying to be used as a UART extender with NRZ signal modulation. Even worse with a UART type scheme is the variable spacing that can occur between individual bytes that are sent.
When I use these modules I have devised a protocol that sends data bits at a rate of 500 bits per second. The data is sent with a Manchester encoding pattern using a state machine design with a 1 msec interrupt rate on the TX side of the link. A lead-in preamble of about 30 bit times of all 1's is used to lock in the receiver.
For successful Manchester transmission it is necessary to have a SYNC pattern in the data stream that uses a timing pattern that is different than the normal 1T and 2T pulse widths seen in the normal stream. In my protocol I have my sync be a 3T low level followed by a 3T high pulse that comes immediately after the preamble sequence. The data portion of the stream comes immediately after the SYNC in a continuous flow through the end of the data packet.
At my receiver MCU I setup a decoding state machine that is driven off two interrupts, one on the positive edge of the receiver signal and the other on the negative edge of the received signal. In the receiver interrupt routines the time spacing from edge to edge of the detected signal is checked for validity within an expected range. There can be quite a bit of pulse width distortion through the RF link pair so this validation on pulse width needs to be liberal in margin but tight enough to qualify the expected possible Manchester pulse interval timing (1T, 2T and 3T). If any error occurs the receiver aborts the current state and returns to an initial idle state looking again for the SYNC sequence.
Using this scheme I have been able to deploy reasonably reliable RF links that work up to about 100 feet or so. Any system that would deploy a link like this needs to be designed so that the failure of any given data stream does not hang the operations at the receiver end. As such the transmitter should be designed to repeat the transmission on a periodic manner. (Think of this like the low cost garage door openers. They use a similar scheme and transmit as long as the user presses the button and sees that the receiver has detected the signal - i.e. the door is opening or closing). One application where I've deployed these modules is where I have a time clock master transmitter that has accurate battery backed up RTC. It transmits a packet with the current time and date once per minute. The targets are time clock display units that simply run a software clock from the MCU crystal timing. These are accurate for short term but can drift many seconds per day. The target units will see the transmitter packets and when decoded successfully will sync their software clocks to the time seen from the received packet. As such even if the target devices fail to see the time update packets for several minutes or even an hour or so they still continue to operate normally until a validated packet is detected.
Through the use of interrupts on both the transmitter and receiver this system only places a couple of percent processing load on MCUs operating at clock rates of 25 to 50 MHz.
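To make that concrete, here is a rough sketch of just the transmit-side state machine. Treat it as illustrative pseudocode rather than exact working firmware: the 1 ms timer setup is platform specific and not shown, TX_PIN, the ~30-bit preamble, the 3T/3T SYNC and the 500 bps rate are the choices described above, and which half-bit is high for a '1' is a free convention.

#include <Arduino.h>

const uint8_t TX_PIN = 12;

enum TxState { TX_IDLE, TX_PREAMBLE, TX_SYNC, TX_DATA };
volatile TxState txState = TX_IDLE;
volatile uint8_t txTick = 0, txBit = 0, txByte = 0, txLen = 0;
const uint8_t* txData = 0;

void txBegin(const uint8_t* data, uint8_t len) {
  pinMode(TX_PIN, OUTPUT);
  txData = data;
  txLen = len;
  txTick = txBit = txByte = 0;
  txState = TX_PREAMBLE;
}

// Call once per millisecond from a timer interrupt; a 2 ms bit period gives 500 bps.
void txTickISR() {
  switch (txState) {
    case TX_IDLE:
      digitalWrite(TX_PIN, LOW);
      break;
    case TX_PREAMBLE:  // 30 Manchester-encoded '1' bits so the receiver can lock on
      digitalWrite(TX_PIN, (txTick & 1) ? LOW : HIGH);
      if (++txTick >= 60) { txTick = 0; txState = TX_SYNC; }
      break;
    case TX_SYNC:      // 3T low then 3T high: a spacing that never appears in the data
      digitalWrite(TX_PIN, txTick < 3 ? LOW : HIGH);
      if (++txTick >= 6) { txTick = 0; txState = TX_DATA; }
      break;
    case TX_DATA: {
      uint8_t dataBit = (txData[txByte] >> (7 - txBit)) & 1;
      // First half-period carries the bit, second half carries its complement.
      uint8_t level = (txTick == 0) ? dataBit : (uint8_t)!dataBit;
      digitalWrite(TX_PIN, level ? HIGH : LOW);
      if (++txTick >= 2) {
        txTick = 0;
        if (++txBit >= 8) { txBit = 0; ++txByte; }
        if (txByte >= txLen) txState = TX_IDLE;
      }
      break;
    }
  }
}

The receive side mirrors this: two edge interrupts measure the time since the previous edge, classify it as 1T, 2T or 3T, and drive a small state machine that drops back to idle on anything out of range.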
My tutorial may or may not help you... If not, it may help someone else.
http://arduinobasics.blogspot.com.au/2014/06/433-mhz-rf-module-with-arduino-tutorial.html
• Link-only answers are discouraged because they become useless if the link dies. Perhaps you can add a summary of some of the information here? – PeterJ Jun 23 '14 at 8:22
|
## Wednesday, February 20, 2008
### Remember When?
You may not have to bother to -- if it happens this year or later, a couple of Russians think you might be able to drive there some day.
Don't bet any more on it than you can afford to lose.
#### 1 comment:
Aaron said...
The very fact that the alleged scientists are claiming it would cause a causality violation tends to cast aspersions on their claim. Sounds to me like a publicity stunt. Any 'hole' ripped (and isn't that a viscerally satisfying verb to use?) through the 'fabric of space/time', assuming such were possible, would be through space, not time.
Of course, I seem to be preaching to the choir.
|
# Gorgi Kosev
code, music, math
@spion
# Machine learning ethics
Tue Dec 19 2017
Today I found and watched one of the most important videos on machine learning published this year
We're building a dystopia just to make people click on ads https://www.youtube.com/watch?v=iFTWM7HV2UI&app=desktop
Go watch it first before reading ahead! I could not possibly summarise it without doing it a disservice.
What struck me most was the following quote:
Having interviewed people who worked at Facebook, I'm convinced that nobody there really understands how it [the machine learning system] works.
The important question is, how come nobody understands how a machine learning system works? You would think it's because the system is very complex, and it's hard for any one person to understand it fully. That's not the problem.
The problem is fundamental to machine learning systems.
A machine learning system is a program that is given a target goal, a list of possible actions, a history of previous actions and how well they achieved the goal in a past context. The system should learn on the historical data and be able to predict what action it can select to best achieve the goal.
Let's see what these parts would represent on, say, YouTube, for an ML system that has to pick which videos to show in the sidebar right next to the video you're watching.
The target goal could be e.g. to maximise the time the user stays on YouTube, watching videos. More generally, a value function is given by the ML system creator that measures the desirability of a certain outcome or behaviour (it could include multiple things like number of products bought, number of ads clicked or viewed, etc).
The action the system can take is the choice of videos in the sidebar. Every different set of videos would be a different alternative action, and could cause the user to either stay on YouTube longer or perhaps leave the site.
Finally, the history of actions includes all previous video lists shown in the sidebar to users, together with the value function outcome from them: the time the user spent on the website after being presented that list. Additional context from that time is also included: which user was it, what was their personal information, their past watching history, the channels they're subscribed to, videos they liked, videos they disliked and so on.
Based on this data, the system learns how to tailor its actions (the videos it shows) so that it achieves the goal by picking the right action for a given context.
At the beginning it will try random things. After several iterations, it will find which things seem to maximize value in which context.
Once trained with sufficient data, it will be able to do some calculations and conclude: "well, when I encountered a situation like this other times, I tried these five options, and option two on average caused users like this one to stay the longest, so I'll do that".
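To make that concrete, here is a rough sketch of that decision rule in code. All the names here (chooseAction, similarity, the shape of the history records) are made up for illustration, and a real recommender is vastly more complicated, but the core loop - score each candidate action by its average historical value in similar contexts, and fall back to trying something random - looks roughly like this:

function chooseAction(history, context, actions, similarity) {
  // history: [{context, action, value}] - past actions and the value they produced
  var best = null;
  for (var action of actions) {
    // past records where a similar context got this action
    var relevant = history.filter(function(h) {
      return h.action === action && similarity(h.context, context) > 0.8;
    });
    if (relevant.length === 0) continue;
    var avg = relevant.reduce(function(s, h) { return s + h.value; }, 0) / relevant.length;
    if (best === null || avg > best.avg) best = {action: action, avg: avg};
  }
  // no relevant history yet: try random things
  return best ? best.action : actions[Math.floor(Math.random() * actions.length)];
}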
Sure, there are ways to ask some ML systems why they made a decision after the fact, and they can elaborate the variables that had the most effect. But before the algorithm gets the training data, you don't know what it will decide - nobody does! It learns from the history of its own actions and how the users reacted to them, so in essence, the users are programming its behaviour (through the lens of its value function).
Let's say the system learnt that people who have cat videos in their watch history will stay a lot longer if they are given cat videos in their suggestion box. Nothing groundbreaking there.
Now let's say it figures out that the same action is appropriate when they are watching something unrelated, like academic lecture material, because past data suggests that people of that profile leave slightly earlier when given more lecture videos, while they stay for hours when given cat videos, giving up on the lecture videos.
This raises a very important question - is the system behaving in an ethical manner? Is it ethical to show cat videos to a person trying to study and nudge them towards wasting their time? Even that is a fairly benign example. There are far worse examples mentioned in the TED talk above.
The root of the problem is the value function. Our systems are often blissfully unaware of any side effects their decisions may cause and blatantly disregard basic rules of behaviour that we take for granted. They have no other values than the value function they're maximizing. For them, the end justifies the means. Whether the value function is maximized by manipulating people, preying on their insecurities, making them scared, angry or sad - all of that is unimportant. Here is a scary proposition: if a person is epileptic, the system might learn that the best way to keep them "on the website" is to show them something that will render them unconscious. It wouldn't even know that it didn't really achieve the goal: as far as it knows, autoplay is on and they haven't stopped it in the past two hours, so it all must be "good".
So how do we make these systems ethical?
The first challenge is technical, and it's the easiest one. How do we come up with a value function that encodes additional basic values of human ethics? It's easy as pie! You take a bunch of ethicists, give them various situations and ask them to rate actions as ethical/unethical. Then once you have enough data, you train a new value function so that the system can learn some basic humanity. You end up with an ethics function, and you create a new value function that combines the old value function with the ethics function. As a result the system starts picking more ethical actions. All done. (If only things were that easy!)
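To illustrate just the combination step (and nothing else), here is a toy sketch. The linear blend and the weight are assumptions made purely for illustration - picking that weight is exactly the business problem described next - and predictedWatchTime and ethicsModel are placeholders for whatever the real system uses:

// valueFn: the original value function (e.g. predicted watch time)
// ethicsFn: a function trained on ethicists' ratings; higher means more ethical
function makeCombinedValue(valueFn, ethicsFn, ethicsWeight) {
  return function(context, action) {
    return valueFn(context, action) + ethicsWeight * ethicsFn(context, action);
  };
}

// the system now maximises this combined score instead of the raw value
var combinedValue = makeCombinedValue(predictedWatchTime, ethicsModel, 10);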
The second challenge is a business one. How far are you willing to reduce your value maximisation to be ethical? What to do if your competitor doesn't do that? What are the ethics of putting a number on how much ethics you're willing to sacrifice for profits? (Spoiler alert: they're not great)
One way to solve that is to have regulations for ethical behaviour of machine learning systems. Such systems could be held responsible for unethical actions. If those actions are reported by people, investigated by experts and found true in court, the company owning the ML system is held liable. Unethical behaviour of machine learning systems shouldn't be too difficult to spot, although getting evidence might prove difficult. Public pressure and exposure of companies seem to help too. Perhaps we could make a machine learning system that detects unethical behaviour and call it the ML police. Citizens could agree to install the ML police add-on to help monitor and aggregate behaviour of online ML systems. (If these suggestions look silly, it's because they are.)
Another way to deal with this is to mandate that all ML systems have a feedback feature. The user (or a responsible guardian of the user) should be able to log on to the system, see its past actions within a given context and rate them as ethical or unethical. The system must be designed to use this data and give it precedence when making decisions, such that actions that are computed to be more ethical are always picked over actions that are less ethical. In this scenario the users are the ethicists.
The third challenge is philosophical. Until now, philosophers were content with "there is no right answer, but there have been many thoughts on what exactly is ethical". They better get their act together, because we'll need them to come up with a definite, quantifiable answer real soon.
On the more optimistic side, I hope that any generally agreed upon "standard" ethical system will be a better starting point than having none at all.
# JavaScript isn't cancer
Thu Oct 06 2016
The last few days, I've been thinking about what leads so many people to hate JavaScript.
JS is so quirky and unclean! That's supposed to be the primary reason, but after working with a few other dynamic languages, I don't buy it. JS actually has fairly few quirks compared to other dynamic languages.
Just think about PHP's named functions, which are always in the global scope - except when they are in namespaces (oh hi, another concept), and then it's kinda weird because namespaces can be relative. There are no first-class named functions, but function expressions can be assigned to variables, which must be prefixed with $. There are no real modules, or proper nestable scope - at least not for functions, which are always global. But nested functions only exist once the outer function is called!

In Ruby, blocks are like lambdas except when they are not, and you can pass a block explicitly or yield to the first block implicitly. But there are also lambdas, which are different. Modules are uselessly global, cannot be parameterised over other modules (without resorting to metaprogramming), and there are several ways to nest them: if you don't nest them lexically, the lookup rules become different. And there are classes, with private variables, which are prefixed with @. I really don't get that sigil fetish.

The above examples are only scratching the surface. And which are the most often cited problems of JavaScript? Implicit conversions (the wat talk), no large ints, and hard-to-understand prototypical inheritance and the this keyword. That doesn't look any worse than the above lists!

Plus, the language (pre-ES6) is very minimalistic. It has freeform records with prototypes, and closures with lexical scope. That's it!

So this supposed "quirkiness" of JavaScript doesn't seem like a satisfactory explanation. There must be something else going on here, and I think I finally realized what that is.

JavaScript is seen as a "low status" language. A 10-day accident, a silly toy language for the browser that ought to be simple and easy to learn. To an extent this is true, largely thanks to the fact that there are very few distinct concepts to be learned. However, those few concepts combine together into a package with a really good power-to-weight ratio. Additionally, the simplicity ensures that the language is malleable towards even more power (e.g. you can extend it with a type system and then you can idiomatically approximate some capabilities of algebraic sum types, like making illegal states unrepresentable).

The emphasis above is on idiomatically for a reason. This sort of extension is somehow perfectly normal in JavaScript. If you took Ruby and used its dictionary type to add a comparable feature, it would have a significantly lower likelihood of being accepted by developers. Why? Because Ruby has standard ways of doing things. You should be using objects and classes, not hashes, to model most of your data. (*)

That was not the case with the simple pre-ES6 JavaScript. There was no module system to organize code. No class system to hierarchically organize blueprints of things that hold state. A lack of basic standard library items, such as maps, sets, iterables, streams, promises. A lack of functions to manipulate existing data structures (dictionaries and arrays).

Combine sufficient power, simplicity/malleability, and the lack of the basic facilities. Add to this the fact that it's the default option in the browser, the most popular platform. What do you get? You get a TON of people working in it to extend it in various different ways. And they invent a TON of stuff! We ended up with several popular module systems (object-based namespaces, CommonJS, AMD, ES6, the Angular module system, etc.) as well as many package managers to manage these modules (npm, bower, jspm, ...).
We also got many object/inheritance systems: plain objects, pure prototype extension, simulating classes, "composable object factories", and so on and so forth. Heck, a while ago every other library used to implement its own class system! (That is, until CoffeeScript came and gave the definite answer on how to implement classes on top of prototypes. This is interesting, and I'll come back to it later.)

This creates dissonance with the language's simplicity. JavaScript is this simple browser language that was supposed to be easy, so why is it so hard? Why are there so many things built on top of it, and how the heck do I choose which one to use? I hate it. Why do I hate it? Probably it's all these silly quirks that it has! Just look at its implicit conversions and its lack of number types other than doubles! It doesn't matter that many languages are much worse.

A great example of the reverse phenomenon is C++. It's a complete abomination, far worse than JavaScript - a Frankenstein in the languages domain. But it's seen as "high status", so it has many apologists who will come to defend its broken design: "Yeah, C++ is a serious language, you need grown-up pants to use it". Unfortunately JS has no such luck: its status as hack-together glue for web pages seems to have been forever cemented in people's heads.

So how do we fix this? You might not realize it, but this is already being fixed as we speak! Remember how CoffeeScript slowed down the proliferation of custom object systems? Browsers and environments are quickly implementing ES6, which standardizes a huge percentage of what used to be the JS wild west. We now have the standard way to do modules, the standard way to do classes, the standard way to do basic procedural async (promises; async/await). The standard way to do bundling will probably be no bundling: HTTP2 push + ES6 modules will "just work"!

Finally, I believe the people who think that JavaScript will always be transpiled are wrong. As ES6+ features get implemented in major browsers, more and more people will find that the overhead of ES.Next-to-ES transpilers isn't worth it. This process will stop entirely at some point as the basics get fully covered.

At this point, I'm hoping several things will happen. We'll finally get those big integers and number types that Brendan Eich has been promising. We'll have some more stuff on top of SharedArrayBuffer to enable easier shared-memory parallelism, perhaps even immutable data structures that are transferable objects. The wat talk will be obsolete: obviously, you'd be using a static analysis tool such as Flow or TypeScript to deal with that; the fact that the browser ignores those type annotations and does its best to interpret what you meant will be irrelevant. async/await will be implemented in all browsers as the de facto way to do async control flow; perhaps even async iterators too. We'll also have widely accepted standard libraries for data and event streams.

Will JavaScript finally gain the status it deserves then? Probably. But at what cost? JavaScript is big enough now that there is less space for new inventions. And it's fun to invent new things and read about other people's inventions! On the other hand, maybe then we'll be able to focus on the stuff we're actually building instead.

(*) Or metaprogramming, but then everyone has to agree on the same metaprogramming. In JS, everyone uses records, and they probably use a tag field to discriminate them already: it's a small step to add types for that.
# ES7 async functions - a step in the wrong direction

Sun Aug 23 2015

Async functions are a new feature scheduled to become a part of ES7. They build on top of previous capabilities made available by ES6 (promises), letting you write async code as though it were synchronous. At the moment, they're a stage 1 proposal for ES7 and supported by babel / regenerator.

When generator functions were first made available in node, I was very excited. Finally, a way to write asynchronous JavaScript that doesn't descend into callback hell! At the time, I was unfamiliar with promises and the language power you get back by simply having async computations be first-class values, so it seemed to me that generators were the best solution available. Turns out, they aren't. And the same limitations apply to async functions.

### Predicates in catch statements

With generators, thrown errors bubble up the function chain until a catch statement is encountered, much like in other languages that support exceptions. On one hand, this is convenient, but on the other, you never know what you're catching once you write a catch statement. JavaScript catch doesn't support any mechanism to filter errors.

This limitation isn't too hard to get around: we can write a function guard

function guard(e, predicate) {
  if (!predicate(e)) throw e;
}

and then use it to e.g. only filter "not found" errors when downloading an image

try {
  await downloadImage(url);
} catch (e) {
  guard(e, e => e.code == 404);
  handle404(...);
}

But that only gets us so far. What if we want to have a second error handler? We must resort to using if-then-else, making sure that we don't forget to rethrow the error at the end

try {
  await downloadImage(url);
} catch (e) {
  if (e.code == 404) { handle404(...) }
  else if (e.code == 401) { handle401(...); }
  else { throw e; }
}

Since promises are a userland library, restrictions like the above do not apply. We can write our own promise implementation that demands the use of a predicate filter:

downloadImage(url)
  .catch(e => e.code == 404, e => { handle404(...); })
  .catch(e => e.code == 401, e => { handle401(...) })

Now if we want all errors to be caught, we have to say it explicitly:

asyncOperation()
  .catch(e => true, e => { handleAllErrors(...) });

Since these constructs are not built-in language features but a DSL built on top of higher order functions, we can impose any restrictions we like instead of waiting on TC39 to fix the language.

### Cannot use higher order functions

Because generators and async-await are shallow, you cannot use yield or await within lambdas passed to higher order functions. This is better explained here - the example given there is

async function renderChapters(urls) {
  urls.map(getJSON).forEach(j => addToPage((await j).html));
}

and will not work, because you're not allowed to use await from within a nested function. The following will work, but will execute in parallel:

async function renderChapters(urls) {
  urls.map(getJSON).forEach(async j => addToPage((await j).html));
}

To understand why, you need to read this article. In short: it's much harder to implement deep coroutines, so browser vendors probably won't do it.

Besides being very unintuitive, this is also limiting. Higher order functions are succinct and powerful, yet we cannot really use them inside async functions. To get sequential execution we have to resort to the clumsy built-in for loops, which often force us into writing ceremonial, stateful code.
### Arrow functions give us more power than ever before

Functional DSLs were very powerful even before JS had short lambda syntax. But with arrow functions, things get even cleaner. The amount of code one needs to write can be reduced greatly thanks to short lambda syntax and higher order functions.

Let's take the motivating example from the async-await proposal

function chainAnimationsPromise(elem, animations) {
  var ret = null;
  var p = currentPromise;
  for (var anim of animations) {
    p = p.then(function(val) {
      ret = val;
      return anim(elem);
    })
  }
  return p.catch(function(e) {
    /* ignore and keep going */
  }).then(function() {
    return ret;
  });
}

With bluebird's Promise.reduce, this becomes

function chainAnimationsPromise(elem, animations) {
  return Promise.reduce(animations,
      (lastVal, anim) => anim(elem).catch(_ => Promise.reject(lastVal)),
      Promise.resolve(null))
    .catch(lastVal => lastVal);
}

In short: functional DSLs are now more powerful than built-in constructs, even though (admittedly) they may take some getting used to.

But this is not why async functions are a step in the wrong direction. The problems above are not unique to async functions. The same problems apply to generators: async functions merely inherit them, as they're very similar. Async functions also go another step backwards.

## Loss of generality and power

Despite their shortcomings, generator-based coroutines have one redeeming quality: they allow you to redefine the coroutine execution engine. This is extremely powerful, and I will demonstrate by giving the following example.

Let's say we were given the task to write the save function for an issue tracker. The issue author can specify the issue's title and text, as well as any other issues that are blocking the solution of the newly entered issue. Our initial implementation is simple:

async function saveIssue(data, blockers) {
  let issue = await Issues.insert(data);
  for (let blockerId of blockers) {
    await BlockerIssues.insert({blocker: blockerId, blocks: issue.id});
  }
}

Issues.insert = async function(data) {
  return db.query("INSERT ... VALUES", data).execWithin(db.pool);
}

BlockerIssue.insert = async function(data) {
  return db.query("INSERT .... VALUES", data).execWithin(db.pool);
}

Issue and BlockerIssues are references to the corresponding tables in an SQL database. Their insert methods return a promise that indicates whether the query has been completed. The query is executed by a connection pool.

But then, we run into a problem. We don't want to partially save the issue if some of the data was not inserted successfully. We want the entire save operation to be atomic. Fortunately, SQL databases support this via transactions, and our database library has a transaction abstraction. So we change our code:

async function saveIssue(data, blockers) {
  let tx = db.beginTransaction();
  let issue = await Issue.insert(tx, data);
  for (let blockerId of blockers) {
    await BlockerIssues.insert(tx, {blocker: blockerId, blocks: issue.id});
  }
}

Issues.insert = async function(tx, data) {
  return db.query("INSERT ... VALUES", data).execWithin(tx);
}

BlockerIssue.insert = async function(tx, data) {
  return db.query("INSERT .... VALUES", data).execWithin(tx);
}

Here, we changed the code in two ways. Firstly, we created a transaction within the saveIssue function. Secondly, we changed both insert methods to take this transaction as an argument.

Immediately we can see that this solution doesn't scale very well. What if we need to use saveIssue as a part of a larger transaction?
Then it has to take a transaction as an argument. Who will create the transactions? The top-level service. What if the top-level service becomes a part of a larger service? Then we need to change the code again.

We can reduce the extent of this problem by writing a base class that automatically initializes a transaction if one is not passed via the constructor, and then have Issues, BlockerIssues etc. inherit from this class.

class Transactionable {
  constructor(tx) {
    this.transaction = tx || db.beginTransaction();
  }
}

class IssueService extends Transactionable {
  async saveIssue(data, blockers) {
    issues = new Issues(this.transaction);
    blockerIssues = new BlockerIssues(this.transaction);
    ...
  }
}

class Issues extends Transactionable { ... }

class BlockerIssues extends Transactionable { ... }

// etc

Like many OO solutions, this only spreads the problem across the plate to make it look smaller, but doesn't solve it.

## Generators are better

Generators let us define the execution engine. The iteration is driven by the function that consumes the generator, which decides what to do with the yielded values. What if, instead of only allowing promises, our engine let us also:

1. Specify additional options which are accessible from within
2. Yield queries: these will be run in the transaction specified in the options above
3. Yield other generator iterables: these will be run with the same engine and options
4. Yield promises: these will be handled normally

Let's take the original code and simplify it:

function* saveIssue(data, blockers) {
  let issue = yield Issues.insert(data);
  for (var blockerId of blockers) {
    yield BlockerIssues.insert({blocker: blockerId, blocks: issue.id});
  }
}

Issues.insert = function* (data) {
  return db.query("INSERT ... VALUES", data)
}

BlockerIssue.insert = function* (data) {
  return db.query("INSERT .... VALUES", data)
}

From our http handler, we can now write

var myengine = require('./my-engine');

app.post('/issues/save', function(req, res) {
  myengine.run(saveIssue(data, blockers), {tx: db.beginTransaction()})
});

Let's implement this engine:

function run(iterator, options) {
  function id(x) { return x; }
  function iterate(value) {
    var next = iterator.next(value)
    var request = next.value;
    var nextAction = next.done ? id : iterate;
    if (isIterator(request)) {
      return run(request, options).then(nextAction)
    } else if (isQuery(request)) {
      return request.execWithin(options.tx).then(nextAction)
    } else if (isPromise(request)) {
      return request.then(nextAction);
    }
  }
  return iterate()
}

The best part of this change is that we did not have to change the original code at all. We didn't have to add the transaction parameter to every function, take care to properly propagate it everywhere, and properly create the transaction. All we needed to do is change our execution engine.

And we can add much more! We can yield a request to get the current user, if any, so we don't have to thread that through our code. In fact, we can implement continuation-local storage with only a few lines of code.

Async generators are often given as a reason why we need async functions. If yield is already being used as await, how can we get both working at the same time without adding a new keyword? Is that even possible? Yes. Here is a simple proof-of-concept: github.com/spion/async-generators. All we needed to do is change the execution engine to support a mechanism to distinguish between awaited and yielded values.

Another example worth exploring is a query optimizer that supports aggregate execution of queries.
If we replace Promise.all with our own implementation called parallel, then we can add support for non-promise arguments. Let's say we have the following code to notify owners of blocked issues in parallel when an issue is resolved:

let blocked = yield BlockerIssues.where({blocker: blockerId})
let owners = yield engine.parallel(blocked.map(issue => issue.getOwner()))

for (let owner of owners) yield owner.notifyResolved(issue)

Instead of returning an SQL-based query, we can have getOwner() return data about the query: {table: 'users', id: issue.user_id}, and have the engine optimize the execution of parallel queries by sending a single query per table rather than per item.

if (isParallelQuery(query)) {
  var results = _(query.items).groupBy('table')
    .map((items, t) => db.query(`select * from ${t} where id in ?`,
items.map(it => it.id))
.execWithin(options.tx)).toArray();
Promise.all(results)
.then(results => results.sort(byOrderOf(query.items)))
.then(runNext)
}
And voila, we've just implemented a query optimizer. It will fetch all issue owners with a single query. If we add an SQL parser into the mix, it should be possible to rewrite real SQL queries.
We can do something similar on the client too with GraphQL queries by aggregating multiple individual queries.
And if we add support for iterators, the optimization becomes deep: we would be able to aggregate queries that are several layers deep within other generator functions. In the above example, getOwner() could be another generator which produces a query for the user as its first result. Our implementation of parallel will run all those getOwner() iterators and consolidate their first queries into a single query. All this is done without those functions knowing anything about it (thus, without breaking modularity).
Async functions can't let us do any of this. All we get is a single execution engine that only knows how to await promises. To make matters worse, thanks to the unfortunately short-sighted recursive thenable assimilation design decision, we can't simply create our own thenable that will support the above extra features. If we try to do that, we will be unable to safely use it with promises. We're stuck with what we get by default in async functions, and that's it.
Generators are JavaScript's programmable semicolons. Let's not take away that power by taking away the programmability. Let's drop async/await and write our own interpreters.
# Why I am switching to promises
Mon Oct 07 2013
I'm switching my node code from callbacks to promises. The reasons aren't merely aesthetical, they're rather practical:
### Throw-catch vs throw-crash
We're all human. We make mistakes, and then JavaScript throws an error. How do callbacks punish that mistake? They crash your process!
But spion, why don't you use domains?
Yes, I could do that. I could crash my process gracefully instead of letting it just crash. But it's still a crash, no matter what lipstick you put on it. It still results in an inoperative worker. With thousands of requests, 0.5% hitting a throwing path means over 50 process shutdowns and most likely denial of service.
And guess what a user that hits an error does? Starts repeatedly refreshing the page, thats what. The horror!
Promises are throw-safe. If an error is thrown in one of the .then callbacks, only that single promise chain will die. I can also attach error or "finally" handlers to do any clean up if necessary - transparently! The process will happily continue to serve the rest of my users.
For more info see #5114 and #5149. To find out how promises can solve this, see bluebird #51
### if (err) return callback(err)
That line is haunting me in my dreams now. What happened to the DRY principle?
I understand that it's important to explicitly handle all errors. But I don't believe it's important to explicitly bubble them up the callback chain. If I don't deal with the error here, that's because I can't deal with the error there - I simply don't have enough context.
But spion, why don't you wrap your callbacks?
I guess I could do that and lose the callback stack when generating a new Error(). Or since I'm already wrapping things, why not wrap the entire thing with promises, rely on longStackSupport, and handle errors at my discretion?
Also, what happened to the DRY principle?
### Promises are now part of ES6
Yes, they will become a part of the language. New DOM APIs will be using them too. jQuery already switched to promise...ish things. Angular utilizes promises everywhere (even in the templates). Ember uses promises. The list goes on.
Browser libraries already switched. I'm switching too.
### Containing Zalgo
Your promise library prevents you from releasing Zalgo. You can't release Zalgo with promises. It's impossible for a promise to result in the release of the Zalgo-beast. Promises are Zalgo-safe (see section 3.1).
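In case the term is unfamiliar: "releasing Zalgo" means writing an API that calls its callback synchronously on some paths and asynchronously on others, so callers can't know in what order their code will run. A small sketch of the difference (the cache and db objects here are hypothetical):

// BAD: synchronous on a cache hit, asynchronous on a miss
function getUserZalgo(id, callback) {
  if (cache[id]) return callback(null, cache[id]); // fires before getUserZalgo returns
  db.query('select ...', [id], callback);          // fires on a later tick
}

// With promises the inconsistency disappears: .then callbacks are always
// invoked asynchronously, even when the promise is already resolved
function getUser(id) {
  if (cache[id]) return Promise.resolve(cache[id]);
  return db.queryAsync('select ...', [id]);
}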
### Callbacks getting called multiple times
Promises solve that too. Once the operation is complete and the promise is resolved (either with a result or with an error), it cannot be resolved again.
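A tiny example of that guarantee, using the standard Promise constructor - only the first settlement counts, and later calls to resolve or reject are simply ignored:

var p = new Promise(function(resolve, reject) {
  resolve(1);
  resolve(2);                 // ignored
  reject(new Error('nope'));  // also ignored
});

p.then(function(value) {
  console.log(value); // logs 1, exactly once
});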
### Promises can do your laundry
Oops, unfortunately, promises wont do that. You still need to do it manually.
## But you said promises are slow!
Yes, I know I wrote that. But I was wrong. A month after I wrote the giant comparison of async patterns, Petka Antonov wrote Bluebird. It's a wicked fast promise library, and here are the charts to prove it:
[Chart: time to complete (ms) vs. number of parallel requests]
[Chart: memory usage (MB) vs. number of parallel requests]
And now, a table containing many patterns, 10 000 parallel requests, 1 ms per I/O op. Measure ALL the things!
file time(ms) memory(MB)
callbacks-original.js 316 34.97
callbacks-flattened.js 335 35.10
callbacks-catcher.js 355 30.20
promises-bluebird-generator.js 364 41.89
dst-streamline.js 441 46.91
callbacks-deferred-queue.js 455 38.10
callbacks-generator-suspend.js 466 45.20
promises-bluebird.js 512 57.45
thunks-generator-gens.js 517 40.29
thunks-generator-co.js 707 47.95
promises-compose-bluebird.js 710 73.11
callbacks-generator-genny.js 801 67.67
callbacks-async-waterfall.js 989 89.97
promises-bluebird-spawn.js 1227 66.98
promises-kew.js 1578 105.14
dst-stratifiedjs-compiled.js 2341 148.24
rx.js 2369 266.59
promises-when.js 7950 240.11
promises-q-generator.js 21828 702.93
promises-q.js 28262 712.93
promises-compose-q.js 59413 778.05
Promises are not slow. At least, not anymore. In fact, bluebird generators are almost as fast as regular callback code (they're also the fastest generators as of now). And bluebird promises are definitely at least two times faster than async.waterfall.
Considering that bluebird wraps the underlying callback-based libraries and makes your own callbacks exception-safe, this is really amazing. async.waterfall doesn't do this - exceptions still crash your process.
## But what about long stack traces?
Bluebird has them behind a flag that slows it down about 5 times. They're even longer than Q's longStackSupport: bluebird can give you the entire event chain. Simply enable the flag in development mode, and you're suddenly in debugging nirvana. It may even be viable to turn them on in production!
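If I remember the API correctly, turning this on in bluebird is a one-liner - do it once, early, and preferably only outside production because of the slowdown mentioned above:

var Promise = require('bluebird');
if (process.env.NODE_ENV !== 'production') {
  Promise.longStackTraces(); // full event-chain stack traces from here on
}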
## But if I use promises, nobody will use my library!
This is a valid point. Mikeal said it: if you write a library based on promises, nobody is going to use it.
However, both bluebird and Q give you promise.nodeify. With it, you can write a library with a dual API that can both take callbacks and return promises:
module.exports = function fetch(itemId, callback) {
return locate(itemId).then(function(location) {
return getFrom(location, itemId);
}).nodeify(callback);
}
And now my library is not imposing promises on you. In fact, my library is even friendlier to the community: if I make a dumb mistake that causes an exception to be thrown in the library, the exception will be passed as an error to your callback instead of crashing your process. Now I don't have to fear the wrath of angry library users expecting zero downtime on their production servers. That's always a plus, right?
To use generators with callbacks you have two options
1. use a resumer style library like suspend or genny
2. wrap callback-taking functions to become thunk returning functions.
Since #1 is proving to be unpopular, and #2 already involves wrapping, why not just s/thunk/promise/g in #2 and use generators with promises?
## But promises are unnecessarily complicated!
Yes, the terminology used to explain promises can often be confusing. But promises themselves are pretty simple - they're basically like lightweight streams for single values.
Here is a straight-forward guide that uses known principles and analogies from node (remember, the focus is on simplicity, not correctness):
Edit (2014-01-07): I decided to re-do this tutorial into a series of short articles called promise nuggets. The content is CC0 so feel free to fork, modify, improve or send pull requests. The old tutorial will remain available within this article.
Promises are objects that have a then method. Unlike node functions, which take a single callback, the then method of a promise can take two callbacks: a success callback and an error callback. When one of these two callbacks returns a value or throws an exception, then must behave in a way that enables stream-like chaining and simplified error handling. Let's explain that behavior of then through examples:
Imagine that node's fs was wrapped to work in this manner. This is pretty easy to do - bluebird already lets you do something like that with promisify(). Then this code:
fs.readFile(file, function(err, res) {
if (err) handleError();
doStuffWith(res);
});
will look like this:
fs.readFile(file).then(function(res) {
doStuffWith(res);
}, function(err) {
handleError();
});
What's going on here? fs.readFile(file) starts a file reading operation. That operation is not yet complete at the point when readFile returns. This means we can't return the file content. But we can still return something: we can return the reading operation itself. And that operation is represented with a promise.
This is sort of like a single-value stream:
net.connect(port).on('data', function(res) {
doStuffWith(res);
}).on('error', function(err) {
});
So far, this doesn't look that different from regular node callbacks - except that you use a second callback for the error (which isn't necessarily better). So when does it get better?
It's better because you can attach the callback later if you want. Remember, fs.readFile(file) returns a promise now, so you can put that in a var, or return it from a function:
var filePromise = fs.readFile(file);
// do more stuff... even nest inside another promise, then
filePromise.then(function(res) { ... });
Yup, the second callback is optional. We're going to see why later.
Okay, that's still not much of an improvement. How about this then? You can attach more than one callback to a promise if you like:
filePromise.then(function(res) { uploadData(url, res); });
filePromise.then(function(res) { saveLocal(url, res); });
Hey, this is beginning to look more and more like streams - they too can be piped to multiple destinations. But unlike streams, you can attach more callbacks and get the value even after the file reading operation completes.
Still not good enough?
What if I told you... that if you return something from inside a .then() callback, then you'll get a promise for that thing on the outside?
Say you want to get a line from a file. Well, you can get a promise for that line instead:
var linePromise = filePromise.then(function(data) {
return data.toString().split('\n')[line];
});
var beginsWithHelloPromise = linePromise.then(function(line) {
return /^hello/.test(line);
});
That's pretty cool, although not terribly useful - we could just put both sync operations in the first .then() callback and be done with it.
But guess what happens when you return a promise from within a .then callback. You get a promise for a promise outside of .then()? Nope, you just get the same promise!
function readProcessAndSave(inPath, outPath) {
  // read the file
  var filePromise = fs.readFile(inPath);
  // then send it to the transform service
  var transformedPromise = filePromise.then(function(content) {
    return service.transform(content);
  });
  // then save the transformed content
  var writeFilePromise = transformedPromise.then(function(transformed) {
    return fs.writeFile(outPath, transformed)
  });
  // return a promise that "succeeds" when the file is saved.
  return writeFilePromise;
}
console.log("Success!");
}, function(err) {
// This function will catch *ALL* errors from the above
// operations including any exceptions thrown inside .then
console.log("Oops, it failed.", err);
});
Now it's easier to understand chaining: at the end of every function passed to a .then() call, simply return a promise.
Let's make our code even shorter:
function readProcessAndSave(file, url, otherPath) {
  return fs.readFile(file)
    .then(service.transform)
    .then(fs.writeFile.bind(fs, otherPath));
}
Mind = blown! Notice how I don't have to manually propagate errors. They will automatically get passed with the returned promise.
What if we want to read, process, then upload, then also save locally?
function readUploadAndSave(file, url, otherPath) {
  var content;
  // read the file and transform it
  return fs.readFile(file)
    .then(service.transform)
    .then(function(vContent) {
      content = vContent;
      // then upload it
      return uploadData(url, content);
    }).then(function() { // after its uploaded
      // save it
      return fs.writeFile(otherPath, content);
    });
}
Or just nest it if you prefer the closure.
function readUploadAndSave(file, url, otherPath) {
  // read the file and transform it
  return fs.readFile(file)
    .then(service.transform)
    .then(function(content) {
      // upload it, and when that is done...
      return uploadData(url, content).then(function() {
        // after its uploaded, save it
        return fs.writeFile(otherPath, content);
      });
    });
}
But hey, you can also upload and save in parallel!
function readUploadAndSave(file, url, otherPath) {
  // read the file and transform it
  return fs.readFile(file)
    .then(service.transform)
    .then(function(content) {
      // create a promise that is done when both the upload
      // and file write are done:
      return Promise.join(
        uploadData(url, content),
        fs.writeFile(otherPath, content));
    });
}
No, these are not "conveniently chosen" functions. Promise code really is that short in practice!
Similarly to how in a stream.pipe chain the last stream is returned, in promise pipes the promise returned from the last .then callback is returned.
That's all you need, really. The rest is just converting callback-taking functions to promise-returning functions and using the stuff above to do your control flow.
You can also return values in case of an error. So for example, to write a readFileOrDefault (which returns a default value if for example the file doesn't exist) you would simply return the default value from the error callback:
function readFileOrDefault(file, defaultContent) {
  return fs.readFile(file).then(function(fileContent) {
    return fileContent;
  }, function(err) {
    return defaultContent;
  });
}
You can also throw exceptions within both callbacks passed to .then. The user of the returned promise can catch those errors by adding the second .then handler
Now how about configFromFileOrDefault that reads and parses a JSON config file, falls back to a default config if the file doesn't exist, but reports JSON parsing errors? Here it is:
function configFromFileOrDefault(file, defaultConfig) {
  // if fs.readFile fails, a default config is returned.
  // if JSON.parse throws, this promise propagates that.
  return fs.readFile(file).then(JSON.parse, function(err) {
    return defaultConfig;
  });
  // if we want to catch JSON.parse errors, we need to chain another
  // .then here - this one only captures errors from fs.readFile(file)
}
Finally, you can make sure your resources are released in all cases, even when an error or exception happens:
var result = doSomethingAsync();
return result.then(function(value) {
// clean up first, then return the value.
return cleanUp().then(function() { return value; })
}, function(err) {
// clean up, then re-throw that error
return cleanUp().then(function() { throw err; });
})
Or you can do the same using .finally (from both Bluebird and Q):
var result = doSomethingAsync();
return result.finally(cleanUp);
The same promise is still returned, but only after cleanUp completes.
Since promises are actual values, most of the tools in async.js become unnecessary and you can just use whatever you're using for regular values, like your regular array.map / array.reduce functions, or just plain for loops. That, and a couple of promise array tools like .all, .spread and .some
files.getLastTwoVersions(filename)
.then(function(items) {
// fetch versions in parallel
var v1 = versions.get(items.last),
v2 = versions.get(items.previous);
return [v1, v2];
})
.spread(function(v1, v2) {
// both of these are now complete.
return diffService.compare(v1.blob, v2.blob)
})
.then(function(diff) {
// voila, diff is ready. Do something with it.
});
async.parallel / async.map are straightforward:
// download all items, then get their names
var pNames = ids.map(function(id) {
return getItem(id).then(function(result) {
return result.name;
});
});
// wait for things to complete:
Promise.all(pNames).then(function(names) {
// we now have all the names.
});
What if you want to wait for the current item to download first (like async.mapSeries and async.series)? That's also pretty straightforward: just wait for the current download to complete, then start the next download, then extract the item name - and that's exactly what you say in the code:
// start with current being an "empty" already-fulfilled promise
var current = Promise.fulfilled();
var namePromises = ids.map(function(id) {
// wait for the current download to complete, then get the next
// item, then extract its name.
current = current
.then(function() { return getItem(id); })
.then(function(item) { return item.name; });
return current;
});
Promise.all(namePromises).then(function(names) {
// use all names here.
});
The only thing that remains is mapLimit - which is a bit harder to write - but still not that hard:
var queued = [], parallel = 3;
var namePromises = ids.map(function(id) {
// The queued, minus those running in parallel, plus one of
// the parallel slots.
var mustComplete = Math.max(0, queued.length - parallel + 1);
// when enough items are complete, queue another request for an item
var download = Promise.some(queued, mustComplete)
.then(function() { return getItem(id); });
queued.push(download);
return download.then(function(item) {
return item.name;
});
});
Promise.all(namePromises).then(function(names) {
// use all names here.
});
That covers most of async.
Early returns are a pattern used throughout both sync and async code. Take this hypothetical sync example:
function getItem(key) {
var item;
// early-return if the item is in the cache.
if (item = cache.get(key)) return item;
// continue to get the item from the database. cache.put returns the item.
item = cache.put(database.get(key));
return item;
}
If we attempt to write this using promises, at first it looks impossible:
function getItem(key) {
return cache.get(key).then(function(item) {
// early-return if the item is in the cache.
if (item) return item;
return database.get(item)
}).then(function(putOrItem) {
// what do we do here to avoid the unnecessary cache.put ?
})
}
How can we solve this?
We solve it by remembering that the callback variant looks like this:
function getItem(key, callback) {
cache.get(key, function(err, res) {
// early-return if the item is in the cache.
if (res) return callback(null, res);
// continue to get the item from the database
database.get(key, function(err, res) {
if (err) return callback(err);
// cache.put calls back with the item
cache.put(key, res, callback);
})
})
}
The promise version can do pretty much the same - just nest the rest of the chain inside the first callback.
function getItem(key) {
return cache.get(key).then(function(res) {
// early return if the item is in the cache
if (res) return res;
// continue the chain within the callback.
return database.get(key)
.then(cache.put);
});
}
Or alternatively, if a cache miss results with an error:
function getItem(key) {
return cache.get(key).catch(function(err) {
return database.get(key).then(cache.put);
});
}
That means that early returns are just as easy as with callbacks, and sometimes even easier (in case of errors)
Promises can work very well with streams. Imagine a limit stream that allows at most 3 promises resolving in parallel, backpressuring otherwise, processing items from leveldb:
originalSublevel.createReadStream().pipe(limit(3, function(data) {
return convertor(data.value).then(function(converted) {
return {key: data.key, value: converted};
});
})).pipe(convertedSublevel.createWriteStream());
Or how about stream pipelines that are safe from errors without attaching error handlers to all of them?
pipeline(original, limiter, converted).then(function(done) {
}, function(streamError) {
})
Looks awesome. I definitely want to explore that.
## The future?
In ES7, promises will become monadic (by getting flatMap and unit). Also, we're going to get generic syntax sugar for monads. Then it truly won't matter what style you use - stream, promise or thunk - as long as it also implements the monad functions. That is, except for callback-passing style - it won't be able to join the party because it doesn't produce values.
I'm just kidding, of course. I don't know if that's going to happen. Either way, promises are useful and practical and will remain useful and practical in the future.
# Closures are unavoidable in node
Fri Aug 23 2013
A couple of weeks ago I wrote a giant comparison of node.js async code patterns that mostly focuses on the new generators feature in EcmaScript 6 (Harmony)
Among other implementations there were two callback versions: original.js, which contains nested callbacks, and flattened.js, which flattens the nesting a little bit. Both make extensive use of JavaScript closures: every time the benchmarked function is invoked, a lot of closures are created.
Then Trevor Norris wrote that we should be avoiding closures when writing performance-sensitive code, hinting that my benchmark may be an example of "doing it wrong"
I decided to try and write two more flattened variants. The idea is to minimize performance loss and memory usage by avoiding the creation of closures.
You can see the code here: flattened-class.js and flattened-noclosure.js
Of course, this made complexity skyrocket. Lets see what it did for performance.
These are the results for 50 000 parallel invocations of the upload function, with simulated I/O operations that always take 1ms. Note: suspend is currently the fastest generator based library
file time(ms) memory(MB)
flattened-class.js 1398 106.58
flattened.js 1453 110.19
flattened-noclosure.js 1574 102.28
original.js 1749 124.96
suspend.js 2701 144.66
No performance gains. Why?
Because this kind of code requires that results from previous callbacks are passed to the next callback. And unfortunately, in node this means creating closures.
There really is no other option. Node core functions only take callback functions. This means we have to create a closure: it's the only mechanism in JS that allows you to include context together with a function.
And yeah, bind also creates a closure:
function bind(fn, ctx) {
return function bound() {
return fn.apply(ctx, arguments);
}
}
Notice how bound is a closure over ctx and fn.
Now, if node core functions were also able to take a context argument, things could have been different. For example, instead of writing:
fs.readFile(f, bind(this.afterFileRead, this));
if we were able to write:
fs.readFile(f, this.afterFileRead, this);
then we would be able to write code that avoids closures and flattened-class.js could have been much faster.
But we can't do that.
What if we could though? Lets fork timers.js from node core and find out:
I added context passing support to the Timeout class. The result was timers-ctx.js which in turn resulted with flattened-class-ctx.js
And here is how it performs:
file time(ms) memory(MB)
flattened-class-ctx.js 929 59.57
flattened-class.js 1403 106.57
flattened.js 1452 110.19
original.js 1743 125.02
suspend.js 2834 145.34
Yeah. That shaved off another couple of hundred milliseconds.
Is it worth it?
name tokens complexity
suspend.js 331 1.10
original.js 425 1.41
flattened.js 477 1.58
flattened-class-ctx.js 674 2.23
Maybe, maybe not. You decide.
Cardinality sanity check
I think this should be rather simple, but adding here before I forget. We could do with adding a sanity check somewhere to throw up a warning/exception/error if a polynomial is defined with the cardinality beyond a certain threshold. This will prevent the user from sitting there forever, as well as preventing the runtime freezing.
This should certainly be implemented for the tensor-grid basis since the cardinality explodes with the number of parameters there, and it is easy to determine the cardinality a priori, i.e. see below:
d = 50
p = 2
L = (p+1)**d
print('%.2e'%L)
params = [Parameter(distribution='uniform', lower=-1, upper=1, order=p) for j in range(d)]
basis = Basis('tensor-grid')
poly = Poly(params,basis) # TODO - check within this call if L > L_allowed
print(poly.basis.get_cardinality())
This may require more thought for some of the other basis options…
As someone who often accidentally crashes their computer because of forgetting to change basis I’d very much welcome this!
Glad to know it isn’t just me! We’re releasing a new version very soon so I shall ensure this functionality is incorporated
Hi all, I have now added functionality for this into the latest develop branch. It turned out to be a little more challenging than first envisioned since even getting to the point where we can call basis.get_cardinality() may be computationally intractable if the dimensions/polynomial orders are too high. My solution is to add two cardinality limits:
1. A hard limit CARD_LIMIT_HARD = int(1e6) in basis.py. This checks the cardinality as the selected basis is being constructed, and raises an exception if the limit is reached. This limit is quite large, even though you wouldn’t want to construct a polynomial from such a large basis, since a number of the basis types prune/subsample from an initial total-order or tensor-grid basis. So we often need to set quite a large initial basis before pruning it down later.
2. A soft limit CARD_LIMIT_SOFT = 50e3 is then enforced in poly.py. This raises an exception if a Poly object is defined with a basis cardinality over this limit. Running Poly.set_model() with a cardinality over this limit often leads to rather long compute times. The limit is set as a soft limit, overridable with override_cardinality=True, since the user may find longer run times acceptable, and may wish to incorporate their own limit into their workflow.
Below are two examples. Firstly, an example with the hard limit reached when constructing the Basis. The basis here would have cardinality=6.533\times 10^{77}, enough to freeze the runtime. Thankfully an exception will now be raised before it gets to this.
# Define d=100 parameters of order 5
d = 100
params = []
for j in range(d):
params.append(eq.Parameter(distribution='uniform',lower=-1,upper=1,order=5))
# Set tensor-grid basis. Resulting cardinality=6.533e77
orders = [param.order for param in params]
basis=eq.Basis('tensor-grid',orders=orders)
Secondly, an example with cardinality= 176.9\times 10^3. This is OK to construct a basis for, but leads to set_model taking a rather long time. If this is acceptable to you, and you really do want this number of dimensions and polynomial orders, you can override the soft limit (as is done below).
# Define d=100 parameters of order 3
d = 100
params = []
for j in range(d):
params.append(eq.Parameter(distribution='uniform',lower=-1,upper=1,order=3))
# Set total-order basis. Resulting cardinality= 176851
orders = [param.order for param in params]
basis=eq.Basis('total-order',orders=orders)
# Define data and poly
X = np.random.uniform(-1,1,size=(1000,d))
Y = X[:,0]**2
poly = eq.Poly(params,basis,method='least-squares',
sampling_args={'sample-points':X,'sample-outputs':Y},
override_cardinality=True)
Note: If orders isn’t given in the Basis definition, the 1st limit in basis.py will not be enforced until the Basis is passed to the Poly object.
Hi @psesh and @Nick , what do you think about the above? Re the values for the two limits and the general strategy…
I will mark as solved and push the changes if you think it is sensible.
This functionality has now been merged into version 9.1.0, so I am marking as solved.
Finding kth term of a sequence
1. Nov 19, 2007
becca4
1. The problem statement, all variables and given/known data
See picture
It's (sigma) from k (possibly n?)=1 to +infinity of $$U_{k}$$ = 2 - (1/n)
$$U_{100}$$
limit as k goes to infinity of $$U_{k}$$
and sigma from k=1 to inf. of $$U_{k}$$
2. Relevant equations
If I'm not mistaken, 2 - 1/n is the closed form for the sum, right? I'm just not sure where to go from there...
3. The attempt at a solution
None. :(
Attached file: scan0015a.jpg (6 KB)
Last edited: Nov 19, 2007
2. Nov 19, 2007
EnumaElish
Is the series $\sum_{k=1}^\infty(2-1/n)$? Or is it $\sum_{k=1}^\infty U_k$ = 2-1/n ?
3. Nov 19, 2007
becca4
the latter. Thanks!!
# Chapter 5 - Review Exercises - Page 262: 17
The proof is in two-column form, with each statement followed by its reason:
1. ABCD is a parallelogram, and DB intersects AE at point F. (Given)
2. Angle ADC is congruent to angle CBA. (Opposite angles of a parallelogram are congruent.)
3. Angle EDF is congruent to angle ABF. (From (2) and the fact that DB cuts the two angles the same way on both sides, it follows that the two are congruent.)
4. Angle AFB is congruent to angle DFE. (Vertical angles theorem)
5. Triangle DFE is similar to triangle AFB. (AA similarity theorem)
6. $\frac{AF}{EF} = \frac{AB}{DE}$. (CPSSTP)
On these pages I am investigating the evolution of interactions in the MacArthur/Levins resource competition model.
This page explains the adaptive dynamics concepts and questions I'm using, and includes Sage source code for the models.
Child pages use this theory and code to investigate specific cases of the models:
## MacArthur-Levins population dynamics
$\frac{dX_{i}}{dt}=b_{i}X_{i}(\sum _{\ell}c_{{i\ell}}w_{\ell}R_{\ell}-m_{i})$
$\frac{dR_{\ell}}{dt}=r_{\ell}(K_{\ell}-R_{\ell})-\sum _{i}c_{{i\ell}}X_{i}$
where $X_{i}$ is the density of population $i$, $R_{\ell}$ is the abundance of resource $\ell$ and the parameters are $b_{i}$, an intrinsic population growth rate, $m_{i}$, mortality rate, $c_{{i\ell}}$, the rate at which population $i$ captures resource $\ell$, $w_{\ell}$, the amount a unit of resource $\ell$ contributes to population growth, $r_{\ell}$, the resupply rate of resource $\ell$, and $K_{\ell}$ its maximum possible abundance.
This is a fine model on its own, but I am interested in Lotka-Volterra models, which are models in which there is a simple number $a_{{ij}}$ describing how populations $i$ and $j$ interact with each other. Using a Lotka-Volterra model will let us look at how these interaction terms $a_{{ij}}$ change, telling us how evolution drives these populations to become more competitive, antagonistic, or mutualistic.
In the above model the populations $i$ interact indirectly by taking resources from each other, but we can make the interactions direct by making simplifying assumptions, and this is what MacArthur and Levins did.
We do this by assuming that the resources come to equilibrium very quickly compared to the populations. Under this assumption, we can hold the population sizes $X_{i}$ fixed and solve the second equation above for $R_{\ell}$ when $\frac{dR_{\ell}}{dt}=0$:
$\hat{R}_{\ell}=K_{\ell}-\frac{1}{r_{\ell}}\sum _{i}c_{{i\ell}}X_{i}$.
Then we simply use that value of $R_{\ell}$ in the first equation, and we have a system of population sizes only:
$\frac{dX_{i}}{dt}=b_{i}X_{i}(\sum _{\ell}c_{{i\ell}}w_{\ell}(K_{\ell}-\frac{1}{r_{\ell}}\sum _{j}c_{{j\ell}}X_{j})-m_{i})$.
We can rearrange the terms of this to have the standard Lotka-Volterra form:
$\frac{dX_{i}}{dt}=k_{i}X_{i}+\sum _{j}a_{{ij}}X_{i}X_{j}$,
where
$k_{i}=b_{i}(\sum _{\ell}c_{{i\ell}}w_{\ell}K_{\ell}-m_{i})$ (ordinarily I'd call this $r_{i}$, but the name is already in use),
$a_{{ij}}=-b_{i}\sum _{\ell}\frac{c_{{i\ell}}c_{{j\ell}}w_{\ell}}{r_{\ell}}$.
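To check where these coefficients come from, substitute $\hat{R}_{\ell}$ into the growth equation and expand - this is just the algebra behind the rearrangement above, grouping the terms that are linear in $X_{i}$ separately from those that are quadratic in the population sizes:
$\frac{dX_{i}}{dt}=b_{i}X_{i}\left(\sum _{\ell}c_{{i\ell}}w_{\ell}K_{\ell}-m_{i}\right)-\sum _{j}\left(b_{i}\sum _{\ell}\frac{c_{{i\ell}}c_{{j\ell}}w_{\ell}}{r_{\ell}}\right)X_{i}X_{j}=k_{i}X_{i}+\sum _{j}a_{{ij}}X_{i}X_{j}$.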
I'll be interested in how the interaction terms $a_{{ij}}$ change as the populations coevolve, because this expresses whether competition becomes stronger or weaker. When two coevolving populations compete for resources, we expect them to differentiate from each other and lessen the competition, but I'd like to look at both that and other cases, and do some detailed analysis of when the competition lessens and when it intensifies.
## Adaptive dynamics in the MacArthur-Levins model
And now here's where we look at how that single population evolves in the Mac-Lev model. I won't review the details of how adaptive dynamics is done, but in summary: suppose that certain aspects of the population dynamics depend on the characteristics of the population, and those characteristics are able to mutate. Then over time, mutants will arise, and if they are better able to thrive in the environment they encounter than their forefathers, they will gradually replace them, and the characteristics of the population will slowly change.
In this model, it's $b_{i}$, $c_{{i\ell}}$, and $m_{i}$ that have to do with the populations, so we introduce a phenotype variable $u_{i}$ representing the characteristics of the population and suppose that $b_{i}$, $c_{{i\ell}}$ and $m_{i}$ are determined by the value of $u$. Then we can find out how $u$ changes in time in response to the conditions created by the population dynamics, and also how the other values such as $X_{i}$, $R_{\ell}$, $a_{{ij}}$, $k_{i}$, change as $u$ evolves.
The change in $u$, to make a long story short, is driven by how the population growth rate of a rare mutant varies with the mutant's phenotype $u$. That is, there's an "invasion speed" $\mathcal{I}$ that is closely related to the population growth rate $\frac{dX_{i}}{dt}$, and the change in $u$ (that is, $\frac{du}{dt}$) is in proportion to $\frac{\partial\mathcal{I}}{\partial u}$.
To be precise:
$\mathcal{I}(u_{i}|E)=\lim _{{X_{i}\to 0}}\frac{1}{X_{i}}\frac{dX_{i}}{dt}$
defines the invasion speed (where $E$ is the environment that an individual population $i$ experiences, including the rest of population $i$ and all other populations), and then
$\frac{du_{i}}{dt}=\left.\gamma\hat{X}_{i}\frac{\partial\mathcal{I}(v|u_{1},\ldots,u_{n})}{\partial v}\right|_{{v=u_{i}}}$,
where $\gamma$ is a coefficient accounting for the frequency and size of available mutations. In simple cases, it's a constant, while when available mutations are not quite so simple and uniform, it may not be (as we will see).
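To make the recipe concrete, here is a rough single-population sketch in plain Python: the trait dependence $c_{\ell}(u)$ is an invented toy example (not from the model above), the selection gradient is taken by finite differences, and $\frac{du}{dt}=\gamma\hat{X}\,\partial_{1}\mathcal{I}$ is stepped with Euler's method:

```python
import numpy as np

# Toy trait dependence (assumed for illustration): the trait u shifts how a
# single population splits its capture effort between two resources.
w = np.array([1.0, 1.0]); r = np.array([1.0, 1.0]); K = np.array([2.0, 3.0])
b, m = 1.0, 0.5

def c_of_u(u):
    return np.array([np.cos(u) ** 2, np.sin(u) ** 2])   # capture rates c_l(u)

def resident_equilibrium(u):
    """Equilibrium density and resource levels for a single resident population."""
    c = c_of_u(u)
    # dX/dt = 0 with R_l = K_l - c_l X / r_l gives a linear equation for X.
    X = (c @ (w * K) - m) / (c @ (w * c / r))
    R = K - c * X / r
    return X, R

def invasion_fitness(v, u):
    """I(v | u): per-capita growth of a rare v-mutant in the resident-u environment."""
    _, R = resident_equilibrium(u)
    return b * (c_of_u(v) @ (w * R) - m)

def du_dt(u, gamma=0.01, h=1e-6):
    X_hat, _ = resident_equilibrium(u)
    dI_dv = (invasion_fitness(u + h, u) - invasion_fitness(u - h, u)) / (2 * h)
    return gamma * X_hat * dI_dv

# Crude Euler integration of the canonical equation
u = 0.3
for _ in range(2000):
    u += 0.1 * du_dt(u)
print(u)   # u has climbed the local selection gradient
```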
## Adaptive geometry of ecological parameters
Now let's go deeper into what happens in that evolutionary change. We're considering several different ways to describe the evolving population, and now I want to give them clearer notation:

- $\mathbf{u}_{i}$, the heritable phenotype of population $i$;
- $\mathbf{p}(\mathbf{u}_{i})$, the vector of ecological parameters ($b_{i}$, $m_{i}$, and the $c_{{i\ell}}$) determined by that phenotype;
- $\mathbf{A}_{i}$, the vector of Lotka-Volterra terms ($a_{{ij}}$ and $k_{i}$) experienced by population $i$.

These are vectors that are functions of one another. Using a suitable vector notation in our calculus, we can write the adaptive dynamics of $\mathbf{u}$ directly, ignoring the intermediate variables $\mathbf{p}$ and $\mathbf{A}$ for the moment:
$\frac{d\mathbf{u}_{i}}{dt}=\left.\gamma\hat{X}_{i}\frac{\partial\mathcal{I}(\mathbf{v}|\mathbf{u}_{1},\ldots,\mathbf{u}_{n})}{\partial\mathbf{v}}\right|_{{\mathbf{v}=\mathbf{u}_{i}}}=\gamma\hat{X}_{i}\partial _{1}\mathcal{I}(\mathbf{u}_{i})^{T}$.
The $T$ sign is for vector or matrix transposition: with the notation I'm using, $\mathbf{u}_{i}$ is a column vector - a point in a vector space of values - and $\frac{\partial\mathcal{I}}{\partial\mathbf{u}_{i}}$ is a row vector - a thing that operates on points. Since $\frac{d\mathbf{u}_{i}}{dt}$ is a column, we need to transpose the derivative of $\mathcal{I}$ to make it match. The $\partial _{1}$ sign is for the partial derivative with respect to the first argument.
Now we can use the chain rule to see how $\mathbf{p}$ changes:
$\frac{d\mathbf{p}(\mathbf{u}_{i})}{dt}=\frac{\partial\mathbf{p}}{\partial\mathbf{u}_{i}}\frac{d\mathbf{u}_{i}}{dt}$
$=\gamma\hat{X}_{i}\frac{\partial\mathbf{p}}{\partial\mathbf{u}_{i}}\partial _{1}\mathcal{I}(\mathbf{u}_{i})^{T}$
$=\gamma\hat{X}_{i}\frac{\partial\mathbf{p}}{\partial\mathbf{u}_{i}}(\partial _{1}\mathcal{I}(\mathbf{p}(\mathbf{u}_{i}))\frac{d\mathbf{p}}{d\mathbf{u}_{i}})^{T}$
$=\gamma\hat{X}_{i}\frac{\partial\mathbf{p}}{\partial\mathbf{u}_{i}}\frac{\partial\mathbf{p}}{\partial\mathbf{u}_{i}}^{T}\partial _{1}\mathcal{I}(\mathbf{p}(\mathbf{u}_{i}))^{T}$.
Here we see the usefulness of this row-and-column-vector notation, because it lets us use the chain rule in a natural way: when we write $\frac{d\mathbf{p}(\mathbf{u}_{i})}{dt}=\frac{\partial\mathbf{p}}{\partial\mathbf{u}_{i}}\frac{d\mathbf{u}_{i}}{dt}$, we're multiplying a matrix by a column vector to get another column vector, in just the way we want.
I like to write $S(\mathbf{u})=\partial _{1}\mathcal{I}(\mathbf{u})^{T}$ for the "selection gradient" of $\mathbf{u}$ -- the direction of increasing fitness in the space of possible $\mathbf{u}$ values. I'm very interested in the role of this selection gradient in various spaces, as we'll see. Notice that in the derivation above I switched from $S(\mathbf{u}_{i})=\left(\frac{\partial\mathcal{I}(\mathbf{v}|\mathbf{u}_{1},\ldots,\mathbf{u}_{n})}{\partial\mathbf{v}}\right)_{{\mathbf{v}=\mathbf{u}_{i}}}$ to
$S(\mathbf{p}(\mathbf{u}_{i}))=\left(\begin{array}[]{c}\frac{\partial\mathcal{I}(\mathbf{v}|\mathbf{u}_{1},\ldots,\mathbf{u}_{n})}{\partial b(\mathbf{v})}\\ \vdots\\ \frac{\partial\mathcal{I}(\mathbf{v}|\mathbf{u}_{1},\ldots,\mathbf{u}_{n})}{\partial c_{{im}}(\mathbf{v})}\end{array}\right)_{{\mathbf{v}=\mathbf{u}_{i}}}$.
These different selection gradients are very different objects: they're vectors expressing the direction of selection in different spaces -- here, the space of $\mathbf{u}$ values vs. the space of $\mathbf{p}$ values. Soon enough we'll also be looking at the selection gradient in $\mathbf{A}$ space.
Using the above chain rule manipulations, we can write
$\frac{d\mathbf{u}_{i}}{dt}=\gamma\hat{X}_{i}S(\mathbf{u}_{i})$
$\frac{d\mathbf{p}(\mathbf{u}_{i})}{dt}=\gamma\hat{X}_{i}\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\frac{\partial\mathbf{p}}{\partial\mathbf{u}}^{T}S(\mathbf{p})$.
Like $\mathbf{u}$ or whatever other vector, $\mathbf{p}$ would evolve in the direction of $S(\mathbf{p})$ if it were to mutate in all directions equally. However, it doesn't do that: since $\mathbf{p}$ is a function of $\mathbf{u}$, it only mutates in directions given by mutations in $\mathbf{u}$. The matrix multiplying $S(\mathbf{p})$ in the dynamics of $\mathbf{p}$ is a projection matrix, restricting the motion of $\mathbf{p}$ to directions allowed by the mapping between $\mathbf{u}$ and $\mathbf{p}$. (To be precise, $\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\frac{\partial\mathbf{p}}{\partial\mathbf{u}}^{T}S(\mathbf{p})$ is not precisely the projection of $S(\mathbf{p})$ onto the subspace parametrized by $\mathbf{u}$ unless the columns of $\frac{\partial\mathbf{p}}{\partial\mathbf{u}}$ are unit-length; otherwise there's some scaling by positive numbers involved. But it definitely transforms the vector into a direction allowed by the restriction to points parametrized by $\mathbf{u}$.)
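A tiny numerical illustration of this point (toy numbers, one-dimensional $\mathbf{u}$ and three-dimensional $\mathbf{p}$): the product $\frac{\partial\mathbf{p}}{\partial\mathbf{u}}\frac{\partial\mathbf{p}}{\partial\mathbf{u}}^{T}S(\mathbf{p})$ always points along the single direction in which mutations in $\mathbf{u}$ can move $\mathbf{p}$, even though $S(\mathbf{p})$ itself does not:

```python
import numpy as np

# Toy values (assumed): p is 3-dimensional, u is 1-dimensional, so mutations
# can only move p along the single column of J = dp/du.
J = np.array([[1.0], [0.5], [-2.0]])    # dp/du, a 3x1 matrix
S_p = np.array([0.3, -1.0, 0.7])        # selection gradient in p-space

dp_dt = J @ J.T @ S_p                   # motion of p (up to the factor gamma * X_hat)

print(dp_dt)
print(np.cross(dp_dt, J.ravel()))       # ~ [0, 0, 0]: dp_dt lies along J
```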
The motion of $\mathbf{A}$ is the same way, except that it is influenced by all the populations in the system, not just one, because two phenotypes are involved in each $a_{{ij}}$ value. For that reason, the dynamics of $\mathbf{A}$ has some extra terms...
But before we go into that, let's look at the selection gradient and the dynamics of $\mathbf{p}(\mathbf{u})$.
## Adaptive geometry of interaction terms
Now, let's get to how $a_{{ij}}$ and $k_{i}$ "would like to evolve" and why they move in the directions they do.
Expanding out the dynamics of the $\mathbf{A}$ vector is more involved than doing it for the $\mathbf{p}$ vector, because $\mathbf{A}$ depends on all the different phenotypes $\mathbf{u}_{j}$, not just the one we're concerned with. Also, we have to keep careful track of $\mathbf{u}_{i}$, because it appears in $\mathbf{A}$ in two different ways: on the left-hand side of each $\mathbf{a}_{{ij}}=a(\mathbf{u}_{i},\mathbf{u}_{j})$ term, which describes how population $i$ (the "patient") is affected by an encounter with population $j$ (the "agent"); and also on the right-hand side of the $a_{{ii}}=a(\mathbf{u}_{i},\mathbf{u}_{i})$ term, as the "agent" in an encounter between $i$ and $i$. This distinction is important because every encounter has two effects, the effect on oneself and the effect on the other, and we'll see that selection treats these two effects very differently.
So to be clear, we'll work with the notation
$\mathbf{A}_{i}=\left.\mathbf{A}(\mathbf{v},\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\right|_{{\mathbf{v}=\mathbf{u}_{i}}}$ $=\left(\begin{array}[]{c}\mathbf{a}(\mathbf{v},\mathbf{u}_{1})\\ \vdots\\ \mathbf{a}(\mathbf{v},\mathbf{u}_{n})\\ k(\mathbf{v})\end{array}\right)_{{\mathbf{v}=\mathbf{u}_{i}}}$,
to distinguish the two different roles of $\mathbf{u}_{i}$ (suppressing the intermediate variable $\mathbf{p}$), and we'll refer to the two different partial derivatives that relate to $\mathbf{u}_{i}$ as $\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})=\left.\frac{\partial\mathbf{A}_{i}}{\partial\mathbf{v}}\right|_{{\mathbf{v}=\mathbf{u}_{i}}}$ and $\partial _{2}\mathbf{A}_{i}(\mathbf{u}_{i})=\left.\frac{\partial\mathbf{A}_{i}}{\partial\mathbf{u}_{i}}\right|_{{\mathbf{v}=\mathbf{u}_{i}}}$.
So first of all, if we use $\mathbf{A}_{i}$ as an intermediate variable, the dynamics of $\mathbf{u}_{i}$ is
$\frac{d\mathbf{u}_{i}}{dt}=\gamma\hat{X}_{i}\partial _{1}\mathcal{I}(\mathbf{u}_{i})^{T}$
$=\gamma\hat{X}_{i}(\partial _{1}\mathcal{I}(\mathbf{A}_{i})\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i}))^{T}$
$=\gamma\hat{X}_{i}\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})^{T}\partial _{1}\mathcal{I}(\mathbf{A}_{i})^{T}$.
Then
$\frac{d\mathbf{A}_{i}}{dt}=\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})\frac{d\mathbf{u}_{i}}{dt}+\sum _{{j=1}}^{n}\partial _{2}\mathbf{A}_{i}(\mathbf{u}_{j})\frac{d\mathbf{u}_{j}}{dt}$
$=\gamma\hat{X}_{i}\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})^{T}\partial _{1}\mathcal{I}(\mathbf{A}_{i})^{T}+\sum _{{j=1}}^{n}\gamma\hat{X}_{j}\partial _{2}\mathbf{A}_{i}(\mathbf{u}_{j})\partial _{1}\mathbf{A}_{j}(\mathbf{u}_{j})^{T}\partial _{1}\mathcal{I}(\mathbf{A}_{j})^{T}$
$=\gamma\hat{X}_{i}\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})^{T}S(\mathbf{A}_{i})+\sum _{{j=1}}^{n}\gamma\hat{X}_{j}\partial _{2}\mathbf{A}_{i}(\mathbf{u}_{j})\partial _{1}\mathbf{A}_{j}(\mathbf{u}_{j})^{T}S(\mathbf{A}_{j})$.
This expression is made up of two somewhat complex terms. The first term, the $S(\mathbf{A}_{i})$ term, expresses the change in the interactions experienced by population $i$ due to change in population $i$ in its patient role. I refer to this as the direct effect of selection on population $i$. The second term, the sum of $S(\mathbf{A}_{j})$ vectors, expresses the change in population $i$'s interactions due to change in all the different agents it encounters, including population $i$ itself in its agent role. I refer to this as an indirect effect of selection on the various populations.
What would happen if there were no constraints on $\mathbf{A}_{i}$, that is, if all its terms could mutate independently without being constrained by their dependence on the $\mathbf{u}$ variables? In this case, we would simply have
$\frac{d\mathbf{A}_{i}}{dt}=\gamma\hat{X}_{i}S(\mathbf{A}_{i})$.
In the constrained case, as with $\mathbf{p}$, the motion due to $S(\mathbf{A}_{i})$ is modified by a transformation matrix. Here, though, there's also another term dealing with the effect of changing $\mathbf{u}$ values as the second argument to $a(\mathbf{u}_{i},\mathbf{u}_{j})$. Let's expand that out one step further:
$\frac{d\mathbf{A}_{i}}{dt}=\gamma\hat{X}_{i}\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})\partial _{1}\mathbf{A}_{i}(\mathbf{u}_{i})^{T}S(\mathbf{A}_{i})+\sum _{{j=1}}^{n}\gamma\hat{X}_{j}\partial _{2}\mathbf{A}_{i}(\mathbf{u}_{j})\partial _{1}\mathbf{A}_{j}(\mathbf{u}_{j})^{T}S(\mathbf{A}_{j})$
$=D(\mathbf{u}_{i})+I(\mathbf{u}_{i}|\mathbf{u}_{1},\ldots,\mathbf{u}_{n})$.
Here I'm distinguishing three vector quantities, which I'll plot separately later: the total change $\frac{d\mathbf{A}_{i}}{dt}$, the direct effect $D(\mathbf{u}_{i})$ of selection on population $i$ itself, and the indirect effect $I(\mathbf{u}_{i}|\mathbf{u}_{1},\ldots,\mathbf{u}_{n})$ of selection on all the populations it interacts with.
## Supplementary materials
Here are the Sage classes that do the work for the Mac-Lev models above. They use generalized Sage machinery that's stored at SageDynamics.
Now here's the MacArthur-Levins resource competition model.
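Here is a rough symbolic sketch of that model (a stand-in for the Sage classes, written in plain SymPy with my own variable names): it sets up the equations, eliminates the resources, and reads off the Lotka-Volterra coefficients $k_{i}$ and $a_{{ij}}$.

```python
import sympy as sp

n, q = 2, 2   # number of populations and of resources
X = [sp.Symbol(f'X_{i}', positive=True) for i in range(1, n + 1)]
R = [sp.Symbol(f'R_{l}') for l in range(1, q + 1)]
b = [sp.Symbol(f'b_{i}') for i in range(1, n + 1)]
m = [sp.Symbol(f'm_{i}') for i in range(1, n + 1)]
w = [sp.Symbol(f'w_{l}') for l in range(1, q + 1)]
r = [sp.Symbol(f'r_{l}', positive=True) for l in range(1, q + 1)]
K = [sp.Symbol(f'K_{l}') for l in range(1, q + 1)]
c = [[sp.Symbol(f'c_{i}{l}') for l in range(1, q + 1)] for i in range(1, n + 1)]

# Full MacArthur-Levins system
dX = [b[i] * X[i] * (sum(c[i][l] * w[l] * R[l] for l in range(q)) - m[i])
      for i in range(n)]
dR = [r[l] * (K[l] - R[l]) - sum(c[i][l] * X[i] for i in range(n))
      for l in range(q)]

# Fast-resource limit: solve dR_l/dt = 0 for R_l and substitute into dX_i/dt
R_hat = {R[l]: sp.solve(sp.Eq(dR[l], 0), R[l])[0] for l in range(q)}
dX_reduced = [sp.expand(dXi.subs(R_hat)) for dXi in dX]

# Read off the Lotka-Volterra coefficients: dX_i/dt = k_i X_i + sum_j a_ij X_i X_j
for i in range(n):
    poly = sp.Poly(dX_reduced[i], *X)
    print(f'k_{i + 1} =', sp.simplify(poly.coeff_monomial(X[i])))
    for j in range(n):
        monomial = X[i] ** 2 if i == j else X[i] * X[j]
        print(f'a_{i + 1}{j + 1} =', sp.simplify(poly.coeff_monomial(monomial)))
```

Running it reproduces the formulas above: $k_{i}=b_{i}(\sum_{\ell}c_{{i\ell}}w_{\ell}K_{\ell}-m_{i})$ and $a_{{ij}}=-b_{i}\sum_{\ell}c_{{i\ell}}c_{{j\ell}}w_{\ell}/r_{\ell}$.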
|
# Why is water a liquid at room temperature and pressure? Please be concise.
And the propensity for hydrogen bonding results from the polarity of the water molecule, which we could represent as ${}^{\delta+}\text{H}-\overset{\delta-}{\text{O}}-\text{H}^{\delta+}$ - in solution these dipoles align appropriately, and this is a potent intermolecular force.
You should compare the normal boiling point of water to that of other small molecules: $HF$, $HCl$, $NH_3$, $CH_4$, etc. What do you find?
|
Pologirl19 What is the sum of the geometric series? one year ago
1. Pologirl19
$\sum_{x=1}^{10}6(2)^x$
2. Pologirl19
3. Spacelimbus
$\Large s_n=a_1\left( \frac{1-q^n}{1-q}\right)$
4. Pologirl19
I dont know what to do
5. Spacelimbus
Basically you plugin the values: $\Large q=2, a_1=6, n=10$
6. Pologirl19
Okay thank you so much
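For reference, a quick numerical check of this sum in Python. Since the series as written starts at $x=1$, its first term is $6\cdot 2^1 = 12$, so $a_1 = 12$ when using the formula above:

```python
direct = sum(6 * 2**x for x in range(1, 11))   # sum_{x=1}^{10} 6 * 2^x
closed = 12 * (1 - 2**10) // (1 - 2)           # s_n = a_1 (1 - q^n) / (1 - q) with a_1 = 12, q = 2, n = 10
print(direct, closed)                          # both print 12276
```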
|
# High-frequency estimates on boundary integral operators for the Helmholtz exterior Neumann problem
### Abstract
We study a commonly-used second-kind boundary-integral equation for solving the Helmholtz exterior Neumann problem at high frequency, namely the Regularized Combined Field Integral Equation (RCFIE) [1]. Writing $\Gamma$ for the boundary of the obstacle, this integral operator maps $L^2(\Gamma)$ to itself, contrary to its non-regularized version.
We prove new frequency-explicit bounds on the norms of both the RCFIE and its inverse. The bounds on the norm are valid for piecewise-smooth $\Gamma$ and are sharp, and the bounds on the norm of the inverse are valid for smooth $\Gamma$ and are observed to be sharp at least when $\Gamma$ is curved.
Together, these results give bounds on the condition number of the operator on $L^2(\Gamma)$; this is the first time $L^2(\Gamma)$ condition-number bounds have been proved for this operator for obstacles other than balls [2].
1. Bruno and T. Elling and C. Turc, Regularized integral equations and fast high-order solvers for sound-hard acoustic scattering problems. International Journal for Numerical Methods in Engineering, 2012. ↩︎
2. Y. Boubendir and C. Turc, Wave-number estimates for regularized combined field boundary integral operators in acoustic scattering problems with Neumann boundary conditions. IMA Journal of Numerical Analysis, 2013 ↩︎
Date
October 7, 2021
Location
Nice, France
|
On growth of homology torsion in amenable groups
Kar, A
Kropholler, P
Nikolov, N
14 July 2016
Journal:
Mathematical Proceedings of the Cambridge Philosophical Society
Last Updated:
2021-04-29T07:59:40.687+01:00
Issue:
2
Volume:
162
DOI:
10.1017/S030500411600058X
Pages:
337-351
abstract:
Suppose an amenable group $G$ is acting freely on a simply connected simplicial complex $\tilde X$ with compact quotient $X$. Fix $n \geq 1$, assume $H_n(\tilde X, \mathbb{Z})=0$ and let $(H_i)$ be a Farber chain in $G$. We prove that the torsion of the integral homology in dimension $n$ of $\tilde{X}/H_i$ grows subexponentially in $[G:H_i]$. This fails if $X$ is not compact. We provide the first examples of amenable groups for which torsion in homology grows faster than any given function. These examples include some solvable groups of derived length 3 which is the minimal possible.
527431
Submitted
Journal Article
|
Edit on GitHub
# Last to Cross the Finish Line: Part One
Recently, my colleague +Fred Sauer and I gave a tech talk called "Last Across the Finish Line: Asynchronous Tasks with App Engine". This is part one in a three part series where I will share our learnings and give some helpful references to the App Engine documentation.
## Intro
Before I dive in, a quick overview of our approach:
• "Fan out; Fan in" First spread tasks over independent workers; then gather the results back together
• Use task queues to perform background work in parallel
• Can respond quickly to the client, making UI more responsive
• Operate asynchronously when individual tasks can be executed independently, hence can be run concurrently
• If tasks are too work intensive to run synchronously, (attempt to) break work into small independent pieces
• Break work into smaller tasks, for example:
• rendering media (sounds, images, video)
• retrieving and parsing data from an external service (Google Drive, Cloud Storage, GitHub, ...)
• Keep track of all workers; notify client when work is complete
Before talking about the sample, let's check it out in action:
We are randomly generating a color in a worker and sending it back to the client to fill in a square in the "quilt". (Thanks to +Iein Valdez for this term.) In this example, think of each square as a (most likely more complex) compute task.
## Application Overview
The application has a simple structure:
gae-last-across-the-finish-line/
|-- app.yaml
|-- display.py
|-- main.py
|-- models.py
+-- templates/
+-- main.html
We'll inspect each of the Python modules display.py, main.py and models.py individually and explore how they interact with one another. In addition to this, we'll briefly inspect the HTML and Javascript contained in the template main.html, to understand how the workers pass messages back to the client.
In this post, I will explain the actual background work we did and briefly touch on the methods for communicating with the client, but won't get into client side code or the generic code for running the workers and watching them all as they cross the finish line. In the second post, we'll examine the client side code and in the third, we'll discuss the models that orchestrate the work.
## Workers
These worker methods are defined in display.py. To generate the random colors, we simply choose a hexadecimal digit six different times and throw a # on at the beginning:
```python
import random

HEX_DIGITS = '0123456789ABCDEF'

def RandHexColor(length=6):
    result = [random.choice(HEX_DIGITS) for _ in range(length)]
    return '#' + ''.join(result)
```
With RandHexColor in hand, we define a worker that will take a row and column to be colored and a session ID that will identify the client requesting the work. This worker will generate a random color and then send it to the specified client along with the row and column. To pass messages to the client, we use the Channel API and serialize our messages using the json library in Python.
```python
import json
```
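The excerpt stops at that import; a rough sketch of what the worker described above might look like (the function name SendColor and the exact message fields are illustrative assumptions, not necessarily the original code), using the Channel API's channel.send_message:

```python
import json
import random

from google.appengine.api import channel

HEX_DIGITS = '0123456789ABCDEF'

def RandHexColor(length=6):
    result = [random.choice(HEX_DIGITS) for _ in range(length)]
    return '#' + ''.join(result)

def SendColor(row, column, session_id):
    """Hypothetical worker: picks a random color and pushes it to the client.

    The client listening on the channel identified by session_id fills in
    the square at (row, column) with the color it receives.
    """
    color = RandHexColor(length=6)
    message = {'row': row, 'column': column, 'color': color}
    channel.send_message(session_id, json.dumps(message))
```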
|
# high energy milling mixer pw 700 i
### S Pinem dan I Kuntoro / Jurnal Teknologi Bahan Jurnal
22 Jun together with the milling time of 30 h in a high energy ball mill It was found that by mixing Ti Mixer Mill type PW 700i made by local
### Synthesis of PZT Ceramics by Sol Gel Method and Mixed Oxides
Thermal treatment at a 700°C and b 900°C after milling time at 4 8 and 12 h the mechanosynthesis process using different kind of high energy ball milling
### Free User Manuals By Brands ManualsOnline
Manuals and free owners instruction pdf guid Find the user manual and the help you need for the products you own at ManualsOnline
### IST600 I n S o l i d o Technologies
IST 600 is our top notch high energy mixer mill with currently unprecedented frequency of oscillation of 33 Hz making it an ideal choice for pulverization of extra
### Magnesium PMMA Composites Formed by Mechanical Alloying
Username Password Remember me magnesium Mg composites by high energy ball milling the blends for up to 10 hours Vickers hardness tests LECO LV 700 AT were performed with 1 kgf for 10 seconds at ten more intimate and homogeneous mixing with Mg One should also note that by 10 hours milling time
### Patent Olivia A Graeve
Oct 15 such as between about 350° C and about 700° C such as about C In another milled B1 boron using Procedure 4 at a fuel to oxidizer F/O ratio of HCl for 5 minutes and high energy ultrasonicated for 1 hour FIG beneficial to mix the oxidizer and fuel before mixing in the boron as it
### EMAX High Energy Ball Mill Retsch
The Emax is an entirely new type of ball mill for high energy milling particle size distribution thanks to special jar design which improves mixing of the sample
### Microstructural evolution and some mechanical properties of
Then ODS steel powder was generated by high energy ball milling of milled 13Cr resulting from continuous milling and mixing of the mixture during milling
### Mesoporous ceria zirconia solid solutions as oxygen gas sensing
Therefore these high stable mesoporous materials with high active surface become hydrothermal high temperature solid state reaction high energy ball milling Then lmmol of total metal ions was prepared through mixing proper ratios of Figure 4 corresponds to the oxygen storage capacity determined at to 700°C
### Properties of ${\rm MgB} {2}$ Tapes Prepared by Using MA In Ex
Critical current density measurements have shown relatively high values of Jc to MgB2 at 700°C The amount of MgB2 after annealing was about 91 wt
### SEMPrep2 SC SEM SPI Supplies
The SEMPrep2 is equipped both with high and low energy ion sourc The system also provides an ion milling based solution for improving and cleaning of
### Feeding Barley to Cattle Agriculture and Natural Resources The
improve the performance of cattle finished on high energy rations based on dry rolled barley and a standard weight of 630 g/L as compared to the 700 g/L for Klage Barley Gravity flow is used to pass grain kernels between Barker 75 found that it was profitable to mill barley when the unit value of the feed exceeded
### Hemp Inc Powers Up Milling Operation for Industrial
Jun 26 Now that the hemp mill in Spring Hope North Carolina is online and being That s a 700 increase from Perlowin said the high CBD clone industrial hemp plants were being build a Tiny Hemp House Hemp/lime recipes mixing and application Register Login Forgot Password Site Map
### Products in Mills on Thomas Scientific
found in Thomas Model 4 Wiley Mill Thomas Model 4 Wiley Mill Polystyrene Vial w/Screw On Cap For wet or dry mixing of liquids and powders Efficient size reduction of up to 700 ml feed quantity due to a powerful 900 W motor High energy ball mill that accommodates sample sizes ranging from 02 10 grams
### Protein Complex Affinity Capture from Cryomilled Mammalian
Dec 9 Cryogenic gloves will be needed to handle the LN2 cooled milling apparatus Incubate for 30 min at 4 °C with continuous gentle mixing on a rotator wheel produced by The Rockefeller University High Energy Physics Instrument Shop Microtip sonicator Q700 Qsonica Q700 For affinity capture III
### Study on the Effectiveness Precision and Reliability of X ray
The energy of the emitted x ray depends on the difference in energy of the High Definition XRF HDXRF is a type of EDXRF that utilizes special optics to enhance Standard test methods need to be developed to address pass/fail criteria and Mixer SPEX Industries mixer/mill catalog number located in building
### Basket Mill Industrial Mixers Blenders
Wet Mixers Basket Mill Product Code Basket Mill Share Basket Mill unit that requires less space less time and less energy to achieve bigger results MIKRONS BKM ensures that the feed materials pass through the high BKM 700
### 5 NOISE SOURCES
pressing and shearing lathes milling machines and grinders as well as textile on the part of the total mechanical or electrical energy that is transformed into formed due to the complex radiation sources high frequency noise is Specific octave band sound power levels K in dB re 1 pW of three types of fans
### M Mixer/Mill High Energy
The M Mixer/Mill is a high energy ball mill that grinds up to 02 10 grams of dry brittle sampl The vial which contains a sample and one or more balls
### studies of crystallite size and lattice strain in al Assiut
Al2O3 powders deformed by high energy mechanical milling using a Philips PW X ray diffractometer using Cu Kα radiation λ = nm at Figure 7 shows the SEM image of Al Al2O3 powder particles after mixing the powders 700 800 strength Mpa strain Al Al Al2O3 Fig 9 Engineering stress strain
### High Performance TiP2O7 Based Intercalation Negative
Jul 10 Password Combining high energy mechanical alloying with intimate carbon coating temperature for 12 hours in SPEX SamplePrep D Mixer/Mill The temperature was ramped to 700°C at a rate of 20°C/min and
### POULTRY NUTRITION AND FEEDING
Kellems and Church Waldroup P W Dietary nutrient use of expanders in mills for steam conditioning feed to reduce/eliminate Salmonella B Higher energy and nutrient contents vs others especially protein/amino acids but 700 700 770 Histidine 021 017 014 170 170 190 Isoleucine
### Iron nanoparticles produced by high energy ball
Official Full Text Paper PDF Iron nanoparticles produced by high energy ball milling 13 million members 100 million publications 700k research projects Join for free 5 Figur Fig Since beryllium detectors permit to pass The ball milling was carried out at room temperature using a SPEX mixer/mill
### Swiss Tower Mills Minerals AG
Swiss Tower Mills Minerals AG develops fine and ultra fine grinding mill products A 700 kW HIGmill for a copper concentrate regrind application at the Kevitsa mine in The ceramic grinding beads in the mixing cylinder are put in motion by the High energy intensity Efficient transfer of energy from the mill shaft into the
### Green Chemstry in the Laboratory Using
The inertia of the grinding balls causes them to impact with high energy on the Therefore Mixer Mills are especially suitable for laboratory scale screening
|
# Math Help - half-life
1. ## half-life
I need help setting up this problem
The radioactive decay of Rubidium-87 into Strontium-87 has a half-life of 48.8 billion years, making it one of the most useful decay modes for obtaining the ages of very old igneous rock. A sample of some of the oldest igneous rock found on Earth was chemically analyzed and found to contain 53 atoms of Sr 87 for ever 947 atoms of Rb 87. Assuming that no Sr 87 was present when this sample solidified, how old is this rock?
2. Originally Posted by viet
I need help setting up this problem
Since one atom of Rb turns into one atom of Sr, there were initially 87+ 947= 1034 atoms of Rb. Now there are 947 atoms left.
Saying that it has a half-life of 48.8 billion years means the amount is multiplied by 1/2 every 48.8 billion years. If T is the number of billions of years that have passed, then T/48.8 is the number of "half lives" and so the amount left after T billion years is $A(1/2)^{T/48.8}$, where "A" is the initial amount. Your equation is $1034 (1/2)^{T/48.8}= 947$
Solve that for T.
$1034 (\frac12)^{T/48.8}= 947$
$(\frac12)^{T/48.8}=.91586$
$\frac {T}{48.8} = \frac {log(0.91586)}{log(\frac{1}{2})}$
$\frac {T}{48.8} = 0.12680$
$T = 6.1878$
4. The method is correct; however, because there were 53 atoms of Sr for every 947 atoms of Rb, that means the ratio of the current amount of Rb to the original amount of Rb is 947/1000, not 947/1034. This will provide you with a slightly different answer.
5. Originally Posted by icemanfan
The method is correct; however, because there were 53 atoms of Sr for every 947 atoms of Rb, that means the ratio of the current amount of Rb to the original amount of Rb is 947/1000, not 947/1034. This will provide you with a slightly different answer.
so the answer would be T = 3.83 billion years?
6. Originally Posted by viet
so the answer would be T = 3.83 billion years?
Yes, that is correct.
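A quick Python check of the corrected calculation (using 1000 initial Rb-87 atoms, as icemanfan points out):

```python
import math

half_life = 48.8            # billion years
remaining = 947 / 1000      # fraction of Rb-87 left out of 1000 original atoms

T = half_life * math.log(remaining) / math.log(0.5)
print(T)                    # about 3.83 (billion years)
```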
|
# selectively clear direct formatting [closed]
I have to insert text from MS-Word documents into Writer that have a lot of direct formatting (fonts, font sizes, etc.). However, superscript and subscript here have a semantic meaning (formulas, units) and need to be preserved.
When I do Format > Clear direct formatting it removes all formatting and I have to do a lot manual work restoring the superscript/subscript formatting. Is there a more efficient way?
### Closed for the following reason the question is answered, right answer was accepted by Alex Kemp close date 2016-02-19 06:10:11.654885
Are you sure, that there is not a character style applied?
( 2014-02-02 02:20:57 +0200 )edit
EDIT
There is a much easier way to achieve this, answered by @pierre-yves-samyn: https://ask.libreoffice.org/en/questi...
I have used the following workaround:
1. Search with regular expressions for (.*) with desired style, e.g for bold (see formatting button on lower right corner).
2. Replace it with e.g. <b>$1</b>.
3. Repeat with all other formatting you need, e.g. italics and super- or subscript.
4. Select all, Clear all Formatting. Now your document looks something like "This <b>was bold,</b> this was <i>italic</i>."
5. Do it the other way around: search for <b>([^<]*)</b>, replace it with $1, and let the replace box have the desired formatting.
And you are done.
BTW, formatting applies to "search for" or "replace with" boxes, depending on which one is selected. So does the "No formatting" button.
Thank you for your reply! Actually I will test other versions, like EuroOffice or something. In OO I was able to assign a shortcut to this function and have it at one click, and so could my colleagues. Compare this to your kind recipe. It looks like someone is managing LO development in a way that makes it less and less usable for an average user.
( 2014-02-11 10:00:53 +0200 )edit
@frankaen, true. But hey, why not write an extension?
( 2014-03-20 00:47:08 +0200 )edit
Very confusing mistake in the explanation at step 2: you need to use $0, not $1, or the output will be empty. The method is very effective though!
( 2014-11-01 19:09:48 +0200 )edit
@darioshanghai, no, not a mistake at all. $0 returns the whole match, while $1 is just the first group, which means anything inside the first pair of parentheses (). So if your search is (.*), not .*, then it will work just fine. Although for this case, using .* and $0 would be 2 characters shorter.
( 2016-08-13 19:22:14 +0200 )edit
Interesting that in your case Clear Direct Formatting works, because I have some characters with formatting beyond bold, italic and so on, and CDF removes only this simple formatting, but not the advanced kind, like changed inter-character spacing, underline, or colour.
|
# Decomposing and Recomposing
We have a partition of 2018. If the maximum value of the product of the numbers in the partition can be expressed as $a \times b^c,$ where $a$ and $b$ are primes and $c$ is an integer, then what is $a+b+c?$
|
» » Latex allergies
• 07.08.2019
• 199
• 4
Category: Male
## Comment on:
Zulkigar | 12.08.2019
In it something is. Many thanks for the help in this question.
Faular | 17.08.2019
I apologise, but, in my opinion, you commit an error. Write to me in PM, we will discuss.
Gunos | 08.08.2019
I am sorry, that I interrupt you, but it is necessary for me little bit more information.
Goltinos | 15.08.2019
In it something is. Now all became clear, many thanks for the help in this question.
|
## Pages
### Difference Between Circular And Linear Permutations
Permutations are the different orders in which a group of objects can be arranged.
The single difference between circular and linear permutations is that circular permutations are the different orders in which a group of objects can be arranged in a circle, whereas linear permutations are the different orders in which a group of objects can be arranged in a straight line.
The formula for circular permutations is obviously not the same as that for linear permutations. For a group of 'n' different objects to be arranged in circular order, the formula is
P = (n - 1)!
whereas for the same group of 'n' different objects arranged in linear order, the formula is
P = n!
In order to understand the above difference in more detail, read on.
The main difference between a straight line and a circle is that a straight line has a fixed starting point and a fixed ending point, whereas a circle has no fixed starting or ending point. Any point on a circle can be chosen as a starting point and the circle can be drawn from that point.
Thus when you arrange, say, 'n' objects in a straight line, they start with one fixed position and end on a fixed position. On the other hand, when these 'n' objects are arranged in a circle, we can not assign any position to be the starting or ending one. This gives rise to a peculiar property of circular permutations:
When these 'n' objects are arranged in a given order in a circle, and then each object is moved to the position to its right (or left), you get a seemingly different order, but from the mathematical point of view, both orders are the same.
For example, the following example shows a group of five differently colored balls arranged in two (seemingly) different circular forms.
Circular Permutations
In the first figure above, the topmost ball is yellow colored while in the second figure, the topmost ball is red. Both the permutations look different, but they are the same because in each of the two figures, each ball has the same colored ball to its left and right. For example, consider the yellow colored ball: In both the figures 1 and 2, the ball to its left is red and that to its right is blue. Thus, both the orders above are considered the same.
Now we arrange the same five balls in a straight line.
Linear permutations
The above two orders are considered different because the ball at the starting and ending position are different.
Thus, because a circle does not have a definite starting point, but a straight line does, circular permutations can not be calculated in the same way as linear permutations.
### Calculation of Circular Permutations
Circular permutations are calculated by simply fixing one position as the starting and ending position (because the starting and ending points on a circle are always the same). We place any one of the 'n' objects in this fixed position. The remaining (n - 1) objects are now arranged in the remaining (n - 1) positions.
Circular permutations: Fixing one position
Since we have already fixed one position in the circle, the remaining positions can be considered to be in a straight line, starting from the left of the fixed position and ending on the right of the fixed position (or vice versa).
Thus, in order to calculate the different orders in which the remaining (n - 1) objects can be arranged in the remaining (n - 1) positions, we just need to apply the formula for linear permutations.
P = (n - 1)!
The above formula gives the permutations of the (n - 1) objects in the circle's (n - 1) positions. But how does it give us the number of permutations of 'n' objects in the circle?
Now comes in use the concept which we discussed with the first diagram above: In circular permutations, moving all objects by a fixed number of places to their right or left does not change the order. Thus, if we move the fixed position above to any position in the circle (and simultaneously move all the other positions in the circle as well), we will end up with the same order! This is illustrated in the figure below.
Circular permutations: Moving the fixed position gives rise to the same order.
Thus, we can say that the total number of different orders in which 'n' objects can be arranged in a circle are given by
P = (n - 1)!
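A short brute-force check of this formula (Python; here two arrangements count as the same whenever one is a rotation of the other, which is the notion of circular order used above):

```python
import math
from itertools import permutations

def count_circular_orders(n):
    """Count arrangements of n labelled objects in a circle, where two
    arrangements that differ only by a rotation are considered the same."""
    seen = set()
    for p in permutations(range(n)):
        i = p.index(0)
        seen.add(p[i:] + p[:i])   # rotate so that object 0 sits in the fixed position
    return len(seen)

for n in range(2, 7):
    print(n, count_circular_orders(n), math.factorial(n - 1))   # last two columns agree
```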
### Conclusion
Thus we see that circular permutations are quite different from linear permutations, both in the concept and formulas.
|
# Full-Length TSI Math Practice Test-Answers and Explanations
Did you take the TSI Math Practice Test? If so, then it’s time to review your results to see where you went wrong and what areas you need to improve.
## TSI Mathematical Reasoning Practice Test Answers and Explanations
1- Choice D is correct
If $$1.05 < x ≤ 3.04$$, then $$x$$ cannot be equal to 3.40. Because: $$3.04<3.40$$
2- Choice B is correct
$$a=8⇒$$ area of the triangle is $$\frac{1}{2} (8×8)=\frac{64}{2}=32$$ cm$$^2$$
3- Choice C is correct
$$66^\circ + 42^\circ = 108^\circ$$
$$180^\circ – 108^\circ = 72^\circ$$
The value of the third angle is $$72^\circ$$.
4- Choice B is correct
Simplify:
$$10 – \frac{2}{3} x ≥ 12 ⇒ – \frac{2}{3} x ≥ 2 ⇒ – x ≥ 3 ⇒ x ≤ – 3$$
5- Choice A is correct
Factor each trinomial $$x^2 – 2x – 8$$ and $$x^2 – 6x + 8$$
$$x^2 – 2x – 8 ⇒ (x – 4)(x + 2)$$
$$x^2 – 6x + 8 ⇒ (x – 2)(x – 4)$$
$$(x – 4)$$ is a factor of both trinomial.
6- Choice B is correct
$$\frac{1+b}{6b^2}=\frac{1}{b^2} ⇒(b≠0) \ b^2+b^3=6b^2⇒b^3-5b^2=0⇒b^2 (b-5)=0⇒b-5=0⇒b=5$$
7- Choice D is correct
Use FOIL (First, Out, In, Last)
$$(x + 7) (x + 5) = x^2 + 5x + 7x + 35 = x^2 + 12x + 35$$
8- Choice A is correct
$$\frac{54}{6}=\frac{27}{3}=9, \frac{48}{6}=\frac{24}{3}=8, \frac{36}{6}=\frac{18}{3}=6, \frac{59}{6}=\frac{59}{6}$$
59 is a prime number.
The answer is 54.
9- Choice B is correct
$$x^2 – 64 = 0 ⇒ x^2 = 64 ⇒ x = 8$$
10- Choice C is correct
If $$a = 8$$ then $$b = \frac{8^2}{4} + 4 ⇒ b = \frac{64}{4} + 4 ⇒b = 16 + 4 = 20$$
11- Choice D is correct
$$\tan (– π/6) = – \frac{\sqrt{3}}{3}$$
12- Choice A is correct
$$\frac{\sqrt{32a^5 b^3 }}{\sqrt{2ab^2}}= \frac{4a^2 b\sqrt{2ab}}{b\sqrt{2a}} = 4a^2\sqrt{b}$$
13- Choice D is correct
$$E = 4 + A$$
$$A = S – 3$$
14- Choice C is correct
$$\begin{cases}5x + y = 9\\10x-7y= -18\end{cases} ⇒$$ Multiply the first equation by $$(–2)$$ $$⇒ \begin{cases}-10x- 2y = -18\\10x-7y= -18\end{cases}$$
Add the two equations together $$⇒ –9y = –36 ⇒ y = 4$$ then: $$x = 1$$
15- Choice B is correct
$$(x – h)^2 + (y – k)^2 = r^2$$ ⇒ center: (h,k) and radius: r
$$(x – 3)^2 + (y + 6)^2 = 12$$ ⇒ center: $$(3,-6)$$ and radius: $$2\sqrt{3}$$
16- Choice D is correct
$$c(3)=(3)^2+10(3)+30=9+30+30=69$$
$$4×3=12⇒12-69=-57⇒57,000$$ loss
17- Choice B is correct
$$\sin B =\frac{\text{the length of the side that is opposite that angle}}{\text{the length of the longest side of the triangle}}=\frac{4}{5}$$
18- Choice C is correct
$$-7y=-6x-12⇒y=\frac{-6}{-7} x-\frac{12}{-7}⇒y=\frac{6}{7} x+\frac{12}{7}$$
19- Choice D is correct
$$(g – f)(x) = g(x) – f(x) = (– x^2 – 1 – 2x) – (5 + x) = – x^2 – 1 – 2x – 5 – x = – x^2 – 3x – 6$$
20- Choice B is correct
$$\frac{|3+x|}{7}≤5⇒|3+x|≤35⇒-35≤3+x≤35⇒-35-3≤x≤35-3⇒-38≤x≤32$$
|