Division of Fraction Practice Test Solutions and Answers
This is the complete set of solutions and answers to the Practice Test on Division of Fractions. If you are not familiar with the method, or you have forgotten how to do it, please read "How to Divide Fractions."
In dividing fractions, you must convert all mixed fractions to improper fractions before performing the division. The division itself involves getting the reciprocal (multiplicative inverse) of the divisor and then multiplying the dividend by that reciprocal instead of dividing.
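For readers who want to check their answers programmatically, here is a minimal sketch using Python's built-in `fractions` module; the helper name `divide_fractions` is just an illustrative choice and not part of the lesson.

```python
from fractions import Fraction

def divide_fractions(dividend, divisor):
    # Dividing by a fraction is the same as multiplying by its reciprocal.
    reciprocal = Fraction(divisor.denominator, divisor.numerator)
    return dividend * reciprocal  # Fraction objects reduce to lowest terms automatically

# Problem 1: 4/5 divided by 2/3
result = divide_fractions(Fraction(4, 5), Fraction(2, 3))
print(result)                                        # 6/5
whole, remainder = divmod(result.numerator, result.denominator)
print(f"{whole} {remainder}/{result.denominator}")   # 1 1/5 in mixed form
```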
1.) $\frac{4}{5} \div \frac{2}{3}$.
Solution
We get the reciprocal of $\frac{2}{3}$ and multiply it to $\frac{4}{5}$. The reciprocal of $\frac{2}{3}$ is $\frac{3}{2}$. So,
$\displaystyle \frac{4}{5} \times \frac{3}{2} = \frac{12}{10}$
(you can use cancellation to do this quickly). Reducing to lowest terms by dividing both the numerator and denominator by 2 results in $\frac{6}{5}$. Converting this improper fraction to mixed form, we get $1 \frac{1}{5}$.
Answer: $1 \frac{1}{5}$.
2.) $\frac{2}{7} \div \frac{5}{21}$
Solution
The division $\frac{2}{7} \div \frac{5}{21}$ is the same as
$\displaystyle \frac{2}{7} \times \frac{21}{5} =\frac{42}{35}$.
Reducing to lowest terms by dividing both the numerator and the denominator of the preceding fraction by $7$, we get $\frac{6}{5}$ or $1 \frac{1}{5}$.
Answer: $1 \frac{1}{5}$
3.) $8 \div \frac{4}{5}$
Solution
Any whole number can be written as a fraction with denominator $1$, so
$\displaystyle \frac{8}{1} \times \frac{5}{4} = \frac{40}{4} = 10$.
Answer: $10$
4.) $\frac{3}{5} \div 12$
The reciprocal of $12$ is $\frac{1}{12}$. Now, we multiply:
$\displaystyle \frac{3}{5} \times \frac{1}{12} = \frac{3}{60}$.
Dividing both the numerator and denominator by $3$ gives $\frac{1}{20}$ in lowest terms.
Answer: $\frac{1}{20}$
5.) $15 \div \frac{2}{3}$
We get the reciprocal of $\frac{2}{3}$ and multiply:
$\displaystyle \frac{15}{1} \times \frac{3}{2} = \frac{45}{2}$.
Converting the improper fraction to mixed form gives us $22 \frac{1}{2}$.
Answer: $22 \frac{1}{2}$
6.) $3 \frac{2}{5} \div \frac{3}{4}$
First, we convert the mixed fraction to an improper fraction, then multiply it by the reciprocal of $\frac{3}{4}$. Converting $3 \frac{2}{5}$ to an improper fraction, we have $\frac{17}{5}$.
We now multiply:
$\displaystyle\frac{17}{5} \times \frac{4}{3} = \frac{68}{15}$.
Converting $\frac{68}{15}$ to mixed form gives us $4 \frac{8}{15}$.
Answer: $4 \frac{8}{15}$
7.) $\frac{3}{4} \div 2 \frac{1}{9}$.
Converting $2 \frac{1}{9}$ to an improper fraction gives us $\frac{19}{9}$. Now, we multiply $\frac{3}{4}$ by the reciprocal of $\frac{19}{9}$:
$\frac{3}{4} \times \frac{9}{19} = \frac{27}{76}$
Answer: $\frac{27}{76}$
8.) $7\frac{2}{3} \div 7\frac{1}{2}$
Converting $7 \frac{2}{3}$ to an improper fraction gives us $\frac{23}{3}$, and converting $7 \frac{1}{2}$ to an improper fraction gives us $\frac{15}{2}$. Now, multiplying $\frac{23}{3}$ by the reciprocal of $\frac{15}{2}$, we have
$\displaystyle \frac{23}{3} \times \frac{2}{15} = \frac{46}{45} = 1 \frac{1}{45}$.
Answer: $1 \frac{1}{45}$
9.) $\displaystyle \frac{2\frac{3}{5}}{4}$
The expression above is the same as $2 \frac{3}{5} \div 4$. Converting $2 \frac{3}{5}$ to an improper fraction results in $\frac{13}{5}$. Now, we multiply this result by the reciprocal of $4$, which is $\frac{1}{4}$.
$\displaystyle \frac{13}{5} \times \frac{1}{4} = \frac{13}{20}$
Answer: $\frac{13}{20}$.
10.) $\displaystyle \frac{2 \frac{1}{2}}{\frac{8}{3}}$
The fraction $2 \frac{1}{2}$ in improper form is $\frac{5}{2}$. We multiply it by the reciprocal of $\frac{8}{3}$.
$\displaystyle \frac{5}{2} \times \frac{3}{8} = \frac{15}{16}$
Answer: $\frac{15}{16}$.
If you have some questions, please use the comment box below.
# Gun control: Are you kidding me?
Gold Member
jcsd said:
Actually the UK has a slightly higher violent crime rate than the US, but its murder rate is very, very low compared to the US for one simple reason - tight gun control.
Actually look at your own crime statistics. You had very low crime rates before that ban was put in place in... 93 or something. Then within a few years, your murder rates had doubled.
See: http://tim.2wgroup.com/blog/archives/000384.html [Broken] I believe
@Huckleberry
But a criminal could just sand off the serial number. I mean, it would be so easy to do, since the serial number would have to be VERY small because even an alpha-numeric code would need quite a few spaces (about 12 to be safe with, what, 8 million bullets produced per day). Plus this would all have to be on the rim (where the hammer hits the bullet) because the side of the case would be subject to the high pressures, which would destroy the serial. That's gonna be small as heck printing and easily destroyed.
Last edited by a moderator:
Forensic ballistics tagging
Huckleberry said:
It is unlikely that a criminal will want to spend the time to collect any casings after firing a weapon.
I was thinking the same thing, but revolvers retain casings (as long as they aren't reloaded mid-gun-battle). I would think the code would be etched into both the bullet and the casing, and repeatedly so as to make it more likely for ballistics technicians to be able to read deformed bullets.
There is already a system in use that is very similar, but much less efficient. There are tests that can be done on the chemical composition of the lead in a bullet to determine where it was made and sold.
At the http://ne.oregonstate.edu/facilities/radiation_center/intro.html [Broken], tests were performed on one or more bullets. I don't know what facts about the bullets the police were trying to determine, though.
Last edited by a moderator:
Of course, if you take the pragmatic actual-humans-killed stance, deaths from homicide are basically negligible, just like terrorism.
Gold Member
lol well that's a boring stance bicycle, what are legislatures to do with all their time if they're not out there fixing problems with no real effect on people's lives.
Legislatures are out to fix problems that people think about. Whether the problems are actually important relative to other problems is only incidental. Hence, gun control & war on terror.
Gold Member
well, terrorism would become a huge problem if a nuclear bomb went off in downtown NY or LA. I don't suppose there's going to be any day that we would ever remember where gun-enthusiasts or criminals killed 3 million people or something, so it's not a good comparison lol.
Last edited:
BicycleTree said:
Of course, if you take the pragmatic actual-humans-killed stance, deaths from homicide are basically negligible, just like terrorism.
Firearms are a leading cause of death in the United States, close behind auto accidents.
http://www-medlib.med.utah.edu/WebPath/TUTORIAL/GUNS/GUNSTAT.html
And firearms non-fatal injury rates are about seven times as high as the fatality rates and highly traumatic.
--
The number of non-fatal injuries is considerable--over 200,000 per year in the U.S. Many of these injuries require hospitalization and trauma care. A 1994 study revealed the cost per injury requiring admission to a trauma center was over $14,000. The cumulative lifetime cost in 1985 for gunshot wounds was estimated to be $911 million, with $13.4 billion in lost productivity. (Mock et al, 1994) The cost of the improper use of firearms in Canada was estimated at $6.6 billion per year. (Chapdelaine and Maurice, 1996)
--
BicycleTree said:
Of course, if you take the pragmatic actual-humans-killed stance, deaths from homicide are basically negligible, just like terrorism.
That may be true, but I still wouldn't feel any safer walking through the Detroit projects at night. Heart disease may kill more people than homicide, but it doesn't lurk in dark corners waiting to shoot you. I'd rather lose a game of chess to a computer than to a human opponent.
A study of 626 shootings in or around a residence in three U.S. cities revealed that, for every time a gun in the home was used in a self-defense or legally justifiable shooting, there were four unintentional shootings, seven criminal assaults or homicides, and 11 attempted or completed suicides (Kellermann et al, 1998). Over 50% of all households in the U.S. admit to having firearms (Nelson et al, 1987). It would appear that, rather than being used for defense, most of these weapons inflict injuries on the owners and their families.
That is a fairly alarming statistic.... 4 unintentionals, 7 criminal assaults, and 11 suicide attempts.... for every legitimate usage of self-defense, 22 atrocities elsewhere result.
Scary.
motai said:
That is a fairly alarming statistic.... 4 unintentionals, 7 criminal assaults, and 11 suicide attempts.... for every legitimate usage of self-defense, 22 atrocities elsewhere result.
Scary.
And gun control laws won't help much for the criminal assaults or suicide attempts. They may reduce the unintentional gun deaths. That is more a matter of hunting regulations and home safety, such as required trigger locks and gun safety classes. Unfortunately, the law can't stop people from doing stupid stuff with dangerous things.
Gold Member
Why are suicide attempts brought up so much? Does anyone really think suicide rates would drop if guns didn't exist? How many people would think "hmm... I would kill myself... but since there's no fast projectile-creating mechanism that will allow me to do this, I won't do it"
brewnog said:
...need to get them to put a $500 tax on each bullet... Chris Rock did a bit on how each bullet should cost$5,000.00. There would be no more innocent bystanders then. At least thats what he says.
There is no real solution to gun control. You can take all the guns away and there will always be some crackerjack that can make one from scratch. As a matter of fact, I remember seeing a program where an Afghani family had a small forge and actually made their own AK-47 clones. The forge was tabletop size and their molds were made of wet sand.
Actually, there is a solution to gun control. Manditorially(sp) Arm EVERYONE, even kittens.
Pengwuino said:
Why are suicide attempts brought up so much? Does anyone really think suicide rates would drop if guns didn't exist? How many people would think "hmm... I would kill myself... but since there's no fast projectile-creating mechanism that will allow me to do this, I won't do it"
Probably because suicide rates increase by a factor of five if a gun is in the house. People who may be suicidal will see guns as an easy option out, and oftentimes impulsive decisions are made.
It does make a difference.
http://www.handgunfree.org/HFAMain/topics/suicide/teen_suicide.htm
brewnog
Gold Member
Pengwuino said:
Why are suicide attempts brought up so much? Does anyone really think suicide rates would drop if guns didn't exist? How many people would think "hmm... I would kill myself... but since there's no fast projectile-creating mechanism that will allow me to do this, I won't do it"
Yeah this is pretty ridiculous. The National Institute of Mental Health says that 60% of suicides involve a gun (which is possibly because guns are the most successful means of suicide: "Ninety percent of suicide attempts that use a gun are successful"), which doesn't surprise me.
But only ninety percent? How hard can it actually be to kill yourself with a gun?! What are they doing, trying to choke themselves on it?
Gold Member
Oh gotcha. Well, rather have people killing themselves than taking out their depression on other people (but then again, without guns, they wouldn't be able to do that very easily either lol).
But then again... if these people really want to kill themselves, go jump off a building. That website is pretty iffy to me since they used probably the most unreliable method of killing yourself as their sole comparison (drug overdose). I mean hell, go slit your throat or jump off a cliff (literally).
And legitimate gun owners and law-abiding citizens shouldn't be held accountable for some kid who listens to too much punk rock and decides to go kill himself.
Last edited:
30,000 a year is a drop in the bucket (and, might I add, a far bigger drop than terrorism). Yes, terrorist nukes would be a serious problem, if terrorists had any nukes. In the meantime we should commit serious money to clamping down on the worldwide supply of bomb material and shrug off a measly 3000 deaths every few years. At least, that's what a rational nation would do.
Gold Member
What do you mean clamping down on the worldwide supply of bomb material? a nuclear bomb or bombs in general?
Nuclear bombs. Non-nuclear explosive bombs are, pragmatically, harmless to the USA unless we're going up against China or something.
Gold Member
Oh yah we should definitely do that. The problem though is we can't really walk into Russia and go "Ok you guys are too poor to secure your material so we'll do it for you".
Why not? If we're serious about NYC not becoming a radioactive hole. We could give money, lend experts and troops, buy up material.
Pengwuino said:
That website is pretty iffy to me since they used probably the most unreliable method of killing yourself as their sole comparison
I agree that the website is questionable. They say that a home with a firearm means the people who live there are 5 times more likely to commit suicide. I think they are looking at this backwards. Suicide is not an impulsive act. People do not just wake up one day and decide to kill themselves. It draws itself out over a long period of time. They prepare for their suicide. They buy the gun. They hold out for hope. When they can't deal with it anymore, then they make the final decision and 15 minutes later they work up the will to commit the act of suicide. They may never speak of their intentions to anyone.
Then someone who wants guns banned comes in and says, "he owned a gun and committed suicide." He makes a check on his list that confirms his data, "Yup, people who own guns are 5 times more likely to commit suicide." It's like saying people who own cars are 5 times more likely to be in an automobile accident.
Gun control...
Take away guns, killers use knives; take away knives, killers use clubs; take away clubs, killers use hands - that is, if you can possibly rid yourself of EVERY gun on the PLANET. Right now, there are so many AK-47s out there that they sell for 50 dollars; you would never get them all. Also, there is this thing called the Second Amendment in America: you can't take away guns or make it impossible for the common man to get his/her hands on one. Think of it this way: criminals will get guns, and they will kill with them. Making guns illegal/very expensive will get them out of the hands of those who could defend themselves with them, thus making them easier to kill. Like I said before, there will always be murderers, and guns don't kill, humans do.
Fibonacci
Huckleberry said:
Suicide is not an impulsive act.
--
Prior studies have also found that many suicide attempts are made impulsively (Brown, Overholser, Spirito, & Fritz 1991; Kost-Grant, 1983; O’Donnell, Farmer, & Catalan, 1996; Read, 1997; Williams et al., 1980). Estimates of the proportion of suicide attempts that are made impulsively vary widely depending on the definitions used and the sample studied. Some estimates are based on the characteristics of the attempt and the amount of planning involved (Brown et al., 1991; O’Donnell et al., 1996). Another approach is to examine the amount of time spent contemplating the suicide attempt. For example, Williams and colleagues (1980) found that 40 percent of hospital patients treated for self-injury reported less than 5 minutes premeditation.
--
Last edited:
I read the articles and I'm not convinced. A person who is more or less satisfied with their life does not commit suicide. Suicides are performed by people who are unhappy with their life. They have low self-esteem and see no hope for themselves. Their decision to commit suicide is based on their personal image of themselves, which is often very different from the public image they present.
In my last post I said the final decision may take 15 minutes and seem impulsive to others. The actual decision making process is a long one.
Open access peer-reviewed chapter
# Application of Harmony Search Algorithm in Power Engineering
By H. R. Baghaee, M. Mirsalim and G. B. Gharehpetian
Submitted: July 16th 2012. Reviewed: December 13th 2012. Published: February 13th 2013
DOI: 10.5772/55509
## 1. Introduction
### 1.1. On the use of harmony search algorithm in optimal placement of FACTS devices to improve power systems
With increasing electric power demand, power systems can face stressed conditions, their operation becomes more complex, and they become less secure. Moreover, because of restructuring, power system security has become a matter of concern in the deregulated power industry. Better utilization of available power system capacity through Flexible AC Transmission Systems (FACTS) devices has become a major concern in power systems too.
FACTS devices can control power transmission parameters such as series impedance, voltage, and phase angle through their fast control characteristics and continuous compensating capability. They can reduce the flow on heavily loaded lines, resulting in lower system losses, improved transient and small-signal stability of the network, reduced cost of production, and fulfillment of contractual requirements by controlling the power flow in the network. They can enable lines to carry power near their nominal rating and maintain voltages at the desired levels, and thus enhance power system security in contingencies [1-6]. For a meshed network, an optimal allocation of FACTS devices makes it possible to control its power flows and thus to improve the system loadability and security [1].
The effect of FACTS devices on power system security, reliability and loadability has been studied according to proper control objectives [4-14]. Researchers have tried to find suitable location for FACTS devices to improve power system security and loadability [13-16]. The optimal allocation of these devices in deregulated power systems has been presented in [17-18]. Heuristic approaches and intelligent algorithms to find suitable location of FACTS devices and some other applications have been used in [15-21].
In this chapter, a novel heuristic method is presented based on Harmony Search Algorithm (HSA) to find optimal location of multi-type FACTS devices to enhance power system security and reduce power system losses considering investment cost of these devices. The proposed method is tested on IEEE 30-bus system and then, the results are presented.
## 2. Model of FACTS devices
### 2.1. FACTS devices
In this chapter, we select three different FACTS devices to place in the suitable locations to improve security margins of power systems. They are TCSC (Thyristor Controlled Series Capacitor), SVC (Static VAR Compensator), and UPFC (Unified Power Flow Controller) that are shown in Fig. 1.
Power flow through the transmission line $i$-$j$, namely $P_{ij}$, depends on the line reactance $X_{ij}$, the bus voltage magnitudes $V_i$ and $V_j$, and the phase angle between the sending and receiving buses, $\delta_i$ and $\delta_j$, as expressed by Eq. (1).
$P_{ij} = \dfrac{V_i V_j}{X_{ij}} \sin(\delta_i - \delta_j)$ (1)
TCSC can change the line reactance, and SVC can control the bus voltage. UPFC is the most versatile member of the FACTS device family and controls all power transmission parameters (i.e., line impedance, bus voltage, and phase angles). FACTS devices can control and optimize power flow by changing power system parameters. Therefore, optimal selection and allocation of FACTS devices can result in better utilization of power systems.
### 2.2. Mathematical model of FACTS devices
In this chapter, steady-state models of FACTS devices are developed for power flow studies. TCSC is simply modeled as a modification of the reactance of the transmission line. SVC and UPFC are modeled using power injection models. Therefore, SVC is modeled as a shunt element of the transmission line, and UPFC as a decoupled model. A power flow program has been developed in MATLAB by incorporating the mathematical models of the FACTS devices.
#### 2.2.1. TCSC
TCSC compensates the reactance of the transmission line. This changes the line flow due to change in series reactance. In this chapter, TCSC is modeled by changing transmission line reactance as follows:
$X_{ij} = X_{line} + X_{TCSC}$ (2)
$X_{TCSC} = r_{TCSC} \cdot X_{line}$ (3)
where $X_{line}$ is the reactance of the transmission line, and $r_{TCSC}$ is the compensation factor of the TCSC. The rating of the TCSC depends on the transmission line. To prevent overcompensation, we choose the TCSC reactance between $-0.7 X_{line}$ and $0.2 X_{line}$ [26-27].
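As an illustration only (not code from the chapter), the TCSC model of Eqs. (2)-(3) can be sketched in Python; the function name and the working-range check are assumptions based on the limits quoted above:

```python
def tcsc_modified_reactance(x_line, r_tcsc):
    """Return the modified line reactance X_ij = X_line + X_TCSC,
    where X_TCSC = r_TCSC * X_line (Eqs. 2-3)."""
    if not (-0.7 <= r_tcsc <= 0.2):
        # Assumed working range quoted in this subsection
        raise ValueError("compensation factor outside the assumed working range")
    x_tcsc = r_tcsc * x_line
    return x_line + x_tcsc

# Example: 70% capacitive compensation of a line with X_line = 0.2 pu
print(tcsc_modified_reactance(0.2, -0.7))  # 0.06 pu
```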
#### 2.2.2. SVC
SVC can be used for both inductive and capacitive compensation. In this chapter, SVC is modeled as an ideal reactive power injection at bus i:
$\Delta Q_i = Q_{SVC}$ (4)
#### 2.2.3. UPFC
Two types of UPFC models have been studied in the literature; one is the coupled model [28], and the other the decoupled type [29-31]. In the first, the UPFC is modeled as a series combination of a voltage source and an impedance in the transmission line. In the decoupled model, the UPFC is modeled with two separate buses. The first model is more complex than the second one because modification of the Jacobian matrix is inevitable. In conventional power flow algorithms, we can easily implement the decoupled model. In this chapter, the decoupled model has been used to model the UPFC, as in Fig. 2.
UPFC controls the power flow of the transmission lines. To represent the UPFC in load flow studies, the variables $P_{u1}$, $Q_{u1}$, $P_{u2}$, and $Q_{u2}$ are used. Assuming a lossless UPFC, the real power flow from bus $i$ to bus $j$ can be expressed as follows:
$P_{ij} = P_{u1}$ (5)
Although the UPFC can control the power flow, it cannot generate real power. Therefore, we have:
$P_{u1} + P_{u2} = 0$ (6)
The reactive power outputs of the UPFC, $Q_{u1}$ and $Q_{u2}$, can be set to arbitrary values depending on the rating of the UPFC to maintain the bus voltages.
## 3. Security index
The security index for contingency analysis of power systems can be expressed as in the following [32-33]:
$J_V = \sum_i w_i \left| V_i - V_{ref,i} \right|^2$ (7)
$J_P = \sum_j w_j \left( \dfrac{S_j}{S_{j,max}} \right)^2$ (8)
Here we have:
$V_i$, $w_i$: voltage amplitude and associated weighting factor for the $i$th bus, respectively;
$S_j$, $w_j$: apparent power and associated weighting factor for the $j$th line, respectively;
$V_{ref,i}$: nominal voltage magnitude, assumed to be 1 pu for all load buses (i.e., PQ buses) and equal to the specified value for generation buses (i.e., PV buses); and
$S_{j,max}$: nominal apparent power of the $j$th line or transformer.
$J_P$ is the security index for the even distribution of the total active flow, and $J_V$ is the security index for the closeness of the bus voltages to the reference voltage. If the number of overloaded lines decreases, the value of $J_P$ decreases too. Similarly, when the bus voltages are close to the desired level, $J_V$ becomes small. Minimization of both $J_P$ and $J_V$ means the maximization of security margins.
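A minimal sketch of how the security indices of Eqs. (7)-(8) could be evaluated in Python; the data layout (parallel lists of per-unit values) is an assumption for illustration, not the chapter's implementation:

```python
def security_indices(bus_voltages, v_ref, w_bus, line_flows, line_ratings, w_line):
    """Compute J_V and J_P as in Eqs. (7) and (8)."""
    j_v = sum(w * abs(v - vr) ** 2
              for v, vr, w in zip(bus_voltages, v_ref, w_bus))
    j_p = sum(w * (s / s_max) ** 2
              for s, s_max, w in zip(line_flows, line_ratings, w_line))
    return j_v, j_p

# Tiny example with made-up per-unit values
jv, jp = security_indices([1.02, 0.97], [1.0, 1.0], [1.0, 1.0],
                          [0.8, 1.1], [1.0, 1.0], [1.0, 1.0])
print(jv, jp)
```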
## 4. The proposed algorithm
### 4.1. Harmony search algorithm
Harmony Search Algorithm (HSA) has recently been developed in an analogy with music improvisation process, where music players improvise the pitches of their instruments to obtain better harmony [34]. The steps in the procedure of harmony search are as follows [35]:
Step 1: Initialize the problem and algorithm parameters
Step 2: Initialize the harmony memory
Step 3: Improvise a new harmony
Step 4: Update the harmony memory
Step 5: Check the stopping criterion
The following five subsections describe these steps.
1. Initialize the problem and algorithm parameters
In step 1, the optimization problem is specified as follows:
$\min \{ f(x) \mid x \in X \}$ subject to $g(x) \leq 0$ and $h(x) = 0$
where $f(x)$ is the objective function, $g(x)$ the inequality constraint function, and $h(x)$ the equality constraint function. $x$ is the set of decision variables $x_i$, and $X$ is the set of possible ranges of values for each decision variable, that is, $X_{i,min} \leq X_i \leq X_{i,max}$, where $X_{i,min}$ and $X_{i,max}$ are the lower and upper bounds of each decision variable. The HS algorithm parameters are also specified in this step. These are the harmony memory size (HMS), or the number of solution vectors in the harmony memory, the harmony memory considering rate (HMCR), the pitch adjusting rate (PAR), the number of decision variables (N), and the number of improvisations (NI), or stopping criterion. The harmony memory (HM) is a memory location where all the solution vectors (sets of decision variables) are stored. This HM is similar to the genetic pool in the GA [36]. Here, HMCR and PAR are parameters that are used to improve the solution vector and are defined in step 3.
2. Initialize the harmony memory
In step 2, the HM matrix is filled with as many randomly generated solution vectors as the HMS in the following:
$HM = \begin{bmatrix} x_1^1 & x_2^1 & \cdots & x_{N-1}^1 & x_N^1 \\ x_1^2 & x_2^2 & \cdots & x_{N-1}^2 & x_N^2 \\ \vdots & \vdots & & \vdots & \vdots \\ x_1^{HMS-1} & x_2^{HMS-1} & \cdots & x_{N-1}^{HMS-1} & x_N^{HMS-1} \\ x_1^{HMS} & x_2^{HMS} & \cdots & x_{N-1}^{HMS} & x_N^{HMS} \end{bmatrix}$ (9)
3. Improvise a new harmony
A new harmony vector, $x' = (x'_1, x'_2, \ldots, x'_N)$, is generated based on three rules: (1) memory consideration, (2) pitch adjustment, and (3) random selection. Generating a new harmony is called 'improvisation' [36]. In the memory consideration, the value of the first decision variable $x'_1$ for the new vector is chosen from any value in the specified HM range ($x_1^1$ to $x_1^{HMS}$). Values of the other decision variables $(x'_2, x'_3, \ldots, x'_N)$ are chosen in the same manner. The HMCR, which varies between zero and one, is the rate of choosing one value from the historical values stored in the HM, while (1 - HMCR) is the rate of randomly selecting one value from the possible range of values.
$x'_i \leftarrow \begin{cases} x'_i \in \{ x_i^1, x_i^2, \ldots, x_i^{HMS} \} & \text{with probability } HMCR \\ x'_i \in X_i & \text{with probability } (1 - HMCR) \end{cases}$ (10)
For example, an HMCR of 0.85 indicates that the HS algorithm will choose the decision variable value from historically stored values in the HM with 85% probability or from the entire possible range with (100–85) % probability. Every component obtained by the memory consideration is examined to determine whether it should be pitch-adjusted. This operation uses the PAR parameter, which is the rate of pitch adjustment as follows:
$x'_i \leftarrow \begin{cases} \text{Yes} & \text{with probability } PAR \\ \text{No} & \text{with probability } (1 - PAR) \end{cases}$ (11)
The value of (1 - PAR) sets the rate of doing nothing. If the pitch adjustment decision for $x'_i$ is "Yes", $x'_i$ will be replaced as follows:
$x'_i \leftarrow x'_i \pm rand() \times bw$
where $bw$ is an arbitrary distance bandwidth and $rand()$ is a random number between 0 and 1.
In step 3, HM consideration, pitch adjustment or random selection in turn is applied to each variable of the new harmony vector.
4. Update the harmony memory
If the new harmony vector $x' = (x'_1, x'_2, \ldots, x'_N)$ is better than the worst harmony in the HM, judged in terms of the objective function value, the new harmony is included in the HM, and the existing worst harmony is excluded from the HM.
5. Check the stopping criterion
If the stopping criterion (maximum number of improvisations) is satisfied, the computation terminates. Otherwise, steps 3, and 4 are repeated.
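The five steps above can be condensed into a short, generic Python sketch. This is only an illustration of the procedure described in this section, with arbitrary parameter values and a toy objective; it is not the code used for the FACTS placement study:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.01, n_iter=1000):
    """Minimize f over box constraints bounds = [(lo, hi), ...] using the
    basic harmony search steps: initialize HM, improvise, update, stop."""
    # Step 2: fill the harmony memory with HMS random solution vectors
    hm = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    hm.sort(key=f)
    for _ in range(n_iter):
        # Step 3: improvise a new harmony
        new = []
        for i, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:            # memory consideration
                xi = random.choice(hm)[i]
                if random.random() < par:         # pitch adjustment
                    xi += random.uniform(-1, 1) * bw
                    xi = min(max(xi, lo), hi)
            else:                                 # random selection
                xi = random.uniform(lo, hi)
            new.append(xi)
        # Step 4: replace the worst harmony if the new one is better
        if f(new) < f(hm[-1]):
            hm[-1] = new
            hm.sort(key=f)
    # Step 5: stopping criterion is the fixed number of improvisations
    return hm[0]  # best harmony found

# Usage example on a toy objective (sphere function)
best = harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
print(best)
```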
### 4.2. Cost of FACTS devices
Using the database of [32], the cost functions for the SVC, TCSC, and UPFC shown in Fig. 3 are modeled as follows:
For TCSC:
$C_{TCSC} = 0.0015 s^2 - 0.713 s + 153.75$ (12)
For SVC:
$C_{SVC} = 0.0003 s^2 - 0.3015 s + 127.38$ (13)
For UPFC:
$C_{UPFC} = 0.0003 s^2 - 0.2691 s + 188.22$ (14)
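For illustration, the cost curves of Eqs. (12)-(14) can be evaluated directly. The interpretation of s as the device operating range and the cost unit are not restated by the chapter at this point, so treat this only as a sketch of the quadratic formulas:

```python
def facts_cost(device, s):
    """Evaluate the quadratic cost curves of Eqs. (12)-(14) at operating range s."""
    coeffs = {
        "TCSC": (0.0015, -0.713, 153.75),
        "SVC":  (0.0003, -0.3015, 127.38),
        "UPFC": (0.0003, -0.2691, 188.22),
    }
    a, b, c = coeffs[device]
    return a * s ** 2 + b * s + c

for dev in ("TCSC", "SVC", "UPFC"):
    print(dev, facts_cost(dev, 100.0))
```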
### Table 2.
Simulation results for different cases
## 6. Harmonic optimization in multi-level inverters using harmony search algorithm
### 6.1. Introduction
Nowadays, dc-to-ac inverters are widely used in industry. Their applications fall mainly into two general groups: (1) electric drives for ac motors when a dc supply is used, and (2) systems including high voltage direct current (HVDC) transmission systems, custom power and flexible ac transmission systems (FACTS) devices, flexible distributed generation (FDG), and the interconnection of distributed generation (DG) units to a grid. Several switching algorithms, such as pulse width modulation (PWM), sinusoidal pulse width modulation (SPWM), space-vector modulation (SVM), selective harmonic eliminated pulse width modulation (SHEPWM), or programmed-waveform pulse width modulation (PWPWM), are applied extensively to control and determine switching angles to achieve the desired output voltage. In the recent decade, a new kind of inverter, named the multi-level inverter, has been introduced. In various publications, this inverter has been used in place of common inverters to demonstrate its advantages in different applications. Being multi-level, it can be used in high-power and high-voltage applications. In order to reach the desired fundamental component of voltage, all of the various switching methods produce harmonics and hence, it is of interest to select the best method to achieve minimum harmonics and total harmonic distortion (THD). It is suggested to use the optimized harmonic stepped waveform (OHSW) technique to eliminate low-order harmonics by determining proper angles, and then to remove the rest of the harmonics with filters. In addition, this technique lowers the switching frequency down to the fundamental frequency and consequently, power losses and cost are reduced.
Traditionally, there are two cases for the DC sources in multi-level inverters: (1) equal DC sources, and (2) non-equal DC sources. Several algorithms have been suggested for the above purposes. In [37], the Newton-Raphson method has been used to solve the equations. The Newton-Raphson method is fast and exact for those modulation indices (M) that can satisfy the equations, but it cannot obtain the best answer for other indices. Also, [38] has used the mathematical theory of resultants to find the switching angles such that all corresponding low-order harmonics are completely canceled out sequentially, for both equal and non-equal DC sources separately. However, as the number of levels of multi-level converters increases, the equation set tends to a high-order polynomial, which narrows its feasible solution space. In addition, this method cannot suggest any answer for minimizing the harmonics at particular modulation indices where there is no acceptable solution for the equation set. A genetic algorithm (GA) method has been presented in [39] to solve the same problem with any number of levels, for both eliminating and minimizing the harmonics, but it is not fast and exact enough. This method has also been used in [40] to eliminate the mentioned harmonics for non-equal DC sources. Moreover, all of these optimal solutions have used the main equations in the fitness function. This means that the fundamental component cannot be satisfied exactly.
Here, a harmony search (HS) algorithm approach will be presented that can solve the problem with a simpler formulation and with any number of levels without extensive derivation of analytical expressions. It is also faster and more precise than GA.
The cascaded multi-level inverter is one of the several multi-level configurations. It is formed by connecting several single-phase, H-bridge converters in series, as shown in Fig. 1a for a 13-level inverter. Each converter generates a square-wave voltage waveform with a different duty ratio. Together, these form the output voltage waveform, as shown in Fig. 1b. A three-phase configuration can be obtained by connecting three of these converters in Y or Δ. For harmonic optimization, the switching angles $\theta_1$, $\theta_2$, ..., and $\theta_6$ (for a 13-level inverter) shown in Fig. 1b have to be selected so that certain order harmonics are eliminated.
## 8. Problem statement
Fig. 6b shows a 13-level inverter, where $\theta_1$, $\theta_2$, ..., and $\theta_6$ are variables that should be determined. Each full-bridge inverter produces a three-level waveform $+V_{dc}$, $-V_{dc}$, and $0$, and each angle $\theta_i$ is related to the $i$th inverter, $i = 1, 2, \ldots, S$. $S$ is the number of DC sources, which is equal to the number of switching angles (in this study, $S = 6$). The number of levels, $L$, is calculated as $L = 2S + 1$. Considering equal amplitudes of all dc sources, the Fourier series expansion of the output voltage waveform is as follows:
$V(t) = \sum_{n=1}^{\infty} V_n \sin(n \omega t)$ (16)
where $V_n$ is the amplitude of the $n$th harmonic. The angles are limited to between 0 and 90 degrees ($0 < \theta_i < \pi/2$). Because of the odd quarter-wave symmetric characteristic of the waveform, the even-order harmonics become zero. Consequently, $V_n$ will be as follows:
$V_n = \begin{cases} \dfrac{4 V_{dc}}{n \pi} \sum_{i=1}^{k} \cos(n \theta_i) & \text{for odd } n \\ 0 & \text{for even } n \end{cases}$ (17)
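A small sketch of Eq. (17), useful for checking a candidate set of switching angles; the angle values below are placeholders, not results from the chapter:

```python
import math

def harmonic_amplitude(n, thetas, v_dc=1.0):
    """Amplitude V_n of the n-th harmonic for a cascaded multilevel
    inverter with equal DC sources, per Eq. (17)."""
    if n % 2 == 0:
        return 0.0
    return (4.0 * v_dc / (n * math.pi)) * sum(math.cos(n * t) for t in thetas)

angles = [math.radians(a) for a in (10.0, 20.0, 30.0, 45.0, 60.0, 80.0)]  # placeholders
print(harmonic_amplitude(1, angles), harmonic_amplitude(5, angles))
```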
There are two approaches to adjust the switching angles:
1. Minimizing the THD that is not common, because some low order harmonics may remain.
2. Canceling the lower order harmonics and removing the remained harmonics with a filter.
The second approach is preferred. For motor drive applications, it is necessary to eliminate the low-order harmonics from 5 to 17. Hence, in this section, a 13-level inverter is chosen to eliminate the low-order harmonics from 5 to 17. It is not necessary to eliminate the triplen harmonics, because they are canceled in three-phase circuits. Thus, for a 13-level inverter, Eq. (17) changes into (18).
$\begin{aligned} M &= \cos(\theta_1) + \cos(\theta_2) + \cdots + \cos(\theta_6) \\ 0 &= \cos(5\theta_1) + \cos(5\theta_2) + \cdots + \cos(5\theta_6) \\ &\;\;\vdots \\ 0 &= \cos(17\theta_1) + \cos(17\theta_2) + \cdots + \cos(17\theta_6) \end{aligned}$ (18)
Here, $M$ is the modulation index, defined as:
$M = \dfrac{V_1}{4 V_{dc} / \pi} \quad (0 < M \leq 6)$ (19)
It is necessary to determine the six switching angles, namely $\theta_1$, $\theta_2$, ..., and $\theta_6$, so that equation set (18) is satisfied. These equations are nonlinear, and different methods can be applied to solve them.
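One way to hand equation set (18) to a numerical solver or a heuristic is to express it as a residual vector. This is only a sketch; the choice of eliminated harmonic orders (5, 7, 11, 13, 17, skipping triplens) is an assumption consistent with the text:

```python
import math

ELIMINATED_ORDERS = (5, 7, 11, 13, 17)  # assumed low-order, non-triplen harmonics

def residuals(thetas, m):
    """Residuals of equation set (18) for a 13-level inverter: the first entry
    enforces the fundamental (modulation index m), the rest enforce zero
    5th ... 17th harmonics."""
    r = [sum(math.cos(t) for t in thetas) - m]
    for n in ELIMINATED_ORDERS:
        r.append(sum(math.cos(n * t) for t in thetas))
    return r

angles = [math.radians(a) for a in (10.0, 20.0, 30.0, 45.0, 60.0, 80.0)]  # placeholders
print(residuals(angles, m=4.0))
```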
## 9. Genetic algorithm
In order to optimize the THD, a genetic algorithm (GA), which is based on natural evolution and populations, is implemented. This algorithm is usually applied to reach a near-global optimum solution. In each iteration of the GA (referred to as a generation), a new set of strings (i.e., chromosomes) with improved fitness is produced using genetic operators (i.e., selection, crossover and mutation).
| $\theta_1$ | $\theta_2$ | $\theta_3$ | $\theta_4$ | $\theta_5$ | $\theta_6$ |
|---|---|---|---|---|---|
| 4.75 | 13.02 | 30.26 | 43.55 | 87.36 | 89.82 |

### Table 3.
A typical chromosome
1. Chromosome’s structure
The chromosome structure of the GA is shown in Table 3; it involves the $\theta_i$ as the parameters of the inverter.
2. Selection
The method of tournament selection is used for the selections in the GA [41-42]. This method chooses each parent by picking $n_t$ (the tournament size) players randomly and choosing the best individual out of that set to be a parent. In this section, $n_t$ is chosen as 4.
3. Crossover
Crossover allows the genes from different parents to be combined in children by exchanging material between two parents. The crossover function randomly selects a gene at the same coordinate from one of the two parents and assigns it to the child. For each chromosome, a random number is selected. If this number is between 0.01 and 0.3 [42], the two parents are combined; otherwise the chromosome is transferred with no crossover.
4. Mutation
The GA creates mutation children by randomly changing the genes of individual parents. In this section, the GA adds a random vector from a Gaussian distribution to the parents. For each chromosome, a random number is selected. If this number is between 0.01 and 0.1 [42], the mutation process is applied; otherwise the chromosome is transferred with no mutation.
## 10. Harmony search algorithm
Harmony Search Algorithm (HSA) has been implemented based on the algorithm described in section 4 of the first part of this chapter [34-35].
## 11. Simulation results
Harmony Search algorithm has been used to solve the optimization problem. The objective function has been chosen as follows:
$f = \left( 100 \dfrac{V_1^* - V_1}{V_1^*} \right)^4 + \sum_{i=2}^{6} \dfrac{1}{h_i} \left( 50 \dfrac{V_{h_i}}{V_1} \right)^2$ (20)
where $V_1^*$ is the desired fundamental harmonic, and $h_1 = 1$, $h_2 = 5$, ..., $h_6 = 17$ are the orders of the first six viable harmonics at the output of a three-phase multi-level inverter, respectively. The parameters of the harmony search algorithm have been chosen as: HMS=10, HMCR=0.9, PAR=0.6, and bw=0.01. The optimal solution vector obtained after 1000 iterations is: [10.757, 16.35, 26.973, 39.068, 59.409, 59.409]. With these switching angles, the output voltage waveform and its spectrum will be obtained as shown in Fig. 2. The values of the objective function and the total harmonic distortion (THD) have been obtained as THD=4.73% and f=4.8e-8. The simulation has also been performed with GA, and the results obtained are THD=7.11% and f=0.05. It is obvious that the harmony search algorithm performed much better than the GA approach.
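A sketch of how the objective of Eq. (20) could be coded, with the harmonic amplitudes taken from Eq. (17). The scaling constants follow the equation as reconstructed here, so treat the details as an approximation of the authors' implementation rather than a faithful copy:

```python
import math

HARMONIC_ORDERS = (1, 5, 7, 11, 13, 17)  # h_1 ... h_6

def v_n(n, thetas, v_dc=1.0):
    # Harmonic amplitude per Eq. (17), odd n only
    return (4.0 * v_dc / (n * math.pi)) * sum(math.cos(n * t) for t in thetas)

def objective(thetas, v1_target, v_dc=1.0):
    """Objective of Eq. (20): heavy penalty on fundamental error plus
    weighted penalties on the remaining low-order harmonics."""
    v1 = v_n(1, thetas, v_dc)
    f = (100.0 * (v1_target - v1) / v1_target) ** 4
    for h in HARMONIC_ORDERS[1:]:
        f += (1.0 / h) * (50.0 * v_n(h, thetas, v_dc) / v1) ** 2
    return f

angles = [math.radians(a) for a in (10.757, 16.35, 26.973, 39.068, 59.409, 59.409)]
print(objective(angles, v1_target=v_n(1, angles)))  # fundamental term is zero here
```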
## 12. Conclusion
In the first part of this chapter, we presented a novel approach for the optimal placement of multi-type FACTS devices based on the harmony search algorithm. Simulations of the IEEE 30-bus test system for different scenarios demonstrate that the placement of multi-type FACTS devices leads to an improvement in security and a reduction in the losses of power systems.
In the second part, the harmony search algorithm was proposed for harmonic optimization in multi-level inverters. The harmony search algorithm has more flexibility than conventional methods. This method can obtain optimum switching angles for a wide range of modulation indices. This advantage is of importance especially when the number of switching angles goes up, where the equation set may not have any solution, or when it is solvable only for a short range of modulation indices. Moreover, the implementation of the harmony search algorithm is very straightforward compared to conventional methods like Newton-Raphson, where it is necessary to calculate the Jacobian matrix. In addition, one of the most attractive features of intelligent algorithms is their independence from the particular case study. Actually, an intelligent algorithm can be applied to a variety of different problems without any need for extensive manipulations. For example, the harmony search and GA algorithms are able to find optimum switching angles in order to cancel out low-order harmonics, and if it is not possible to completely remove them, they can suggest optimum switching angles so that the low-order harmonics will be reduced as much as possible. Furthermore, with a little manipulation of the defined objective function, one can use HSA and GA as tools for THD optimization. Also, the results indicate that the harmony search algorithm has many benefits over GA, such as simplicity of implementation, precision, and speed of global convergence.
© 2013 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
## How to cite and reference
### Cite this chapter
H. R. Baghaee, M. Mirsalim and G. B. Gharehpetian (February 13th 2013). Application of Harmony Search Algorithm in Power Engineering, Search Algorithms for Engineering Optimization, Taufik Abrão, IntechOpen, DOI: 10.5772/55509. Available from:
org.apache.commons.math3.ode.sampling
## Interface StepInterpolator
• All Superinterfaces:
Externalizable, Serializable
All Known Implementing Classes:
AbstractStepInterpolator, NordsieckStepInterpolator
public interface StepInterpolator
extends Externalizable
This interface represents an interpolator over the last step during an ODE integration.
The various ODE integrators provide objects implementing this interface to the step handlers. These objects are often custom objects tightly bound to the integrator internal algorithms. The handlers can use these objects to retrieve the state vector at intermediate times between the previous and the current grid points (this feature is often called dense output).
One important thing to note is that the step handlers may be so tightly bound to the integrators that they often share some internal state arrays. This implies that one should never use a direct reference to a step interpolator outside of the step handler, either for future use or for use in another thread. If such a need arises, the step interpolator must be copied using the dedicated copy() method.
Since:
1.2
Version:
$Id: StepInterpolator.java 1416643 2012-12-03 19:37:14Z tn$
# Is my used Geiger counter radioactive?
artis
@justamom you should really stop being concerned. I can understand the fear; it is natural for humans to fear the unknown, but in all honesty this fear is baseless.
https://en.wikipedia.org/wiki/Guarapari
There are beaches and other geographical spots on Earth that have a natural average dose twice or three times as high as the one you measured. Some older houses that are built from rocks that contain naturally radioactive elements also have elevated levels.
There are two ways a Geiger counter or anything else can become radioactive: either it has dust or particles from a radioactive source on it physically, or it has been exposed to neutron radiation, which is a specific sort of radiation normally only found in working nuclear reactor cores and other nuclear reactions. We can definitely rule out the second; as for the first, the chances the detector has been contaminated with radioactive dust are low.
Now, if it makes you feel safer (although it's not really needed), you can put on some rubber gloves, take some electronics cleaning alcohol, and clean the detector with it somewhere outside. Then put the detector out to dry, wash your gloves with normal water and soap, and you're done. If you feel up to opening the case and reassembling it, you can try that and clean the inside too with the same alcohol.
Although, as I said, this is a rather pointless exercise; most likely the detector is simply reading off by a little and you have just a boring, regular, normal background level, as most of us do.
It is hard to tell what the reading would be if it were contaminated, because that would depend on what substance and how much.
But really you should stop worrying. The stress from that is causing more harm than the actual radiation even if there was some.
I have worked around liquid mercury and I still have some in a bottle. The same thing could be said here. There are many things that are contaminated with some levels of mercury in our world. As long as the levels are not high enough we are fine.
artis
@fresh_42 Well, not just to radon; you are then also more exposed to Po-210 and Pb-210, both of which are in tobacco, along with countless other nasty stuff.
https://pubmed.ncbi.nlm.nih.gov/14557035/
By the way, Po-210 is also the poison the KGB/FSB used to assassinate their former agent turned "traitor", Litvinenko.
https://en.wikipedia.org/wiki/Poisoning_of_Alexander_Litvinenko
I am by no means judging you; I used to smoke too, before I kind of came to realize it's rather useless, especially with the crap tobacco they use in cigarettes nowadays. Also, I have had problems when, for example, having a morning coffee and then smoking a cigarette. Since I have vegetative dystonia, the few times I tried to mix these things my heart went into arrhythmia, one time almost to the point that I felt like I was having convulsions and thought I was about to die. Caffeine and nicotine mixed can be a real killer for some; no need for heroin.
After that I put a stop to any such activities.
Kind of ironic that people are afraid of radiation, while there are at least 10 substances we humans use daily that can kill easily if used the wrong way, from alcohol to caffeine and tobacco, etc.
Last edited:
Staff Emeritus
@justamom, nobody here can tell you the history of the device. Doubly so because we cannot even take a look at it. If you told us "I have a spoon, where has it been?" we couldn't answer that either.
What we can say is that the meter's reading does not indicate any danger. It shows the normal, natural level of radiation in the environment, and if anything, is on the low side of average. In this regard, it's better than a spoon - it still can't tell us its history, but it can say that right now there is no elevated risk.
The caveat is that this device is a toy. It is not intended for any serious health physics use. It isn't calibrated and isn't calibratable. It's probably not too far off, but I wouldn't expect two different devices to give the exact same readings anyway.
Finally, the most important source of radiation is radon, and this device doesn't measure it.
berkeman and fresh_42
Well, it's pretty much a toy
Actually, this can be considered a serious instrument, capable of providing the information it is designed to provide. What it is not capable of is determining your biological dose, which the sievert is supposed to represent. This device detects ionizing events due to radiation that can penetrate the detector's wall. From there on, the significance of the reading must be interpreted based on the type of radiation and the concentration of the radioactive material assumed. Typically the GM counter is used to indicate or find problems in areas where the radiation material environment is known. Just walking into some arbitrary and especially high radiation environment with a GM counter can be very dangerous. "Caveat Usor"
Staff Emeritus
I disagree. This device is intended for "Hey look! Fiestaware is slightly radioactive!" If I wanted to "indicate or find problems in areas where the radiation material environment is known" I would want something intended for that: an honest-to-goodness survey meter. These cost 20-30x what this one costs.
All GM counters do the same thing; the only difference is the quality of the materials from which they are constructed, and perhaps some characteristics like stability and robustness. The most expensive GM counter will tell you no more.
justamom
@justamom you should really stop being concerned. I can understand the fear; it is natural for humans to fear the unknown, but in all honesty this fear is baseless.
https://en.wikipedia.org/wiki/Guarapari
There are beaches and other geographical spots on Earth that have a natural average dose twice or three times as high as the one you measured. Some older houses that are built from rocks that contain naturally radioactive elements also have elevated levels.
There are two ways a Geiger counter or anything else can become radioactive: either it has dust or particles from a radioactive source on it physically, or it has been exposed to neutron radiation, which is a specific sort of radiation normally only found in working nuclear reactor cores and other nuclear reactions. We can definitely rule out the second; as for the first, the chances the detector has been contaminated with radioactive dust are low.
Now, if it makes you feel safer (although it's not really needed), you can put on some rubber gloves, take some electronics cleaning alcohol, and clean the detector with it somewhere outside. Then put the detector out to dry, wash your gloves with normal water and soap, and you're done. If you feel up to opening the case and reassembling it, you can try that and clean the inside too with the same alcohol.
Although, as I said, this is a rather pointless exercise; most likely the detector is simply reading off by a little and you have just a boring, regular, normal background level, as most of us do.
It is hard to tell what the reading would be if it were contaminated, because that would depend on what substance and how much.
But really you should stop worrying. The stress from that is causing more harm than the actual radiation even if there was some.
I have worked around liquid mercury and I still have some in a bottle. The same thing could be said here. There are many things that are contaminated with some levels of mercury in our world. As long as the levels are not high enough we are fine.
How do you know it wasn’t exposed to neutron radiation?
What if someone took it on a tour to Chernobyl or Fukushima or some nuclear power plant?
Also if there is radioactive dust on it, then the readings would’ve been how much hypothetically?
Keith_McClary
There are many things that are contaminated with some levels of mercury in our world.
Someone in another forum had broken a compact fluorescent bulb in a child's room and was so concerned about the mercury that they moved the kid out and taped the door shut.
Staff Emeritus
2022 Award
What would the reading be if the Geiger counter was contaminated with radioactive materials?
More than what the counter was reading. Seriously, you're worried over nothing.
There's no indication that your counter is contaminated.
Staff Emeritus
2022 Award
How do you know it wasn’t exposed to neutron radiation?
What if someone took it on a tour to Chernobyl or Fukushima or some nuclear power plant?
Also if there is radioactive dust on it, then the readings would’ve been how much hypothetically?
Listen to me. There is no reason to believe your counter is contaminated. The sale and transport of radioactive/contaminated materials is a SERIOUS crime and no one in their right mind would do so, even over eBay or through other online retailers.
Staff Emeritus
What if someone took it on a tour to Chernobyl or Fukushima or some nuclear power plant?
We have no evidence of that, but we know it's not very radioactive. It tells us this when it's turned on.
How do we know that your spoons weren't on a tour of Chernobyl or Fukushima or some nuclear power plant?
artis
Just to be clear, Chernobyl as well as Fukushima are not working power plants; they have been closed for a rather long time now. Particle contamination tends to settle over time into the soil and elsewhere, winds carry it away, etc. So unless you go to Chernobyl, find a "hotspot", and then dig it up together with all the dirt and put it into a jar, there is no need to worry.
@justamom don't be afraid.
What I find more interesting is in your first post you said
I’m a mom and was worried about radiation in the new house we moved into
Now what made you think that there could be radiation (beyond safe levels, since background is everywhere) in that new house? Was it some random thought, or was there legitimate concern/information?
PS. Although I am in no position to teach you what to do with your money or life, I would advise, now that you have already spent the money and have the device, don't be afraid of it; keep it and don't throw it away. If you have kids (especially a son) and you happen to travel somewhere, like to those old mines or other tourist locations, you can take that dosimeter with you and teach your kids something about physics, like, for example, how certain rocks have higher concentrations of naturally radioactive elements.
Just a thought.
Last edited:
justamom
Just to be clear, Chernobyl as well as Fukushima are not working power plants; they have been closed for a rather long time now. Particle contamination tends to settle over time into the soil and elsewhere, winds carry it away, etc. So unless you go to Chernobyl, find a "hotspot", and then dig it up together with all the dirt and put it into a jar, there is no need to worry.
@justamom don't be afraid.
What I find more interesting is in your first post you said
Now what made you think that there could be radiation (beyond safe levels, since background is everywhere) in that new house? Was it some random thought, or was there legitimate concern/information?
PS. Although I am in no position to teach you what to do with your money or life, I would advise, now that you have already spent the money and have the device, don't be afraid of it; keep it and don't throw it away. If you have kids (especially a son) and you happen to travel somewhere, like to those old mines or other tourist locations, you can take that dosimeter with you and teach your kids something about physics, like, for example, how certain rocks have higher concentrations of naturally radioactive elements.
Just a thought.
They poured mortar all over the original floor to raise the subfloor and to plug up all the expansion gaps in the original floor, then they covered it with a vinyl floor that has a limestone composite, and there are also granite countertops. Mortar, limestone and granite all emit radiation, so I wanted to check them. It also has pink tiles in the bathroom that I wanted to make sure weren't radioactive.
justamom
We have no evidence of that, but we know it's not very radioactive. It tells us this when it's turned on.
How do we know that your spoons weren't on a tour of Chernobyl or Fukushima or some nuclear power plant?
The spoons I use are all brand new, I always buy new spoons.
justamom
Listen to me. There is no reason to believe your counter is contaminated. The sell and transport of radioactive/contaminated materials is a SERIOUS crime and no one in their right mind would do so. Even over ebay or through other online retailers.
I only worry because Amazon sometimes will get returned products and repackage and sell them off as new. This has happened many times with other things like clothing, hats, toys I bought... sometimes they’ve been opened before but Amazon still sells them to me.
If u look up the reviews online for the GQ GMC500 plus or look on YouTube, u can see people using it on a tour in chernobyl, they put it right on the ground in the dirt, or they put it on top of uranium ore or cesium 137 or other radioactive things etc.
Mentor
2022 Award
I only worry because Amazon sometimes will get returned products and repackage and sell them off as new.
I have heard the story the other way around: they throw away brand new returns because repackaging and restoring them would be too expensive.
justamom
I have heard the story the other way around: they throw away brand new returns because repackage and restore them would be too expensive.
Also, yesterday my 2-year-old found a big rock in the backyard. She was holding it, and I took it from her and saw it had green on it too. Does this mean it is radioactive and has uranium in it? Do I need to throw away the clothes she wore when she was playing with it? Can the uranium from the rock contaminate her hands, her clothes, etc.?
Mentor
2022 Award
Also, yesterday my 2-year-old found a big rock in the backyard. She was holding it, and I took it from her and saw it had green on it too. Does this mean it is radioactive and has uranium in it? Do I need to throw away the clothes she wore when she was playing with it? Can the uranium from the rock contaminate her hands, her clothes, etc.?
The green on the rock is most likely of biological origin, algae or moss. If you are lucky, then it is olivine. Neither of them is radioactive. The chances that there is uranium in or on the rock are basically zero.
justamom
The green on the rock is most likely of biological origin, algae or moss. If you are lucky, then it is olivine. Neither of them is radioactive. The chances that there is uranium in or on the rock are basically zero.
it didn't look like algae or moss, I hope it was just olivine...
btw, how do u know so much about radiation? What kind of background or profession do u have?
Mentor
2022 Award
What kind of background or profession do u have?
None that is related to it. But nuclear energy and its dangers have been a topic here, unlike in France or the US, since the 80's, even before Chernobyl. The nearest plant is just a few miles away, and in case of a nuclear war, I would be hit as one of the first (10 miles away from a #1 target).
It is not necessary to have specific knowledge to assess your situation. School physics and endless discussions about the dangers of nuclear energy are sufficient. It is probably far more dangerous (in terms of radiation) to fly from New York to Hawaii than it is to be in your home.
If you had said that there is a nuclear power plant or other nuclear facility in your neighborhood, then I would have spoken otherwise. Those who support the use of nuclear energy say it's harmless, but the truth is that, e.g., leukemia casualties around the British Sellafield plant have been significantly higher than elsewhere (in the 80's; I am not sure about the current situation). Those nuclear industry complexes might be a danger, and I am personally not really happy that so many of them are old and placed along the American Pacific coast, a seismic hot spot, but this is not the subject here. As mentioned by others, smoke from tobacco is far more radioactive than anything that can usually be found in a household. If it were so easy to find radioactive material anywhere, then some bad guys would already have used it.
justamom
None that is related to it. But nuclear energy and its dangers have been a topic here, unlike in France or the US, since the 80's, even before Chernobyl. The nearest plant is just a few miles away, and in case of a nuclear war, I would be hit as one of the first (10 miles away from a #1 target).
It is not necessary to have specific knowledge to assess your situation. School physics and endless discussions about the dangers of nuclear energy are sufficient. It is probably far more dangerous (in terms of radiation) to fly from New York to Hawaii than it is to be in your home.
If you had said that there is a nuclear power plant or other nuclear facility in your neighborhood, then I would have spoken otherwise. Those who support the use of nuclear energy say it's harmless, but the truth is that, e.g., leukemia casualties around the British Sellafield plant have been significantly higher than elsewhere (in the 80's; I am not sure about the current situation). Those nuclear industry complexes might be a danger, and I am personally not really happy that so many of them are old and placed along the American Pacific coast, a seismic hot spot, but this is not the subject here. As mentioned by others, smoke from tobacco is far more radioactive than anything that can usually be found in a household. If it were so easy to find radioactive material anywhere, then some bad guys would already have used it.
Actually, we live 30 miles from a nuclear power plant, as well as a few miles from a former Marine base.
Mentor
but the truth is that e.g. leukemia casualties around the British Sellafield site have been significantly higher than elsewhere
Leukemia cases around planned but never constructed nuclear power plants are higher as well.
Cherry-picking and mixing correlation with causality isn't evidence of anything.
If you account for the demographics the effect disappears.
https://www.nature.com/news/2011/110506/full/news.2011.275.html
@justamom: There are just a few hundred people who have access to the dangerous parts of Chernobyl and Fukushima, and they don't use $100 Geiger counters from Amazon. And if they did, your Geiger counter would detect it.
There is really nothing to worry about.
I work with particle accelerators and have worked with irradiated materials.
PeterDonis
justamom
There is really nothing to worry about.
Thank you, but what if someone put it directly on top of cesium 137? I saw some guy on YouTube who bought the same Geiger counter as me and he placed it directly on top of a piece of cesium 137
weirdoguy
Staff Emeritus
Oh, for heaven's sake. You got your answer, several times. It's clear you don't want to believe it (I don't understand why), so don't. Throw the doggone thing out and be done with it then.
weirdoguy, Wrichik Basu and fresh_42
justamom
Oh, for heaven's sake. You got your answer, several times. It's clear you don't want to believe it (I don't understand why), so don't. Throw the doggone thing out and be done with it then.
I have really bad contamination OCD, it’s so hard for me to be logical because my brain keeps telling me I harmed my kids and baby somehow by contaminating them...
Gold Member
I have really bad contamination OCD, it’s so hard for me to be logical because my brain keeps telling me I harmed my kids and baby somehow by contaminating them...
Yup. I just drew attention to that (sorry @Vanadium 50 I was too quick on-the-draw), for readers just tuning in.
You know your condition, and I'll bet you know that knowing it doesn't make it go away, yes?
I think it's important to point out that there's very little we can do to mollify your concerns beyond what has already been said. If you are still concerned at this point, you will have to decide for yourself what more direct and more conclusive measures you need to take.
justamom
Yup. I just drew attention to that (sorry @Vanadium 50 I was too quick on-the-draw), for readers just tuning in.
You know your condition, and I'll bet you know that knowing it doesn't make it go away, yes?
I think it's important to point out that there's very little we can do to mollify your concerns beyond what has already been said. If you are still concerned at this point, you will have to decide for yourself what more direct and more conclusive measures you need to take.
I know. The only reason I even bought a Geiger counter is because I was worried about radiation in the new house we were moving into: worried about the mortar inside the house, the limestone floors, the granite, the previous owners or people who entered the house afterwards who could have contaminated it. Then after I got the Geiger counters, I kept worrying they're used because they had scratches on them or stains on the manual... It is all OCD, because nothing can bring me peace of mind... I'm so sorry for bothering all of you. I really appreciate all of your answers and help.
artis
@justamom I have vegetative dystonia; it doesn't make me afraid of anything, but it causes other setbacks. The only thing I can suggest to you is to just care less. Yes, sometimes it is better to care less about the things around us than to care more, because if caring more about every minor thing causes you stress, then trust me, it will only lead to poorer health and less happiness in the long term.
I don't want to be judgemental, but the way I see it, too many parents these days torture their kids with all kinds of made-up dangers and scare them. Even though kids don't understand the language, they feel their parents' emotional state even better than we adults feel one another's. In the long term, I believe (and actual studies have also concluded) that our children are affected by the emotional background their parents raised them in. So I would suggest you not worry, and actually work on your OCD, because that will do you more harm in the long term than anything radiation-related ever will. This is a fact.
Remember what Roosevelt once said: "The only thing we have to fear is fear itself." I knew a woman who had to learn this literally; her nerve problems prevented her from doing many simple things because she was afraid of them, like taking a train or a bus if it crossed a bridge anywhere. Irrational fear severely affects a person, even to the point of premature illness and death.
Speaking about cancer and radiation: a human body has trillions of cells; cells constantly die off and new ones are created. Statistically speaking this is a nightmare, as it is hard to make any long-term predictions about a process that involves so many unknown variables. What do I want to say with this?
The truth is that linking cancer to a specific cause, unless the cause is extremely obvious, like working in an asbestos mine, is very hard if not impossible. Cancer can be caused by lots of factors, and not all people are equally susceptible to it. By the way, not enough sleep, stress and a bad diet can also lead to it; by far I think these are a much bigger cause than anything radiation-related. Tobacco is surely another proven danger.
Your tiny little dosimeter and your granite tops and mortar play no real role in all of this, you can be sure of that.
And in case you are afraid of green rocks: well, you now have a dosimeter, so I suggest you use it, not out of fear but for learning purposes and curiosity. Take a reading of the rock; if nothing changes, you know it's not uranium...
One other thing you should know: naturally radioactive ore or elements are not really dangerous unless you eat them or try to use them as table salt. Elements only really become dangerous once they have been irradiated or have undergone a nuclear reaction, stuff like spent fuel, reactor core parts, etc.
Trust me, you will not come across them in your entire life, even if you wanted to.
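As a rough illustration of the "take a reading and see whether anything changes" suggestion above, here is a small Python sketch (with made-up count numbers, purely hypothetical) of how to compare a rock reading against background using Poisson counting statistics:

import math

# Hypothetical numbers, just to show the comparison.
background_counts = 250    # e.g. counts in 10 minutes with no rock nearby
rock_counts = 265          # counts in 10 minutes with the rock against the detector

difference = rock_counts - background_counts
# For Poisson counting, each count fluctuates by about sqrt(count).
sigma = math.sqrt(background_counts + rock_counts)

print(f"difference = {difference} counts, expected fluctuation ~ {sigma:.1f}")
if difference > 3 * sigma:
    print("Reading is significantly above background.")
else:
    print("Difference is within normal statistical fluctuation; no evidence of extra activity.")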
Gold Member
Throw the doggone thing out and be done with it then.
NO! I'll take it. I like radiation.
Staff Emeritus
|
1. Dec 5, 2003
### dhris
Hi, I'm hoping someone out there is going to see something in this problem that I don't because I really don't get it:
Consider the equation:
$$\sigma=(\omega + i \nu k^2)+\frac{\alpha^2}{\omega + i \eta k^2}$$
It doesn't really matter what the variables mean, (i^2=-1 of course) but what I really need is to figure out $$\omega$$, which is complex, as a function of the rest (under a certain approximation). The book I found this in claims that under the following conditions:
$$|\sigma|>>|\alpha|$$
as well as some vague statement about $$\nu, \eta$$ being small, the two roots of the quadratic are:
$$\omega \approx -i \nu k^2 + \sigma + \frac{\alpha^2}{\sigma + i(\eta-\nu)k^2}$$
and
$$\omega \approx -i \eta k^2 - \frac{\alpha^2}{\sigma}$$
I don't know how they came up with this, but it would be really great to find out. Anybody have any ideas?
Thanks,
dhris
Last edited: Dec 5, 2003
2. Dec 5, 2003
### Hurkyl
Staff Emeritus
Well, what is the exact solution for $\omega$; maybe dwelling upon that will indicate how to come up with those approximations.
3. Dec 7, 2003
### dhris
Thanks, that's what I was doing. I couldn't see how they applied the approximation though, but figured it out soon after I posted. Why does it always happen that way?
dhris
|
# Why are text files 4kB?
For some reason, when I make a text file on OS X, it's always at least 4kB, unless it's blank. Why is this? Could there be 4,000 bytes of metadata about 1 byte of plain text?
-
4096 bytes, not 4000. – Mechanical snail Jan 22 '13 at 3:45
@Mechanicalsnail 4095. You forgot the one byte of actual data – Tobias Kienzler Jan 22 '13 at 8:19
@Mechanicalsnail it's a leap year, isn't it? xkcd.com/394 :P – tkbx Jan 22 '13 at 12:26
## 3 Answers
The block size of the file system must be 4 kB. When data is written to a file that is contained in a file system the operating system must allocate blocks of storage to contain the data that will be written to the file.
Typically, when a file system is created the storage contained in that file system is segmented into blocks of a fixed size. This Wikipedia article briefly explains this process.
The underlying file system for this file must have a 4 KB block size. This file is using one 4 KB block, and only one byte within that block contains actual data.
-
A comment: In Windows, the actual file size is displayed by default, and the size on disk is displayed in the Options pane. – Joe Z. Jan 22 '13 at 1:11
All file systems have a cluster or block size, or the smallest amount of disk space that can be allocated to hold a file. Even if the actual file size is smaller than the cluster/block size, it will still consume one cluster, or 4K on your file system. The cluster size depends on the file system, and the file system options.
If it contains zero bytes, as Gilles pointed out, it uses zero blocks/clusters but one inode on typical *nix file systems, which better answers the caveat, "unless it's blank."
-
“Even if a file size is zero bytes, it will still consume one cluster.” Actually, no: on typical unix filesystems, an empty file consumes one inode and zero blocks, and there is no notion of cluster that differs from blocks. – Gilles Jan 21 '13 at 22:36
@Gilles Ah, thanks re: the inode. And yup, cluster/block. – Christopher Jan 21 '13 at 22:59
A little experiment to help illustrate this:
First, let's see what the actual block size of my root ext4 (LVM) partition is:
[root@fedora17 blocksize]# dumpe2fs /dev/mapper/vg_fedora17-lv_root | grep -i "block size"
dumpe2fs 1.42.3 (14-May-2012)
Block size: 4096
It is 4096 (4 KiB), as expected. Now, let's create three files: The first is zero bytes, the second is just one byte, and the third is 4 KiB (the block size):
[root@fedora17 blocksize]# touch 0_bytes.bin
[root@fedora17 blocksize]# dd if=/dev/zero of=1_byte.bin bs=1 count=1
[root@fedora17 blocksize]# dd if=/dev/zero of=4096_bytes.bin bs=1 count=4096
Now, we ls the directory. We use the -s option to see the allocated size (the left-most column), in number of 1024-byte "blocks."
(ls doesn't know the real block size is 4096 -- we could specify --block-size but that scales everything by that value, and we want to see the actual file size in bytes, too).
[root@fedora17 blocksize]# ls -ls
total 8
0 -rw-r--r--. 1 root root 0 Jan 21 23:56 0_bytes.bin
4 -rw-r--r--. 1 root root 1 Jan 21 23:38 1_byte.bin
4 -rw-r--r--. 1 root root 4096 Jan 21 23:38 4096_bytes.bin
Two things can be noted here:
• The zero-byte file takes up zero blocks in the filesystem, confirming what Gilles stated.
• Even though the other two files have different file sizes, they both take up 4 × 1024 bytes, i.e. one 4 KiB ext4 block.
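If you want to check this on your own machine, here is a small Python sketch (results depend on the filesystem; on ext4 with 4 KiB blocks you would typically see 4096 bytes allocated for a 1-byte file):

import os
import tempfile

# Write a single byte and compare the file's logical size with the space the
# filesystem actually allocated for it.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x")
    path = f.name

st = os.stat(path)
fs = os.statvfs(path)
print("logical size (bytes):  ", st.st_size)           # 1
print("allocated size (bytes):", st.st_blocks * 512)    # st_blocks is in 512-byte units
print("filesystem block size: ", fs.f_frsize)           # typically 4096
os.unlink(path)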
-
Yes you are quite correct, the 4k is the size the file system uses to store information regarding the storage of the file inside the file system. Things such as the index of the file from the beginning of a block, the index of the block and the size of memory utilized by the file are stored, which eat up 4k. This information is used to reference the text file from the file system. – pvn Jan 22 '13 at 4:24
This is incorrect. File metadata like you mention do not "eat up" any of the 4KiB. Those structures are part of the filesystem formatting overhead. See my answer above for proof. If what you said was true, then my 4096-byte file would need more than one block. – Jonathon Reinhart Jan 22 '13 at 4:55
Pointers to the file (segment no, blk no) in the file system are the things that have to be stored and require one block to be assigned. If the text file has so little content that it can fit in the first block already assigned to it, then it won't require a second block allocation. I agree that the whole of 4k is not used for the metadata and some internal fragmentation arises. – pvn Jan 22 '13 at 5:19
I'm saying that none of the 4 KiB block size is used for metadata. I think my example proves that. – Jonathon Reinhart Jan 22 '13 at 5:51
@pvn: Jonathon is right. Metadata is stored in the inode for the file, which is separate from the block used to store file data. – Mechanical snail Feb 7 '13 at 3:49
|
# How do you simplify square root of 25(A+2)to the 2nd power?
Jun 2, 2018
$\sqrt{25 {\left(A + 2\right)}^{2}} = \pm 5 \left(A + 2\right)$
#### Explanation:
Your question is somewhat ambiguous as it is not clear whether "to the 2nd power" refers to $\left(A + 2\right)$ or the entire term.
I will assume we are looking for: $\sqrt{25 {\left(A + 2\right)}^{2}}$
Expression $= \sqrt{25 {\left(A + 2\right)}^{2}}$
$= \sqrt{25} \cdot \sqrt{{\left(A + 2\right)}^{2}}$
$= \pm 5 \left(A + 2\right)$
|
On Finding Quantum Multi-collisions
A k-collision for a compressing hash function H is a set of k distinct inputs that all map to the same output. In this work, we show that for any constant k, $\Theta\!\left(N^{\frac{1}{2}\left(1-\frac{1}{2^{k}-1}\right)}\right)$ quantum queries are both necessary and sufficient to achieve a k-collision with constant probability. This improves on the best prior upper bound (Hosoyamada et al., ASIACRYPT 2017) and provides the first non-trivial lower bound, completely resolving the problem.
1 Introduction
Collision resistance is one of the central concepts in cryptography. A collision for a hash function $H$ is a pair of distinct inputs $x_1 \neq x_2$ that map to the same output: $H(x_1) = H(x_2)$.
Multi-collisions.
Though receiving comparatively less attention in the literature, multi-collision resistance is nonetheless an important problem. A $k$-collision for $H$ is a set of $k$ distinct inputs $x_1,\dots,x_k$ such that $H(x_i) = H(x_j)$ for all $i, j$.
Multi-collisions frequently surface in the analysis of hash functions and other primitives. Examples include MicroMint [RS97], RMAC [JJV02], chopMD [CN08], Leamnta-LW [HIK11], PHOTON and Parazoa [NO14], and the Keyed-Sponge [JLM14], all of which assume the multi-collision resistance of a certain function. Multi-collision algorithms have also been used in attacks, such as those on MDC-2 [KMRT09], HMAC [NSWY13], Even-Mansour [DDKS14], and LED [NWW14]. Multi-collision resistance for polynomial $k$ has also recently emerged as a theoretical way to avoid keyed hash functions [BKP18, BDRV18], or as a useful cryptographic primitive, for example, to build statistically hiding commitment schemes with succinct interaction [KNY18].
Quantum.
Quantum computing stands to fundamentally change the field of cryptography. Importantly for our work, Grover’s algorithm [Gro96] can speed up brute force searching by a quadratic factor, greatly increasing the speed of pre-image attacks on hash functions. In turn, Grover’s algorithm can be used to find ordinary collisions () in time , speeding up the classical “birthday” attack which requires time. It is also known that, in some sense (discussed below), these speedups are optimal [AS04, Zha15a]. These attacks require updated symmetric primitives with longer keys in order to make such attacks intractable.
1.1 This Work: Quantum Query Complexity of Multi-collision Resistance
In this work, we consider quantum multi-collision resistance. Unfortunately, little is known about the difficulty of finding multi-collisions in the quantum setting. The only prior work on this topic is that of Hosoyamada et al. [HSX17], who give an $O(N^{4/9})$ algorithm for 3-collisions, as well as algorithms for general constant $k$. On the lower bounds side, the $\Omega(N^{1/3})$ bound from the $k=2$ case applies as well for higher $k$, and this is all that is known.
We completely resolve this question, giving tight upper and lower bounds for any constant $k$. In particular, we consider the quantum query complexity of multi-collisions. We will model the hash function as a random oracle. This means, rather than getting concrete code for a hash function $H$, the adversary is given black box access to a function chosen uniformly at random from the set of all functions from $[M]$ into $[N]$. Since we are in the quantum setting, black box access means the adversary can make quantum queries to $H$. Each query will cost the adversary 1 time step. The adversary’s goal is to solve some problem — in our case find a $k$-collision — with the minimal cost. Our results are summarized in Table 1. Both our upper bounds and lower bounds improve upon the prior work for $k \geq 3$; for example, for $k = 3$, we show that the quantum query complexity is $\Theta(N^{3/7})$.
1.2 Motivation
Typically, the parameters of a hash function are set to make finding collisions intractable. One particularly important parameter is the output length of the hash function, since the output length in turn affects storage requirements and the efficiency of other parts of a cryptographic protocol.
Certain attacks, called generic attacks, apply regardless of the implementation details of the hash function $H$, and simply work by evaluating $H$ on several inputs. For example, the birthday attack shows that it is possible to find a collision in time approximately $\sqrt{N}$ by a classical computer. Generalizations show that $k$-collisions can be found in time $\Theta(N^{(k-1)/k})$ (here, the Big Theta notation hides a constant that depends on $k$).
These are also known to be optimal among classical generic attacks. This is demonstrated by modeling as an oracle, and counting the number of queries needed to find (-)collisions in an arbitrary hash function . In cryptographic settings, it is common to model as a random function, giving stronger average case lower bounds.
Understanding the effect of generic attacks is critical. First, they cannot be avoided, since they apply no matter how is designed. Second, other parameters of the function, such as the number of iterations of an internal round function, can often be tuned so that the best known attacks are in fact generic. Therefore, for many hash functions, the complexity of generic attacks accurately represents the actual cost of breaking them.
Therefore, for “good” hash functions where generic attacks are optimal, in order to achieve security against classical adversaries must be chosen so that time steps are intractable. This often means setting , so . In contrast, generic classical attacks can find -collisions in time . For example, this means that must be set to to avoid -collisions, or to avoid -collisions.
Once quantum computers enter the picture, we need to consider quantum queries to in order to model actual attacks that evaluate in superposition. This changes the query complexity, and makes proving bounds much more difficult. Just as understanding query complexity in the classical setting was crucial to guide parameter choices, it will be critical in the quantum world as well.
We also believe that quantum query complexity is an important study in its own right, as it helps illuminate the effects quantum computing will have on various areas of computer science. It is especially important to cryptography, as many of the questions have direct implications to the post-quantum security of cryptosystems. Even more, the techniques involved are often closely related to proof techniques in post-quantum cryptography. For example, bounds for the quantum query complexity of finding collisions in random functions [Zha15a], as well as more general functions [EU17, BES17], were developed from techniques for proving security in the quantum random oracle model [BDF11, Zha12, TU16]. Similarly, the lower bounds in this work build on techniques for proving quantum indifferentiability [Zha18]. On the other hand, proving the security of MACs against superposition queries [BZ13] resulted in new lower bounds for the quantum oracle interrogation problem [van98] and generalizations [Zha15b].
Lastly, multi-collision finding can be seen as a variant of $k$-distinctness, which is essentially the problem of finding a $k$-collision in a function, where the $k$-collision may be unique and all other points are distinct. The quantum query complexity of $k$-distinctness is currently one of the main open problems in quantum query complexity. An upper bound of was shown by Belovs [Bel12]. The best known lower bound is [BKT18]. Interestingly, the dependence of the exponent on $k$ is exponential for the upper bound, but polynomial for the lower bound, suggesting a fundamental gap in our understanding of the problem.
Note that our results do not immediately apply in this setting, as our algorithm operates only in a regime where there are many (-)collisions, whereas -distinctness applies even if the -collision is unique and all other points are distinct (in particular, no -collisions). On the other hand, our lower bound is always lower than , which is trivial for this problem. Nonetheless, both problems are searching for the same thing — namely a -collisions — just in different settings. We hope that future work may be able to extend our techniques to solve the problem of -distinctness.
1.3 The “Reciprocal Plus 1” Rule
For many search problems over random functions, such as pre-image search, collision finding, -sum, quantum oracle interrogation, and more, a very simple folklore rule of thumb translates the classical query complexity into quantum query complexity.
In particular, all of these problems have a classical query complexity for some rational number . Curiously, the quantum query complexity of all these problems is always .
In slightly more detail, for all of these problems the best classical -query algorithm solves the problem with probability for some constants , where . Then the classical query complexity is . For this class of problems, the success probability of the best query quantum algorithm is obtained simply by increasing the power of by . This results in a quantum query complexity of . Examples:
• Grover’s pre-image search [Gro96] improves success probability from to , which is known to be optimal [BBBV97]. The result is a query complexity improvement from to .
Similarly, finding, say, 2 pre-images has classical success probability ; it is straightforward to adapt known techniques to prove that the best quantum success probability is . Again, the query complexity goes from to . Analogous statements hold for any constant number of pre-images.
• The BHT collision finding algorithm [BHT98] finds a collision with probability , improving on the classical birthday attack . Both of these are known to be optimal [AS04, Zha15a]. Thus quantum algorithms improve the query complexity from to .
Similarly, finding, say, 2 distinct collisions has classical success probability , whereas we show that the quantum success probability is . More generally, any constant number of distinct collisions conforms to the Reciprocal Plus 1 Rule.
• -sum asks to find a set of inputs such that the sum of the outputs is 0. This is a different generalization of collision finding than what we study in this work. Classically, the best algorithm succeeds with probability . Quantumly, the best algorithm succeeds with probability [BS13, Zha18]. Hence the query complexity goes from to .
Again, solving for any constant number of distinct -sum solutions also conforms to the Reciprocal Plus 1 Rule.
• In the oracle interrogation problem, the goal is to compute input/output pairs, using only queries. Classically, the best success probability is clearly . Meanwhile, Boneh and Zhandry [BZ13] give a quantum algorithm with success probability roughly , which is optimal.
Some readers may have noticed that the Reciprocal Plus 1 (RP1) rule does not immediately appear to apply to Element Distinctness. The Element Distinctness problem asks to find a collision in where the collision is unique. Classically, the best algorithm succeeds with probability . On the other hand, quantum algorithms can succeed with probability , which is optimal [Amb04, Zha15a]. This does not seem to follow the prediction of the RP1 rule, which would have predicted . However, we note that unlike the settings above which make sense when , and where the complexity is characterized by , the Element Distinctness problem requires and the complexity is really characterized by the domain size . Interestingly, we note that for a random expanding function, when , there will with constant probability be exactly one collision in . Thus, in this regime the collision problem matches the Element Distinctness problem, and the RP1 rule gives the right query complexity!
Similarly, the quantum complexity for -sum is usually written as , not . But again, this is because most of the literature considers for which there is a unique -sum and is non-compressing, in which case the complexity is better measured in terms of . Notice that a random function will contain a unique collision when , in which case the bound we state (which follows the RP1 rule) exactly matches the statement usually given.
On the other hand, the RP1 rule does not give the right answer for -distinctness for , since the RP1 rule would predict the exponent to approach for large , whereas prior work shows that it approaches for large . That RP1 does not apply perhaps makes sense, since there is no setting of where a random function will become an instance of -distinctness: for any setting of parameters where a random function has a -collision, it will also most likely have many -collisions.
The takeaway is that the RP1 Rule seems to apply for natural search problems that make sense on random functions when . Even for problems that do not immediately fit this setting such as Element Distinctness, the rule often still gives the right query complexity by choosing so that a random function is likely to give an instance of the desired problem.
Enter k-collisions.
In the case of -collisions, the classical best success probability is , giving a query complexity of . Since the -collision problem is a generalization of collision finding, is similar in spirit to the problems above, and applies to compressing random functions, one may expect that the Reciprocal Plus 1 Rule applies. If true, this would give a quantum success probability of , and a query complexity of .
Even more, for small enough , it is straightforward to find a -collision with probability as desired. In particular, divide the queries into blocks. Using the first queries, find a 2-collision with probability . Let be the image of the collision. Then, for each of the remaining blocks of queries, find a pre-image of with probability using Grover search. The result is colliding inputs with probability . It is also possible to prove that this is a lower bound on the success probability (see lower bound discussion below). Now, this algorithm works as long , since beyond this range the 2-collision success probability is bounded by . Nonetheless, it is asymptotically tight in the regime for which it applies. This seems to suggest that the limitation to small might be an artifact of the algorithm, and that a more clever algorithm could operate beyond the barrier. In particular, this strongly suggests -collisions conforms to the Reciprocal Plus 1 Rule.
Note that the RP1 prediction gives an exponent that depends polynomially on , asymptotically approaching . In contrast, the prior work of [HSX17] approaches exponentially fast in . Thus, prior to our work we see an exponential vs polynomial gap for -collisions, similar to the case of -distinctness.
Perhaps surprisingly given the above discussion (at least, the authors found it surprising!), our work demonstrates that the right answer is in fact exponential, refuting the RP1 rule for $k$-collisions.
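For concreteness, here is a small comparison (an illustrative sketch, not from the paper; it assumes the standard classical exponent $(k-1)/k$ for finding a $k$-collision) of the exponent the RP1 rule would predict against the exponent proved in this work:

from fractions import Fraction

def rp1_exponent(k):
    # Classical k-collision finding takes about N^((k-1)/k) queries; the RP1
    # rule adds 1 to the reciprocal of the exponent: 1/(k/(k-1) + 1) = (k-1)/(2k-1).
    return Fraction(k - 1, 2 * k - 1)

def proven_exponent(k):
    # The bound shown in this work: (1/2) * (1 - 1/(2^k - 1)) = (2^(k-1)-1)/(2^k-1).
    return Fraction(2 ** (k - 1) - 1, 2 ** k - 1)

for k in range(2, 7):
    print(k, rp1_exponent(k), proven_exponent(k))

The two agree at $k = 2$ (both give $1/3$), but for $k \geq 3$ the proven exponent is strictly larger and approaches $1/2$ exponentially fast in $k$ rather than polynomially.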
As mentioned above, our results do not immediately give any indication for the query complexity of -distinctness. However, our results may hint that -distinctness also exhibits an exponential dependence on . We hope that future work, perhaps building on our techniques, will be able to resolve this question.
1.4 Technical Details
1.4.1 The Algorithm
At their heart, the algorithms for pre-image search, collision finding, -sum, and the recent algorithm for -collision, all rely on Grover’s algorithm. Let be a function with a fraction of accepting inputs. Grover’s algorithm finds the input with probability using quantum queries to . Grover’s algorithm finds a pre-image of a point in by setting to be 1 if and only if .
The BHT algorithm [BHT98] uses Grover’s to find a collision in . First, it queries on random points, assembling a database . As long as , all the images in will be distinct. Now, it lets be the function that equals 1 if and only if is found amongst the images in , and is not among the pre-images. By finding an accepting input to , one immediately finds a collision. Notice that the fraction of accepting inputs is approximately .
By running Grover’s for steps, one obtains a such a pre-image, and hence a collision, with probability .
Hosoyamada et al. show how this idea can be recursively applied to find multi-collisions. For , the first step is to find a database consisting of distinct 2-collisions. By recursively applying the BHT algorithm, each 2-collision takes time . Then, to find a 3 collision, set up as before: if and only if is amongst the images in and is not among the pre-images. The fraction of accepting inputs is approximately , so Grover’s algorithm will find a 3-collision in time . Setting to be optimizes the total query count as . For , recursively build a table of 3-collisions, and set up to find a collision with the database.
The result is an algorithm for -collisions for any constant , using queries.
Our algorithm improves on Hosoyamada et al.’s, yielding a query complexity of . Note that for Hosoyamada et al.’s algorithm, when constructing , many different databases are being constructed, one for each entry in . Our key observation is that a single database can be re-used for the different entries of . This allows us to save on some of the queries being made. These extra queries can then be used in other parts of the algorithm to speed up the computation. By balancing the effort correctly, we obtain our algorithm. Put another way, the cost of finding many (-)collisions can be amortized over many instances, and then recursively used for finding collisions with higher . Since the recursive steps involve solving many instances, this leads to an improved computational cost.
In more detail, we iteratively construct databases . Each will have -collisions. We set , indicating that we only need a single -collision. To construct database , simply query on arbitrary points. To construct database , define the function that accepts inputs that collide with but are not contained in . The fraction of points accepted by is approximately . Therefore, Grover’s algorithm returns an accepting input in time . We simply run Grover’s algorithm times using the same database to construct in time .
Now we just optimize by setting the number of queries to construct each database to be identical. Notice that , so solving for gives us
$$r_k=O\!\left(\frac{q^{\,2-\frac{1}{2^{k-1}}}}{N^{\,1-\frac{1}{2^{k-1}}}}\right)$$
Setting and solving for gives the desired result. In particular, in the case , our algorithm finds a collision in time .
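As a sanity check on this balancing argument, here is a small sketch (not part of the paper; it treats database sizes as powers of $N$ and assumes the stage sizes $t_i = N^{(2^{k-i}-1)/(2^k-1)}$ implied by the analysis) verifying that every stage then costs $N^{(2^{k-1}-1)/(2^k-1)}$ queries:

from fractions import Fraction

def stage_exponents(k):
    # Database sizes as exponents of N: t_i = N^((2^(k-i) - 1)/(2^k - 1)),
    # with t_0 = N (conceptually, the whole domain) and t_k = 1.
    return [Fraction(2 ** (k - i) - 1, 2 ** k - 1) for i in range(k + 1)]

def stage_cost_exponents(k):
    # Building t_{i+1} (i+1)-collisions costs t_{i+1} * sqrt(N / t_i) queries,
    # i.e. an exponent of e_{i+1} + (1 - e_i) / 2.
    e = stage_exponents(k)
    return [e[i + 1] + (1 - e[i]) / 2 for i in range(k)]

for k in range(2, 6):
    costs = stage_cost_exponents(k)
    assert len(set(costs)) == 1                                  # all stages balanced
    print(k, costs[0], Fraction(2 ** (k - 1) - 1, 2 ** k - 1))   # equals the claimed exponent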
1.4.2 The Lower Bound.
Notice that our algorithm fails to match the result one would get by applying the “Reciprocal Plus 1 Rule”. Given the discussion above, one may expect that our iterative algorithm could potentially be improved on even more. To the contrary we prove that, in fact, our algorithm is asymptotically optimal for any constant .
Toward that end, we employ a recent technique developed by Zhandry [Zha18] for analyzing quantum queries to random functions. We use this technique to show that our algorithm is tight for random functions, giving an average-case lower bound.
Zhandry’s “Compressed Oracles.”
Zhandry demonstrates that the information an adversary knows about a random oracle can be summarized by a database of input/output pairs, which is updated according to special rules. In Zhandry’s terminology, is the “compressed standard/phase oracle”.
This is not a classical database, but technically a superposition of databases, meaning certain amplitudes are assigned to each possible database. can be measured, obtaining an actual classical database with probability equal to its amplitude squared. In the following discussion, we will sometimes pretend that is actually a classical database. While inaccurate, this will give the intuition for the lower bound techniques we employ. In the section 4 we take care to correctly analyze as a superposition of databases.
Zhandry shows roughly the following:
• Consider any “pre-image problem”, whose goal is to find a set of pre-images such that the images satisfy some property. For example, -collision is the problem of finding pre-images such that the corresponding images are all the same.
Then after queries, consider measuring . The adversary can only solve the pre-image problem after queries if the measured has a solution to the pre-image problem.
Thus, we can always upper bound the adversary’s success probability by upper bounding the probability contains a solution.
• $D$ starts off empty, and each query can only add one point to the database.
• For any image point , consider the amplitude on databases containing as a function of (remember that amplitude is the square root of the probability). Zhandry shows that this amplitude can only increase by from one query to the next. More generally, for a set of different images, the amplitude on databases containing any point in can only increase by .
The two results above immediately imply the optimality of Grover’s search. In particular, the amplitude on databases containing is at most after queries, so the probability of obtaining a solution is the square of this amplitude, or . This also readily gives a lower bound for the collision problem. Namely, in order to introduce a collision to , the adversary must add a point that collides with one of the existing points in . Since there are at most such points, the amplitude on such can only increase by . This means the overall amplitude after queries is at most . Squaring to get a probability gives the correct lower bound.
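To make the two bounds just sketched explicit (a worked restatement of the argument above, using the per-query amplitude growth property):
$$\text{pre-image: } \sum_{i=1}^{q}\sqrt{\tfrac{1}{N}}=\frac{q}{\sqrt{N}}\ \Longrightarrow\ \Pr[\text{success}]\le\frac{q^2}{N},\qquad \text{collision: } \sum_{i=1}^{q}\sqrt{\tfrac{i}{N}}=O\!\left(\frac{q^{3/2}}{\sqrt{N}}\right)\ \Longrightarrow\ \Pr[\text{success}]\le O\!\left(\frac{q^{3}}{N}\right),$$
which match the well-known $N^{1/2}$ and $N^{1/3}$ query requirements for pre-image search and collision finding.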
A First Attempt.
Our core idea is to attempt a lower bound for -collision by applying these ideas recursively. The idea is that, in order to add, say, a 3-collision to , there must be an existing 2-collision in the database. We can then use the 2-collision lower bound to bound the increase in amplitude that results from each query.
More precisely, for very small , we can bound the amplitude on databases containing distinct 2-collisions as . If , must be a constant else this term is negligible. So we can assume for that is a constant.
Then, we note that in order to introduce a 3-collision, the adversary’s new point must collide with one of the existing 2-collisions. Since there are at most , we know that the amplitude increases by at most since is a constant. This shows that the amplitude on databases with 3-collisions is at most .
We can bound the amplitude increase even smaller by using not only the fact that the database contains at most 2-collisions, but the fact that the amplitude on databases containing even a single 2-collision is much less than 1. In particular, it is as demonstrated above. Intuitively, it turns out we can actually just multiply the amplitude increase in the case where the database contains a 2-collision by the amplitude on databases containing any 2-collision to get an overall amplitude increase of .
Overall then, we upper bound the amplitude after queries by , given an upper bound of on the probability of finding a 3-collision. This lower bound can be extended recursively to any constant -collisions, resulting in a bound that exactly matches the Reciprocal Plus 1 Rule, as well as the algorithm for small ! This again seems to suggest that our algorithm is not optimal.
Our Full Proof.
There are two problems with the argument above that, when resolved, actually do show our algorithm is optimal. First, when , the part of the amplitude bound becomes vacuous, as amplitudes can never be more than 1. Second, the argument fails to consider algorithms that find many 2-collisions, which is possible when . Finding many 2-collisions of course takes more queries, but then it makes extending to 3-collisions easier, as there are more collisions in the database to match in each iteration.
In our full proof, we examine the amplitude on the databases containing a 3-collision as well as 2-collisions, after queries. We call this amplitude . We show a careful recursive formula for bounding using Zhandry’s techniques, which we then solve.
More generally, for any constant , we let be the amplitude on databases containing exactly distinct -collisions and at least distinct -collisions after queries. We develop a multiply-recursive formula for the in terms of the and . We then recursively plug in our solution to so that the recursion is just in terms of , which we then solve using delicate arguments.
Interestingly, this recursive structure for our lower bound actually closely matches our algorithm. Namely, our proof lower bounds the difficulty of adding an -collision to a database containing many collisions, exactly the problem our algorithm needs to solve. Our techniques essentially show that every step of our algorithm is tight, resulting in a lower bound of , exactly matching our algorithm. Thus, we solve the quantum query complexity of -collisions.
Acknowledgements
This work is supported in part by NSF. Opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NSF.
2 Preliminaries
Here, we recall some basic facts about quantum computation, and review the relevant literature on quantum search problems.
2.1 Quantum Computation
A quantum system is defined over a finite set $B$ of classical states. In this work we will mostly consider $B = \{0,1\}^n$. A pure state over $B$ is a unit vector in $\mathbb{C}^{|B|}$, which assigns a complex number to each element in $B$. In other words, let $|\phi\rangle$ be a pure state in $\mathbb{C}^{|B|}$; we can write it as:
$$|\phi\rangle=\sum_{x\in B}\alpha_x|x\rangle$$
where and is called the “computational basis” of . The computational basis forms an orthonormal basis of .
Given two quantum systems over and over , we can define a product quantum system over the set . Given and , we can define the product state .
We say is entangled if there does not exist and such that . For example, consider and , is entangled. Otherwise, we say is un-entangled.
A pure state can be manipulated by a unitary transformation . The resulting state .
We can extract information from a state is by performing a measurement. A measurement specifies an orthonormal basis, typically the computational basis, and the probability of getting result is . After the measurement, “collapses” to the state if the result is .
For example, given the pure state measured under , with probability the result is and collapses to ; with probability the result is and collapses to .
We finally assume a quantum computer can implement any unitary transformation (by using so-called Hadamard, phase, CNOT and gates), especially the following two gates:
• Classical Computation: Given a function , one can implement a unitary over such that for any ,
$$U_f\,|\phi\rangle=\sum_{x\in X,\,y\in Y}\alpha_{x,y}\,|x,\,y+f(x)\rangle$$
Here, is a commutative group operation defined over .
• Quantum Fourier Transform: Let . Given a quantum state , by applying only basic gates, one can compute where the sequence
is the sequence achieved by applying the classical Fourier transform
to the sequence :
$$y_k=\frac{1}{\sqrt{N}}\sum_{i=0}^{2^n-1}x_i\,\omega_N^{ik}$$
where , is the imaginary unit.
One interesting property of the QFT is that by preparing the state $|0^n\rangle$ and applying the QFT to each qubit, one obtains $\frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}|x\rangle$, which is a uniform superposition over all possible $x$.
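A small numerical check of this property (a sketch, not from the paper): applying the $N$-dimensional QFT matrix to the all-zero basis state produces the uniform superposition, and the matrix is unitary.

import numpy as np

n = 3
N = 2 ** n
omega = np.exp(2j * np.pi / N)
QFT = np.array([[omega ** (j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)

zero = np.zeros(N)
zero[0] = 1.0                                        # the state |0...0>
print(np.round(QFT @ zero, 6))                       # every amplitude equals 1/sqrt(N)
print(np.allclose(QFT.conj().T @ QFT, np.eye(N)))    # the QFT is unitary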
For convenience, we sometimes ignore the normalization of a pure state which can be calculated from the context.
2.2 Grover’s algorithm and BHT algorithm
Definition 1 (Database Search Problem).
Suppose there is a function/database encoded as $F : X \to \{0,1\}$ such that $F^{-1}(1)$ is non-empty. The problem is to find an $x$ such that $F(x) = 1$.
We will consider adversaries with quantum access to , meaning they submit queries as and receive in return . Grover’s algorithm [Gro96] finds a pre-image using an optimal number of queries:
Theorem 2 ([Gro96, Bbht98]).
Let $F$ be a function $F : X \to \{0,1\}$, and let $t > 0$ be the number of pre-images of $1$. There is a quantum algorithm that finds an $x$ such that $F(x) = 1$ with an expected number of quantum queries to $F$ at most $O(\sqrt{|X|/t})$, even without knowing $t$ in advance.
We will normally think of the number of queries as being fixed, and consider the probability of success given the number of queries. The algorithm from Theorem 2, when run for $q$ queries, can be shown to have a success probability $\Theta(\min\{1,\,q^2 t/|X|\})$. For the rest of the paper, "Grover's algorithm" will refer to this algorithm.
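As a quick numerical illustration of this success probability (a sketch, not from the paper, that simulates Grover's iteration in the two-dimensional subspace spanned by the marked and unmarked components; the probability after $q$ iterations is $\sin^2((2q+1)\theta)$ with $\sin\theta=\sqrt{t/N}$, i.e. $\Theta(q^2 t/N)$ for small $q$):

import numpy as np

def grover_success_prob(N, t, q):
    # One Grover iteration (oracle phase flip + diffusion) is a rotation by
    # 2*theta in the plane spanned by the marked and unmarked components.
    theta = np.arcsin(np.sqrt(t / N))
    state = np.array([np.sin(theta), np.cos(theta)])   # uniform superposition
    rot = np.array([[np.cos(2 * theta), np.sin(2 * theta)],
                    [-np.sin(2 * theta), np.cos(2 * theta)]])
    for _ in range(q):
        state = rot @ state
    return state[0] ** 2                               # P(measure a marked item)

N, t = 2 ** 20, 1
for q in [0, 100, 400, 804]:                           # (pi/4)*sqrt(N/t) is about 804
    simulated = grover_success_prob(N, t, q)
    closed_form = np.sin((2 * q + 1) * np.arcsin(np.sqrt(t / N))) ** 2
    print(q, round(simulated, 4), round(closed_form, 4))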
Now let us look at another important problem: -collision finding problem on -to- functions.
Definition 3 (Collision Finding on 2-to-1 Functions).
Assume . Consider a function such that for every , . In other words, every image has exactly two pre-images. The problem is to find such that .
Brassard, Høyer and Tapp proposed a quantum algorithm [BHT98] that solved the problem using only quantum queries. The idea is the following:
• Prepare a list of input and output pairs, where is drawn uniformly at random and ;
• If there is a -collision in , output that pair. Otherwise,
• Run Grover’s algorithm on the following function : if and only if there exists , and . Output the solution , as well as whatever it collides with.
This algorithm takes quantum queries and when , the algorithm finds a -collision with quantum queries.
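A quick way to see where the balance point in the BHT algorithm comes from (a sketch, not from the paper): minimize the total cost $t + \sqrt{N/t}$ of building a list of $t$ input-output pairs and then running one Grover search against it.

import sympy as sp

t, N = sp.symbols("t N", positive=True)
cost = t + sp.sqrt(N / t)                  # list building + one Grover run
t_star = sp.solve(sp.diff(cost, t), t)[0]
print(sp.simplify(t_star))                 # proportional to N**(1/3)
print(sp.simplify(cost.subs(t, t_star)))   # total cost is also proportional to N**(1/3)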
2.3 Multi-collision Finding and [Hsx17]
Hosoyamada, Sasaki and Xagawa proposed an algorithm for -collision finding on any function where ( is a constant). They generalized the idea of [BHT98] and gave the proof for even arbitrary functions. We now briefly talk about their idea. For simplicity in this discussion, we assume is a -to- function.
The algorithm prepares pairs of -collisions by running the BHT algorithm times. If two pairs of -collisions collide, there is at least a -collision (possibly a -collision). Otherwise, it uses Grover’s algorithm to find a , and . The number of queries is . When , the query complexity is minimized to .
By induction, finding a -collision requires quantum queries. By preparing -collisions and applying Grover’s algorithm to it, it takes quantum queries to get one -collision. It turns out that and the complexity of finding -collision is .
In Section 3, we improve their algorithm to quantum queries.
2.4 Compressed Fourier Oracles and Compressed Phase Oracles
In [Zha18], Zhandry showed a new technique for analyzing cryptosystems in the random oracle model. He also showed that his technique can be used to re-prove several known quantum query lower bounds. In this work, we will extend his technique in order to prove a new optimal lower bound for multi-collisions.
The basic idea of Zhandry’s technique is the following: assume is making a query to a random oracle and the query is . Instead of only considering the adversary’s state for a random oracle , we can actually treat the whole system as
$$\sum_{x,u}\sum_{h}a_{x,u}\,|x,\,u+h(x)\rangle\otimes|h\rangle$$
where is the truth table of . By looking at random oracles that way, Zhandry showed that these five random oracle models are equivalent:
1. Standard Oracles:
$$\mathsf{StO}:\ \sum_{x,u}a_{x,u}\,|x,u\rangle\otimes\sum_{h}|h\rangle\ \Longrightarrow\ \sum_{x,u}\sum_{h}a_{x,u}\,|x,\,u+h(x)\rangle\otimes|h\rangle$$
2. Phase Oracles:
$$\mathsf{PhO}:\ \sum_{x,u}a_{x,u}\,|x,u\rangle\otimes\sum_{h}|h\rangle\ \Longrightarrow\ \sum_{x,u}a_{x,u}\,|x,u\rangle\otimes\sum_{h}(-1)^{h(x)\cdot u}\,|h\rangle$$
where . In other words, apply the QFT to the registers, apply the Standard query, and then apply the QFT one more time.
3. Fourier Oracles: We can view as . In other words, if we perform the Fourier transform on a function that always outputs , we will get a uniform superposition over all the possible functions .
Moreover, is equivalent to . Here means updating (xor) the -th entry in the database with .
So in this model, we start with where is an all-zero function. By making the -th query, we have
$$\mathsf{PhO}:\ \sum_{x,u}a^{i-1}_{x,u}\,|x,u\rangle\otimes\mathsf{QFT}\,|D_{i-1,x,u}\rangle\ \Longrightarrow\ \sum_{x,u}a^{i-1}_{x,u}\,|x,u\rangle\otimes\mathsf{QFT}\,|D_{i-1,x,u}\oplus(x,u)\rangle$$
The Fourier oracle incorporates the and operates directly on the registers:
$$\mathsf{FourierO}:\ \sum_{x,u}a^{i-1}_{x,u}\,|x,u\rangle\otimes|D_{i-1,x,u}\rangle\ \Longrightarrow\ \sum_{x,u}a^{i-1}_{x,u}\,|x,u\rangle\otimes|D_{i-1,x,u}\oplus(x,u)\rangle$$
4. Compressed Fourier Oracles: The idea is basically the same as Fourier oracles. But when the algorithm only makes queries, the function for any contains at most non-zero entries.
So to describe , we only need at most different pairs () which says the database outputs on and everywhere else. And is doing the following: 1) if is not in the list and , put in ; 2) if is in the list and , update to in ; 3) if is in the list and , remove from .
In the model, we start with where is an empty list. After making the -th query, we have
$$\mathsf{CFourierO}:\ \sum_{x,u}a^{i-1}_{x,u}\,|x,u\rangle\otimes|D_{i-1,x,u}\rangle\ \Longrightarrow\ \sum_{x,u}a^{i-1}_{x,u}\,|x,u\rangle\otimes|D_{i-1,x,u}\oplus(x,u)\rangle$$
5. Compressed Standard/Phase Oracles: These two models are essentially equivalent up to an application of applied to the query response register. From now on we only consider compressed phase oracles.
By applying QFT on the entries of the database registers of a compressed Fourier oracle, we get a compressed phase oracle.
In this model, contains all the pair which means the oracle outputs on and uniformly at random on other inputs. When making a query on ,
• if is in the database for some , a phase will be added to the state; it corresponds to update to in the compressed Fourier oracle model;
• otherwise a superposition is appended to the state ; it corresponds to put a new pair in the list in the compressed Fourier oracle model;
• also make sure that the list will never have an pair in the compressed Fourier oracle model (in other words, it is in the compressed phase oracle model); if there is one, delete that pair;
• All the ‘append’ and ‘delete’ operations above mean applying QFT.
3 Algorithm for Multi-collision Finding
In this section, we give an improved algorithm for -collision finding. We use the same idea from [HSX17] but carefully reorganize the algorithm to reduce the number of queries.
As a warm-up, let us consider the case $k = 3$ and the case where $F$ is a $3$-to-$1$ function. They give an algorithm with $O(N^{4/9})$ quantum queries. Here is our algorithm with only $O(N^{3/7})$ quantum queries:
• Prepare a list where are distinct and . This requires classical queries on random points.
• Define the following function on :
$$F'(x)=\begin{cases}1, & x\notin\{x_1,x_2,\cdots,x_{t_1}\}\ \text{and}\ F(x)=y_j\ \text{for some}\ j\\ 0, & \text{otherwise}\end{cases}$$
Run Grover’s algorithm on function . Wlog (by reordering ), we find such that and using quantum queries.
• Repeat the last step times, we will have -collisions . This takes quantum queries.
• If two elements in collide, simply output a -collision. Otherwise, run Grover’s on function :
$$G(x)=\begin{cases}1, & x\notin\{x_1,x_2,\cdots,x_{t_2},x'_1,\cdots,x'_{t_2}\}\ \text{and}\ F(x)=y_j\ \text{for some}\ j\\ 0, & \text{otherwise}\end{cases}$$
A -collision will be found when Grover’s algorithm finds a pre-image of on . It takes quantum queries.
Overall, the algorithm finds a $3$-collision using $O(N^{3/7})$ quantum queries.
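To illustrate the structure only, here is a purely classical toy (a sketch with a hypothetical random function, not anything from the paper; Grover's search is replaced by brute-force scanning, so the query counts are not the quantum ones): it collects a list of 2-collisions and then extends one of them to a 3-collision.

import random

random.seed(0)
DOMAIN = 1 << 12
table = [random.randrange(DOMAIN // 3) for _ in range(DOMAIN)]   # roughly 3-to-1
F = lambda x: table[x]

def collect_two_collisions(t):
    # Return up to t 2-collisions (x, x', y) with F(x) = F(x') = y, distinct y's.
    seen, pairs, used_images = {}, [], set()
    for x in range(DOMAIN):
        y = F(x)
        if y in used_images:
            continue
        if y in seen:
            pairs.append((seen[y], x, y))
            used_images.add(y)
            if len(pairs) == t:
                break
        else:
            seen[y] = x
    return pairs

pairs = collect_two_collisions(16)
targets = {y: (x1, x2) for x1, x2, y in pairs}
members = {x for x1, x2, _ in pairs for x in (x1, x2)}

# "Grover stage": look for a fresh pre-image of any image already in the list.
for x in range(DOMAIN):
    if x not in members and F(x) in targets:
        x1, x2 = targets[F(x)]
        print("3-collision:", sorted([x1, x2, x]), "all map to", F(x))
        break
else:
    print("no extension found; retry with a larger list of 2-collisions")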
A similar algorithm and analysis works for any constant $k$ and any $k$-to-$1$ function, which only requires quantum queries. Let . The algorithm works as follows:
• Assume is a -to- function and .
• Prepare a list of input-output pairs of size . With overwhelming probability (), does not contain a collision. By letting , this step makes quantum queries.
• Define a function that returns if the input is not in but the image collides with one of the images in , otherwise it returns . Run Grover’s on times. Every time Grover’s algorithm outputs , it gives a -collision. With probability (explained below), all these collisions do not collide. So we have a list of different -collisions. This step makes quantum queries.
• For , define a function that returns if the input is not in but the image collides with one of the images of -collisions in , otherwise it returns . Run Grover’s algorithm on times. Every time Grover’s algorithm outputs , it gives a -collision. With probability , all these collisions do not collide. So we have a list of different -collisions. This step makes quantum queries.
• Finally given -collisions, using Grover’s to find a single that makes a -collision with one of the -collision in . This step makes quantum queries by letting .
The number of quantum queries made by the algorithm is simply:
$$\sum_{i=0}^{k-1}t_{i+1}\sqrt{N/t_i}\;=\;\sum_{i=0}^{k-1}\sqrt{\frac{N\,t_{i+1}^2}{t_i}}\;=\;\sum_{i=0}^{k-1}\sqrt{N\cdot N^{\frac{2\,(2^{k-(i+1)}-1)-(2^{k-i}-1)}{2^k-1}}}\;=\;k\cdot N^{(2^{k-1}-1)/(2^k-1)}$$
So we have the following theorem:
Theorem 4.
For any constant , any -to- function (), there is an algorithm that finds a -collision using quantum queries.
We now show the above conclusion holds for an arbitrary function as long as . To prove this, we use the following lemma:
Lemma 5.
Let be a function and . Let be the probability that if we choose uniformly at random and , the number of pre-images of is at least . We have .
Proof.
To make the probability as small as possible, we want that if has less than pre-images, should have exactly pre-images. So the probability is at least
$$\mu_F=\frac{\bigl|\{x \mid F(x)\ \text{has at least}\ k\ \text{pre-images}\}\bigr|}{|X|}\;\ge\;\frac{kN-(k-1)N}{kN}\;\ge\;\frac{1}{k}$$
Theorem 6.
Let be a function and . The above algorithm finds a -collision using quantum queries with constant probability.
Proof.
We prove the case . The case
|
# Clustering / high availability setup
## Introduction
Clustering / high availability can be achieved by setting up two midPoint nodes working against common midPoint repository.
In order to do this, it is necessary to set a couple of parameters in midPoint configuration.
An example, when using PostgreSQL database:
<repository>
<repositoryServiceFactoryClass>com.evolveum.midpoint.repo.sql.SqlRepositoryFactory</repositoryServiceFactoryClass>
<database>postgresql</database>
<jdbcUrl>jdbc:postgresql://..../midpoint</jdbcUrl>
<hibernateHbm2ddl>none</hibernateHbm2ddl>
<missingSchemaAction>create</missingSchemaAction>
</repository>
<clustered>true</clustered>
Typically you set the following configuration parameter only:
| Parameter | Description |
| --- | --- |
| clustered | Determines if the installation is running in clustered (failover) mode. Default is false. If you need clustering/failover, set this to true. |
In some circumstances the Quartz component in the task manager needs to use a separate database (usually only if you want to run it on H2). Then you need to configure that properly; see the section on H2 use below.
Important: if there are more nodes sharing a repository, all of them must have the parameter clustered set to true. Otherwise, tasks will not be scheduled correctly. (midPoint will disable scheduling tasks on non-conformant nodes, i.e. on non-clustered nodes that are part of such a system.) The best way to ensure this is to have a common config file. But if that's not possible or practical, make sure that all nodes have the same settings.
Also, ensure your system time is synchronized across all node members (using NTP or a similar service), otherwise strange behaviour may occur, such as tasks restarting on different nodes.
## Other cluster configuration items (midPoint 4.0 and above)
The main difference between 4.0 and previous versions is that since midPoint 4.0 the main mechanism for clusterwide task management is REST instead of JMX.
You can use config.xml configuration parameters placed directly under the <midpoint> element (Configuration parameter in the table below). Alternatively, you can use the command line options -Dkey=value to set these parameters (Command-line parameter in the table below).
Configuration items are:
| Command-line parameter | Configuration parameter | Description |
| --- | --- | --- |
| -Dmidpoint.nodeId | nodeId | The node identifier. The default is DefaultNode for non-clustered deployments. For clustered ones, either nodeId or nodeIdSource must be used. |
| -Dmidpoint.nodeIdSource | nodeIdSource | Source of the node identifier. It is applied if an explicit node ID is not defined. The source can be either hostname, meaning that the host name is used as the node identifier, or random, meaning that a random value for the node ID is generated when the node is started. |
| -Dmidpoint.hostName | hostName | Overrides the local host name information. If not specified, the operating system is used to determine the host name. Normally you do not need to specify this information. |
| -Dmidpoint.httpPort | httpPort | Overrides the local HTTP port information. If not specified, Tomcat/Catalina JMX objects are queried to determine the HTTP port information. This information is used only to construct the URL address used for intra-cluster communication (see below). Normally you do not need to specify this information. Use -Dserver.port=xxx instead to start midPoint using the right port. |
| -Dmidpoint.url | url | Overrides the intra-cluster URL information (see below). Normally you do not need to specify this information. |
### How intra-cluster URL is determined
In order to minimize the configuration work needed while keeping the maximum level of flexibility, the node URL used for intra-cluster communication (e.g. https://node1.acme.org:8080/midpoint) is derived from the following items - in this order:
1. <urlOverride> property in the Node object in the repository
2. -Dmidpoint.url / <url> information in command line or config.xml file
3. computed based on information in infrastructure/intraClusterHttpUrlPattern property, if defined; that property can use the following macros:
1. $host for host name (obtained dynamically from OS or overridden via -Dmidpoint.hostname or <hostname> config property) 2.$port for HTTP port (obtained dynamically from Tomcat JMX objects or overridden via -Dmidpoint.httpPort config property)
3. $path for midPoint URL path (obtained dynamically from the servlet container) 4. computed based on protocol scheme (obtained dynamically from Tomcat JMX objects), host name, port, and servlet path, as scheme://host:port/path. When troubleshooting these mechanisms you can set logging for com.evolveum.midpoint.task.quartzimpl.cluster.NodeRegistrar (or the whole task manager module) to DEBUG. ## Testing cluster on a single node If you want to test the cluster on a single node (running on different ports, of course) you need to set the following experimental configuration parameter to the value of true: Command-line parameter Configuration parameter Meaning -Dmidpoint.taskManager.localNodeClusteringEnabled <localNodeClusteringEnabled> in <taskManager> section Allows more nodes to use a single IP address. (So that cluster containing mode nodes on a single host can be formed.) Experimental. ## Configuring the cluster before midPoint 4.0 Mainly because of JMX limitations, some parameters have to be set up via Java system properties. In the following we expect the Oracle JRE is used. Parameter Meaning midpoint.nodeId This is an identifier of the local node. It is not part of the midPoint configuration, because we assume that this configuration file will be shared among cluster members. The default value is: DefaultNode. However, when running in clustered mode, there is no default, and this property must be explicitly specified. midpoint.jmxHostName Host name on which this node wants to be contacted (via JMX) by other nodes in cluster. (It will be announced to other nodes via Node record in repository.) Usually not necessary to specify, as the default is the current host IP address. com.sun.management.jmxremote.port This is the port on which JMX agent will listen. It must be specified for clustered mode, because JMX is used to query status of individual nodes and to manage them (start/stop scheduler, stop tasks on that node). And, if you test a clustering/failover configuration (more midPoint nodes) on a single machine, be sure to set this parameter to different values for individual midPoint nodes. Otherwise, you will get "java.net.BindException: Address already in use: JVM_Bind" exception on tomcat startup. com.sun.management.jmxremote.ssl Whether SSL will be used for JMX communication. For sample installations it can be set to false, however, for production use we recommend setting it totrue (alongside other SSL-related JMX properties, see http://docs.oracle.com/javase/1.5.0/docs/guide/management/agent.html#remote. com.sun.management.jmxremote.password.file and com.sun.management.jmxremote.access.file Names of the password and access files for JMX authentication and authorization. E.g. d:\midpoint\config\jmxremote.password, d:\midpoint\config\jmxremote.access. Examples of these files are in the samples/jmx directory in SVN.Beware, the jmxremote.password file must be readable only to its owner (i.e. user who starts the tomcat), otherwise the JVM refuses to start. In Windows, you typically have to stop inheriting permissions to this file, and manually remove all entries that grant access to persons other than the owner. Also, the following configuration items in <taskManager> section of config.xml have to be set: Parameter Meaning jmxUsername, jmxPassword Credentials used for JMX communication among cluster nodes. Default values are midpoint and secret respectively, but we strongly recommend changing at least the JMX password. 
Currently, all nodes should be accessible using the same credentials.

An example:

NodeA (in catalina.bat):

```
SET CATALINA_OPTS=-Dmidpoint.nodeId=NodeA \
 -Dmidpoint.home=d:\midpoint\home \
 -Dcom.sun.management.jmxremote=true \
 -Dcom.sun.management.jmxremote.port=20001 \
 -Dcom.sun.management.jmxremote.ssl=false \
 -Dcom.sun.management.jmxremote.password.file=d:\midpoint\home\jmxremote.password \
 -Dcom.sun.management.jmxremote.access.file=d:\midpoint\home\jmxremote.access
```

NodeB (in catalina.bat):

```
SET CATALINA_OPTS=-Dmidpoint.nodeId=NodeB \
 -Dmidpoint.home=d:\midpoint\home \
 -Dcom.sun.management.jmxremote=true \
 -Dcom.sun.management.jmxremote.port=20002 \
 -Dcom.sun.management.jmxremote.ssl=false \
 -Dcom.sun.management.jmxremote.password.file=d:\midpoint\home\jmxremote.password \
 -Dcom.sun.management.jmxremote.access.file=d:\midpoint\home\jmxremote.access
```

(Note: the JMX port is set to 20002 just to allow running both nodes on a single machine. If you are sure they will not be run on a single machine, we recommend setting the port to the same value, just for simplicity.)

(Note: when you have a firewall, please also set com.sun.management.jmxremote.rmi.port to the same port as com.sun.management.jmxremote.port.)

### Cluster infrastructure configuration

Even in 3.9 and below, there are some types of information (e.g. reports) that are accessed using REST calls. So midPoint needs to have an intra-cluster HTTP URL pattern specified. This should be the HTTP/HTTPS pattern used by midPoint nodes to communicate with each other. The pattern is in fact a URL prefix pointing to the root URL of the application. It is specified in the system configuration object, as shown in the example below.

```
<systemConfiguration>
  ...
  <infrastructure>
    <intraClusterHttpUrlPattern>https://$host/midpoint</intraClusterHttpUrlPattern>
  </infrastructure>
  ...
</systemConfiguration>
```
### Troubleshooting JMX
The following message(s) may appear in idm.log if there is a problem with JMX password:
2014-03-04 14:05:31,700 [MODEL] [http-bio-8080-exec-3] ERROR (com.evolveum.midpoint.model.controller.ModelController): Couldn't search objects in task manager, reason: Authentication failed! Invalid username or password
2014-03-04 14:05:31,701 [] [http-bio-8080-exec-3] ERROR (com.evolveum.midpoint.web.page.admin.server.dto.NodeDtoProvider): Unhandled exception when listing nodes, reason: Subresult com.evolveum.midpoint.task.api.TaskManager..searchObjects of operation com.evolveum.midpoint.model.controller.ModelController.searchObjects is still UNKNOWN during cleanup; during handling of exception java.lang.SecurityException: Authentication failed! Invalid username or password
The following message(s) may appear in idm.log if there is a problem with a firewall between the IDM nodes:
2014-05-26 09:07:38,438 [TASKMANAGER] [http-bio-8181-exec-1] ERROR (com.evolveum.midpoint.task.quartzimpl.execution.RemoteNodesManager): Cannot connect to the remote node node02 at 10.1.1.2:8123, reason: Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: Exception creating connection to: 10.1.1.2; nested exception is: java.net.NoRouteToHostException: No route to host]
Please note that JMX communication seems to need more than just the JMX port specified in the Tomcat startup configuration (in this fragment, 8123)! I resolved the problem by simply allowing all TCP communication between the nodes. I will update this solution after I find a better one ☺
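When diagnosing this kind of error, it can help to first verify plain TCP reachability of the remote node's JMX port before looking at JMX itself. A small sketch follows (the host and port are illustrative, taken from the log fragment above; remember that JMX over RMI may also need an additional RMI port unless com.sun.management.jmxremote.rmi.port is pinned):

```python
# Spot-check TCP reachability of a remote midPoint node's JMX port.
import socket

host, port = "10.1.1.2", 8123   # illustrative values from the log message above

try:
    with socket.create_connection((host, port), timeout=5):
        print(f"TCP connection to {host}:{port} succeeded")
except OSError as exc:
    print(f"TCP connection to {host}:{port} failed: {exc}")
```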
The following message may appear if the clocks are not synchronized between midPoint nodes:
2014-05-26 00:45:32,818 [TASKMANAGER] [QuartzScheduler_midPointScheduler-node02_ClusterManager] WARN (org.quartz.impl.jdbcjobstore.JobStoreTX): This scheduler instance (node02) is still active but was recovered by another instance in the cluster. This may cause inconsistent behavior.
## Using H2 when clustered
Using H2 in clustered mode is not recommended, because it brings needless complexity. First, H2 has to be configured to run as a standalone server process. Second, Quartz and midPoint need to use separate MVCC-related settings.
An example:
In the <repository> section:

```
<repository>
  <repositoryServiceFactoryClass>com.evolveum.midpoint.repo.sql.SqlRepositoryFactory</repositoryServiceFactoryClass>
  <baseDir>${midpoint.home}</baseDir>
  <embedded>false</embedded>
  <asServer>true</asServer>
  <driverClassName>org.h2.Driver</driverClassName>
  <jdbcUrl>jdbc:h2:tcp://localhost:6000/~/midpoint;LOCK_MODE=1;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=10000</jdbcUrl>
  <hibernateDialect>org.hibernate.dialect.H2Dialect</hibernateDialect>
  <hibernateHbm2ddl>update</hibernateHbm2ddl>
</repository>
```

and in the <taskManager> section:

```
<clustered>true</clustered>
<jdbcUrl>jdbc:h2:tcp://localhost:6000/~/midpoint-quartz;MVCC=TRUE;DB_CLOSE_ON_EXIT=FALSE</jdbcUrl>
```
The following task manager settings are relevant in this context:

| Parameter | Meaning |
|---|---|
| jdbcUrl | If you are using H2, you have to use database parameters different from those used by the midPoint repository. And, because MVCC mode is to be enabled, the task manager has to use a database instance different from the one used by the repository. (If you are using a database other than H2, you may skip setting a special jdbcUrl in the <taskManager> configuration; the jdbcUrl from the repository config will be used, and as a result the Quartz tables will be stored in the same database instance as the midPoint tables.) |
| dataSource | Uses the specified data source to obtain DB connections (see Repository Configuration). |

Other task manager database settings (e.g. the JDBC username and password, driver class name, and Hibernate dialect) are taken by default from the <repository> configuration, but, of course, they may be overridden in the task manager configuration.
H2 then has to be started independently of both nodes. In this case, it is expected to listen on port 6000. To do that, you can use e.g. this command line:
java -jar h2-1.3.171.jar -tcp -tcpPort 6000 -tcpAllowOthers
|
# Problem: Draw atomic orbital diagrams representing the ground-state electron configuration for Co (cobalt). How many unpaired electrons are present?
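For reference, a brief sketch of the answer: cobalt has atomic number 27, so its ground-state electron configuration is

$$\mathrm{Co}:\ [\mathrm{Ar}]\,4s^{2}\,3d^{7}$$

Filling the five 3d orbitals according to Hund's rule gives occupations ↑↓, ↑↓, ↑, ↑, ↑, so three electrons are unpaired.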
|
# Survival Analysis with Cox Regression: What to Do When the Data Do Not Meet the Model's Conditions

Cox regression, in full the Cox proportional hazards model, is mainly used to study factors that influence a time-to-event (survival) outcome, or to evaluate the effect of a clinical treatment on patients' survival.

Cox regression places few demands on the dependent and independent variables: the outcome only needs to consist of a binary survival status together with a survival time, the survival time does not have to follow any particular distribution, and the covariates can be of any type. In addition, Cox regression requires the residuals of the observations to be independent (this is usually not a problem and can essentially be ignored when running the analysis). Two conditions, however, do need to be checked:
1. The proportional hazards assumption;
2. When a covariate is continuous, the relationship between the covariate and the outcome assumed by Cox regression, which is linear after transformation (i.e. linear on the log-hazard scale), must also hold.
## 1. The proportional hazards assumption

### 1.1 What is proportional hazards?

Cox regression imposes one major requirement: although survival declines in every group and the event rate differs from one time point to another, the ratio of the rates must stay the same. For example, if in year 2 the treatment group's death rate is 10% and the control group's is 5%, then in year 3, if the intraoperative radiotherapy group's hazard rate is 20%, the control group's should be around 10%. In other words, the ratio of the death rates, i.e. the hazard ratio (HR), stays constant over time; this is the proportional hazards assumption.
### 1.2 Assessing the proportional hazards assumption

#### 1.2.4 Other methods

The following are examples of how published studies have reported checking the assumption:

1. Proportionality of hazards was assessed for each variable and Schoenfeld residuals were visually inspected for potential time-variant biases. Our assessment of the proportionality of hazards assumption and visual inspection of Schoenfeld residuals showed that none were significant based on a p value threshold of 0·05.
2. The proportional hazards assumption was confirmed by residual plots.
3. We examined the proportional hazards assumption by testing statistical significance of interactions between follow-up time and exposures.
4. We used the Schoenfeld residual test to verify the assumption of proportional hazards in the Cox analysis, which was fulfilled for the end points of death from any cause and further bleeding.
5. We examined the assumption of proportional hazards by using a Wald test of the interaction between treatment status and time.
6. To assess the validity of the proportional hazards assumption, the assumption was assessed by the log-minus-log-survival function and found to hold. To confirm the assumption of proportionality, time-dependent covariate analysis was used.
## 2. The linearity assumption is often overlooked

To check the linearity condition in Cox regression, a common approach is to plot each continuous covariate against the martingale residuals and look for a linear trend, as in the R example below.
```r
# Fit a Cox model and inspect martingale residuals against a continuous covariate
install.packages("survival")
library(survival)

# p1 is the analysis data set; time = survival time, censor = event indicator
resCox <- coxph(Surv(time, censor) ~ age + sex + bui + ch + p + stage + trt, data = p1)
summary(resCox)

# Martingale residuals, plotted against age with a lowess smoother
p1$resid <- residuals(resCox, type = "martingale")
plot(p1$age, p1$resid)
lines(lowess(p1$age, p1$resid))
```
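For readers working in Python rather than R, a rough equivalent can be sketched with the lifelines package (the data set p1 and its columns are assumed, as in the R code above; the lifelines method names follow its documentation, so check them against your installed version):

```python
# Sketch: proportional-hazards check and martingale residuals with lifelines.
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import CoxPHFitter

# p1 is assumed to contain only the duration column, the event column and the covariates.
p1 = pd.read_csv("p1.csv")

cph = CoxPHFitter()
cph.fit(p1, duration_col="time", event_col="censor")

# Section 1.2: Schoenfeld-residual based check of the proportional hazards assumption.
cph.check_assumptions(p1)

# Section 2: martingale residuals against a continuous covariate (cf. the R plot above).
mres = cph.compute_residuals(p1, kind="martingale")
merged = p1.join(mres)                      # align residuals with covariates by index
plt.scatter(merged["age"], merged.iloc[:, -1])
plt.xlabel("age")
plt.ylabel("martingale residual")
plt.show()
```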
|
# How to transform a nondeterministic finite automaton (NFA) into an equivalent regular expression

I'm struggling to understand how to transform a nondeterministic finite automaton (NFA) of the following form (diagram omitted):

into an equivalent regular expression. What I have tried was using Arden's rule; however, I just can't figure out how to simplify the equations and obtain the regular expression corresponding to that NFA.

First, I created the initial equations corresponding to the states:
$1: q3 = q_1 0 + q_1 1$
$2: q1 = q_0 0 + q_1 1$
$3: q0 = q_0 0 + q_0 1 + \epsilon$
Which I have tried to simplify:
$1: q3 = (q_0 0 + q_1 1)0 + (q_0 0 + q_1)1$
$1: q3 = q_0 00 + q_1 100 + q_0 01 + q_1 11$
$1: q3 = q_0(0+1) + q_1(0+1)$
$2: q1 = q_0 00 + q_0 10 + \epsilon 0 + q_0 01 + q_1 11$
$2: q1 = q_0(0+0+1)+ \epsilon 0 + q_1 11$
$3: q0 = q_0 0 + q_0 1 + \epsilon$
$3: q0 = q_0(0+1) + \epsilon$
Here I just got lost. Maybe there is a different approach suitable in this context.
Appreciate any help!
• Did you draw the transition table and try subset construction ? – user93 Jan 22 '18 at 5:21
• Hi yes, I think your example is not correct since if you convert that NFA to DFA it will have multiple final states. – Googme Jan 22 '18 at 5:42
• I used your nfa diagram to draw the transition table and use subset construction to get the dfa. nfa to dfa does not always need to have multiple final states – user93 Jan 22 '18 at 5:49
• Nevertheless, it does not illustrate how to approach the problem I have. – Googme Jan 22 '18 at 6:11
• The video provided in the link explains it in details – user93 Jan 22 '18 at 6:34
Arden's rule, as it is usually stated, is easier to use if you consider equations on $(L_q)_{q\in Q}$, where $L_q$ is the language the automaton accepts starting from state $q$. Doing this, you obtain the following equations; check that you understand them properly:
• $L_0 = 0L_0 + 1L_0 + 0L_1$
• $L_1 = 0L_3 + 1L_3$
• $L_3 = \varepsilon$
(I use $L$ instead of $q$ as it looks less misleading to me)
Once you have those equations, you can solve this as follows
• $L_1 = 0 + 1$ (I replaced $L_3$ with its value)
• $L_0 = (0+1)L_0 + 0(0+1)$ (Replaced $L_1$ and factorized. This is ready for Arden's rule)
• Since $\varepsilon\not\in (0+1)$, Arden's rule gives $L_0 = (0+1)^*0(0+1)$
The language accepted by the automaton is always the union of the $L_{q_i}$, where the $q_i$ are the initial states. So here, $L = L_0 = (0+1)^*0(0+1)$
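For reference, the statement of Arden's rule used in the last step: if $X = AX + B$ and $\varepsilon \notin A$, then $X = A^{*}B$, and this solution is unique.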
• Hi wazdra, is there a way you could show me how to solve them since thats where Im struggling at – Googme Jan 22 '18 at 7:38
• Did you understand how I got the equations ? I solve them right underneath. If you are talking about your equations, those are really not suited for Arden's rule – wazdra Jan 22 '18 at 7:41
• Hmm Im not 100% sure, you are saying that the regular expression corresponding to that NFA in my example is (0+1)*0(0+1)? – Googme Jan 22 '18 at 7:50
• Yes. Solving is easier than creating the equations at start. – wazdra Jan 22 '18 at 7:59
• Yes, $L_3$ corresponds to the language this automaton would accept if you started from $q_3$. Since $q_3$ is an accepting state, you would accept $\varepsilon$. Since there is no transition out of $q_3$, this is all you can accept. So $L_3 = \varepsilon$. – wazdra Jan 22 '18 at 8:20
|
# What is the function for “round n to the nearest m”?
Where $n \in \mathbb{R}$ and $m \in \mathbb{R}$, what is the function $f(n, m)$ that achieves the rounding behaviour of money when the smallest denomination is not a power of ten? (One such function is sketched after the examples below.)
For instance, if a 5¢ coin is the smallest denomination (like in Canada):
• $f(1.02, 0.05) = 1.00$
• $f(1.03, 0.05) = 1.05$
• $f(1.29, 0.05) = 1.30$
• $f(1.30, 0.05) = 1.30$
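A common answer: divide by the step, round to the nearest integer, and scale back, i.e. $f(n, m) = m \cdot \operatorname{round}(n/m)$; with half-up rounding this is $f(n, m) = m \left\lfloor \tfrac{n}{m} + \tfrac{1}{2} \right\rfloor$. A short Python sketch using Decimal so that money values are handled exactly (the half-up tie-breaking matches cash rounding; the function name is just illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to_nearest(n: str, m: str) -> Decimal:
    """Round n to the nearest multiple of m, rounding ties upward."""
    n, m = Decimal(n), Decimal(m)
    return (n / m).to_integral_value(rounding=ROUND_HALF_UP) * m

print(round_to_nearest("1.02", "0.05"))  # 1.00
print(round_to_nearest("1.03", "0.05"))  # 1.05
print(round_to_nearest("1.29", "0.05"))  # 1.30
print(round_to_nearest("1.30", "0.05"))  # 1.30
```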
|
Tag Info
Score: 5

If you have only two $\beta_j$ parameters, just plot it in a 3D plot with $\beta_1$ on the $x$-axis, $\beta_2$ on the $z$-axis, and the loss on the $y$-axis. If there are more parameters, there is no easy way to plot them. What you can do is to use a dimensionality reduction algorithm to reduce the dimensionality of the inputs, as the authors of the loss landscape paper did, ...
Score: 2
Gradient descent is unlikely to ever update such that it sets some weights to exactly 0. This is because at any given step, it's simply unlikely to estimate a gradient update for a parameter that is exactly enough to force the parameter to zero. Additionally, even if the update does force the value to exactly zero at some step, then this may be undone at a ...
Score: 1

Changing the mean of the dependent variable makes no practical difference, as all that happens is that $\beta_0$ changes to match that change, and you are not penalising $\beta_0$. Rescaling also makes little difference beyond rescaling other parts of the calculation, since trying to minimise $\sum^I_{i=1}(y_i - (\beta_0 + \sum^J_{j=1}\beta_jx_{ij} + \dots$ ...
Score: 1
If you use the SGDRegressor in scikit-learn with the epsilon_insensitive loss function specified and the epsilon value set to zero, you will get a model equivalent to LAD with L2 regularization.
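As a concrete illustration of that last point, a minimal sketch (the alpha value is illustrative; X_train and y_train are assumed to be the user's data):

```python
# SGDRegressor with epsilon-insensitive loss and epsilon=0 minimizes absolute
# error (LAD); penalty="l2" adds the ridge-style regularization mentioned above.
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(
    loss="epsilon_insensitive",
    epsilon=0.0,
    penalty="l2",
    alpha=1e-4,
)
# model.fit(X_train, y_train)
```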
|
# If y=|x+1|/x and x!=0, is xy>0?
If $$y=\frac{|x+1|}{x}$$ and $$x\neq{}0$$, is $$xy>0$$ ?
(1) $$x^2+2x+1>0$$
(2) $$y\neq{0}$$
My own question, as always any feedback is appreciated
Originally posted by Zarrolou on 16 May 2013, 11:24.
Last edited by Bunuel on 12 Jul 2014, 12:43, edited 3 times in total.
Reply from nktdotgupta (16 May 2013, 11:47):
Zarrolou wrote:
If $$y=\frac{|x+1|}{x}$$ and $$x\neq{}0$$, is $$xy>0$$ ?
A)$$x^2+2x+1>0$$
B)$$y\neq{0}$$
My own question, as always any feedback is appreciated
Kudos to the first correct solution(s)!
The question can be written as |x+1| = xy.
So, to prove that xy > 0 we essentially need to prove that |x+1| > 0.
STAT1
x^2 + 2x + 1 > 0
means,
(x+1) ^ 2 >0
taking square root on both the sides does not change the inequality
so, we have
|x+1| > 0
which means that xy > 0
So, SUFFICIENT
STAT2
y != 0
we know that
|x+1| = xy => xy is non negative. so, it is either 0 or positive
since we know that both x and y are not equal to zero, xy > 0
so, SUFFICIENT
Reply from nikhilsehgal (16 May 2013, 11:41):
E it is.....................
For getting xy>0 we need to prove that both x and y are either greater than 0 I.e positive or both of them are negative......
Xy = |x+1|
Once negative:
Xy= -x-1
Xy+x=-1
X=-1/y+1
Once positive: similarly we will get x= 1/y-1
We are getting x once positive and negative ......so can't say xy >0 or not....
II
X^2+2x+1>0
Gives us (x+1)^2>0
Gives us x>-1 and from question stem we know that x is not equal to 0 - hence x will be positive.
But we still don't know their value/ sign of y. Hence this statement is also not sufficient.
Hope I'm correct.....
Originally posted by nikhilsehgal on 16 May 2013, 11:41.
Last edited by nikhilsehgal on 16 May 2013, 11:59, edited 1 time in total.
Reply (16 May 2013, 12:23):
Agreed
I made the mistake of multiplying y by x and having to test four different cases:
yx = x+1
-yx = x+1
Then I had to test cases for X>0 and X<0 (i.e. four total cases) and each one turned out to hold true. Your way makes more sense and is quicker to boot.
For 2.) xy = |x+1|, so xy could be 0 or greater than zero. We know that x isn't zero from the stem and we know that y isn't zero from #2. Therefore xy must be greater or less than zero, yielding a positive or negative result. However, because xy equals the absolute value of x+1, xy can only be greater than or equal to zero, and because both x and y are not zero, the product xy MUST be greater than zero.
Good work!
nktdotgupta wrote: (full solution quoted above)
Reply from arpanpatnaik (16 May 2013, 13:23):
Zarrolou wrote:
If $$y=\frac{|x+1|}{x}$$ and $$x\neq{}0$$, is $$xy>0$$ ?
A)$$x^2+2x+1>0$$
B)$$y\neq{0}$$
My own question, as always any feedback is appreciated
Kudos to the first correct solution(s)!
The answer sure does appear to be [D]
Statement 1: $$x^2+2x+1>0$$
Hence $$(x+1)^2$$ > 0
Now since the above is a squared term, the value will always be positive except for x = -1. Since the statement's inequality is strict (it does not allow the value 0), we can assume that x is not equal to -1. Putting the same values into our main equation, i.e. xy = |x+1|, we can be sure that x+1 is not equal to zero at any point. Hence the statement is sufficient, since if we prove |x+1| is not equal to zero, then it is always greater than 0! Hence xy = |x+1| > 0
Statement 2. Now if y is not equal to zero, we can assume |x+1| is never equal to zero! Hence we prove the fact by the same above method, that xy = |x+1| > 0
The idea in the above question, is to establish that at no point x = -1. At only that value y = 0 and xy = 0. Otherwise for all other possible values |x+1| > 0.
As always wonderful question Zarrolou! Hope my procedure is correct!
Regards,
Arpan
Reply from Zarrolou, the original poster (16 May 2013, 13:46):
Good job WholeLottaLove, nktdotgupta, arpanpatnaik !
Official explanation
If $$y=\frac{|x+1|}{x}$$ and $$x\neq{}0$$, is $$xy>0$$ ?
Rewrite the question as (multiply by x) $$xy=|x+1|$$
The absolute value is $$\geq{0}$$, so basically we have to check whether $$xy=|x+1|\neq{0}$$
A)$$x^2+2x+1>0$$
$$(x+1)^2>0$$
So $$x\neq{-1}$$ and $$|x+1|>0$$
This is what we are looking for. Sufficient
B)$$y\neq{0}$$
$$y=\frac{|x+1|}{x}\neq{0}$$ so $$|x+1|\neq{0}$$
Sufficient
Hope everything is clear
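A quick numeric spot-check of this explanation, sketched in Python (the sampling range is arbitrary; this is just an illustration, not part of the thread):

```python
# For any x != 0 with x != -1, y = |x+1|/x gives xy = |x+1| > 0.
import random

for _ in range(10_000):
    x = random.uniform(-5, 5)
    if x in (0, -1):                 # excluded by the stem / statement (1)
        continue
    y = abs(x + 1) / x
    assert x * y > 0                 # xy equals |x+1|, which is strictly positive here
print("xy > 0 held for every sampled x")
```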
Reply from Narenn (16 May 2013, 15:41):
@Zarrolou,
I regret not responding in time.
If $$y=\frac{|x+1|}{x}$$ and $$x\neq{}0$$, is $$xy>0$$ ?
A)$$x^2+2x+1>0$$
B)$$y\neq{0}$$
First let's simplify the given equation:
$$\frac{|x+1|}{x} - y = 0$$ ----------> $$\frac{|x+1|-xy}{x}$$ = 0 -----> we know $$x\neq{}0$$, so |x+1|-xy must be zero. Hence |x+1| - xy = 0 --------> |x+1| = xy
We are asked whether xy>0 --------> whether |x+1| > 0 ? --------> We know the expression within the modulus can either be zero or greater than zero. For xy to be greater than zero, |x+1| has to be greater than zero.
|x+1| will be zero only when x=-1 and for any other value of x, |x+1| will always be greater than zero
So the question can be rephrased as whether $$x\neq{-1}$$
Statement 1) $$x^2$$ + 2x + 1 > 0
Rule :- For any quadratic inequation $$ax^2$$ + bx + c > 0, if $$b^2$$ – 4ac = 0 and a > 0 then the inequality holds true outside the interval of roots
In our case $$b^2$$ – 4ac = 4 – 4 = 0 and a > 0 so $$x^2$$ + 2X + 1 > 0 will hold true for all values beyond the Root(s) of equation (Towards any direction - Positive or Negative)
$$x^2$$ + 2x + 1 = 0 --------> x(x+1) +1(x + 1) = 0 ---------> (x+1)(x+1) = 0 ----------> x=Root = -1
So $$x^2$$ + 2x + 1 > 0 will hold true for any of x except for -1
This reveals that $$x\neq{-1}$$ and xy>0 ----------------> Sufficient
Statement 2) $$y\neq{0}$$
From the question stem we know $$x\neq{}0$$
As per Statement 2, $$y\neq{0}$$-------------->That means both X and Y are nonzero.
|x+1| = xy
xy can be either Positive or Negative
|x+1| can be Zero or Positive
Combining both these inferences we can conclude that XY must be Positive. Sufficient
Regards,
Narenn
Reply (24 May 2014, 11:22):
Let's make things simpler here.
If y=|x+1|/x, Is xy>0?
Now since |x+1| is always non negative then, xy>0 always except when x+1=0---> x=-1.
Thus, we need to prove that x is not equal to -1.
Statement 1.
We have that (x+1)^2>0
Therefore, we have that |x+1|>0
x+1>0---> x>-1 or x+1<0, x<-1. Therefore, in both cases x is different from -1.
Sufficient
Statement 2
If y is different from zero it means that x is also different from -1, since only the numerator can give 0. Remember x can't be zero in this case.
Sufficient
Hope this clarifies
Cheers!
J
Reply from umg (08 Sep 2016, 03:56):
BrushMyQuant, arpanpatnaik, Zarrolou, Narenn & jlgdr
Question: Is xy > 0
Given: $$x\neq{}0$$
$$y=\frac{|x+1|}{x}$$
|x+1| will always be Non-Negative but it can STILL BE 0.
Similarly, y CAN ALSO BE 0.
Statement 1: $$x^2+2x+1>0$$
=> $$(x+1)^2 > 0$$
=> x > -1
There is no constraint on y.
So,
Case 1: If y = 1, x > 0;xy > 0 (x & y must have same sign because |x+1| will always be Non-Negative.)
Case 2: y = -1, x = -0.5; xy > 0 (x & y must have same sign because |x+1| will always be Non-Negative.)
Case 3: y = 0, x > -1; xy = 0 i.e. xy !> 0
Statement 1: Insufficient
Statement 2: $$y\neq{0}$$
Case 1: If y = 1, x > 0;xy > 0 (x & y must have same sign because |x+1| will always be Non-Negative.)
Case 2: y = -1, x = -0.5; xy > 0 (x & y must have same sign because |x+1| will always be Non-Negative.)
Case 3: y = -100, x = -1; xy = |x+1| = 0 i.e. y = 0 (because if x = -1 and xy = 0, y = 0). But this case violates the condition given in Statement 2 that $$y\neq{0}$$.
Therefore, $$x\neq{-1}$$ and as given $$y\neq{0}$$
Hence, xy > 0 (Always)
Statement 2: Sufficient
Reply from Bunuel (08 Sep 2016, 04:11):
umg wrote: (post quoted above)
Notice that x^2+2x+1>0 is true for all values of x except x = -1. So, for (1), $$x\neq -1$$, and if $$x\neq -1$$, then $$y\neq 0$$. Your third case is not possible.
Reply from umg (08 Sep 2016, 04:51):
Bunuel wrote: (reply quoted above)
Wow! This question is crazier than I anticipated.
I see it now. I used that logic in Statement 2 but not in Statement 1.
Thanks for pointing it out.
|
He has an interest in writing articles related to data science, machine learning and artificial intelligence. I’m doing some tinkering with a modified AlexNet and adding in some BatchNorm to look at the position of batchnorm in relation to the activation function, and I’m getting a dimensions error, and I can’t seem to figure out where it’s coming from. The DataLoader performs operations on the downloaded data such as customizing data loading order, automatic batching, automatic memory pinning, etc. In the, , we implemented the AlexNet model using the Keras library and TensorFlow backend on the CIFAR-10 multi-class classification problem. In the Colab, if you wish to use the CUDA interface, set the GPU as the hardware accelerator in the notebook settings. This repo contains tutorials covering image classification using PyTorch 1.6 and torchvision 0.7, matplotlib 3.3, scikit-learn 0.23 and Python 3.8.. We'll start by implementing a multilayer perceptron (MLP) and then move on to architectures using convolutional neural networks (CNNs). I want to do Quantization Aware Training of Alexnet on the Imagenet dataset, going from f32 to int8, to leverage GPU support. The, library is required to import the dataset and other operations. Add, delete, modify and query dataframe, Python multithreading implementation code (simulation of banking service operation process), Encryption and decryption of sequence cipher, Give a few simple examples to better understand the working principle of scratch, Python module_ An example of pylibtiff reading TIF file, Simple login and registration query implemented by JSP + Servlet, Sorting out common MySQL query statements (23 kinds), Flow chart + source code in-depth analysis: the principle of cache penetration and breakdown problems and landing solutions, On the design of rust language and go language from the perspective of error handling, Linux ossutil pulls all files to the server, Vue and react will be able to use JSX and source code summary. ... VGGNet consists of 16 convolutional layers and is very appealing because of its very uniform architecture. PyTorch: https://github.com/shanglianlm0525/PyTorch-Networks. Stanfoard CS231n 2017; Google Inception Model. hub. In that experiment, we defined a simple convolutional neural network that was based on the prescribed architecture of the ALexNet model as proposed in the research work of Alex Krizhevsky. Architecture. To normalize the input image data set, the mean and standard deviation of the pixels data is used as per the standard values suggested by the PyTorch. 翻訳 : (株)クラスキャット セールスインフォメーション 作成日時 : 08/05/2018 (0.4.1) * 本ページは、github 上の以下の pytorch/examples と keras/examples レポジトリのサンプル・コードを参考にしています: Reference. Similar to AlexNet, only 3x3 convolutions, but lots of filters. eval () All pre-trained models expect input images normalized in the same way, i.e. AlexNet consists of eight layers: five convolutional layers, two fully-connected hidden layers, and one fully-connected output layer. Remaining libraries will be imported along with the code segments for better describing the use of that library. Now, we will define the optimizer and loss functions. I wanted to train an AlexNet model on cifar with the architecture from: “Understanding deep learning requires rethinking generalization” Is the following the recommended way to do it: or is there a standard way to do this in pytorch for cifar? AlexNet is one of the popular variants of the convolutional neural network and used as a deep learning framework. 
For this purpose, we will update the structure of each classifier using the below lines of codes. Semantic Segmentation 1. About. Community. I am an entrepreneur with a love for Computer Vision and Machine Learning with a dozen years of experience (and a Ph.D.) in the field. I want to do Quantization Aware Training of Alexnet on the Imagenet dataset, going from f32 to int8, to leverage GPU support. PyTorch Image Classification. mini-batches of 3-channel RGB images of shape (3 x H x W) , where H and W are expected to be at least 224 . If I do C = B then it would mean both are same neural network with parameters getting updated in same way. AlexNet: The Architecture that Challenged CNNs | by Jerry Wei | … Colab [pytorch] Open the notebook in Colab. The transforms library will be used to transform the downloaded image into the network compatible image dataset. Vaibhav Kumar has experience in the field of Data Science and Machine Learning, including research and development. Here, we are defining an object through which we can transform an image into the required image dataset that will be compatible with the AlexNet model. It assumes that the dataset is raw JPEGs from the ImageNet dataset. PyTorch 0.4.1 examples (コード解説) : 画像分類 – Oxford 花 17 種 (AlexNet). The network will be trained on the CIFAR-10 dataset for a multi-class image classification problem and finally, we will analyze its classification accuracy when tested on the unseen test images. how do I ensure that both have different parameters but same architecture? I have 3 neural networks, A, B, C. A and B have different architecture, but I want C to have same architecture as B, but different weights, bias initialization, and its parameters to be updated differently. 데이터 사이언스, 성장, 리더십, BigQuery 등을 … Thank you. ... Popular deep learning frameworks like PyTorch and TensorFlow now have the basic … Parameters. Specifically, we'll implement LeNet, AlexNet, VGG and ResNet. Remaining libraries will be imported along with the code segments for better describing the use of that library. eval () Finally, we can observe that the pre-trained AlexNet model has given the 83% accuracy in multiclass image classification. I hope I can give you a reference, and I hope you can support developeppaer more. Alexnet¶ torchvision.models.alexnet (pretrained=False, progress=True, **kwargs) [source] ¶ AlexNet model architecture from the “One weird trick…” paper. But if you are working in Google Colab and using the hosted runtime, then the installation of PyTorch is not required on the local system. #Updating the third and the last classifier that is the output layer of the network. In this first step, we will import the torch because we are going to implement our AlexNet model in PyTorch. import torch model = torch. That is far better than the AlexNet that we defined in the last article in Keras which was not using the pre-trained weights on the ImageNet dataset. LeNet 1. In that experiment, we did not use the transfer learning approach and did not use the pre-trained network weights on the ImageNet dataset. I more or less copied the AlexNet architecture from the PyTorch code, but added in BatchNorm. progress – If True, displays a progress bar of the download to stderr In the next step, we will train the AlexNet model using the below code snippet. 
I’m doing some tinkering with a modified AlexNet and adding in some BatchNorm to look at the position of batchnorm in relation to the activation function, and I’m getting a dimensions error, and I can’t seem to figure out where it’s coming from. Since most images in ImageNet are more than ten times higher and wider than the MNIST images, objects in ImageNet data tend to occupy more pixels. AlexNet Architecture. To speed-up the performance during training, we will use the CUDA interface with GPU. The. Contribute to bearpaw/pytorch-classification development by creating an account on GitHub. SqueezeNet: AlexNet-level Accuracy With 50x Fewer Parameters and <0.5Mb Model Size. How to resume running. Note: This article is inspired by the PyTorch’s tutorial on training a classifier in which a simple neural network model has been defined for multiclass image classification. Copyright © 2020 Develop Paper All Rights Reserved, Construction of Vue development environment and project creation under mac, 3. #Testing classification accuracy for individual classes. The above example of pytorch‘s implementation of alexnet is the whole content shared by Xiaobian. AlexNet: ILSVRC 2012 winner • Similar framework to LeNet but: • Max pooling, ReLU nonlinearity • More data and bigger model (7 hidden layers, 650K units, 60M params) • GPU implementation (50x speedup over CPU) • Trained on two GPUs for a week • Dropout regularization A. Krizhevsky, I. Sutskever, and G. Hinton, In 2007, right after finishing my Ph.D., This accuracy can certainly be improved when we runt this training for more epochs say 100 or 200. Part V. Best CNN Architecture Part VII. https://colab.research.google.com/drive/14eAKHD0zCHxxxxxxxxxxxxxxxxxxxxx, In the next step, we are going to import the most important libraries. Let us delve into the details below. Join the PyTorch developer community to contribute, ... alexnet = models. In that way, we could achieve an average classification accuracy score of 64.8%. Hand written digit recognition implementation with different models - EdenMelaku/Transfer-Learning-Pytorch-Implementation. Thank you. As we can see in the above description, the last to classifiers are updated and we have 10 nodes as the output features. load ( 'pytorch/vision:v0.6.0' , 'googlenet' , pretrained = True ) model . This may cause the network to overfit or having heavy losses during the training. CNN Architectures: LeNet, AlexNet, VGG, GoogLeNet, ResNet and … If I do C = B then it would mean both are same neural network with parameters getting updated in same way. As mentioned above, AlexNet was the winning entry in ILSVRC 2012. I am an entrepreneur with a love for Computer Vision and Machine Learning with a dozen years of experience (and a Ph.D.) in the field. In 2007, right after finishing my Ph.D., I co-founded TAAZ Inc. with my advisor Dr. David Kriegman and Kevin Barnes. for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels], inputs, labels = data[0].to(device), data[1].to(device), if i % 2000 == 1999: # print every 2000 mini-batches, images, labels = data[0].to(device), data[1].to(device), _, predicted = torch.max(outputs.data, 1), correct += (predicted == labels).sum().item(), print('Accuracy of the network on the 10000 test images: %d %%' % (. Input. View on Github Open on Google Colab import torch model = torch . Our aim is to compare the performance of the AlexNet model when it is used as a transfer learning framework and when not used as a transfer learning framework. 
I have 3 neural networks, A, B, C. A and B have different architecture, but I want C to have same architecture as B, but different weights, bias initialization, and its parameters to be updated differently. load ('pytorch/vision:v0.6.0', 'alexnet', pretrained = True) model. ImageNet training in PyTorch¶ This implements training of popular model architectures, such as ResNet, AlexNet, and VGG on the ImageNet dataset. ... Architecture¶ In AlexNet’s first layer, the convolution window shape is $$11\times11$$. However, to train the model, where can I find the training parameter information, if possible, used for the pre-trained model? The above example of pytorch‘s implementation of alexnet is the whole content shared by Xiaobian. Learn about PyTorch’s features and capabilities. Fei-Fei Li & Justin Johnson & Serena Yeung Lecture 9 - 17 May 2, 2017 Case Study: AlexNet [Krizhevsky et al. In that experiment, we did not use the transfer learning approach and did not use the pre-trained network weights on the ImageNet dataset. Efficient networks; Summary. [PyTorch] [TensorFlow] [Keras] Comparison with latest CNN models like ResNet and GoogleNet AlexNet (2012) In this article, we will employ the AlexNet model provided by the PyTorch as a transfer learning framework with pre-trained ImageNet weights. Once updated, we will gain check the description of the model. PyTorch Image Classification. Classification with PyTorch. Make sure to have 10 output nodes if we are going to get 10 class labels through our model. AlexNet implementation is very easy after the releasing of so many deep learning libraries. . I more or less copied the AlexNet architecture from the PyTorch code, but added in BatchNorm. I hope I can give you a reference, and I hope you can support developeppaer more. He has published/presented more than 15 research papers in international journals and conferences. ... Architecture¶ In AlexNet’s first layer, the convolution window shape is $$11\times11$$. In the below code segment, the CIFAR10 dataset is downloaded from the PyTorch’s dataset library and parallelly transformed into the required shape using the transform method defined above. AlexNet_model.classifier[6] = nn.Linear(1024,10), device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu"), #Move the input and AlexNet_model to GPU for speed if available, 10 Most Used Databases By Developers In 2020, optimizer = optim.SGD(AlexNet_model.parameters(), lr=0.001, momentum=0.9), for epoch in range(10): # loop over the dataset multiple times. So it can be concluded that the AlexNet model has a very good performance when it is used as a transfer learning framework. pretrained – If True, returns a model pre-trained on ImageNet. Once the dataset is downloaded, we will visualize some random images from the dataset using the below function. Finally, the image dataset will be converted to the PyTorch tensor data type. 카일스쿨 유튜브 채널을 만들었습니다. Using the below code snippet, the input image will be first converted to the size 256×256 pixels and then cropped to the size 224×224 pixels as the AlexNet model require the input images with size 224×224. Image Segmentation 기본이론 [3] 4. The architecture used in the 2012 paper is popularly called AlexNet after the first author Alex Krizhevsky. library will be used to transform the downloaded image into the network compatible image dataset. This repo contains tutorials covering image classification using PyTorch 1.6 and torchvision 0.7, matplotlib 3.3, scikit-learn 0.23 and Python 3.8.. 
We'll start by implementing a multilayer perceptron (MLP) and then move on to architectures using convolutional neural networks (CNNs). AlexNet을 기반으로 첫 Conv layer의 filter size를 11에서 7로, stride를 4에서 2로 바꾸고, 그 뒤의 Conv layer들의 filter 개수를 키워주는 등(Conv3,4,5: 384, 384, 256 –> 512, 1024, 512) 약간의 튜닝을 거쳤으며 이 논문은 architecture에 집중하기 보다는, 학습이 … So, as we can see above, the model has given 84.41 % of accuracy in classifying the unseen test images when trained in 10 epochs. In the last article, we implemented the AlexNet model using the Keras library and TensorFlow backend on the CIFAR-10 multi-class classification problem.In that experiment, we defined a simple convolutional neural network that was based on the prescribed architecture of the … AlexNet was the pioneer in CNN and open the whole new research era. transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), train_data = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform), trainloader = torch.utils.data.DataLoader(train_data, batch_size=4, shuffle=True, num_workers=2), test_data = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform), testloader = torch.utils.data.DataLoader(test_data, batch_size=4, shuffle=False, num_workers=2), classes = ('Airplane', 'Car', 'Bird', 'Cat', 'Deer', 'Dog', 'Frog', 'Horse', 'Ship', 'Truck'), plt.imshow(np.transpose(npimg, (1, 2, 0))), imshow(torchvision.utils.make_grid(images)), print(' '.join('%5s' % classes[labels[j]] for j in range(4)), AlexNet_model = torch.hub.load('pytorch/vision:v0.6.0', 'alexnet', pretrained=True), AlexNet_model.classifier[4] = nn.Linear(4096,1024). Vaibhav Kumar has experience in the field of Data Science…. About. alexnet (pretrained = True) squeezenet = models. Unsupervised Learning 3. 纯小白,纯记录环境ubuntu 18.04CUDA 9.0Cudnn 7.0Opencvconda3pycharmpytorch简介使用Alexnet 网络,识别猫狗图片的分类。机子性能原因,只使用了22张图片,epoch 只迭代了10次,只实现了训练代码,纯学习 pytorch-cnn-finetune - Fine-tune pretrained Convolutional Neural Networks with PyTorch 65 VGG and AlexNet models use fully-connected layers, so you have to additionally pass the input size of images when constructing a new model. Alexnet starts with an input layer of 227 x 227 x 3 images , the next convolution layer consists of 96 (11 x 11) filters with a stride of 4. which reduces its dimension by 55 x 55. GoogLeNet was based on a deep convolutional neural network architecture codenamed "Inception" which won ImageNet 2014. For this purpose, the below code snippet will load the AlexNet model that will be pre-trained on the ImageNet dataset. This must be changed to 10. Overview 1. Stochastic gradient descent will be used as an optimizer and cross-entropy will be used for the loss. AlexNet 은 총 5 개의 convolution layers 와 3 개의 full-connected layers 로 구성이 되어 있으며, AlexNet [2] 1. Image Segmentation 기본이론 [1] 2. In that experiment, we defined a simple convolutional neural network that was based on the prescribed architecture of the ALexNet model as proposed in the. I hope I can give you a reference, and I hope you can support developeppaer more. Now, we are going to implement the pre-trained AlexNet model in PyTorch. . Classification with PyTorch. The torchdivision library is required to import the dataset and other operations. hub . AlexNet 의 기본 구조는 아래 그림과 같으며, 전체적으로 보면 2 개의 GPU 를 기반으로 한 병렬 구조인 점을 제외하면, LeNet5 와 크게 다르지 않음을 알 수 있다. class_correct = list(0. for i in range(10)), class_total = list(0. 
for i in range(10)), classes[i], 100 * class_correct[i] / class_total[i])), temp = (100 * class_correct[i] / class_total[i]), Microsoft & Udacity Partner To Launch Machine Learning Scholarship Program, Hands-On Guide to TadGAN (With Python Codes), Guide Towards Fast, Accurate, and Stable 3D Dense Face Alignment(3DDFA-V2) Framework, Complete Guide To AutoGL -The Latest AutoML Framework For Graph Datasets, Restore Old Photos Back to Life Using Deep Latent Space Translation, Top 10 Python Packages With Most Contributors on GitHub, Machine Learning Developers Summit 2021 | 11-13th Feb |. 2012] Full (simplified) AlexNet architecture: [227x227x3] INPUT [55x55x96] CONV1: 96 11x11 filters at stride 4, pad 0 [27x27x96] MAX POOL1: 3x3 filters at stride 2 He holds a PhD degree in which he has worked in the area of Deep Learning for Stock Market Prediction. Colab [pytorch] Open the notebook in Colab. However, to train the model, where can I find the training parameter information, if possible, used for the pre-trained model? how do I ensure that both have different parameters but same architecture? AlexNet – 기본 구조. The following are 30 code examples for showing how to use torchvision.models.alexnet().These examples are extracted from open source projects. Once are confirm with the downloaded image dataset, we ill proceed further and instantiate the AlexNet model. Datasets, Transforms and Models specific to Computer Vision - pytorch/vision Answer for Call in electron mainWindow.minimize After () method, the page state is frozen. If offers CPU and GPU based pipeline for DALI - use dali_cpu switch to enable CPU one. Now, we will check the classification accuracy of our model in classifying images of the individual classes. In that way, we could achieve an average classification accuracy score of 64.8%. rnn import pack_padded_sequence class 20 Jan 2020 A Pytorch implementation of the CNN+RNN architecture on the that is CNN ( Convolutional Neural Networks)& … Reinforcement Learning 3. In AlexNet's first layer, the convolution window shape is 1 1 × 1 1. You can vote up the ones you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example. In this post, we will go over its architecture and discuss its key contributions. AlexNet is one of the popular variants of the convolutional neural network and used as a deep learning framework. As we are going to use this network in image classification with the CIFAR-10 dataset, there will be 10 output class labels to be predicted by the network. The below code was implemented in Google Colab and the .py file was downloaded. Image Segmentation 기본이론 [2] 3. This version has been modified to use DALI. Copyright Analytics India Magazine Pvt Ltd, Top 7 Job Openings In Computer Vision You Should Apply, AlexNet is one of the popular variants of the convolutional neural network and used as a deep learning framework. Overview 1. The input dimensions of the network are (256 × 256 × 3), meaning that the input to AlexNet is an RGB (3 channels) image of (256 × 256) pixels. In the last article, we implemented the AlexNet model using the Keras library and TensorFlow backend on the CIFAR-10 multi-class classification problem. Second, AlexNet used the ReLU instead of the sigmoid as its activation function. Supervised Learning 2. In the next step, we are going to import the most important libraries. 
AlexNet is one of the popular variants of the convolutional neural network and is widely used as a deep learning framework. It was the winning entry in ILSVRC 2012 and, unlike earlier networks, used the ReLU as its activation function. Later architectures built on it: VGG keeps a very uniform design with only 3x3 convolutions but lots of filters, GoogLeNet (the architecture codenamed "Inception") won ImageNet 2014, and ResNet opened a whole new research era of much deeper networks. In this post we take the pre-trained AlexNet model provided by the PyTorch torchvision model database and use the transfer learning approach for the CIFAR-10 multi-class classification problem, keeping the same model architecture as the pre-trained model. Because the pre-trained network was trained on the ImageNet dataset, its final classifier layer, (6): Linear(), has 1000 nodes at the output layer, so we need to update the network in order to get 10 class labels through our model. The workflow is: import torch and torchvision.transforms, download and transform the image dataset (all pre-trained models expect input images normalized in the same way), load and instantiate the AlexNet model (for example with torch.hub.load('pytorch/vision:v0.6.0', 'alexnet', pretrained=True)), update the structure of the final classifier, define the optimizer (stochastic gradient descent) and loss functions, train the model, and finally check the classification accuracy of the trained model on the 10,000 test images. The code was implemented in Google Colab; if possible, work in the notebook in Colab and use the CUDA interface with the GPU set as the device. Before proceeding further, make sure that you have installed PyTorch successfully if you are working on your local system. The accuracy obtained in this run (about 83 % in classifying images of the individual classes) can certainly be improved when we run the training for more epochs, say 100 or 200.
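A minimal sketch of the transfer-learning setup described above, assuming torchvision is available; the resize size, learning rate and momentum used here are illustrative choices, not values taken from the original article:

import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms

# Load the pre-trained AlexNet (trained on ImageNet, so the last classifier layer has 1000 outputs)
model = models.alexnet(pretrained=True)

# Replace the final classifier layer so the network outputs 10 class labels (CIFAR-10)
model.classifier[6] = nn.Linear(4096, 10)

# Use the CUDA interface and set the GPU as the device when one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Pre-trained torchvision models expect inputs normalized with the ImageNet statistics
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Loss function and optimizer (stochastic gradient descent, as mentioned in the text)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)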
|
The displacement of a particle executing simple harmonic motion is given by
$y = A _ 0 + A \sin \omega t + B \ cos \omega t$
Then the amplitude of its oscillation is given by :
• Option 1)
$A _0 + \sqrt{ A^2 + B^2 }$
• Option 2)
$\sqrt{ A^2 + B^2 }$
• Option 3)
$\sqrt{A_0 ^2+ (A + B)^2 }$
• Option 4)
A+ B
The particle executes SHM about its centre at $y = A_0$; combining the sine and cosine terms into a single sinusoid gives amplitude $\sqrt{A^2+B^2}$, so Option 2 is correct.
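One way to see this is to combine the sine and cosine terms into a single sinusoid:
$$A\sin\omega t + B\cos\omega t = \sqrt{A^{2}+B^{2}}\,\sin(\omega t + \varphi),\qquad \tan\varphi = \frac{B}{A},$$
so the motion is an oscillation of amplitude $\sqrt{A^{2}+B^{2}}$ about the shifted mean position $y = A_0$.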
Average velocity of a particle executing SHM in one complete vibration is:
• Option 1)
$\frac{A \omega }{2}$
• Option 2)
$A \omega$
• Option 3)
$\frac{A \omega ^2 }{2 }$
• Option 4)
zero
The average velocity of a particle executing SHM over one complete vibration is zero: the net displacement over a full period $T$ is zero (equivalently, the average of the sine function over a complete cycle is zero). Option 4) zero is correct.
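Written out, with $x(t)$ the displacement over one period $T$:
$$\langle v\rangle = \frac{1}{T}\int_0^T \frac{dx}{dt}\,dt = \frac{x(T)-x(0)}{T} = 0,$$
since the particle returns to its starting point after one complete vibration.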
A particle moves with simple harmonic motion in a straight line. In first τ s, after starting from rest it travels a distance a, and in next τ s it travels 2a in same direction then:-
In standing waves, do particles between two successive antinodes vibrate in the same or opposite phase?
Consecutive antinodes vibrate 180° out of phase; all particles within one loop (between two successive nodes) vibrate in the same phase.
A train of sound waves is propagated along a wide pipe and is reflected from an open end. If the amplitude of the waves is 0.002 cm, the frequency is 1000 Hz and the wavelength is 40 cm, what is the amplitude of vibration at a point 10 cm from the open end inside the pipe?
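A short worked answer, assuming the open end acts as a displacement antinode of the resulting standing wave:
$$A(x) = 2a\cos\frac{2\pi x}{\lambda} = 2(0.002\ \text{cm})\cos\frac{2\pi (10)}{40} = 2(0.002\ \text{cm})\cos\frac{\pi}{2} = 0,$$
so the point 10 cm (one quarter wavelength) from the open end is a node and the amplitude of vibration there is zero.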
The equation of a wave is y = 5 sin(t/0.04 - x/4), where y and x are in centimetres and t is in seconds. Find the maximum velocity of the particles of the medium. (a) 1 m/s (b) 1.5 m/s (c) 1.25 m/s (d) 2 m/s
@Shantanu
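One way to work this out: comparing the given equation with $y = A\sin(\omega t - kx)$ gives $A = 5$ cm and $\omega = 1/0.04\ \text{s}^{-1}$, and the maximum particle velocity is $A\omega$:
$$v_{\max} = A\omega = 5\ \text{cm}\times\frac{1}{0.04\ \text{s}} = 125\ \text{cm/s} = 1.25\ \text{m/s},$$
so option (c) is correct.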
A pendulum clock (fitted with a small heavy bob that is connected with a metal rod) is 5 seconds fast each day at a temperature of 15°C and 10 seconds slow at a temperature of 30°C. The temperature at which it is designed to give correct time is
Let $\theta_0$ be the temperature at which the clock shows the correct time. The fractional change in the period when the temperature changes by $\Delta\theta$ is $\tfrac{1}{2}\alpha\,\Delta\theta$; using the gain of 5 s per day at 15°C and the loss of 10 s per day at 30°C gives $\theta_0 = 20^{\circ}$C.
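Worked out, with 86400 s in a day and $\alpha$ the coefficient of linear expansion of the rod:
$$\frac{1}{2}\alpha(\theta_0-15)\times 86400 = 5,\qquad \frac{1}{2}\alpha(30-\theta_0)\times 86400 = 10.$$
Dividing the two equations,
$$\frac{\theta_0-15}{30-\theta_0} = \frac{1}{2}\;\Rightarrow\; 2\theta_0 - 30 = 30 - \theta_0 \;\Rightarrow\; \theta_0 = 20^{\circ}\text{C}.$$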
Q6 sir: A simple pendulum of length 1 m has a wooden bob of mass 1 kg. It is struck by a bullet of mass 10^-2 kg moving with a speed of 200 m/s. The bullet gets embedded into the bob. Obtain the height to which the bob rises before swinging back. Take g = 10 m/s^2.
@gulamjilani shaikh: Apply momentum conservation to find the speed of the bob just after the bullet gets embedded, then apply energy conservation (change in K.E. = change in P.E.) for the swing; the working is sketched below.
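With $m = 10^{-2}$ kg, $u = 200$ m/s, $M = 1$ kg and $g = 10\ \text{m/s}^2$:
$$mu = (M+m)v \;\Rightarrow\; v = \frac{0.01\times 200}{1.01}\approx 1.98\ \text{m/s},$$
$$\frac{1}{2}(M+m)v^{2} = (M+m)gh \;\Rightarrow\; h = \frac{v^{2}}{2g}\approx \frac{(1.98)^{2}}{20}\approx 0.2\ \text{m}.$$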
@Ravindra
A particle is performing simple harmonic motion along x-axis with amplitude 4 cm and time period 1.2 sec. The minimum time taken by the particle to move from x =2 cm to x = + 4 cm and back again is given by
A) 0.6 sec
B) 0.4 sec
C) 0.3 sec
D) 0.2 sec
Solution :
Time taken by particle to move from x=0 (mean position) to x = 4 (extreme position)
$=T/4=1.2/4=0.3 s$
Let t be the time taken by the particle to move from x=0 to x=2 cm
$$y = a\sin\omega t \;\Rightarrow\; 2 = 4\sin\frac{2\pi t}{T} \;\Rightarrow\; \frac{1}{2} = \sin\frac{2\pi t}{1.2} \;\Rightarrow\; \frac{\pi}{6} = \frac{2\pi}{1.2}\,t \;\Rightarrow\; t = 0.1\ \text{s}$$
Hence the time to move from x = 2 to x = 4 will be equal to 0.3 - 0.1 = 0.2 s
Hence total time to move from x = 2 to x = 4 and back again
$=2\times0.2=0.4sec$
A pendulum is hung from the roof of a sufficiently high building and is moving freely to and fro like a simple harmonic oscillator. The acceleration of the bob of the pendulum is 20 m/s2 at a distance of 5 m from the mean position. The time period of oscillation is
• Option 1)
2 s
• Option 2)
$\pi$ s
• Option 3)
2$\pi$ s
• Option 4)
1 s
In SHM the acceleration is $a = \omega^{2}x$, so $\omega = \sqrt{20/5} = 2\ \text{rad/s}$ and $T = 2\pi/\omega = \pi$ s. Option 2) $\pi$ s is correct; the other options are incorrect.
A tuning fork is used to produce resonance in a glass tube. The length of the air column in this tube can be adjusted by a variable piston. At room temperature of 27°C two successive resonances are produced at 20 cm and 73 cm of column length. If the frequency of the tuning fork is 320 Hz, the velocity of sound in air at 27°C is
• Option 1)
350 m/s
• Option 2)
339 m/s
• Option 3)
330 m/s
• Option 4)
300 m/s
For two successive resonance lengths, $\lambda = 2(l_2 - l_1)$, which gives $v = f\lambda \approx 339$ m/s. Option 2) 339 m/s is correct; the other options are incorrect.
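Worked out with the resonance-tube relation for two successive resonances:
$$\lambda = 2(l_2 - l_1) = 2(0.73 - 0.20)\ \text{m} = 1.06\ \text{m},\qquad v = f\lambda = 320\times 1.06 = 339.2\ \text{m/s}\approx 339\ \text{m/s}.$$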
The fundamental frequency in an open organ
pipe is equal to the third harmonic of a closed organ pipe. If the length of the closed organ pipe is 20 cm, the length of the open organ pipe is
• Option 1)
12.5 cm
• Option 2)
8 cm
• Option 3)
13.2 cm
• Option 4)
16 cm
Let the closed pipe have length $L_{\text{closed}} = 20$ cm. Setting the fundamental of the open pipe equal to the third harmonic of the closed pipe gives $L_{\text{open}} = \tfrac{2}{3}L_{\text{closed}} \approx 13.3$ cm. Option 3) 13.2 cm is correct; the other options are incorrect.
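Equating the two frequencies explicitly:
$$\frac{v}{2L_{\text{open}}} = \frac{3v}{4L_{\text{closed}}}\;\Rightarrow\; L_{\text{open}} = \frac{2}{3}L_{\text{closed}} = \frac{2}{3}\times 20\ \text{cm}\approx 13.3\ \text{cm}.$$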
Two waves are represented by the equations y1 = a sin ($\omega$t + kx + 0.57) m and y2 = a cos ($\omega$t + kx) m, where x is in metres and t in seconds. The phase difference between them is
• Option 1)
• Option 2)
• Option 3)
• Option 4)
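The phase difference can be worked out by writing the cosine as a shifted sine:
$$y_2 = a\cos(\omega t + kx) = a\sin\!\left(\omega t + kx + \frac{\pi}{2}\right) = a\sin(\omega t + kx + 1.57),$$
so the phase difference between $y_2$ and $y_1$ is $1.57 - 0.57 = 1.00$ radian.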
Two particles are oscillating along two close parallel straight lines side by side, with the same frequency and amplitudes. They pass each other, moving in opposite directions when their displacement is half of the amplitude. The mean positions of the two particles lie on a straight line perpendicular to the paths of the two particles. The phase difference is
• Option 1)
0
• Option 2)
1
• Option 3)
$\pi$
• Option 4)
$\pi$/6
When the particles pass each other their displacement is $A/2$; the one moving in the positive direction has phase $\pi/6$ and the one moving in the negative direction has phase $5\pi/6$, so the phase difference is $5\pi/6 - \pi/6 = 2\pi/3$.
Two identical piano wires kept under the same tension T have a fundamental frequency of 600 Hz. The fractional increase in the tension of one of the wires which will lead to the occurrence of 6 beats/s when both the wires oscillate together would be
• Option 1)
0.02
• Option 2)
0.03
• Option 3)
0.04
• Option 4)
0.01
Since $f \propto \sqrt{T}$ for a stretched wire, 6 beats/s at 600 Hz correspond to a fractional increase in tension of 0.02. Option 1) 0.02 is correct; the other options are incorrect.
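Worked out from the frequency of a stretched wire:
$$f = \frac{1}{2L}\sqrt{\frac{T}{\mu}}\;\Rightarrow\;\frac{\Delta f}{f} = \frac{1}{2}\frac{\Delta T}{T}\;\Rightarrow\;\frac{\Delta T}{T} = \frac{2\Delta f}{f} = \frac{2\times 6}{600} = 0.02.$$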
The oscillation of a body on a smooth horizontal surface is represented by the equation,
X = A cos ($\omega$ t)
where, X = displacement at time t
$\omega$ = frequency of oscillation
Which one of the following graphs shows correctly the variation of 'a' with 't'?
• Option 1)
• Option 2)
• Option 3)
• Option 4)
With $X = A\cos\omega t$, the velocity is $v = -A\omega\sin\omega t$ and the acceleration is $a = -A\omega^{2}\cos\omega t$, i.e. $a$ varies as a negative cosine of $t$; this variation is correctly shown by graph (3). Option 3) is correct; the other options are incorrect.
The number of possible natural oscillation of air column in a pipe closed at one end of length 85 cm whose frequencies lie below 1250 Hz are:
( velocity of sound = 340 ms-1)
• Option 1)
4
• Option 2)
5
• Option 3)
7
• Option 4)
6
A closed organ pipe is a cylindrical tube having an air column with one end closed; it supports only the odd harmonics. Taking $n = 0, 1, 2, \ldots$ gives $f_0 = 100$ Hz, $f_1 = 300$ Hz, $f_2 = 500$ Hz, $f_3 = 700$ Hz, $f_4 = 900$ Hz, $f_5 = 1100$ Hz, $f_6 = 1300$ Hz, so six possible natural oscillations lie below 1250 Hz. Option 4) 6 is correct.
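The allowed frequencies of the closed pipe are the odd harmonics:
$$f_n = (2n+1)\frac{v}{4L} = (2n+1)\times\frac{340}{4\times 0.85} = (2n+1)\times 100\ \text{Hz},\qquad n = 0, 1, 2,\ldots$$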
The displacement of a particle along the x-axis is given by x = a sin2$\omega$t. The motion of the particle corresponds to:
• Option 1)
simple harmonic motion of frequency $\omega$/$\pi$
• Option 2)
simple harmonic motion of frequency 3$\omega$/2$\pi$
• Option 3)
non simple harmonic motion
• Option 4)
simple harmonic motion of frequency $\omega$/2$\pi$
Using the identity $\sin^{2}\omega t = \tfrac{1}{2}(1-\cos 2\omega t)$, the motion is simple harmonic with angular frequency $2\omega$, i.e. frequency $\omega/\pi$. Option 1) simple harmonic motion of frequency $\omega/\pi$ is correct; the other options are incorrect.
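Written out with the double-angle identity:
$$x = a\sin^{2}\omega t = \frac{a}{2} - \frac{a}{2}\cos 2\omega t,$$
which is an oscillation of amplitude $a/2$ about $x = a/2$ with angular frequency $2\omega$, so the frequency is $2\omega/2\pi = \omega/\pi$.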
Sound waves travel at 350 m/s through warm air and at 3500 m/s through brass. The wavelength of a 700 Hz acoustic wave as it enters brass from warm air
• Option 1)
decreases by a factor 10
• Option 2)
increases by a factor 20
• Option 3)
increases by a factor 10
• Option 4)
decreases by a factor 20
The frequency of the wave does not change when it enters the new medium; only the speed and wavelength change, with $\lambda = v/f$. Since the speed increases from 350 m/s to 3500 m/s, the wavelength increases by a factor 10. Option 3) increases by a factor 10 is correct; the other options are incorrect.
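With the numbers given:
$$\lambda_{\text{air}} = \frac{350}{700} = 0.5\ \text{m},\qquad \lambda_{\text{brass}} = \frac{3500}{700} = 5\ \text{m},\qquad \frac{\lambda_{\text{brass}}}{\lambda_{\text{air}}} = 10.$$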
A particle of mass m is released from rest and follows a parabolic path as shown. Assuming that the displacement of the mass from the origin is small, which graph correctly depicts the position of the particle as a function of time
• Option 1)
• Option 2)
• Option 3)
• Option 4)
|
# angular momentum and eigenfunctions
In this problem all vectors and operators are represented in a system whose basis vectors are the eigenvectors of the operator Lz (the third component of the angular momentum). Find the eigenvector |l=1, m_y=-1> of Ly in terms of the eigenvectors of Lz.
Go from the vector (matrix) representation to the function (differential operator) representation to find |l=1, m_y=-1> in function form.
Take and show that the function you found is an eigenfunction of Ly.
I don't know where to start, any help is much appreciated.
|
# Solve the system of given equations using matrices. Use Gaussian elimination with back-substitution or Gauss-Jordan elimination.
Solve the system of given equations using matrices. Use Gaussian elimination with back-substitution or Gauss-Jordan elimination.
$\left\{\begin{array}{l}x+3y=0\\ x+y+z=1\\ 3x-y-z=11\end{array}$
cheekabooy
Step 1
Given
$\left\{\begin{array}{l}x+3y=0\\ x+y+z=1\\ 3x-y-z=11\end{array}$
Step 2
We write the given system of equation in the form
$AX=B$
where
$A=\left[\begin{array}{ccc}1& 3& 0\\ 1& 1& 1\\ 3& -1& -1\end{array}\right],\quad X=\left[\begin{array}{c}x\\ y\\ z\end{array}\right],\quad B=\left[\begin{array}{c}0\\ 1\\ 11\end{array}\right]$
The augmented matrix for the given system of the equation can be represented as
$\left[A|B\right]=\left[\begin{array}{cccc}1& 3& 0& 0\\ 1& 1& 1& 1\\ 3& -1& -1& 11\end{array}\right]$
${R}_{2}\to {R}_{2}-{R}_{1},\quad {R}_{3}\to {R}_{3}-3{R}_{1}$
$⇒\left[A|B\right]=\left[\begin{array}{cccc}1& 3& 0& 0\\ 0& -2& 1& 1\\ 0& -10& -1& 11\end{array}\right]$
${R}_{2}\to -\frac{1}{2}{R}_{2}$
$⇒\left[A|B\right]=\left[\begin{array}{cccc}1& 3& 0& 0\\ 0& 1& -\frac{1}{2}& -\frac{1}{2}\\ 0& -10& -1& 11\end{array}\right]$
${R}_{3}\to {R}_{3}+10{R}_{2}$
$⇒\left[A|B\right]=\left[\begin{array}{cccc}1& 3& 0& 0\\ 0& 1& -\frac{1}{2}& -\frac{1}{2}\\ 0& 0& -6& 6\end{array}\right]$
${R}_{3}\to -\frac{1}{6}{R}_{3}$
$⇒\left[A|B\right]=\left[\begin{array}{cccc}1& 3& 0& 0\\ 0& 1& -\frac{1}{2}& -\frac{1}{2}\\ 0& 0& 1& -1\end{array}\right]$
Step 3
We write the system of the linear equation as
$x+3y=0$
$y-\frac{1}{2}z=-\frac{1}{2}$
$z=-1$
Plug this in the above equation we get
$y-\frac{1}{2}\left(-1\right)=-\frac{1}{2}$
$⇒y+\frac{1}{2}=-\frac{1}{2}$
$⇒y=-\frac{1}{2}-\frac{1}{2}$
$⇒y=-1$
Plug this in the above equation
$x+3\left(-1\right)=0$
$⇒x-3=0$
$⇒x=3$
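As a quick check, substitute $(x, y, z) = (3, -1, -1)$ back into the original system:
$$x+3y = 3-3 = 0, \qquad x+y+z = 3-1-1 = 1, \qquad 3x-y-z = 9+1+1 = 11,$$
so all three equations are satisfied.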
|
# Using the biblatex styles in the CTAN contrib
I know how to use biblatex and compile it with pdflatex -> bibtex -> pdflatex in TeXworks or TeXShop.
\usepackage[options]{biblatex}
\bibliography{bib-file}
\begin{document}
\autocite{einstein01}
\autocite{feynman47}
\end{document}
How do I start using the different biblatex styles in the CTAN contrib? I want to know how to do it in terms of:
1. Installation
2. Usage in the source code
Solutions for MiKTeX or TeX Live are preferred.
-
This doesn't really need to be another answer but it's worth mentioning in this context: read the documentation for the style! For example, biblatex-apa needs you to put a \DeclareLanguageMapping after you load biblatex. I can't be the only person tripped up by this... – Seamus Feb 10 '11 at 11:02
@Seamus: Thanks. I was looking at biblatex-science, and there was no documentation as how to use it. – Kit Feb 10 '11 at 11:47
it looks like biblatex-science is maintained by @Joseph Wright so he's the man to talk to about that! – Seamus Feb 10 '11 at 11:51
Most of the CTAN biblatex styles are part of TeXLive, so if you update all the packages using tlmgr (or TeXLive Utility on a Mac), they should be available automatically.
Generally the styles are called by
\usepackage[style=<name>]{biblatex}
or with separate bib and cite styles:
\usepackage[bibstyle=..., citestyle=...]
You can use texdoc to find the documentation for how to use the particular styles.
In the unlikely event that the style you want isn't part of TeXLive, you can install extra biblatex packages into the latex folder of your local texmf folder just like you would install any other latex package. It's helpful to keep them in separate folders (or even in one biblatex folder within the latex folder.)
If you put the documentation into a folder corresponding to the style name in the local texmf/doc/ folder it will also be found with texdoc.
-
As Alan says, many of the contributed styles are available in TeX Live and MiKTeX. If not, the style files themselves (.bbx and .cbx) need to be installed like any other locally-installed files. Taking the example of my own biblatex-chem bundle, I'd need to create a local installation folder which is operating system dependent:
• ~/texmf/tex/latex/biblatex-chem on Linux (~ = your home folder)
• ~/Library/texmf/tex/latex/biblatex-chem on Mac OS X (~ = your home folder)
• <USERPROFILE>\texmf\tex\latex\biblatex-chem on Windows (<USERPROFILE> = your home folder)
You might already find the texmf folder and some subfolders, or you might have to create it. You'd then put all of the .bbx and .cbx files from CTAN in this new folder. If you are using MiKTeX on Windows, you then need to make sure it has <USERPROFILE>\texmf set as a 'root' in the MiKTeX Options.
It is not necessary to subdivide the biblatex-chem folder into folders bbx and cbx, although you can if you like (biblatex itself does this).
Optionally, you might install the documentation files (PDFs, .tex and .bib sources) in texmf/doc/latex/biblatex-chem. However, that's only necessary if you want texdoc biblatex-chem to work.
For other biblatex styles, the same applies and all you need to do is modify the folder name.
-
on a Linux machine it can also be $TEXMFLOCAL instead of $TEXMFHOME – Herbert Feb 10 '11 at 8:56
@Herbert. My take is usually that people asking for instructions on installation won't have moved stuff about. So I go with the standard settings exclusively. (Most newer users probably won't know what $TEXMFHOME is!) – Joseph Wright Feb 10 '11 at 8:58
@Joseph Just so I get this clear, there's no need to create either a biblatex folder or separate cbx, bbx, lbx folders; you can just install the folder for the style into your texmf/tex/latex folder just like you would install a regular package? (Then I should simplify the second part of my answer.) – Alan Munn Feb 10 '11 at 13:20
@Alan: beyond tex/latex, organisation is 'up to you'. Using a per-package folder is the convention, but is not enforced by the texhash system. So you install bbx and cbx files in exactly the same way as a standard package. – Joseph Wright Feb 10 '11 at 14:00
Ok. I'll adjust my answer. (I think I was assuming biblatex was sensitive to folder structure like the bib/bst structure that is required (and enforced.)) – Alan Munn Feb 10 '11 at 14:06
If you want to use the full power of biblatex then you should also install the program biber which can be done by the TeXLive package manager (and MikTeX's as well). It can also be installed via http://biblatex-biber.sourceforge.net/.
Running biber instead of bibtex with (Any)tex->biber->(Any)tex gives you full UTF support for the bibliography, which isn't possible with bibtex or bibtex8.
Using biber needs a
\usepackage[backend=biber,style=...]{biblatex}
You should also consider, that pdflatex can handle only a subset of UTF-8. Only XeTeX or LuaTeX have full UTF support.
-
You can install Biber using the TLContrib repository for TeX Live 2010. See tlcontrib.metatex.org (which has the recent update to do better mapping of UTF-8 to LaTeX characters for pdfLaTeX). – Joseph Wright Feb 10 '11 at 10:58
that doesn't solve the problem with the limited UTF-8 support of pdftex. If the bibliography has an UTF8 character for which TeX has no command sequence then you are lost ... – Herbert Feb 10 '11 at 11:11
I did say 'better': I was thinking of cases where LaTeX can handle things. It's still more convenient to be able to have a UTF-8 bibliography file. (This is what I do, where the only accents I need to worry about are \", \', etc., which LaTeX can handle but which I'd rather not hard-code into my database.) You are of course correct that for full UTF-8 abilities you need a UTF-8 engine. – Joseph Wright Feb 10 '11 at 11:33
@Joseph: it is not only the missing full UTF-8 support in pdftex, it is also a problem with the characters which are not supported by inputenc. And a LaTeX user didn't really know that the biber option bblsafechars helps here. The documentation of biber is a really short one. – Herbert Feb 10 '11 at 11:48
MiKTeX does also include biber: miktex.org/packages/miktex-biber-bin – matth Jul 18 '12 at 8:05
|
# The Ideal Generated by a Non-Unit Irreducible Element in a PID is Maximal
## Problem 177
Let $R$ be a principal ideal domain (PID). Let $a\in R$ be a non-unit irreducible element.
Then show that the ideal $(a)$ generated by the element $a$ is a maximal ideal.
## Proof.
Suppose that we have an ideal $I$ of $R$ such that
$(a) \subset I \subset R.$ Since $R$ is a PID, there exists $b \in R$ such that $I=(b)$. As $a \in (a)\subset I=(b)$, we can write
$a=bc$ for some $c \in R$.
The irreducibility of the element $a$ yields that either $b$ or $c$ is a unit element of $R$.
If $b$ is a unit, then $I=(b)=R$. If $c$ is a unit, then we have $c'\in R$ such that $cc'=1$.
Then $b=b\cdot 1=bcc'=ac'$, and we have $I=(b)=(a)$.
Therefore, in either case, we see that we have $I=(a)$ or $I=R$.
Thus, $(a)$ is a maximal ideal.
|
# Chapter 11. Tricks and Hacks
In the current chapter, we are going to see some tricks, hacks and techniques, which will make our work with Java easier in IntelliJ IDEA. In particular, we will see:
• How to properly format our code
• Conventions for naming elements in the code
• Some keyboard shortcuts
• Some code snippets
• Techniques to debug our code
## Code Formatting
The right formatting of our code will make it easier to read and understand in case someone else needs to work with it. This is important because in practice we will need to work in a team with other people and it is highly important to write our code in a way that our colleagues can quickly understand it.
There are some defined rules for correct formatting of the code, which are collected in one place and are called conventions. The conventions are a group of rules, generally accepted by the programmers using a given language, which are massively used. These conventions help to build norms in given languages – what is the best way to write and what are good practices. It is accepted that if a programmer follows them then his code is easy to read and understand.
The Java language was developed by Sun Microsystems (a tech company later acquired by Oracle), and Sun laid the foundations of the current well-established conventions in Java. You should know that even if you don't follow these conventions, your code will work (as long as it is properly written), but it will not be easy to understand. This, of course, is not fatal at a base level, but the faster you get used to writing quality code, the better.
## The Official Java Code Conventions
The most widely adopted modern Java convention is called Google Java Style Guide and this book will be closely following it.
For code formatting in the Java ecosystem, the following is recommended for the curly brackets {} - the opening bracket is supposed to be on the same line and the closing bracket should be right underneath the construct:
if (someCondition) {
System.out.println("Inside the if statement");
}
You can see that the command System.out.println(…) in the example is indented by 4 white spaces (one tab), which is a standard in Java. If the given block enclosed in curly brackets is indented by any number of tabs, then the closing curly bracket } must be right below the beginning of the construct, as in the example below:
if (someCondition) {
if (anotherCondition) {
System.out.println("Inside the if statement");
}
}
Below you can see an example for badly formatted code according to the accepted conventions for writing code in Java:
if(someCondition)
{
System.out.println("Inside the if statement");}
The first thing that we see is the curly brackets {}. The first (opening) bracket should be just after the if condition, and the second (closing) bracket – below the command System.out.println(…), at a new and empty line. In addition, the command inside the if construction should be offset by 4 white spaces (one tab). Just after the keyword if, before the condition and preceding the { you should put a space.
The same rule applies for the for loops and all other constructions with curly brackets {}. Here are some more examples:
Correct formatting:
for (int i = 0; i < 5; i++) {
System.out.println(i);
}
Wrong formatting:
for(int i=0;i<5;i++)
{
System.out.println(i);
}
## Code Formatting Shortcuts in IntelliJ IDEA
For your comfort, there are keyboard shortcuts in IntelliJ IDEA, which we will explain later in this chapter, but for now, we are interested in two specific combinations. One of the combinations is for formatting the code in the whole document, and the other one – for formatting a part of the code. If we want to format the whole code we need to press [CTRL + ALT + L]. In case we need to format only a part of the code, we need to mark this part with the mouse and press [Ctrl + Shift + Alt + L].
Let's use the wrongly formatted example from earlier:
for(int i=0;i<5;i++)
{
System.out.println(i);
}
If we press [CTRL + ALT + L], which is the combination to format the whole document, we will have a code, formatted according to the accepted conventions for Java, which will look as follows:
for (int i = 0; i < 5; i++) {
System.out.println(i);
}
This key combination in IntelliJ IDEA can help us if we work with a badly formatted code.
## Naming Code Elements
In this section, we will focus on the accepted conventions for naming projects, files and variables in the Java ecosystem.
### Naming Projects and Files
It is recommended to use a descriptive name for naming projects and files, which suggests the role of the respective file / project and at the same time the PascalCase convention is also recommended. This is a convention for naming elements, in which each word, including the first one, starts with an uppercase character, for example ExpressionCalculator.
Example: This course starts with a First steps in coding lecture, therefore an exemplary name for the solution for this lecture can be FirstStepsInCoding. The same convention applies to the files in a project. If we take for example the first problem in the First steps in coding lecture, it is called Hello World, therefore our file in the project will be called HelloWorld.
### Naming Variables
In programming variables keep data, and for the code to be more understandable, the name of a variable should suggest its purpose. Here are some recommendations for naming variables:
• The name should be short and descriptive and explain what the variable serves for.
• The name should only contain the letters a-z, A-Z, the numbers 0-9, and the symbols '$' and '_'.
• It is accepted in Java for the variables to always begin with a lowercase letter and to contain lowercase letters, and each next word in them should start with an uppercase letter (this naming is also known as camelCase convention).
• You should be careful about uppercase and lowercase letters because Java distinguishes them. For example, age and Age are different variables.
• The names of the variables cannot coincide with keywords in the Java language, for example int is an invalid name for a variable.
Although using the symbol _ in the names of variables is allowed, in Java it is not recommended and is considered a bad style of naming.
### Naming – Examples
Here are some examples for well-named variables:
• firstName
• age
• startIndex
• lastNegativeNumberIndex
Here are some examples for badly named variables, even though the names are correct according to the Java compiler:
• _firstName (starts with '_')
• last_name (contains '_')
• AGE (written in uppercase, which is a badly named variable but well-named constant)
• Start_Index (starts with an uppercase letter and contains '_')
• lastNegativeNumber_Index (contains '_')
At first look, all these rules can seem meaningless and unnecessary, but with time passed and experience gaining you will see the need for conventions for writing quality code to be able to work more easily and faster in a team. You will understand that the work with a code, which is written without complying with any rules for code quality, is annoying.
## Shortcuts in IntelliJ IDEA
In the previous section, we mentioned two of the combinations that are used for formatting code. One of them [Ctrl + Alt + L] is used for formatting the whole code in a file, and the second one [Ctrl + Shift + Alt + L] serves if we want to format just a piece of the code. These combinations are called shortcuts and now we will give more thorough information about them.
Shortcuts are combinations that give us the possibility to do some things in an easier and faster way, and each IDE has its shortcuts, even though most of them are recurring. Now we will look at some of the shortcuts in IntelliJ IDEA:
Combination Action
[CTRL + F] Opens the search window, by which we can search in our code.
[CTRL + Z] Brings back one change (so-called Undo).
[Ctrl + Shift + Z] The combination is opposite of [CTRL + Z] (the so-called Redo).
[CTRL + ALT + L] Formats the code according to the default conventions.
[CTRL + Backspace] Deletes the word to the left of the cursor.
[CTRL + Del] Deletes the word to the right of the cursor.
[CTRL + Shift + S] Saves all files in the project.
[CTRL + S] Saves the current file.
[CTRL + D] Copies the current line or the selected fragment.
[CTRL + Y] Deletes the current line.
More about the shortcuts in IntelliJ IDEA can be found here: https://www.jetbrains.com/help/idea/keyboard-shortcuts-by-category.html.
## Code Snippets in IntelliJ IDEA
In IntelliJ IDEA there are the so-called code snippets, which write a block of code by using a code template. For example, by writing the short code sout + [Enter] the following code is generated in the body of our program, in the place of the short code:
System.out.println();
This is called “unfolding a code snippet”. The fori + [Enter] snippet works in the same way. On the figure below you can see the sout snippet in action:
In this section, we are going to show you how to make your code snippet. We will see how to make a code snippet for scanner.nextLine();. To begin we must create a new empty project in IntelliJ IDEA and go to [File → Settings → Editor → Live Templates], and choose [+Live Template] as shown on the picture:
A new window that looks like the one on the image below pops up:
The following information should be entered:
• [Abbreviation] - we specify the code snippet that we wish to use. In our case this is scnl.
• [Description] - this is the place for our snippet's description. In our case this is scanner.nextLine().
• [Template text] - we enter the code to be generated in the event of snippet usage. In our case this is:
Scanner scanner = new Scanner(System.in);
String s = scanner.nextLine();
Next we select [Reformat according to style] and choose [Java] from [Define] list. To finish the procedure press [OK] as it's shown in the picture below:
Now when we write scnl in IntelliJ IDEA, our new snippet is going to appear:
## Code Debugging Techniques
Debugging plays an important role in the process of creating software, which allows us to follow the implementation of our program step by step. With this technique, we can follow the values of the local variables, because they are changing during the execution of the program, and to remove possible errors (bugs). The process of debugging includes:
• Finding the problems (bugs).
• Locating the code, which causes the problems.
• Correcting the code, which causes the problems, so that the program works correctly.
• Testing to make sure that the program works correctly after the corrections we have made.
### Debugging in IntelliJ IDEA
IntelliJ IDEA gives us a built-in debugger, thanks to which we can place breakpoints at places we have chosen. When it reaches a breakpoint, the program stops running and allows step-by-step running of the remaining lines. Debugging allows us to get into the details of the program and see where exactly the errors occur and what is the reason for this.
To demonstrate how to use the debugger we will use the following program:
public static void main(String[] args) {
for (int i = 0; i < 100; i++) {
System.out.println(i);
}
}
We will place a breakpoint at the function System.out.println(…). For this, we will need to move our cursor to the line, which prints on the console, and press [Ctrl + F8], alternatively we can simply click using the left mouse button on the right side of the line number. A breakpoint appears, showing where the program will stop running:
### Starting the Program in Debug Mode
To start the program in debug mode, we choose [Run] -> [Debug <class name>] or press [SHIFT + F9]:
After starting the program, we can see that it stops executing at line 8, where we placed our breakpoint. The code in the current line is colored in additional color and we can run it step by step. To go to the next line, we use the key [F8]. We can see that the code on the current line hasn't been executed yet. It will execute when we go ahead with debugging the next line. The current value of the variable is depicted in green, in this case, i = 0 can be seen in the following picture.
From the [Variables] window we can observe the changes in the local variables.
## Tricks for Java Developers
In this section, we will recall some tricks and techniques in programming with Java, already seen in this book, which can be very useful if you attend an exam for beginner programming.
### Formatted output with printf()
For printing long and complex sequences of elements, we can use the printf(…) method. This method is an abbreviation of "Print Formatted". The main idea of printf(…) is to take a special string formatted with special formatting symbols and a comma-separated list of values that have to substitute the formatting specifiers.
printf(<formatted string>, <param1>, <param2>, <param3>, …)
Example:
String str = "some text";
System.out.printf("%s", str);
// This will print on the console "some text"
The first argument of the printf(…) method is the formatting string. In this case %s means that %s is going to be substituted by the string str and the value of str is what you'll see printed on the console.
String str1 = "some text";
int number = 5;
String str2 = "some more text";
System.out.printf("%s %d %s \n", str1, number, str2);
// This will print on the console "some text 5 some more text"
Notice that in this example we can pass non-exclusively string variables. The first argument is a formatting string. Following it comes a sequence of arguments that are replacing any instance of a formatting specifier (meaning % followed by a single character, e.g. %s or %d). The first %s means that the first argument that is passed after the formatting string, is going to take its place, in our case that's str1. After that, there's %d which means that it's going to be substituted by the first integer number that is among the arguments. The last special symbol is %s which means that it'll be replaced by the next string that can be found in the arguments. Finally, there's \n and that is a special symbol that denotes a new line. A single variable can be used multiple times.
String str = "some text";
int number = 5;
System.out.printf("%s %d %s \n", str, number, str);
// This will print on the console "some text 5 some text"
### Rounding of floating-point numbers
Real numbers are represented in Java with the types float and double.
double number = 5.432432;
System.out.println(Math.round(number));
// This will print on the console "5"
If the digit at the first decimal place is less than 5, just like in the example above, then the number is rounded down, otherwise - it's rounded up as you can see in the example below:
double number = 5.543;
System.out.println(Math.round(number));
// This will print on the console "6"
### Other Rounding Methods
In case we always want to round down instead of Math.round(…) we can use another method – Math.floor(…), which always rounds down. For example, if we have the number 5.99 and we use Math.floor(5.99), we will get the number 5.0.
We can also do the exact opposite – to always round up using the method Math.ceil(…). Again, if we have for example 5.11 and we use Math.ceil(5.11), we will get 6.0. Here are some examples:
double numberToFloor = 5.99;
System.out.println(Math.floor(numberToFloor));
// This will print on the console 5.0
double numberToCeiling = 5.11;
System.out.println(Math.ceil(numberToCeiling));
// This will print on the console 6.0
### Formatting with 2 Digits After the Decimal Point
When we print numbers, we often need to round them to 2 digits after the decimal point, e.g.
double number = 5.432432;
System.out.printf("%.2f%n", number);
// This will print on the console 5.43
In the given example a formatting string %.2f is used, which rounds the number variable to two decimal places. We have to take into account that the number before the letter f means the number of decimal places up to which the result will be rounded (e.g. the formatting string can very well be %.3f or %.5f). When formatting a string using printf(), it's recommended to use %n as the symbol for a new line rather than \n.
### How to Write a Conditional Statement?
The conditional if construction contains the following elements:
• Keyword if
• A Boolean expression (condition)
• Body of the conditional construction
• Optional: else clause
if (condition) {
// body
} else {
// body
}
To make it easier we can use a code snippet for an if construction:
• if + [Enter]
### How to Write a 'For' Loop?
For a for loop we need a couple of things:
• Initializing block, in which the counter variable is declared (int i) and its initial value is set
• Condition for repetition (i <= 10)
• Loop variable (counter) updating statement (i++)
• Body of the loop, holding statements
for (int i = 0; i <= 10; i++) {
// body
}
It's important to know that the three elements of the for loop are optional and can be omitted. for(; ; ) { … } is a valid for loop statement.
To make it easier we can use a code snippet for a for loop:
• fori + [Enter]
## What Have We Learned in This Chapter?
In this chapter, we have learned how to correctly format and name the elements of our code, some shortcuts in IntelliJ IDEA, some code snippets, and we analyzed how to debug the code.
|
Adam the ant gets a running start and leaps with speed 310 cm/s and lands on top of a 20 cm tall...
Question:
Adam the ant gets a running start and leaps with speed 310 cm/s and lands on top of a 20 cm tall anthill. What is his speed as he lands?
Kinetic and Potential Energy:
The energy of the object due to its motion is known as the kinetic energy of the object. And the energy due to the virtue of height is known as the Potential energy.
Given data
• Initial speed {eq}(u) = 310 \ cm/s = 3.1 \ m/s {/eq}
• Height of the hill {eq}(h) = 0.2 \ m {/eq}
The initial energy of Adam would be only kinetic energy, therefore
{eq}E_{1} = \dfrac{1}{2}mu^{2} {/eq}
where
• m is the mass of Adam
Now, the energy at the hill would be both kinetic as well as the potential, therefore
{eq}E_{2} = mgh + \dfrac{1}{2}mv^{2} {/eq}
where
• v is the speed at the hill
Now, from the energy conservation
{eq}E_{1} = E_{2} \\ \dfrac{1}{2}mu^{2} = mgh + \dfrac{1}{2}mv^{2} \\ (0.5*3.1^{2}) = (9.8*0.2) + (0.5v^{2}) \\ v = 2.385 \ m/s {/eq}
|
# Hamilton-Jacobi-Equation (HJE)
1. Sep 27, 2009
### Marin
Hi all!
I was studying the HJ-formalism of classical mechanics when I came upon a modified HJE:
$$(\nabla S)^2=\frac{1}{u^2}(\frac{\partial S}{\partial t})^2$$
where
$$u=\frac{dr}{dt}$$
and $$dr=(dx,dy,dz)$$ is the position vector.
(I read the derivation and it's ok)
Now, u is interpreted to be the wave velocity of the so called 'action waves' in phase space.
However, my book (Nolting, Volume 2) states that this is a wave equation, or at least a special nonlinear case of the popular wave equation
$$\nabla^2S=\frac{1}{u^2}\frac{\partial^2}{\partial t^2}S$$
which is somehow unclear to me, as the squares in both equations mean different things.
A similar statement is also made in Wikipedia:
http://en.wikipedia.org/wiki/Hamilton–Jacobi_equation
(cf. Eiconal apprpximation and relationship to the Schrödinger equation)
I hope someone of you can explain this to me :)
best regards,
marin
2. Sep 28, 2009
### Marin
Hmmm, haven't found anything so far..
Any ideas left?
|
# How to rigorously prove the convergence of an iterative sequence
Suppose we have an iterative sequence defined by $x_{n+1} = g(x_n)$ where
$$g(x)= \frac{x^4 + 1}{3}$$
and we are looking at the two cases:
1. $x_1 = 0$
2. $x_1 = 1$
While I know that if $x_n \rightarrow a$ as $n \rightarrow \infty$ then $a$ is a fixed point of $g$, ie $g(a)=a$, this lead me to think along the lines of considering the properties of the function $f(x)=g(x) - x$ and trying to use this to help rigorously prove that the sequence does converge to some limit, I haven't much success in writing in argument which I find to be fully satisfactory.
Is this a good method to show that the sequence does converge to the same value for both starting points of $x_n$ - if so, could someone please give a few pointers for the general outline of a proof which can be made rigorously (so please don't talk about how $|g'(a)|< 1$ will suffice, because I fail to see how this can be made rigorous).
Alternatively, if a proof can be done by just using algebra and the definition of the sequence, that would be much more preferable (we were asked this question a while ago before we had came across continuity/differentiability, so I assume it can be done without these ideas rigorously).
-
Convergence can be proved in each case by using induction to show monotonicity and boundedness. Then one can use continuity to evaluate the limits. Alternately, let $(a_n)$ be the first sequence, $(b_n)$ the second. One can estimate $b_n-a_n$. – André Nicolas Mar 16 '13 at 18:10
Show by induction on $n$ that if $x_1=0$, then $\langle x_n:n\in\Bbb Z^+\rangle$ is non-decreasing and bounded above by $1$, and if $x_1=1$, the sequence is non-increasing and bounded below by $0$. Every bounded monotone sequence in $\Bbb R$ converges, so at that point you’ll know that the limit $a$ exists, and you can use a continuity argument to identify it as a fixed point of $g$.
Take the case $x_1=0$. Note that $x_{n+1}\ge x_n$ iff $\frac13\left(x_n^4+1\right)\ge x_n$ iff $3x_n-x_n^4\le 1$. Suppose that this is the case for some $n\ge 1$. Then $x_n^4+1\ge 3x_n$, and
\begin{align*} 3x_{n+1}-x_{n+1}^4&=x_n^4+1-\frac1{81}\left(x_n^4+1\right)^4\\ &\le x_n^4+1-\frac1{81}(3x_n)^4\\ &=1\;, \end{align*}
and therefore $x_{n+2}\ge x_{n+1}$. It follows by induction that $\langle x_n:n\in\Bbb Z^+\rangle$ is monotone non-decreasing.
-
Let $\{a_n\}$ be the sequence we obtain by starting at $0$, and $\{b_n\}$ be the sequence we obtain by starting at $1$.
Straightforward inductions show that the first sequence is increasing and bounded above by $1$, and the second is decreasing and bounded below. So each has a limit. Let the limits be $a$ and $b$.
We want to show that $a=b$. To do this, we show by induction that $0\lt b_n-a_n\le \left(\frac{1}{3}\right)^{n-1}$. We have $$b_{n+1}-a_{n+1}=\frac{1}{3}(b_n^4-a_n^4)=\frac{1}{3}(b_n-a_n)(b_n^3+b_n^2a_n+b_na_n^2+a_n^3).$$ Compute separately $a_2$, $a_3$, and $b_2$, $b_3$. For $n\ge 3$, we have $b_n \lt 0.4$, So $b_n^3+b_n^2a_n+b_na_n^2+a_n^3\lt 1$.
-
|
My Math Forum (http://mymathforum.com/math-forums.php)
- Trigonometry (http://mymathforum.com/trigonometry/)
- - Can you find side length of a triangle given three angles? (http://mymathforum.com/trigonometry/343675-can-you-find-side-length-triangle-given-three-angles.html)
Larrousse March 19th, 2018 09:58 AM
Can you find side length of a triangle given three angles?
Can you mix dimensionless functions or angles to find lengths in triangles? For example, two sides are composed of a distance of $0.85+0.4=1.25$ and at the same time $0.4=\cos\theta$, and the base is $1$?
For consecutive numbers or non-consecutive numbers $x<y<z$, I have the following example:
$(((\frac{\sqrt\frac{y}{z}}{(1-\frac{x}{z})\times\sqrt\frac{x+z}{z-x}})\times\frac{x}{z})+\sqrt\frac{z-y}{z})\times((1-\frac{x}{z})\times\sqrt\frac{(x+z)}{(z-x)})=\sin A$
$(\frac{\sqrt\frac{y}{z}}{(1-\frac{x}{z})\times\sqrt\frac{x+z}{z-x}})-(((\frac{\sqrt\frac{y}{z}}{(1-\frac{x}{z})\times\sqrt\frac{x+z}{z-x}})\times\frac{x}{z})+\sqrt\frac{z-y}{z})\times(\frac{x}{z})=\cos A$
$\sqrt\frac{(z-y)}{z}=\cos B$
$\sqrt\frac{y}{z}=\sin B$
$\frac{x}{z}=\cos C$
$((1-\frac{x}{z})\times\sqrt\frac{(z+x)}{(z-x)})=\sin C$
$(\sqrt{\frac{y}{z}}\times\frac{x}{z})+\sqrt\frac{ z-y}{z}\times((1-\frac{x}{z})\times\sqrt\frac{(z+x)}{(z-x)})=\sin A$
$(-\sqrt{\frac{z-y}{z}})\times\frac{x}{z}+\sqrt{\frac{y}{z}}\times( (1-\frac{x}{z})\times\sqrt\frac{(z+x)}{(z-x)})=\cos A$
The following variables $a,b,c$ represent the length of the sides of the triangles.
$\frac{\sin A}{\sin C}=a$
$\frac{\sin B}{\sin C}=b$
$\frac{\sin C}{\sin C}=c$
h=altitude
$\frac{h_c}{h_a}=a$
$\frac{h_c}{h_b}=b$
$\frac{h_c}{h_c}=c$
$((((\frac{\sin B}{\sin C})\times\cos C)+\cos B)\times\sin C)=\sin A$
$(\frac{\sin B}{\sin C})-((((\frac{\sin B}{\sin C})\times\cos C)+\cos B)\times\cos C)=\cos A$
$((((\frac{\sin A}{\sin C})\times\cos C)+\cos A)\times\sin C)=\sin B$
$(\frac{\sin A}{\sin C})-((((\frac{\sin A}{\sin C})\times\cos C)+\cos A)\times\cos C)=\cos B$
skipjack March 19th, 2018 02:22 PM
Quote:
Originally Posted by Larrousse (Post 590261) $\frac{\sin A}{\sin C}=a$ $\frac{\sin B}{\sin C}=b$ $\frac{\sin C}{\sin C}=c$
If you are given the values of the angles A, B and C, the above equations give you the values of a, b and c in terms of the angles.
If you aren't given the values of the angles A, B and C, the three equations just tell you that c = 1.
Quote:
Originally Posted by Larrousse (Post 590261) $\frac{h_c}{h_a}=a$ $\frac{h_c}{h_b}=b$ $\frac{h_c}{h_c}=c$
The above three altitude quotient equations just tell you that c = 1.
AngleWyrm2 March 19th, 2018 03:00 PM
Angles are insufficient to determine the size of a triangle.
https://s6.postimg.org/qaccs931t/Capture.png
Larrousse March 19th, 2018 03:36 PM
1 Attachment(s)
An angle has no measurement of units, and for that reason, it is difficult to describe the length of the side of the triangle or the unit doesn't matter.
Country Boy April 22nd, 2018 05:11 PM
For example, every equilateral triangle, whether its sides have length 1 cm or 1000 km, has its three angles the same with measure $\displaystyle \frac{\pi}{3}$.
|
Student A and Student B
Question:
Student A and Student B used two screw gauges of equal pitch and 100 equal circular divisions to measure the radius of a given wire. The actual value of the radius of the wire is $0.322 \mathrm{~cm}$. The absolute value of the difference between the final circular scale readings observed by the students A and $B$ is
[Figure shows position of reference ‘O’ when jaws of screw gauge are closed]
Given pitch $=0.1 \mathrm{~cm}$.
Solution:
For (A)
Reading $=$ MSR $+$ CSR $+$ Error
$0.322=0.300+\mathrm{CSR}+5 \times \mathrm{LC}$
$0.322=0.300+\mathrm{CSR}+0.005$
$\operatorname{CSR}=0.017$
For B
Reading $=$ MSR $+$ CSR $+$ Error
$0.322=0.200+\operatorname{CSR}+0.092$
$\mathrm{CSR}=0.030$
Difference $=0.030-0.017=0.013 \mathrm{~cm}$
Division on circular scale $=\frac{0.013}{0.001}=13$
|
# Show these statements are true
Consider the 10 numbers 0.06, 0.55, 0.77, 0.39, 0.96 , 0.28 , 0.64 , 0.13 , 0.88 , 0.48 - all in the interval (0,1) . Show by calculation that each of the following nine statements is true:
The first 2 numbers are in the diff. halves of (0,1).
The first 3 numbers are in the diff. thirds of (0,1)
The first 4 numbers are in the diff. fourths of (0,1)
The first 5 numbers are in the diff. fifths of (0,1)
The first 6 numbers are in the diff. sixths of (0,1)
The first 7 numbers are in the diff. sevenths of (0,1)
The first 8 numbers are in the diff. eights of (0,1)
The first 9 numbers are in the diff. ninths of (0,1)
The first 10 numbers are in the diff. tenths of (0,1)
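As an illustration of the kind of calculation required, here is the check for the first four numbers; the other statements are verified the same way. The fourths of (0,1) are (0, 1/4), (1/4, 1/2), (1/2, 3/4) and (3/4, 1), and
$$0.06 \in \left(0,\tfrac14\right),\quad 0.39 \in \left(\tfrac14,\tfrac12\right),\quad 0.55 \in \left(\tfrac12,\tfrac34\right),\quad 0.77 \in \left(\tfrac34,1\right),$$
so the first 4 numbers do lie in different fourths.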
[all caps edited out- peter]
Note by John Rey Gelpe
4 years, 7 months ago
Don't scream at us like that!
- 4 years, 7 months ago
|
# lsm_p_para: PARA (patch level) In r-spatialecology/landscapemetrics: Landscape Metrics for Categorical Map Patterns
## Description
Perimeter-Area ratio (Shape metric)
## Usage
lsm_p_para(landscape, directions)

## S3 method for class 'RasterLayer'
lsm_p_para(landscape, directions = 8)

## S3 method for class 'RasterStack'
lsm_p_para(landscape, directions = 8)

## S3 method for class 'RasterBrick'
lsm_p_para(landscape, directions = 8)

## S3 method for class 'stars'
lsm_p_para(landscape, directions = 8)

## S3 method for class 'list'
lsm_p_para(landscape, directions = 8)
## Arguments
landscape: Raster* Layer, Stack, Brick or a list of rasterLayers.
directions: The number of directions in which patches should be connected: 4 (rook's case) or 8 (queen's case).
## Details
PARA = \frac{p_{ij}} {a_{ij}}
where p_{ij} is the perimeter in meters and a_{ij} is the area in square meters.
PARA is a 'Shape metric'. It describes the patch complexity in a straightforward way. However, because it is not standarised to a certain shape (e.g. a square), it is not scale independent, meaning that increasing the patch size while not changing the patch form will change the ratio.
#### Units
None
#### Range
PARA > 0
#### Behaviour
Increases, without limit, as the shape complexity increases.
## Value
tibble
## References
McGarigal, K., SA Cushman, and E Ene. 2012. FRAGSTATS v4: Spatial Pattern Analysis Program for Categorical and Continuous Maps. Computer software program produced by the authors at the University of Massachusetts, Amherst. Available at the following web site: http://www.umass.edu/landeco/research/fragstats/fragstats.html
## See Also
lsm_p_area, lsm_p_perim,
lsm_c_para_mn, lsm_c_para_sd, lsm_c_para_cv,
lsm_l_para_mn, lsm_l_para_sd, lsm_l_para_cv
## Examples
lsm_p_para(landscape)
|
# Building Python bindings with CMake and Boost
This is a short explanation on how to build a boost python binding with CMake. You may or may not use JRL CMake macros or PID macros.
## The lib to bind
Let’s say you have a lib called libMyLib.so you want to bind. The CMake project name is defined as MyLib.
Let’s bind the functions.
## Bindings
The binding file must be .cpp file, let’s call it bindings.cpp located in a src folder. The library name for python is set as pyMyLib. Here is a snippet that gives the minimum needed code.
#include <boost/python.hpp>
// Include the headers of MyLib
BOOST_PYTHON_MODULE(pyMyLib)
{
Py_Initialize();
// Write the bindings here
}
If you want to use numpy with C++ (only available from boost 1.63), here is the code
#include <boost/python.hpp>
#include <boost/python/numpy.hpp>
// Include the headers of MyLib
namespace np = boost::python::numpy;
BOOST_PYTHON_MODULE(pyMyLib)
{
Py_Initialize();
np::initialize();
// Write the bindings here
}
Don’t forget to write the __init__.py file in the same folder as the bindings. To be able to use __init__.py with both python 2.7 and 3 you have to add a . when using the keyword from.
from .pyMyLib import # Add class or functions
# ...
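Once the module is built and installed, it can be imported like any other Python package. A hypothetical usage sketch, assuming a function add was exposed in bindings.cpp with boost::python::def("add", &add), which is not shown in the snippet above:

# Hypothetical usage: assumes bindings.cpp exposed a function 'add'
# via boost::python::def("add", &add) and that the install folder is on PYTHONPATH
import pyMyLib

result = pyMyLib.add(2, 3)
print(result)  # expected to print 5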
## Raw CMake
Let’s now write the CMake that will perform the build and the installation.
First of all you need to find the Python package and Boost packages. The variable PY_VERSION should return either 2.7.x or 3.x. Here it is assumed that the boost version is 1.64.0 (change it if needed).
find_package(PythonLibs ${PY_VERSION} REQUIRED)
find_package(Boost 1.64.0 REQUIRED COMPONENTS system python)

Then you need to make the compiler aware of the header files, create the library and link.

# include directories
include_directories(${PROJECT_SOURCE_DIR}/src)
include_directories(${PYTHON_INCLUDE_DIRS})
include_directories(${Boost_INCLUDE_DIRS})

# create the lib (built from the bindings source file shown above)
add_library(pyMyLib SHARED src/bindings.cpp)
target_link_libraries(pyMyLib ${Boost_LIBRARIES} ${PROJECT_NAME})
It is very important that the library name (here pyMyLib) and the python module name (in BOOST_PYTHON_MODULE(pyMyLib)) are the same.
You now need to install the __init__.py and the lib.
# Copy the __init__.py file
configure_file(__init__.py ${CMAKE_CURRENT_BINARY_DIR}/src/__init__.py COPYONLY)

# Suppress prefix "lib" because Python does not allow this prefix
set_target_properties(pyMyLib PROPERTIES PREFIX "")

install(TARGETS pyMyLib DESTINATION "${PYTHON_INSTALL_PATH}")
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/src/__init__.py DESTINATION "${PYTHON_INSTALL_PATH}")
## JRL CMake
This is pretty much the same as above. You have some macro you can use to facilitate the writings.
# find boost and python
set(BOOST_COMPONENTS system python)
SEARCH_FOR_BOOST()
FINDPYTHON()
# Compile and install python file
PYTHON_INSTALL_BUILD(pyMyLib __init__.py "${PYTHON_INSTALL_PATH}")

## PID

It is a bit simpler for PID since it handles everything itself. In the global CMakeLists.txt:

get_PID_Platform_Info(PYTHON PY_VERSION)
find_package(PythonLibs ${PY_VERSION} REQUIRED)
declare_PID_Package_Dependency(PACKAGE boost EXTERNAL VERSION 1.64.0)
In the CMakeLists.txt of the src folder
declare_PID_Component(
MODULE_LIB
NAME pyMyLib
DIRECTORY pyMyLib
)
declare_PID_Component_Dependency(
COMPONENT pyMyLib
NATIVE MyLib
)
get_PID_Platform_Info(PYTHON PY_VERSION)
if(PY_VERSION VERSION_LESS 3.0)
#using python2 to manage python wrappers
declare_PID_Component_Dependency(
COMPONENT pyMyLib
EXTERNAL boost-python
PACKAGE boost
)
else()
#using python3 to manage python wrappers
declare_PID_Component_Dependency(
COMPONENT pyMyLib
EXTERNAL boost-python3
PACKAGE boost
)
endif()
|
# Why is gaining hydrogen called reduction when gaining electrons is called reduction? Aren't they opposites
Reduction is gain of hydrogen (Source)
Reduction is gain of electrons (Source)
Is it because a hydrogen has an electron so gaining hydrogen is technically gaining electrons? But that doesn't seem right as oxygens also have electrons and gaining oxygens is oxidation.
What is confusing me is that when we usually talk about hydrogens, it's a hydrogen ion. And hydrogen ions and electrons are opposites. I realise this situation is not a hydrogen ion but a full hydrogen, but it's still doing my head in.
• Indeed, we're talking about "full" hydrogen. Gain of hydrogen ions would have pretty much nothing to do with reduction. – Ivan Neretin Nov 30 '16 at 9:47
• Perhaps its easier to understand reduction and oxidation by defining what happens as (a) an oxidising agent is an electron acceptor and (b) a reducing agent is an electron donor. – porphyrin Nov 30 '16 at 9:48
• @porphyrin Yeah, until we move on to study the Krebs cycle, where you'd literally die of old age before you assign all the oxidation states and figure out where the electrons are going, yet everybody is talking about redox processes, and that quite confidently. – Ivan Neretin Nov 30 '16 at 9:51
• @Ivan Neretin ? I don't understand what the Krebs cycle has to do with it; it's even a pain just to look at :) I've worked on electron transfer extensively and thinking of electron donors and acceptors is by far the easiest way of dealing with these reactions; also you get $\Delta G$ directly as donor-acceptor redox. – porphyrin Nov 30 '16 at 10:00
• @porphyrin Krebs cycle is just an example of redox system where thinking of electron donors and acceptors is probably not the easiest way of dealing with it. – Ivan Neretin Nov 30 '16 at 10:16
As a blanket statement the gain of a hydrogen atom cannot be considered a reduction. The IUPAC gold book defines it as follows:
## reduction
The complete transfer of one or more electrons to a molecular entity (also called 'electronation'), and, more generally, the reverse of the processes described under oxidation (2) and (3).
## oxidation
1. The complete, net removal of one or more electrons from a molecular entity (also called 'de-electronation').
2. An increase in the oxidation number of any atom within any substrate.
3. Gain of oxygen and/or loss of hydrogen of an organic substrate.
All oxidations meet criteria 1 and 2, and many meet criterion 3, but this is not always easy to demonstrate. Alternatively, an oxidation can be described as a transformation of an organic substrate that can be rationally dissected into steps or primitive changes. The latter consist in removal of one or several electrons from the substrate followed or preceded by gain or loss of water and/or hydrons or hydroxide ions, or by nucleophilic substitution by water or its reverse and/or by an intramolecular molecular rearrangement. This formal definition allows the original idea of oxidation (combination with oxygen), together with its extension to removal of hydrogen, as well as processes closely akin to this type of transformation (and generally regarded in current usage of the term in organic chemistry to be oxidations and to be effected by 'oxidizing agents') to be descriptively related to definition 1. For example the oxidation of methane to chloromethane may be considered as follows:
As you can see, the reverse of (3) is your first statement with one significant addition: organic substrate.
To understand this, one only has to look at the electronegativities. With an electronegativity of 2.20 (Pauling) for hydrogen, it is less electronegative than most other non-metals, or in general, the elements that organic substrates are made out of.[1] If you follow the electronegativity scheme of assigning oxidation states, then adding a hydrogen atom (one proton and one electron) results in a decrease of the oxidation number of the element that the hydrogen atom was added to.[2]
On the other hand, adding oxygen would increase the oxidation number of the element (except fluorine) it was added to, since its electronegativity is the second highest, i.e. 3.44 (Pauling).[3]
For example, adding dihydrogen to ethene, the carbons are reduced, while hydrogen is oxidised. $$\ce{\overset{\color{orange}{-2}}{C}_2\overset{+1}{H}_4 + \overset{0}{H}_2 -> \overset{\color{orange}{-3}}{C}_2\overset{+1}{H}_6}$$ Adding dioxygen to ethane to form ethane-1,2-diol, the carbons are oxidised, while oxygen is reduced. $$\ce{\overset{\color{orange}{-3}}{C}_2\overset{+1}{H}_6 + \overset{0}{O}_2 -> \overset{\color{orange}{-1}}{C}_2\overset{+1}{H}_6\overset{-2}{O}_2}$$ You can go as far as looking at the addition of water to ethene to form ethanol. One carbon will be oxidised and one will be reduced. $$\ce{\overset{\color{orange}{-2}}{C}_2\overset{+1}{H}_4 + \overset{+1}{H}_2\overset{-2}{O} -> (\overset{+1}{H}\overset{-2}{O})\overset{+1}{H}_2\overset{\color{orange}{-1}}{C}-\overset{\color{orange}{-3}}{C}\overset{+1}{H}_3}$$
Just keep in mind that your first statement is only true when hydrogen is added to more electronegative elements. The reverse is the case when adding it to a less electronegative element, like a metal.
A second point to keep in mind is that oxidation states are bookkeeping tools only. The world of bonding is not strictly ionic or covalent; it is often somewhere in between. Therefore electrons are almost never completely transferred and the actual charge distribution might be quite different.
### References:
1. Hydrogen: electronegativity. WebElements [http://www.webelements.com/]. [Online]; Dated 30th Nov. 2016; https://www.webelements.com/hydrogen/electronegativity.html
2. Electronegativity Considerations in Assigning Oxidation States
3. Oxygen: electronegativity. WebElements [http://www.webelements.com/]. [Online]; Dated 30th Nov. 2016; https://www.webelements.com/oxygen/electronegativity.html
The hydrogen definition is a simplified form which works well with organic compounds or other electronegative non-metals. It fails for metals. Specifically, if you react a metal with hydrogen, the metal will be oxidised in spite of additional hydrogen being in the compound.
Or in other words, $\ce{NaH}$ is the oxidised form, $\ce{Na}$ the reduced one.
|
# Intersection of the interval set
Given a list of people with their birth and death years, find the year with the greatest number of people alive.
I implemented this in Java and was trying to find the optimal solution. I am sure there might be some data structure which would be optimal to use in this scenario.
static class Person {
    int born;
    int died;

    Person(int born, int died) {
        this.born = born;
        this.died = died;
    }
}
/*
* (1920, 1939),
* (1911, 1944),
* (1920, 1955),
* (1938, 1939),
* (1920, 1939),
* (1911, 1944),
* (1920, 1955),
* (1938, 1939),
* (1937, 1940)
*
*/
public static void main(String[] args) {
    List<Person> people = new ArrayList<>();
    System.out.println(solution1(people));
}

public static int solution1(List<Person> people) {
    HashMap<Integer, Integer> peopleMap = new HashMap<>();
    int maxCount = 0;
    int year = 0;
    for (Person p : people) {
        for (int i = p.born; i <= p.died; i++) {
            Integer value = peopleMap.get(i);
            if (value == null) {
                peopleMap.put(i, 1);
            } else {
                value++;
                if (maxCount < value) {
                    maxCount = value;
                    year = i;
                }
                peopleMap.put(i, value);
            }
        }
    }
    return year;
}
## HashMap is overkill here.
Depending on the constraints of the problem (mainly the minimum and maximum allowable years), you could simply use an array instead of a HashMap.
Sure, HashMaps have O(1) complexity, but nothing is going to beat just indexing an array.
## Better algorithm
Coming up with a better algorithm is very tricky.
Your solution is O(NL) where N is the number of people and L is the average lifetime. But keep in mind that L is practically a constant, so it does not really matter that much.
You could most certainly come up with an O(N log N) solution, based on creating a sorted list of relevant dates. This would outperform your current solution for small values of N, but better performance at the low end is rarely useful (it can be, but that's the exception, not the rule).
Edit Thinking about this, I suspect that's the "trick" part of that interview question. Overlapping intervals is a classic interview problem. However, making the intervals happen on a fully discretized space makes the hashmap solution viable, whereas it can't be used when dealing with "traditional" float-bound intervals.
• The sorted list of dates is probably the best answer. generate a sorted list at logN then iterate over it once keeping a running count. There might be some other tricky way to do it with a single pass and some interesting custom data structure--but I can't see it (and it would probably still be logN to create your Interesting data structure). – Bill K Apr 6 '18 at 16:24
• @BillK You'll have a hard time generating the list without visiting each element, making that N log N. For reference, the "interesting data structure" could simply be a pair of date/bool (whether it's a birth or a death). – Frank Apr 6 '18 at 16:27
• Yeah, that's what I meant--logN to create it while it's being iterated over would = NlogN... I didn't say that very clearly, thanks :) – Bill K Apr 6 '18 at 17:43
## Nothing wrong with using the HashMap, but…
… all you actually need to store is the overall change for each year. Not even total births or deaths, just the net change. You can iterate at the end and, keeping a high-water mark, find the maximum; you only need to check for a new maximum when the change for a year is positive.
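A minimal sketch of that idea in Python (my own transcription rather than the original Java), assuming inclusive birth and death years:
from collections import defaultdict

def busiest_year(people):
    """Return the year with the most people alive; `people` holds (born, died) pairs."""
    change = defaultdict(int)
    for born, died in people:
        change[born] += 1        # one more person alive starting in `born`
        change[died + 1] -= 1    # one fewer starting the year after `died`
    alive, best_alive, best_year = 0, 0, None
    for year in sorted(change):  # walk the years in order, keeping a high-water mark
        alive += change[year]
        if alive > best_alive:
            best_alive, best_year = alive, year
    return best_year

print(busiest_year([(1920, 1939), (1911, 1944), (1920, 1955), (1938, 1939)]))  # 1938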
## Things that might help
• Sort by birth, and keep a priority queue of deaths.
• Separate the birth and death, sort both lists.
## Plenty wrong with the question, though
(the interviewer’s, not your posting here).
• Should changes be considered to happen at the start of the year?
• The end?
• When do you measure?
• Is it the average number for the year, or the maximum possible? Do any of the births happen before any of the deaths?
.. and so on. The problem is a little under-specified.
This problem fits as a temporal problem that requires you to find overlaps between date ranges. SQL is optimized for such problems. But I'm sure it can be done without too much performance loss in managed code.
Should your algorithm take each year as its base unit or just take into account the relevant date time ranges?
I would favour the latter (especially in the OP's problem), unless
• there are short date ranges and many overlaps
• there are a huge number of overlaps
What can be a good algorithm for finding the hot spot?
In chronological order, I would
• find relevant date ranges: add all start and end dates in a bag, remove duplicates, order by ascending date, pair up adjacent dates as a relevant date range.
• determine overlaps: for each relevant date range, store the number of people that have lived in this period
• take date range with highest amount of overlaps
Note that since this is not SQL, but rather managed code, an inline algorithm can be used that only keeps track of the date range that has currently (in the algorithm) the highest amount of overlaps.
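A rough Python sketch of that chronological procedure (again my own transcription, treating each person as alive on the half-open range [born, died + 1)):
def busiest_range(people):
    """Return the date range with the highest number of overlapping lifetimes."""
    # Relevant boundary dates: every birth year and every year just after a death.
    bounds = sorted({b for b, _ in people} | {d + 1 for _, d in people})
    best, best_count = None, 0
    # Pair up adjacent boundaries and count how many people overlap each range.
    for start, end in zip(bounds, bounds[1:]):
        count = sum(1 for b, d in people if b <= start and d + 1 >= end)
        if count > best_count:
            best, best_count = (start, end - 1), count
    return best, best_count

print(busiest_range([(1920, 1939), (1911, 1944), (1920, 1955), (1938, 1939)]))  # ((1938, 1939), 4)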
|
## Quantum Entanglement for Toddlers
I wrote a book a while back called Quantum Entanglement for Babies. But, now all those babies are grown into toddlers! I’ve been asked what is next on their journey to quantum enlightenment. Surely they have iPads now and know how to scroll, and so I give you Quantum Entanglement for Toddlers, the infographic!
Below is a lower-res version. Here is a high-res version (5MB). Contact me for the SVG.
## My Speech to 500 Australian Teenage Schoolboys About Mathematics
I suppose I should start with who I am and what I do and perhaps why I am here in front of you. But I’m not going to do that, at least not yet. I don’t want to stand here and list all my accomplishments so that you may be impressed and that would convince you to listen to me. No. I don’t want to do that because I know it wouldn’t work. I know that because it wouldn’t have worked on me when I was in your place and someone else was up here.
Now, of course you can tell by my accent that I wasn't literally down there. I was in Canada. And I sure as hell wasn't wearing a tie. But I imagine our priorities were fairly similar: friends, getting away from parents, maybe sports (in my case hockey of course and yours maybe footy), but most importantly… mathematics! No. Video games.
I don’t think there is such a thing as being innately gifted in anything. Though, I am pretty good at video games. People become very good at things they practice. A little practice leads to a small advantage, which leads to opportunities for better practice, and things snowball. The snowball effect. Is that a term you guys use in Australia? I mean, it seems like an obvious analogy for a Canadian. It’s how you make a snowman after all. You start with a small handful of snow and you start to roll it on the ground. The snow on the ground sticks to the ball and it gets bigger and bigger until you have a ball as tall as you!
Practice leads to a snowball effect. After a while, it looks like you are gifted at the thing you practiced, but it was really just the practice. Success then follows from an added sprinkling of luck and determination. That’s what I want to talk to you about today: practice.
I don’t want to use determination in the sense that I was stubbornly defiant in the face of adversity. Though, from the outside it might look that way. You can either be determined to avoid failure or determined to achieve some objective. Being determined to win is different from being determined not to lose.
There is something psychologically different between winning and not losing. You see, losing implies a winner, which is not you. But winning does not require a loser, because you can play against yourself. This was the beauty of the disconnected video games of the '80s and '90s. You played against yourself, or maybe "the computer". That doesn't mean it was easy. I'll give anyone here my Nintendo if they can beat Super Mario Bros. in one go. (I'm not joking. I gave my children the same offer and they barely made it past the first level). It was hard and frustrating, but no one was calling you a loser on the other end. And when you finally beat the game, you could be proud. Proud of yourself and for yourself. Not for the fake internet points you get on social media, but for you.
I actually really did want to talk to you today about mathematics. What I want to tell you is that, when I was your age, I treated mathematics like a video game. I wanted to win. I wanted to prove to myself that I could solve every problem. Some nights I stayed up all night trying to solve a single problem. You know how they say you can’t have success without failure? This is a perfect example. The more you fail at trying to solve a maths problem, the more you understand when you finally do solve it. And what came along with failing and eventually succeeding in all those maths problems? Practice.
Well I don’t know much about the Australian education system and culture. But I’m guessing from Hollywood you know a bit about highschool in North America. I’m sure you know about prom, and of course about Prom King and Prom Queen. What you may not know is that the King and Queen’s court always has a jester. That is, along with King and Queen, each year has a Class Clown — the joker, the funny guy. I wasn’t the prom king, or queen. But I did win the honour of class clown.
When I finished highschool, I was really good at three things: video games, making people laugh, and mathematics. I promise you, there is no better combination. If there was a nutrition guide for the mind, it would contain these three things. Indeed, now more than ever before, you need to be three types of smart. You need to be quick, reactive, and adaptive — the skills needed to beat a hard video game. You need emotional intelligence, you need to know what others are thinking and feeling — how to make them laugh. And finally you need to be able to solve problems, and all real problems require maths to solve them.
There are people in the world, lots of people — billions, perhaps — who look in awe at the ever-increasing complexity of the systems of business, government, schools, and technology, including video games. They look, and they feel lost. Perhaps you know someone who can't stand new technology, or change in general. Perhaps they don't even use a piece of technology because they believe they will never understand how to use it.
You all are young. But you know about driving, voting, and paying taxes, for example. Perhaps it looks complicated, but at least you believe that you can and will be able to do it when the time comes. Imagine feeling that such things were just impossible. That would be a weird feeling. Your brain can't handle such dissonance. So you would need to rationalise it in one way or another. You'd say it's just not necessary, or worse, it's something some "other" people do. At that point, for your brain to maintain a consistent story, it will start to reject new information and facts that aren't consistent with your new story.
This all sounds far-fetched, but I guarantee you know many people with such attitudes. To make them sound less harmful, they call them "traditional". How do otherwise "normal" people come to hold these views? It's actually quite simple: they fear, not what they don't understand, but what they have convinced themselves is unnecessarily complicated. I implore you, start today, start right now. Study maths. It is the only way to intellectually survive in a constantly changing world.
Phew that was a bit depressing. Let me give you a more fun and trivial example. Just this weekend I flew from Sydney to Bendigo. The flight was scheduled to be exactly 2 hours. I was listening to an audiobook and I wondered if I would finish it during the flight. Seems obvious right? If there was less than 2 hours left in the audiobook, then I would finish. If not, then I would not finish. But here’s the thing, audiobooks are read soooo slow. So, I listen to them at 1.25x speed. There was 3 hours left. Does anyone know the answer?
Before I tell you, let me remind you, not many people would ask themselves this question. I couldn’t say exactly why, but in some cases it’s because the person has implicitly convinced themselves that such a question is just impossible to answer. It’s too complicated. So their brain shuts that part of inquiry off. Never ask complicated questions it says. Then this happens: an entire world — no most of the entire universe — is closed off. Don’t close yourself off from the universe. Study maths.
By the way, the answer. It’s not the exact answer but here was my quick logic based on the calculation I could do in my head. If I had been listening at 1.5x speed, then every hour of flight time would get through 1.5 hours of audiobook. That’s 1 hour 30 minutes. So two hours of flight time would double that, 3 hours of audiobook. Great. Except I wasn’t listening at 1.5x speed. I was listening at a slower speed and so I would definitely get through less than 3 hours. The answer was no.
In fact, by knowing what to multiply or divide by what, I could know that I would have exactly 36 minutes left of the audiobook. Luckily or unluckily, the flight was delayed and I finished the book anyway. Was thinking about maths pointless all along? Maybe. But since flights are scheduled by mathematical algorithms, maths saved the day in the end. Maths always wins.
How about another. Who has seen a rainbow? I feel like that should be a trick question just to see who is paying attention. Of course, you have all seen a rainbow. As you are trying to think about the last time you saw a rainbow, you might also be thinking that they are rare — maybe even completely random things. But now you probably see the punchline — maths can tell you exactly where to find a rainbow.
Here is how a rainbow is formed. Notice that number there. That angle never changes. So you can use this geometric diagram to always find the rainbow. The most obvious aspect is that the rainbow exits the same general direction that the sunlight entered the raindrop. So to see a rainbow, the sun has to be behind you.
And there’s more. If the sun is low in the sky, the rainbow will be high in the sky. And if the sun is high, you might not be able to see a rainbow at all. But if you take out the garden hose to find it, make sure you are looking down. Let me tell you my favourite rainbow story. I was driving the family to Canberra. We were driving into the sunset at some point when I drove through a brief sun shower. Since the sun was shining and it was raining, one of my children said, “Maybe we’ll see a rainbow!”
Maybe. Ha. A mathematician knows no maybes. As they looked out their windows, I knew — yes — we would see a rainbow. I said, after passing through the shower, “Everyone look out the back window and look up.” Because the sun was so low, it was apparently the most wonderful rainbow ever seen. I say apparently because I couldn’t see it, on account of me driving. But no matter. I was content in knowing I could conjure such beauty with the power of mathematics.
I could have ended there, since I’m sure you are all highly convinced to catch up on all your maths lessons and homework. However, since I have time, I will tell you a little bit about what maths has enabled me to get paid to do. Namely, quantum physics and computation. Maybe you’ve heard about quantum physics? Maybe you’ve heard about uncertainty (the world is chaotic and random), or superposition (things can be in two places at once and cats can be dead and alive at the same time), or entanglement (what Einstein called spooky action at a distance).
But I couldn’t tell you more about quantum physics than that without maths. This is not meant to make it sound difficult. It should make it sound beautiful. This is quantum physics. It’s called the Schrodinger Equation. That’s about all there is to it. All that stuff about uncertainty, superposition, entanglement, multiple universes, and so on—it’s all contained in this equation. Without maths, we would not have quantum physics. And without quantum physics, we would not have GPS, lasers, MRI, or computers — no computers to play video games and no computers to look at Instagram. Thank a quantum physicist for these things.
Quantum physics also helps us understand the entire cosmos. From the very first instant of the Big Bang born out of a quantum fluctuation to the fusing of Hydrogen into Helium inside stars giving us all energy and life on Earth to the most exotic things in our universe: black holes. These all cannot be understood without quantum physics. And that can’t be understood without mathematics.
And now I use the maths of quantum physics to help create new computing devices that may allow us to create new materials and drugs. This quantum computer has nothing mysterious or special about it. It obeys an equation just as the computers you carry around in your pockets do. But the equations are different and different maths leads to different capabilities.
I don’t want to put up those equations, because if I showed them to even my 25 year-old self, I would run away screaming. But then again, I didn’t know then what I know now, and what I’m telling you today. Anyone can do this. It just takes time. Every mathematician has put in the time. There is no secret recipe beyond this. Start now.
## The minimal effort explanation of quantum computing
Quantum computing is really complicated, right? Far more complicated than conventional computing, surely. But, wait. Do I even understand how my laptop works? Probably not. I don’t even understand how a doorknob works. I mean, I can use a doorknob. But don’t ask me to design one, or even draw a picture of the inner mechanism.
We have this illusion (it has the technical name in the illusion of explanatory depth) that we understand things we know how to use. We don’t. Think about it. Do you know how a toilet works? A freezer? A goddamn doorknob? If you think you do, try to explain it. Try to explain how you would build it. Use pictures if you like. Change your mind about understanding it yet?
We don’t use quantum computers so we don’t have the illusion we understand how they work. This has two side effects: (1) we think conventional computing is generally well-understood or needs no explanation, and (2) we accept the idea that quantum computing is hard to explain. This, in turn, causes us to try way too hard at explaining it.
Perhaps by now you are thinking maybe I don’t know how my own computer works. Don’t worry, I googled it for you. This was the first hit.
Imagine if a computer were a person. Suppose you have a friend who’s really good at math. She is so good that everyone she knows posts their math problems to her. Each morning, she goes to her letterbox and finds a pile of new math problems waiting for her attention. She piles them up on her desk until she gets around to looking at them. Each afternoon, she takes a letter off the top of the pile, studies the problem, works out the solution, and scribbles the answer on the back. She puts this in an envelope addressed to the person who sent her the original problem and sticks it in her out tray, ready to post. Then she moves to the next letter in the pile. You can see that your friend is working just like a computer. Her letterbox is her input; the pile on her desk is her memory; her brain is the processor that works out the solutions to the problems; and the out tray on her desk is her output.
That’s all. That’s the basic first layer understanding of how this device you use everyday works. Now google “how does a quantum computer work” and you are met right out of the gate with an explanation of theoretical computer science, Moore’s law, the physical limits of simulation, and so on. And we haven’t even gotten to the quantum part yet. There we find qubits and parallel universes, spooky action at a distance, exponential growth, and, wow, holy shit, no wonder people are confused.
What is going on here? Why do we try so hard to explain every detail of quantum physics as if it is the only path to understanding quantum computation? I don’t know the answer to that question. Maybe we should ask a sociologist. But let me try something else. Let’s answer the question how does a quantum computer work at the same level as the answer above to how does a computer work. Here we go.
How does a quantum computer work?
Imagine if a quantum computer were a person. Suppose you have a friend who’s really good at developing film. She is so good that everyone she knows posts their undeveloped photos to her. Each morning, she goes to her letterbox and finds a pile of new film waiting for her attention. She piles them up on her desk until she gets around to looking at them. Each afternoon, she takes a photo off the top of the pile, enters a dark room where she works at her perfected craft of film development. She returns with the developed photo and puts this in an envelope addressed to the person who sent her the original film and sticks it in her out tray, ready to post. Then she moves to the next photo in the pile. You can’t watch your friend developing the photos because the light would spoil the process. Your friend is working just like a quantum computer. Her letterbox is her input; the pile on her desk is her classical memory; while the film is with her in the dark room it is her quantum memory; her brain and hands are the quantum processor that develops the film; and the out tray on her desk is her output.
## ⟨B|raket|S⟩
Welcome to ⟨B|raket|S⟩! The object is to close brakets, the tools of the quantum mechanic!
Created by Me, Chris Ferrie!
2 PLAYERS | AGES 10+ | 15 MINUTES
Welcome to ⟨B|raket|S⟩! The object is to close brakets, the tools of the quantum mechanic! You’ll need to create these quantum brakets to maximize your probability of winning. But, just like quantum physics, there is no complete certainty of the winner until the measurement is made!
No knowledge of quantum mechanics is required to play the game, but you will learn the calculus of the quantum as you play. Later in the rules, you'll find out how your moves line up with the laws of quantum physics.
## What you need
A deck of ⟨B|raket|S⟩ cards, a coin, and a way to keep score.
The instructions are here.
I suggest getting the cards printed professionally. All the cards images are in the cards folder. I printed the cards pictured above in Canada using https://printerstudio.ca. However, they also have a worldwide site (https://printerstudio.com).
You can print your own cards using a desktop printer with this file.
You can laser cut your own pieces using this file.
## Open Source
Oh, and this game is free and open source. You can find out more at the GitHub repository: https://github.com/csferrie/Brakets/.
## New papers dance!
Two new papers were recently posted on the arXiv with my first two official PhD students since becoming a faculty member! The earlier paper is titled Efficient online quantum state estimation using a matrix-exponentiated gradient method by Akram Youssry and the more recent paper is Minimax quantum state estimation under Bregman divergence by Maria Quadeer. Both papers are co-authored by Marco Tomamichel and are on the topic of quantum tomography. If you want an expert’s summary of each, look no further than the abstracts. Here, I want to give a slightly more popular summary of the work.
Efficient online quantum state estimation using a matrix-exponentiated gradient method
This work is about a practical algorithm for online quantum tomography. Let’s unpack that. First, the term work. Akram did most of that. Algorithm can be understood to be synonymous with method or approach. It’s just a way, among many possibilities, to do a thing. The thing is called quantum tomography. It’s online because it works on-the-fly as opposed to after-the-fact.
Quantum tomography refers to the problem of assigning a description to a physical system that is consistent with the laws of quantum physics. The context of the problem is one of data analysis. It is assumed that experiments on this to-be-determined physical system will be made and the results of measurements are all that will be available. From those measurement results, one needs to assign a mathematical object to the physical system, called the quantum state. So, another phrase for quantum tomography is quantum state estimation.
The laws of quantum physics are painfully abstract and tricky to deal with. Usually, then, quantum state estimation proceeds in two steps: first, get a crude idea of what’s going on, and then find something nearby which satisfies the quantum constraints. The new method we propose automatically satisfies the quantum constraints and is thus more efficient. Akram proved this and performed many simulations of the algorithm doing its thing.
Minimax quantum state estimation under Bregman divergence
This work is more theoretical. You might call it mathematical quantum statistics… quantum mathematical statistics? It doesn’t yet have a name. Anyway, it definitely has those three things in it. The topic is quantum tomography again, but the focus is different. Whereas for the above paper the problem was to devise an algorithm that works fast, the goal here was to understand what the best algorithm can achieve (independent of how fast it might be).
Work along these lines in the past considered a single figure of merit, the thing that defines what "best" means. In this work Maria looked at general figures of merit called Bregman divergences. She proved several theorems about the optimal algorithm and the optimal measurement strategy. For the smallest quantum system, a qubit, a complete answer was worked out in concrete detail.
Both Maria and Akram are presenting their work next week at AQIS 2018 in Nagoya, Japan.
## Estimation… with quantum technology… using machine learning… on the blockchain
A snarky academic joke which might actually be interesting (but still a snarky joke).
## Abstract
A device verification protocol using quantum technology, machine learning, and blockchain is outlined. The self-learning protocol, SKYNET, uses quantum resources to adaptively come to know itself. The data integrity is guaranteed with blockchain technology using the FelixBlochChain.
## Introduction
You may have a problem. Maybe you’re interested in leveraging the new economy to maximize your B2B ROI in the mission-critical logistic sector. Maybe, like some of the administration at an unnamed university, you like to annoy your faculty with bullshit about innovation mindshare in the enterprise market. Or, maybe like me, you’d like to solve the problem of verifying the operation of a physical device. Whatever your problem, you know about the new tech hype: quantum, machine learning, and blockchain. Could one of these solve your problem? Could you really impress your boss by suggesting the use of one of these buzzwords? Yes. Yes, you can.
Here I will solve my problem using all the hype. This is the ultimate evolution of disruptive tech. Synergy of quantum and machine learning is already a hot topic1. But this is all in-the-box. Now maybe you thought I was going outside-the-box to quantum agent-based learning or quantum artificial intelligence—but, no! We go even deeper, looking into the box that was outside the box—the meta-box, as it were. This is where quantum self-learning sits. Self-learning is a protocol wherein the quantum device itself comes to learn its own description. The protocol is called Self Knowing Yielding Nearly Extremal Targets (SKYNET). If that was hard to follow, it is depicted below.
Blockchain is the technology behind bitcoin2 and many internet scams. The core protocol was quickly realised to be applicable beyond digital currency and has been suggested to solve problems in health, logistics, bananas, and more. Here I introduce FelixBlochChain—a data ledger which stores runs of experimental outcomes (transactions) in blocks. The data chain is an immutable database and can easily be delocalised. As a way to solve the data integrity problem, this could be one of the few legitimate, non-scammy uses of blockchain. So, if you want to give me money for that, consider this the whitepaper.
## Problem
The problem is succinctly described above. Naively, it seems we desire a description of an unknown process. A complete description of such a process using traditional means is known as quantum process tomography in the physics community3. However, by applying some higher-order thinking, the envelope can be pushed and a quantum solution can be sought. Quantum process tomography is data-intensive and not scalable after all.
The solution proposed is shown below. The paradigm shift is a reverse-datafication which breaks through the clutter of the data-overloaded quantum process tomography.
It might seem like performing a measurement of $\{|\psi\rangle\!\langle \psi|, \mathbb I - |\psi\rangle\!\langle \psi|\}$ is the correct choice since this would certainly produce a deterministic outcome when $V = U$. However, there are many other unitaries which would do the same for a fixed choice of $|\psi\rangle$. One solution is to turn to repeating the experiment many times with a complete set of input states. However, this gets us nearly back to quantum process tomography—killing any advantage that might have been had with our quantum resource.
## Solution
This is addressed by drawing inspiration from ancilla-assisted quantum process tomography4. This is depicted above. Now the naive looking measurement, $\{|\mathbb I\rangle\!\langle\mathbb I |, \mathbb I - |\mathbb I\rangle\!\langle \mathbb I|\}$, is a viable choice as
$|\langle\mathbb I |V^\dagger U \otimes \mathbb I |\mathbb I\rangle|^2 = |\langle V | U\rangle|^2,$
where $|U\rangle = U\otimes \mathbb I |\mathbb I\rangle$. This is exactly the entanglement fidelity or channel fidelity5. Now, we have $|\langle V | U\rangle| = 1 \Leftrightarrow U = V$, and we’re in business.
Though $|\langle V | U\rangle|$ is not accessible directly, it can be approximated with the estimator $P(V) = \frac{n}{N}$, where $N$ is the number of trials and $n$ is the number of successes. Clearly, $\mathbb E[P(V)] = |\langle V | U\rangle|^2.$
Thus, we are left with the following optimisation problem:
$\max_{V} \mathbb E[P(V)],$
subject to $V^\dagger V= \mathbb I$. This is exactly the type of problem suitable for the gradient-free cousin of stochastic gradient ascent (of deep learning fame), called simultaneous perturbation stochastic approximation6. I'll skip to the conclusion and give you the protocol. Each epoch consists of two experiments and an update rule:
$V_{k+1} = V_{k} + \frac12\alpha_k \beta_k^{-1} (P(V+\beta_k \triangle_k) - P(V-\beta_k \triangle_k))\triangle_k.$
Here $V_0$ is some arbitrary starting unitary (I chose $\mathbb I$). The gain sequences $\alpha_k, \beta_k$ are chosen as prescribed by Spall6. The main advantage of this protocol is $\triangle_k$, which is a random direction in unitary-space. Each epoch, a random direction is chosen, which guarantees an unbiased estimation of the gradient and avoids all the measurements necessary to estimate the exact gradient. As applied to the estimation of quantum gates, this can be seen as a generalisation of Self-guided quantum tomography7 beyond pure quantum states.
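As a rough illustration of how such a loop might look, here is a toy Python simulation (my own sketch, not the code behind the results below): the unknown gate is a single-qubit unitary, the candidate is parameterised by three Euler angles rather than updated as a raw matrix, and the gain values, the perturbation directions, and the simulated estimator P are all assumptions.
import numpy as np

rng = np.random.default_rng(1)

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0], [0, np.exp(1j * a / 2)]])

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)], [np.sin(a / 2), np.cos(a / 2)]], dtype=complex)

def V(theta):
    # Candidate unitary, parameterised by three Euler angles (ZYZ decomposition).
    return rz(theta[0]) @ ry(theta[1]) @ rz(theta[2])

U = V(np.array([0.3, 1.1, -0.7]))   # hypothetical "true" gate to be learned

def P(theta, shots=200):
    # Sampled estimate of the channel fidelity |<V|U>|^2 = |Tr(V^dagger U)/2|^2.
    f = min(abs(np.trace(V(theta).conj().T @ U) / 2) ** 2, 1.0)
    return rng.binomial(shots, f) / shots

theta = np.zeros(3)
for k in range(1, 501):
    a_k, b_k = 0.6 / k ** 0.602, 0.2 / k ** 0.101     # Spall-style gain sequences (values assumed)
    delta = rng.choice([-1.0, 1.0], size=3)           # simultaneous random perturbation direction
    g_hat = (P(theta + b_k * delta) - P(theta - b_k * delta)) / (2 * b_k) * delta
    theta = theta + a_k * g_hat                       # ascend the estimated fidelity
print("final fidelity estimate:", P(theta, shots=20000))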
To ensure integrity of the data—to make sure I’m not lying, fudging the data, p-hacking, or post-selecting—a blochchain-based solution is implemented. In analogy with the original bitcoin proposal, each experimental datum is a transaction. After a set number of epochs, a block is added to the datachain. Since this is not implemented in a peer-to-peer network, I have the datachain—called FelixBlochChain—tweet the block hashes at @FelixBlochChain. This provides a timestamp and validation that the data taken was that used to produce the final result.
## Results
Speaking of final result, it seems SKYNET works quite well, as shown above. There is still much to do—but now that SKYNET is online, maybe that’s the least of our worries. In any case, go download the source8 and have fun!
Acknowledgements
The author thanks the quantum technology start-up community for inspiring this work. I probably shouldn’t say this was financially supported by ARC DE170100421.
1. V. Dunjko and H. J. Briegel, Machine learning and artificial intelligence in the quantum domain, arXiv:1709.02779 (2017)
2. N. Satoshi, Bitcoin: A peer-to-peer electronic cash system, (2008), bitcoin.org.
3. I. L. Chuang and M. A. Nielsen, Prescription for experimental determination of the dynamics of a quantum black box, Journal of Modern Optics 44, 2455 (1997)
4. J. B. Altepeter, D. Branning, E. Jerey, T. C. Wei, P. G. Kwiat, R. T. Thew, J. L. O’Brien, M. A. Nielsen, and A. G. White, Ancilla-assisted quantum process tomography, Phys. Rev. Lett. 90, 193601 (2003)
5. B. Schumacher, Sending quantum entanglement through noisy channels, arXiv:quant-ph/9604023 (1996)
6. J. C. Spall, Multivariate stochastic approximation using a simultaneous perturbation gradient approximation, IEEE Transactions on Automatic Control 37, 332 (1992)
7. C. Ferrie, Self-guided quantum tomography, Physical Review Letters 113, 190404 (2014)
8. The source code for this work is available at https://gist.github.com/csferrie/1414515793de359744712c07584c6990
## David Wolfe doesn’t want you to share these answers debunking quantum avocados
Everyone knows you need to microwave your avocados to release their quantum memory effects.
Recently, I joined Byrne and Wade on Scigasm Podcast to talk about misconceptions of quantum physics. Apparently, people are wrong about quantum physics on the internet! Now, since the vast majority of people don’t listen to Scigasm Podcast [burn emoji], I thought I’d expand a bit on dispelling some of the mysticism surrounding the quantum.
### Would it be fair to say quantum physics is a new field in the applied sciences, though it has been around for a while in the theoretical world?
No. That couldn’t be further from the truth. There are two ways to answer this question.
The super pedantic way: all is quantum. And so all technology is based on quantum physics. Electricity is the flow of electrons. Electrons are fundamental quantum particles. However, you could rightfully say that knowledge of quantum physics was not necessary to develop the technology.
In reality, though, all the technology around us today would not exist without understanding quantum physics. Obvious examples are lasers, MRI and atomic clocks. Then there are technologies such as GPS, for example, that rely on the precision timing afforded by atomic clocks. Probably most important is the development of the modern transistor, which required an understanding of semiconductors. Transistors exist in, and are necessary for, probably all of the electronic devices surrounding you right now.
However, all of that is based on an understanding of bulk quantum properties—lots of quantum systems behaving the same way. You could say this is quantum technology 1.0.
Today, we are developing quantum technology 2.0. This is built on the ability to control individual quantum systems and get them to interact with each other. Different properties emerge with this capability.
### Does the human brain operate using properties of the quantum world?
There are two things this could mean. One is legit and other is not. There is a real field of study called quantum biology. This is basically material physics, where the material is biological. People want to know if we need more than classical physics to explain, say, energy transfer in ever more microscopic biochemical interactions.
The other thing is called quantum consciousness, or something equally grandiose. Now, some well-known physicists have written about this. I’ll note that this is usually long after tenure. These are mostly metaphysical musings, at best.
In either case, and this is true for anything scientific, it all depends on what you mean by properties of the quantum world. Of course, everything is quantum—we are all made of fundamental particles. So one has to be clear what is meant by the “true” quantum effects.
Then… there are the crackpots. There the flawed logic is as follows: consciousness is mysterious, quantum is mysterious, therefore consciousness is quantum. This is like saying: dogs have four legs, this chair has four legs, therefore this chair is a dog. It’s a logical fallacy.
### Quantum healing is the idea that quantum phenomena are responsible for our health. Can we blame quantum mechanics for cancer? Or can we heal cancer with the power of thought alone?
Sure, you can blame physics for cancer. The universe wants to kill us after all. I mean, on the whole, it is pretty inhospitable to life. We are fighting it back. I guess scientists are like jujitsu masters—we use the universe against itself for our benefit.
But, there is a sense in which diseases are cured by thought. It is the collective thoughts and intentional actions of scientists which cure disease. The thoughts of an individual alone are useless without a community.
### Is it true that subatomic particles such as electrons can be in multiple places at once?
If you think of the particles as tiny billiard balls, then no, almost by definition. A thing that is defined by its singular location cannot be in two places at once. That's like asking if you can make a square circle. The question doesn't even make sense.
Metaphors and analogies always have their limitations. It is useful to think this way about particles sometimes. For example, think of a laser. You likely are not going too far astray if you think of the light in a laser as a huge number of little balls flying straight at the speed of light. I mean that is how we draw it for students. But a physicist could quickly drum up a situation under which that picture would lead to wrong conclusions even microscopically.
### Does quantum mechanics only apply to the subatomic?
Not quite. If you believe that quantum mechanics applies to fundamental particles and that fundamental particles make up you and me, then quantum mechanics also applies to you and me.
This is mostly true, but building a description of each of my particles and the way they interact using the rules of quantum mechanics would be impossible. Besides, Newtonian mechanics works perfectly fine for large objects and is much simpler. So we don’t use quantum mechanics to describe large objects.
Not yet, anyway. The idea of quantum engineering is to carefully design and build a large arrangement of atoms that behaves in fundamentally new ways. There is nothing in the rules of quantum mechanics that forbids it, just like there was nothing in the rules of Newtonian mechanics that forbade going to the moon. It’s just a hard problem that will take a lot of hard work.
### Do quantum computers really assess every possible outcome at once?
No. If it could, it would be able to solve every possible problem instantaneously. In fact, we have found only a few classes of problems that we think a quantum computer could speed up. These are problems that have a mathematical structure that looks similar to quantum mechanics. So, we exploit that similarity to come up with easier solutions. There is nothing magical going on.
### Can we use entanglement to send information at speeds faster than the speed of light?
No. Using entanglement to send information faster than light is like a perpetual motion machine. Each proposal looks detailed and intricate. But some non-physical thing is always hidden under the rug.
### Could I use tachyons to become The Flash? And if so, where do I get tachyons?
This is described in my books. Go buy them.
|
# optimising changing the range of integers from random number generation
I'm looking to find the most efficient way to change integers from a random number generator to a different inclusive number range.
I know of 2 ways so far:
1. Change the number into a decimal in the range of [0,1) and multiply it by the difference between the minimum and maximum values* in the new range.
2. Find the remainder of the number divided by the difference between the minimum and maximum numbers* in the new range.
*The difference will have to be incremented by 1 to get the correct inclusive range on the results
There is however, a problem with both of these methods:
1. the decimal method involves a lot of floating point calculations, which are slow
2. the remainder method will favor lower numbers in the number range
To illustrate #2 above, consider getting a random value of an unsigned byte.
You get a random number in the range of 0-255.
Suppose you wanted a number in the range of 1-255, you might use the following formula:
number = random() % 255 + 1;
any number from 0-254 will simply be increased by 1, giving you a range of 1-255.
255, however, will also grant you a 1, giving 1 DOUBLE the probability of the rest of the numbers.
this illustrates the following:
probability of a number in range [newMin, newMin + oldMax % (newMax - newMin)] is (oldMax - oldMin) / (newMax - newMin) rounded UP
probability of a number in range (newMin + oldMax % (newMax - newMin), newMax] is (oldMax - oldMin) / (newMax - newMin) rounded DOWN
In my situation, I am getting an 8 byte value, so the effects of this flaw in the remainder method would require an insanely large sample of numbers before the flaw affects the results noticeably.
So if these are the only 2 methods available, I would disregard this distribution flaw to increase performance.
Is there a method that has better performance than method #1 but better results than #2?
• Do you need an integer or a floating point result in the end? Floating point computations are NOT slow. Modern chips can do many per cycle (depending on vector length). Remainder division, on the other hand, can require many cycles. However, converting between floats and ints can take some time, so the answer to your question may depend on the which type you ultimately need. – Bill Barth Sep 13 '14 at 18:45
• I need an integer as my final result, so I assumed that the floating point computations would be slower. I may just have to test out both and compare speeds. – Dylan Sawchuk Sep 14 '14 at 5:08
The usual approach to get an integer random number in the range $[0,\ldots,N)$ (a half open range) is a piece of code of the form
unsigned int rnd()
{
    unsigned int k;
    do {
        k = rand();
    } while (k >= RAND_MAX / N * N);  /* reject the incomplete top group so k % N stays uniform */
    return k % N;
}
This works because rand() produces random numbers uniformly in the range [0,RAND_MAX]. Then, the do-while loop produces random numbers uniformly in the range [0,RAND_MAX/N*N), where the upper bound is the largest multiple of N less than or equal to RAND_MAX, and consequently k % N is a uniformly distributed random number in the range [0,N).
If you want random numbers in an interval [a,b), use the function above on the interval [0,N=b-a) and just add a to every number you get.
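For reference, here is a Python transcription of that recipe (my own sketch, not part of the answer), using the same cutoff as the C code above:
import random

def uniform_int(a, b, rand_max=2**31 - 1):
    """Uniform integer in the half-open range [a, b), by rejection then shift.

    Mirrors the C snippet above: draws from [0, rand_max], rejects the
    incomplete top slice, reduces modulo N = b - a, and shifts by a.
    """
    n = b - a
    limit = rand_max // n * n      # same cutoff as RAND_MAX / N * N
    while True:
        k = random.randint(0, rand_max)
        if k < limit:
            return a + k % n

print([uniform_int(1, 256) for _ in range(5)])   # five values in 1..255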
• I never thought of this method. While I like the concept, the random number generator I'm using is fairly costly and the possibility of having to ask for a random number multiple times would make the floating point arithmetic method faster. – Dylan Sawchuk Sep 15 '14 at 4:14
• Well, the do loop almost always executes only once. On average you are asking for 1+(RAND_MAX%N)/RAND_MAX random number for every time you call this function. Unless your $N$ is a significant fraction of RAND_MAX, this is for all practical purposes one evaluation of rand() per call of the function. – Wolfgang Bangerth Sep 16 '14 at 1:09
|
# Parametric curve on cylinder surface
Let $r(t)=(x(t),y(t),z(t)),\ t\geq0$ be a parametric curve with $r(0)$ lying on the cylinder surface $x^2+2y^2=C$. Let the tangent vector of $r$ be $r'(t)=\left( 2y(t)(z(t)-1), -x(t)(z(t)-1), x(t)y(t)\right)$. Would you help me to show that:
(a) The curve always lies on the cylinder surface $x^2+2y^2=C$.
(b) The curve $r(t)$ is periodic (we can find $T_0\neq0$ such that $r(T_0)=r(0)$). If we make $C$ smaller, then the parametric curve $r(t)$ stays closer to the origin (we can find a neighbourhood of the origin that contains the curve).
My effort:
(a) Let $V(x,y,z)=x^2+2y^2$. Along the curve, $\frac{d}{dt} V(x,y,z)=(2x,4y,0)\cdot (x'(t),y'(t),z'(t))=2x(2y(z-1))+4y(-x(z-1))=0$, so the tangent vector $r'(t)$ is always perpendicular to the normal of the cylinder surface. Hence the tangent vector must lie in the tangent plane of the cylinder, $V$ is constant along the curve, and $r(t)$ must stay on the cylinder surface through $r(0)$.
(b) From $z'=xy$, I analyse the sign of $z'$ (in the first octant $z'>0$, so the $z$ component of $r(t)$ is increasing, and so on) and conclude that $r(t)$ never becomes unbounded when it moves to another octant (but I can't guarantee that $r(t)$ crosses into another octant). I also consider the cases $(x=0,\ y>0)$, $(x=0,\ y<0,\ z>1)$, $(x>0,\ y=0,\ z>1)$ and so on, and draw the vector $r'(t)$.
Thank you so much for your help.
-
We can reparameterise the cylinder as $S=\{(\sqrt{C}\cos u,\frac{\sqrt{C}}{\sqrt{2}}\sin u, v): u,v\in \mathbb{R}\}$, since $(\sqrt{C}\cos u)^2+2\left(\frac{\sqrt{C}}{\sqrt{2}}\sin u\right)^2=C$. Let $r(t)= (x(t),y(t),z(t))$ and $r(0)=(x_0,y_0,z_0)$, and define $V(x,y,z)=x^2+2y^2$.
By the chain rule, $\frac{dV}{dt}=\nabla{V}\cdot(x',y',z')=2x(2y(z-1))+4y(-x(z-1))=0$, so the tangent vector of the curve is always perpendicular to $\nabla{V}$. Since $r(0)$ lies in $S$ and $\nabla{V}$ is perpendicular to the tangent plane of $S$ at $r(0)$, $r'(0)$ lies in that tangent plane. By this argument, we can conclude that $r(t)$ must stay on $S$.
Since $S=\{(\sqrt{C}\cos u,\frac{\sqrt{C}}{\sqrt{2}}\sin u, v): u,v\in \mathbb{R}\}$, we can write $x(t)=\sqrt{C}\cos (t-t_0)$ and $y(t)=\frac{\sqrt{C}}{\sqrt{2}}\sin (t-t_0)$, with $t_0$ satisfying $x_0=\sqrt{C}\cos t_0$ and $y_0=-\frac{\sqrt{C}}{\sqrt{2}}\sin t_0$. Since $z'=xy$, we get $z'(t)=\frac{C}{2\sqrt{2}}\sin(2(t-t_0))$, hence $z(t)=-\frac{C}{4\sqrt{2}}\cos(2(t-t_0))$ (up to an additive constant). All three components have period $2\pi$ in $t$, so $r(2\pi)=(x_0,y_0,z_0)=r(0)$ and $r(t)$ is periodic.
|
1. Sep 18, 2008
### ZoroP
1. The problem statement, all variables and given/known data
angle [0,2Pi] from bottom most position, find the 2nd order nonlinear ODE for evolution of the pendulum in terms of angle
then assume angle is small, do the linear approximation to the equation
2. Relevant equations
use balance of forces and newton's 2nd law
3. The attempt at a solution
y'' + (g/L)*sin(y) = 0, but then how to find the linear approximation y'' + (g/L)*y = 0?
I tried, by the formula, y = C1*cos(i*sqrt(g/L)*t) + C2*sin(i*sqrt(g/L)*t)
Is that correct? Thanks a lot~
2. Sep 18, 2008
### HallsofIvy
Staff Emeritus
This is posted in Precalculus Mathematics but you talk about differential equations? I have no idea how much to expect you to know! Do you know the Taylor's series for sin(x) around x=0? Do you know that $\lim_{x\rightarrow 0}sin(x)/x= 1$? Either of those should give you an idea of the linear approximation for sin(y) when y is close to 0.
No, that formula is not correct. eix= cos(x)+ i sin(x). You should not have "i" inside the sine and cosine.
3. Sep 18, 2008
### ZoroP
well, there maybe some misunderstandings here between us. Like when x approach to 0, what I know is siny = y, so I change y''+g*siny\L = 0 to y'' + g*y\L=0. And your formula is not what I used. Whatever, thanks anyway~
4. Sep 18, 2008
### ZoroP
btw, I'm a rookie here, so I clicked wrong title just now. Please help me to move this to Calculus & Beyond or just delete it? Thanks.
5. Sep 19, 2008
### HallsofIvy
Staff Emeritus
"when x approach to 0, what I know is siny = y". No, you don't know that- it makes no sense. Perhaps you meant "when x approach 0, sin x= x" but that still is not true. For values of x very close to 0, sin x is very close to x. That results in $\lim_{x\rightarrow 0} sin x/x= 1$
What you originally wrote was "y'' + (g/L)sin y = 0, but then how to find the linear approximation y'' + (g/L)y = 0". As both you and I have said now, for small y, sin y is close to y, so approximately, replacing sin y by y, y'' + (g/L)y = 0. That's what you wanted.
Now, do you know how to find the characteristic equation for y"+ (g/L)y= 0? What are its roots?
You need to know that if $a\pm ib$ are roots of the characteristic equation of a "linear homogenouse differential equation with constant coefficients", then
$$y(t)= e^{at}(C_1 \cos(bt)+ C_2 \sin(bt))$$
is the general solution to the differential equation.
We could write
$$y(t)= D_1e^{(a+bi)t}+ D_2e^{(a-bi)t}= e^{at}(D_1e^{bit}+ D_2e^{-bit})$$
but $e^{bit}= cos(bt)+ i sin(bt)$ and $e^{-bit}= cos(bt)- i sin(bt)$
We can combine those and then absorb the "i" into the constant. We would expect that if the original problem involved only real numbers, the solution will involve only real numbers.
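For completeness, carrying that recipe through for the linearised pendulum (a worked step that is not in the original thread): the characteristic equation of $y'' + (g/L)y = 0$ is
$$r^2 + \frac{g}{L} = 0 \quad\Rightarrow\quad r = \pm i\sqrt{\frac{g}{L}},$$
so $a = 0$ and $b = \sqrt{g/L}$, and the general solution is
$$y(t) = C_1\cos\left(\sqrt{\frac{g}{L}}\,t\right) + C_2\sin\left(\sqrt{\frac{g}{L}}\,t\right).$$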
|
# User josh - MathOverflow
## Density of $0$-homogeneous functions in $H^1(\partial \Omega)$ (asked 2012-04-29)
Recall: A function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ is called $0$-homogeneous if $f(\lambda x)= f(x)$ for every $\lambda>0$ and every $x\in \mathbb{R}^n$.
Question: Let $B$ be a convex, balanced and absorbent bounded domain of $\mathbb{R}^n$. Is the space of $0$-homogeneous $C^\infty(\mathbb{R}^n\setminus\{0\})$ functions dense in $H^1(\partial B)$?
## Stable subsets with respect to pointwise convergence (asked 2012-04-30)
Consider the linear space $\mathcal{F}(\mathbb{R}^n)$ of all real functions defined on $\mathbb{R}^n$. It is well known that the subspace $\mathcal{C}(\mathbb{R}^n)$ of all real-valued continuous functions on $\mathbb{R}^n$ is stable with respect to uniform limits of its elements.
Question 1: Which is the smallest set (with respect to the inclusion relation) containing $\mathcal{C}(\mathbb{R}^n)$ and stable with respect to pointwise convergence?
Question 2: Which is the smallest linear subspace of $\mathcal{F}(\mathbb{R}^n)$ which is stable with respect to pointwise convergence?
Comment by Josh (on the first question, 2012-05-01): It depends on the know-how. My research is in abstract algebra, and this question was just a curiosity (and I've never spent time trying to prove what I'm asking). But you are right: I have spent so much time replying to your unhelpful comments that it will be better for me to try to think about a proof whenever I need this result.
Comment by Josh (on the second question, 2012-04-30): Thank you very much. My question started from a guess about the equivalence between what I now know to be the class of Baire functions and the Borel functions. Due to your answers I was able to find this: www.jstor.org/stable/1996801. So thanks again.
|
## homotopy invariance of singular cohomology
I am working with Graduate Studies in Mathematics, Volume 65: Global Calculus, and I am trying to understand the proof of the homotopy invariance of singular cohomology (Theorem 4.4.6).
Let X be a topological space. Define inclusions $\iota_{0,1} \colon X \rightarrow X \times [0,1], \iota_{i}(x)=(x,i)$
It's enough to show that these maps induce cochain homotopic maps on singular cochain complexes of X and $X \times [0,1]$
Let $f_{i} \colon \mathbb{R}^{n+2} \rightarrow \mathbb{R}^{n+2}$ be the linear map which maps the standard basis vector $e_{j}$ to $(e_{j},0) \ \text{if} \ 0 \le j \le i, (e_{j-1},1) \ \text{if} \ i+1 \le j \le n+1$
If we restrict this map to the standard $(n+1)$-simplex, it maps into $\Delta_n \times [0,1]$. For a singular n-simplex $\sigma$ define $P_{i}(\sigma):= (\sigma \times \mathrm{id}) \circ f_{i}|_{\Delta_{n+1}}$
And further, for a singular $(n+1)$-cochain $\alpha$ on $X \times [0,1]$, define
$P \alpha (\sigma) := \sum (-1)^{i} \alpha ( P_{i}(\sigma))$
It says "an easy verification gives us the formula":
$d P(\alpha) + P d (\alpha) = \iota_{1}^{*}(\alpha) - \iota_{0}^{*}(\alpha)$ where d is the singular coboundary map, that means: $d\alpha (\sigma) := \sum_{i=0}^{n+1} (-1)^{i} \alpha ( \sigma^{i} )$ ( $\sigma^{i}$ is the i-th face of $\sigma$ )
I tried to recalculate and understand this formula, but I don't get it. On the left side there are two double sums, and I haven't managed to simplify them. I tried to work out what the individual terms are, but even with a case distinction I don't see why the first sum contains the same terms as the second, just with the opposite sign.
I tried to find other books about singular cohomology. But the problem is that almost all define singular cohomology using singular homology and everything from there.
But if it's so easy, has nobody ever proved the formula and written it down somewhere? Or is it so trivial that it's not necessary? Can anyone help me? I am forlorn.
|
# Question 7af6d
Mar 13, 2017
Δd=1.68 meters
#### Explanation:
When dealing with projectile motion problems, always split the motion into x and y components.
Here, we are given the initial speed of 595 m/s and distance 348 m in the horizontal direction (x-component)
Hence, we can find the time it travels in the horizontal direction by
Vi(X) = (Δd)/(Δt)
(595 m/s) = (348m)/(Δt)
Δt = 0.58 seconds
The time of flight is the same for the horizontal and the vertical motion, so the bullet also falls for 0.58 seconds before it hits the ground.
Since the bullet is fired horizontally, its initial vertical velocity is zero, so the height from which it was launched is simply the vertical distance it falls during that time.
Vertical motion:
Δt = 0.58 seconds
$V i \left(y\right)$ = 0 m/s
g = $9.8 \left(\frac{m}{s} ^ 2\right)$
Using kinematics equation:
Δd = ViΔt + 1/2(a)(Δt)^2
Solve for d:
Δd = (0)(0.5849) + 1/2(9.8)(0.5849)^2
Δd = 1.68 meters
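As a quick numeric check (a sketch, not part of the original answer; it just redoes the arithmetic with the unrounded flight time):

    # quick numeric check of the arithmetic above
    v_x, d_x, g = 595.0, 348.0, 9.8
    t = d_x / v_x                # ~0.5849 s (0.58 s when rounded)
    drop = 0.5 * g * t**2        # vertical fall from the launch height
    print(round(t, 4), round(drop, 2))   # 0.5849 1.68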
|
# Twin Primes and Complete Residue Systems
Are there any twin primes of the form $2^n − 1$, $2^n + 1$, for $n > 2$? If so, give an example, and if not, prove there aren’t any.
Hint: $k$, $k + 1$, $k + 2$ is a complete residue system modulo $3$, for any choice of $k$.
I've tried to find an example of twin primes of the form specified above, but I can't seem to find any (simply through guess and check). How can I prove that there are no primes of the above form?
• Let $n=3$ or $n=6$ (or many others). – André Nicolas Jul 3 '16 at 2:18
• I'm sorry! I have edited my question above to state $2^n−1, 2^n+1$ instead of $2n−1, 2n+1$ – Crazed Jul 3 '16 at 2:22
First, note that $2^n$ is not a multiple of $3$ for any $n$. Since exactly one of any three consecutive integers is a multiple of $3$, one of $2^n - 1$ or $2^n + 1$ is a multiple of $3$.
But $2^n \pm 1 > 3$, since $n > 2$. Thus the multiple of three is not $3$, and hence not prime.
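Equivalently, in the language of the hint about residues mod 3 (this is just a restatement of the argument above):
$$2 \equiv -1 \pmod 3 \quad\Longrightarrow\quad 2^n \equiv (-1)^n \pmod 3,$$
so $3 \mid 2^n - 1$ when $n$ is even and $3 \mid 2^n + 1$ when $n$ is odd; either way, one of the two numbers is a multiple of $3$ larger than $3$, hence composite.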
|
# Whether you are looking for a long-term relationship or a quick booty call, there is a dating app out there for everyone
Are you sick and tired of looking what ends up that special someone, next spending to deliver them a contact? Besides do POF let you post notes at no cost, however it also offers useful units and make chatting much easier and you can smaller. This consists of brand new Spark mode, and this prompts you to mention components of most other users’ pages that you feel interesting. That being said, the brand new software seems simple and you may clunky, and you can hands over adverts more often than most other services.
## Looking for Love?
About hyper-specific-FarmersOnly, JDate, 3Fun-on the of those we opinion here you to definitely cast a wider online, what exactly do you need to know to get the passion for your life…or simply your fascination with the night?
## Starting
To begin with you really need to select is where enough time you was. As with, simply how much want to shell out while making your own cardiovascular system wade pitter-patter? Particular apps, such Loads of Seafood, enable you to consider profiles and post messages free-of-charge. The others enable you to look at your potential matches in place of battery charging, but give you pony up and join when you need to indeed reach out to them. Since the month-to-month prices for this new programs i feedback right here variety in expense regarding $10 to over$40, very provide a cost savings for people who agree to an extended-label membership eg six months or a year. (You aren’t afraid of union, could you be?) Up coming, you’ll find every put-ons. Options-enabling you to shell out to boost your rating searching efficiency, enabling some one be aware that you are really, very interested in them or him or her, otherwise undoing a dreadful kept-swipe that has been said to be the right-swipe-costs more. However some apps may advertise by themselves as free, them will try discover a dollar from you finally.
## Attempting to sell Yourself
The indeed putting oneself available and you can doing a visibility, the programs ask for the basic principles: label, ages, location, a photograph, a short blurb in regards to you, and (usually) if you can stand someone who smoking cigarettes. Past you to, it can be some a beneficial crapshoot. Certain applications, instance Tinder, well worth photo over personality. Anyone else, for example eHarmony, leave you complete an endless survey before you actually remember searching for the suits. Nonetheless others, such as for example Zoosk, ask so absolutely nothing that you will be leftover in order to question what’s getting used to really matches you with such as-inclined single people.
Something to notice if not fall into the fresh cis-hetero matchmaking pond: Although many of the programs analyzed here are inclusive, you can find those that was friendlier towards the LGBTQ people than just someone else. Like, OkCupid exceeds pressuring pages to determine anywhere between becoming a masculine or girls, together with possibilities such as for instance Hijra, genderfluid, and two-heart. If you are a guy seeking a person seeking to a beneficial lady, you need to stay away from eHarmony: It generally does not even offer the accessibility to an exact same-gender fits.
## Time for you to Hook
When you look for you to best selfie and you will build paragraphs to market all your valuable top features on the future companion, it is the right time to begin attending. That’s where the big differences when considering these types of programs was apparent. For instance, Tinder, along with its well-known hot-or-not swiping software, helps it be easy and quick to get your following day. Bumble, on the other hand, sets the fuel on the woman’s give; boys can not even contact a woman except if the woman is indicated desire first. Others, like OkCupid, provides sturdy pages that let your diving deep on an excellent owner’s identity (or perhaps one he or she has decided to present to you), if eastmeeteast logowanie your wanting to carry on the brand new search.
|
# Can we use quantum machines to reduce space complexity of deterministic turing machines?
Can we convert every algorithm in $$\text{P}$$ (polynomial time complexity on deterministic machines) into a quantum algorithm that runs in polynomial time and uses only $$O(\log n)$$ qubits?
• You would have to specify how you provide the input (which could be poly-sized), as well as (in the circuit model) the way in which you create the quantum circuit for a given input size. – Norbert Schuch Oct 21 '18 at 21:26
• @NorbertSchuch: There is a standard solution to the input specification question in sub-linear-space complexity, which is to suppose that the input is provided in a Read-Only manner (similarly, to suppose that the output can only be performed in a Write-Once manner), and to only count the amount of rewritable workspace. However, the question of how the circuit is to be generated is a crucial one for this question: its not at all clear how it could be done unitarily (and the most obvious approach to doing so would not only be non-unitary, but also non-linear). – Niel de Beaudrap Oct 21 '18 at 22:19
• @NieldeBeaudrap Is it evident how to provide a read-only classical input to a quantum computer? Would you use an oracle? (I mean, you can't just dispose of qubits - unless you allow for CP maps or the like.) --- I'm confused by the second part of your comment: Shouldn't the circuit be generated by a classical Turing machine, whose power possibly has to be carefully assessed in this case? --- In any case, the problem is probably more well-defined with a quantum turing machine. – Norbert Schuch Oct 22 '18 at 7:54
• @NorbertSchuch I think the classical definition could be extended here. we can use three quantum tape that the first is read only and saves the input. the second one is working quantum tape with $O(\log n)$ QBITS and the third is another quantum tape for outputs and we don't need the classical one in this definition. – Mohsen Ghorbani Oct 22 '18 at 8:40
• @NorbertSchuh: (i) We can consider unitary circuits generated by logspace-TMs; there are no known instances where this would significantly restrict the circuits that could be generated, related to the fact that we do not know how to prove P≠L. (ii) You could describe access to the input as 'oracle access'. A more transparent (but equivalent) description would be 'classical control', i.e. you allow the input to control gates in the circuit, to the point of reading some of the input bits into your quantum state if you so wish. (iii) There is almost never a good reason to use QTMs. – Niel de Beaudrap Oct 22 '18 at 9:37
Watrous [J. Comp. Sys. Sci. 59, pp. 281-326, 1999] proved that any space-$$s$$-bounded quantum Turing machine (for space-constructible $$s(n) = \Omega(\log n)$$) can be simulated by a deterministic Turing machine in $$O(s^2)$$ space. Under the assumption $$\mathsf{P \neq SC}$$ (where $$\mathsf{SC \subseteq P}$$ is defined as the class of problems solvable by a DTM simultaneously in polynomial time and poly-logarithmic space), quantum machines will not reduce space complexity exponentially.
N.B. We don't know whether $$\mathsf{P=SC}$$ or not, though it is considered unlikely that they would be equal.
|
# HeatKernel¶
class dgl.transforms.HeatKernel(t=2.0, eweight_name='w', eps=None, avg_degree=5)[source]
Bases: dgl.transforms.module.BaseTransform
Apply heat kernel to an input graph for diffusion, as introduced in Diffusion kernels on graphs and other discrete structures.
A sparsification will be applied to the weighted adjacency matrix after diffusion. Specifically, edges whose weight is below a threshold will be dropped.
This module only works for homogeneous graphs.
Parameters
• t (float, optional) – Diffusion time, which commonly lies in $$[2, 10]$$.
• eweight_name (str, optional) – edata name to retrieve and store edge weights. If it does not exist in an input graph, this module initializes a weight of 1 for all edges. The edge weights should be a tensor of shape $$(E)$$, where E is the number of edges.
• eps (float, optional) – The threshold to preserve edges in sparsification after diffusion. Edges of a weight smaller than eps will be dropped.
• avg_degree (int, optional) – The desired average node degree of the result graph. This is the other way to control the sparsity of the result graph and will only be effective if eps is not given.
Example
>>> import dgl
>>> import torch
>>> from dgl import HeatKernel
>>> transform = HeatKernel(avg_degree=2)
>>> g = dgl.graph(([0, 1, 2, 3, 4], [2, 3, 4, 5, 3]))
>>> g.edata['w'] = torch.tensor([0.1, 0.2, 0.3, 0.4, 0.5])
>>> new_g = transform(g)
>>> print(new_g.edata['w'])
tensor([0.1353, 0.1353, 0.1353, 0.0541, 0.0406, 0.1353, 0.1353, 0.0812, 0.1353,
0.1083, 0.0541, 0.1353])
|
# Monte Carlo Greeks for Fixed Strike Asian Call
I am interested in pricing an European-style fixed strike asian call with payoff $$\max(A(S)-K;0)$$, where $$A(S)=\frac{1}{n}\sum_{i=1}^nS(t_i)$$ is a discrete arithmetic average and $$K$$ is the strike price.
Assuming an arbitrage-free and complete market, the fundamental theorem of asset pricing tells us that the arbitrage-free price at time $$t=0$$ is given by: $$V(0)=E^{\mathbb{Q}}(e^{-rT}\max(A(S)-K,\,0) \vert {\cal F}_0 )$$ I have no idea whether there exists an analytic solution or not, so I decided to use MC by implementing the following pseudo-code in Python (I omit the code so as not to prolong the question).
Under the Black-Scholes assumptions, let $$m$$ be the number of paths, $$n$$ be the number of intervals per path and $$\delta t= \frac{T}{n}$$, then:
1. Simulate geometric Brownian motion under the $$\mathbb{Q}$$ measure $$S_i(t+1)=S_i(t) \exp \left(\left(r-\frac{\sigma^2}{2}\right)\delta t+\sigma \sqrt{\delta t}Z_t\right)$$ where $$Z_t \sim{\cal N}(0,1)$$ for $$i \in [1,m]$$ and $$t \in [0,n]$$.
2. Calculate the option payoff $$X_i=\max(A_i(S_i)-K,\,0)$$ and set $$V_i(0,K,T,\sigma,r,S(0))=e^{-rT}X_i$$
3. Calculate sample average $$V(0,K,T,\sigma,r,S(0))=\frac{1}{m}\sum_{i=1}^m V_i(0,K,T,\sigma,r,S(0))$$
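For concreteness, here is a minimal sketch of steps 1-3 above (the author omitted their own code, so the function name, variable names, and parameter values below are illustrative assumptions, not the original implementation):

    import numpy as np

    def asian_call_mc(S0, K, T, r, sigma, n, m, seed=None):
        """Crude Monte Carlo price of the fixed-strike arithmetic Asian call
        described above (illustrative sketch only)."""
        rng = np.random.default_rng(seed)
        dt = T / n
        Z = rng.standard_normal((m, n))
        # log-increments of GBM under Q, then cumulative sum of logs builds each path
        increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
        S = S0 * np.exp(np.cumsum(increments, axis=1))   # S(t_1), ..., S(t_n) per path
        A = S.mean(axis=1)                               # discrete arithmetic average
        payoff = np.maximum(A - K, 0.0)                  # max(A(S) - K, 0)
        return np.exp(-r * T) * payoff.mean()

    # illustrative parameter values
    price = asian_call_mc(S0=100.0, K=100.0, T=1.0, r=0.02, sigma=0.2,
                          n=252, m=100_000, seed=42)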
However, when I want to calculate higher order greeks, especially those involving mixed derivatives, things get messy because I have to repeat this process several times and change the parameters slightly. For instance, calculating DdeltaDvol using finite differences yields:
\begin{align*} DdeltaDvol &= \frac{1}{4 \Delta S \Delta \sigma} [V(0,K,T,\sigma+\Delta \sigma,r,S(0)+\Delta S)-V(0,K,T,\sigma-\Delta \sigma,r,S(0)+\Delta S) \\ &\quad -V(0,K,T,\sigma+\Delta \sigma,r,S(0)-\Delta S)+V(0,K,T,\sigma-\Delta \sigma,r,S(0)-\Delta S)] \end{align*} The fact that I have to simulate the whole asset path thousands of times and a with small $$\delta t$$ makes the whole approach computationally intensive.
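Written out as code, re-using the hypothetical asian_call_mc sketch above and a fixed seed so that all four revaluations share the same random numbers (common random numbers), the bump-and-revalue estimate would look roughly like this:

    def ddelta_dvol(S0, K, T, r, sigma, n, m, dS=0.5, dsig=0.01, seed=42):
        # same seed for every call => common random numbers across the four revaluations
        V = lambda s0, sig: asian_call_mc(s0, K, T, r, sig, n, m, seed=seed)
        return (V(S0 + dS, sigma + dsig) - V(S0 + dS, sigma - dsig)
                - V(S0 - dS, sigma + dsig) + V(S0 - dS, sigma - dsig)) / (4 * dS * dsig)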
Does anyone know an alternative approach which is not so computationally intensive ?
• Yes, I forgot to mention that. I fix the seed and then re-use the variables. But I don’t see an easy way to calculate $V(0,\sigma + \Delta \sigma,S(0)+\Delta S)$ from $V(0,\sigma, S(0))$ in a vectorized way, so I have to re-run the loops.
|
Easy
# Basketball Free Throws: One and One
APSTAT-FLTG1I
In basketball, one-and-one free throws sometimes occur near the end of a game. A player gets one attempt at a free throw: if she makes it, she gets one other attempt; if she misses, she gets no further attempts.
Suppose an athlete has a $64\%$ free-throw shooting percentage and is shooting a one-and-one situation.
What is the probability she makes at least one shot?
A
$0.64$
B
$0.36$
C
$0.2304$
D
$0.4096$
E
$0.5904$
|
# How can VAE have near perfect reconstruction but still output junk when using random noise input
I am creating a VAE for time series data using CNNs. The data has 4800 timesteps and 4 features. It is standardized and normalized. The network I am using is implemented in Keras as follows. I have used a MSE reconstruction error:
# assumed imports (they were not shown in the original post)
from keras.layers import Input, Conv1D, Flatten, Dense, Lambda, Reshape, UpSampling1D, Layer
from keras.models import Model
from keras.losses import mse
from keras import backend as K

# network parameters
(_, seq_len, feat_init) = X_train.shape
input_shape = (seq_len, feat_init)
intermediate_dim = 512
batch_size = 128
latent_dim = 10
epochs = 10
img_chns = 3
filters = 32
num_conv = (2, 2)
epsilon_std = 1
inputs = Input(shape=input_shape)
conv1 = Conv1D(16, 3, 2, padding='same', activation = 'relu', data_format = 'channels_last')(inputs)
conv2 = Conv1D(32, 2, 2, padding='same', activation = 'relu', data_format = 'channels_last')(conv1)
conv3 = Conv1D(64, 2, 2, padding='same', activation = 'relu', data_format = 'channels_last')(conv2)
flat = Flatten()(conv3)
hidden = Dense(intermediate_dim, activation='relu')(flat)
z_mean = Dense(latent_dim, name = 'z_mean')(hidden)
z_log_var = Dense(latent_dim, name = 'z_log_var')(hidden)
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
                              mean=0., stddev=epsilon_std)
    # note: if z_log_var is meant to be the log-variance, the usual convention
    # would be z_mean + K.exp(0.5 * z_log_var) * epsilon
    return z_mean + K.exp(z_log_var) * epsilon
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
decoder_hid = Dense(intermediate_dim, activation='relu')(z)
decoder_upsample = Dense(38400, activation='relu')(decoder_hid)
decoder_reshape = Reshape((600,64))(decoder_upsample)
# NOTE: the trailing arguments and the tensors these decoder convolutions are applied to
# were cut off in the original post; 'same' padding, 'relu' activations (linear for the
# final layer) and the obvious layer chaining are assumed below.
deconv1 = Conv1D(filters=32, kernel_size=2, strides=1,
                 padding='same', activation='relu')(decoder_reshape)
upsample1 = UpSampling1D(size=2, name='upsampling1')(deconv1)
deconv2 = Conv1D(filters=16, kernel_size=2, strides=1,
                 padding='same', activation='relu')(upsample1)
upsample2 = UpSampling1D(size=2, name='upsampling2')(deconv2)
deconv3 = Conv1D(filters=8, kernel_size=2, strides=1,
                 padding='same', activation='relu')(upsample2)
upsample3 = UpSampling1D(size=2, name='upsampling3')(deconv3)
x_decoded_mean_squash = Conv1D(filters=4, kernel_size=4, strides=1,
                               padding='same', activation='linear')(upsample3)
class CustomVariationalLayer(Layer):
    def __init__(self, **kwargs):
        self.is_placeholder = True
        super(CustomVariationalLayer, self).__init__(**kwargs)

    def vae_loss(self, x, x_decoded_mean_squash):
        x = K.flatten(x)
        x_decoded_mean_squash = K.flatten(x_decoded_mean_squash)
        xent_loss = mse(x, x_decoded_mean_squash)
        kl_loss = - 0.5 * K.mean(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
        return K.mean(xent_loss + kl_loss)

    def call(self, inputs):
        x = inputs[0]
        x_decoded_mean_squash = inputs[1]
        loss = self.vae_loss(x, x_decoded_mean_squash)
        # assumed: register the loss with Keras (this line appears in the standard
        # Keras VAE example this code follows; it seems to have been cut off here)
        self.add_loss(loss, inputs=inputs)
        return x
outputs = CustomVariationalLayer()([inputs, x_decoded_mean_squash])
# entire model
vae = Model(inputs, outputs)
vae.summary()
I wanted to ask whether it is possible for the network to nearly perfectly reconstruct the test timeseries when passed through the entire VAE network, but still output junk when using a random Normal input. For further details, here is one of the inputs and outputs when passing a test signal through the network.
Here is a reconstruction generated purely from a random sample.
How can this be? Even if there was a posterior collapse, the VAE should still be able to generate a good output sample with a random input. To further test this I decided to split the network into two parts (encoder and decoder), and then pass the test image through it. The encoder and decoder networks were made by simply splitting the trained VAE network as follows:
idx = 9
input_shape = vae.layers[idx].get_input_shape_at(0)
layer_input = Input(shape=(input_shape[1],))
x = layer_input
for layer in vae.layers[idx:-1]:
    x = layer(x)
decoder = Model(layer_input, x)
decoder.summary()
idx = 0
input_shape = vae.layers[idx].get_input_shape_at(0)
layer_input = Input(shape=input_shape)
x = layer_input
for layer in vae.layers[idx + 1:7]:
    x = layer(x)
encoder = Model(layer_input, x)
encoder.summary()
Interestingly, I also got junk output here. I'm not sure how this is possible. If the model itself is getting a near-perfect reconstruction, surely just passing a test signal through the encoder, extracting the latent mean, and then passing that latent mean through the decoder should also produce a near-perfect reconstruction?
Is there something I am missing here?
|
A study of the right local general truncated $M$-fractional derivative
Commun. Korean Math. Soc. 2022, Vol. 37, No. 2, 503-520. https://doi.org/10.4134/CKMS.c210098 (published online March 29, 2022; printed April 30, 2022)
Rajendrakumar B. Chauhan, Meera H. Chudasama (Charotar University of Science and Technology)
Abstract: We introduce a new type of fractional derivative, which we call the right local general truncated $M$-fractional derivative for $\alpha$-differentiable functions, generalizing the fractional derivative type introduced by Anastassiou. This newly defined operator generalizes the standard properties and results of integer-order calculus, viz. Rolle's theorem, the mean value theorem and its extension, the inverse property, the fundamental theorem of calculus, and the theorem of integration by parts. We then relate the newly defined fractional derivative to known fractional derivatives and, in the context of this derivative, generalize a physical problem, Kirchhoff's voltage law. The flexibility of this operator with respect to its parametric values is illustrated by comparing solutions in graphs produced with MATLAB.
Keywords: generalized derivatives, mean value theorems, truncated Mittag-Leffler function, conformable fractional derivative, alternative fractional derivative, truncated $M$-fractional derivative, right local general $M$-fractional derivative
|
# What is the equilibrium constant expression for the equation C(s) + O_2(g) rightleftharpoons CO_2(g)?
${K}_{e q}$ $=$ $\frac{\left[C {O}_{2} \left(g\right)\right]}{\left[{O}_{2} \left(g\right)\right]}$
Carbon does not appear in the equilibrium expression, because, as a solid, it does not have a concentration. As written above, ${K}_{e q}$ would probably be unmeasurably large.
|
Am I able to add gene names/symbols to an enrichGO() output?
GenoMexa
I have done a GO enrichment analysis using ENSEMBL IDs, because if I had converted them to symbols before the analysis, some IDs would not have mapped to any symbol and the analysis would have been different.
But now I want to visualize the results. I want to plot heatplots and cnetplots, but with the gene names/symbols displayed instead of the ENSEMBL IDs.
Is this possible? If yes, how?
clusterProfiler enrichplot
Please show the code you used to perform the enrichment analysis, and the content of the object that contains the results.
Also, is the function setReadable() not what you need? In your case to be used with keyType="ENSEMBL".
|
# Is it possible to run Bayesian hierarchical model with 10million observations?
Sorry if the question looks dumb. But I’ve been struggling with this for a while: our dataset has 12 million observations, and we are trying Bayesian hierarchical models (and longitudinal), using brms. We have access to high-performance computing, but after a few tests I realize this may still take a ton of time if it runs at all. Should I give up? Any suggestion is appreciated!
It is certainly possible, but it can take quite a while. A few comments:
• There are various tricks when writing Stan code to speed things up. Many are implemented in brms, but depending on the specifics of your data and desired model it is possible that fiddling around in the Stan code itself could provide substantial additional speedup. Likewise, some tasks are especially amenable to computing on the GPU, if you have one (or several) available. If you can post your call to brms and some example code, you might get some help.
• If your models are relatively simple, then with so much data it is likely that the parameters are identified with very high precision. In this regime, the benefits of exact (up to MCMC error) Bayesian sampling with Stan might be small compared to approximate algorithms that, depending on the details of the implementation, can run much faster (e.g. variational inference in Stan, integrated nested Laplace approximation, or even gasp maximum-likelihood methods).
• If your models are very complex, such that you actually need millions of data to recover informative parameter estimates, then Stan is likely to be among the fastest general-purpose software to fit your model. Just to help inform your expectations, I’ve recently fit a model with 250K parameters and a non-standard, non-vectorizable likelihood to 2M data points. Doing so on a cluster with 5 cores per chain takes this model about a week.
4 Likes
Thanks! This is very helpful, but apparently, I need to learn a lot more to fully understand. Below is the call in brms for our full model. The DV is a binary variable, 1/0, stress1 and stress2 are time variant predictors, also binary. Year, month, day are categorical. Nothing is centered.
full_model <- brm(data = data,
                  formula = success ~ 1 + (1 + stress1 + stress2 | person_id) + (1 + stress1 + stress2 | firm_id) + stress1 + stress2 + year + month + day,
                  prior = prior,
                  iter = 2000, warmup = 1000, chains = 4)
Does this qualify as a very complicated model? We have access to high performance computing (including GPU), but I need to have some understanding about the resource need first (e.g. how many GPU? How many processors? Memory?) and if any possibility to speed it up from coding aspect. Would centering/vectorization help? Sorry if I’m asking the wrong question!
BTW, when I fit this to a 1% random subset of the data, it took 6.7 hours, with one core per chain.
Hi,
I think brms is using both vectorization and centering. You can check the former by typing stancode(fit) and seeing whether a for loop is involved in the target += part. For that matter, fit can, by the way, be a model run with just iter = 1.
An idea is to change to non-centered mode if you have a lot of data for each group (i.e. stress - person_id combination), which is actually recommended (Diagnosing Biased Inference with Divergences) - this could be done within brms with bf(formula, family, center = FALSE)
@jsocolar and @gkreil are right. I would also add that you may save yourself a lot of trouble by first testing the model against a smaller subset of the data (e.g. just a couple years and firms, or even only a single year/month and eliminate the year/month predictors altogether). This will let you catch bugs/misfit to data much faster. Also, using simple random effects to model time trends might be problematic and autoregressive/random walk/splines/Gaussian processes might be more suitable (although you have a lot of data, so this is less of a worry)
Best of luck with your model!
1 Like
There might be another option depending on the structure of your data. If you have a small number of unique rows in your data (relative to 10 million) then you can speed up the evaluation of the likelihood function. The year + month + day might make this impossible though. Do you know how many unique rows you have?
How big are the person_id and firm_id random effects vectors?
Thanks! I’ll try the non-centered mode to see if any speed gain. And definitely need to get some Stan code training…
Thanks! Indeed, I’ve been testing the model with like, 1%, or 2% random sample of individuals, which still get me 100,000 or 200,000 observations. We actually only have two years, slightly over 1000 firms, but >20,000 people and then daily observation…that’s how the data just blow up…I also tried to build the model from the most parsimonious (intercept only) to this full model. With 3000 iterations, convergence and effective size and estimates all seem good. I’d love to know more about autoregressive/random walk/splines/Gaussian processes, can you point me to more resources? Thanks!
Almost 4 million unique rows. Does this sound small? Thanks!
If I understand you correctly, we have over 20,000 person_id, and slightly over 1000 firm_id, so that’s the size of random effect vectors. I mean each id is associated with one random effect estimate, right? Sorry I’m new to these models!
Unfortunately that’s far too many to get any meaningful speedup with what I had in mind. The idea was that the sum
\sum_{i=1}^N \log\bigg( \frac{1}{1 + e^{-(X\beta)_i}}\bigg)
shows up in the evaluation of the log likelihood, or something close to it. If N is really large, but there are relatively few unique terms in the sum, then you can speed up the evaluation of that sum.
For Gaussian processes, stuff by Aki is great: Bayesian workflow book - Birthdays has work specifically on time trends on several scales while Gaussian process demonstration with Stan is a nice intro.
For others I don’t have a great recommendation and you’ll have to Google yourself :-)
brms has some support for splines (s), gaussian processes (gp) and auto-regressive models (ar, arma).
Best of luck!
How many data points do you have for a typical person_id. Obviously the mean is about 10M/20K, but what’s the median, max, and min?
Thanks. Sorry I was caught up in another project.
So data points for person_id:
min = 12, max = 1250, median = 440
and yes mean is about 550.
An update is, I’m running it on a cluster computer, but it seems no matter how many cores or memories I request, they are not very helpful with the computing speed.
The parallelization built into brms does not scale well to models with large numbers of parameters (you’ve got over 20K parameters). If you’re up to the challenge, it is possible to delve into the raw Stan code to rewrite the parallelization in reduce_sum to slice over the parameter vector rather than over the data (@paul.buerkner correct me if I’m wrong and brms already does this), which might enable better scaling to large numbers of cores. But even this won’t necessarily bring you a big speedup, and it might be a lot of work to implement.
Also, it is very probable that you’re best off using centered parameterizations for random effect levels with thousands of observations. On the other hand, it might be preferable to use non-centered parameterizations for levels with tens of observations. Again, if you want to dive into the Stan code, you could use the centered parameterization for some of the levels and the non-centered parameterization for others. But again, this might represent a big investment of time/work with no guarantee that it will yield meaningful improvement.
Thanks! This is what I was expecting. Unfortunately I don’t have that much time to spend on this, though I’d love. Right now I’m testing with small randomly sampled subsets of the data (100k observations each), and hopefully with a number of them we’ll be able to draw some conclusions. The annoying thing is although linear mixed model or generalized linear mixed model could work reasonably fast with the data, they don’t converge well. But I guess with this amount of data, the results should be not too much different?
Sounds good. Do think carefully about whether random samples should be constructed by leaving out rows or leaving out entire levels of the random effects. Also, it might be useful to see what you can achieve with variational inference on the full model using brms. If you are able to subset the dataset, do exact inference, and recover posterior estimates that are very similar to what you get using variational inference on the full model, then I think you’d have a solid argument for using the variational posterior directly.
Thank you! This sounds promising, though I need to read into variational inference. Not sure about leaving out entire levels of the random effects–we kind of want to compare which level is more important.
1 Like
I don’t think it’s possible currently.
1. 10M observations is a big deal even for a non-bayesian regression. One has to carefully design the algorithm to utilize parallelization.
2. Don’t forget that Bayesian estimation use samples to do inference. If there are too many parameters, dimension of parameter space will be very high and sample efficiency may be bad.
3. I doubt the necessity of Bayesian regressions if you have so many observations. Due to the asymptotic consistency, you should get similar results for Bayesian and ML estimations.
|
In mathematics, the Hessian matrix (or simply the Hessian) is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of many variables. Hessian matrices are often used in optimization problems, for instance within the Newton-Raphson method.
$$\mathbf{H} f=\left[ \begin{array}{cccc}{\frac{\partial^{2} f}{\partial x^{2}}} & {\frac{\partial^{2} f}{\partial x \partial y}} & {\frac{\partial^{2} f}{\partial x \partial z}} & {\cdots} \\ {\frac{\partial^{2} f}{\partial y \partial x}} & {\frac{\partial^{2} f}{\partial y^{2}}} & {\frac{\partial^{2} f}{\partial y \partial z}} & {\cdots} \\ {\frac{\partial^{2} f}{\partial z \partial x}} & {\frac{\partial^{2} f}{\partial z \partial y}} & {\frac{\partial^{2} f}{\partial z^{2}}} & {\cdots} \\ {\vdots} & {\vdots} & {\vdots} & {\ddots}\end{array}\right]$$
## Example 1: Computing a Hessian
Problem: Compute the Hessian of $f(x, y)=x^{3}-2 x y-y^{6}$.
Solution:
First compute both partial derivatives:
$$f_{x}(x, y)=\frac{\partial}{\partial x}\left(x^{3}-2 x y-y^{6}\right)=3 x^{2}-2 y$$
$$f_{y}(x, y)=\frac{\partial}{\partial y}\left(x^{3}-2 x y-y^{6}\right)=-2 x-6 y^{5}$$
With these, we compute all four second partial derivatives:
$$f_{x x}(x, y)=\frac{\partial}{\partial x}\left(3 x^{2}-2 y\right)=6 x$$ $${f_{x y}(x, y)=\frac{\partial}{\partial y}\left(3 x^{2}-2 y\right)=-2}$$ $${f_{y x}(x, y)=\frac{\partial}{\partial x}\left(-2 x-6 y^{5}\right)=-2}$$ $$f_{y y}(x, y)=\frac{\partial}{\partial y}\left(-2 x-6 y^{5}\right)=-30 y^{4}$$
The Hessian matrix in this case is a $2\times 2$ matrix with these functions as entries:
$$\mathbf{H} f(x, y)=\left[ \begin{array}{cc}{f_{x x}(x, y)} & {f_{x y}(x, y)} \\ {f_{y x}(x, y)} & {f_{y y}(x, y)}\end{array}\right]=\left[ \begin{array}{cc}{6 x} & {-2} \\ {-2} & {-30 y^{4}}\end{array}\right]$$
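If you want to double-check a hand computation like this, a computer algebra system does it in a couple of lines. Here is a sketch using SymPy (SymPy is an assumption on my part, not something the text requires):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**3 - 2*x*y - y**6
    H = sp.hessian(f, (x, y))   # matrix of all second partial derivatives
    print(H)                    # Matrix([[6*x, -2], [-2, -30*y**4]])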
## Example 2
Problem: Consider the function $f(x)=x^{\top} A x+b^{\top} x+c$, where $A$ is an $n \times n$ matrix, $b$ is a vector of length $n$, and $c$ is a constant.
1. Determine the gradient of $f$: $\nabla f(x)$.
2. Determine the Hessian of $f$: $H_{f}(x)$.
Solution:
1. Compute the gradient $\nabla f(x)$:
$$\begin{aligned} \nabla f(x)&=\overbrace{\frac{\partial x^{T}}{\partial x}\cdot (Ax)+x^{T}\cdot \frac{\partial (Ax)}{\partial x}}^{\text{product rule}}+\frac{\partial b^Tx}{\partial x}+\frac{\partial c}{\partial x}\\ &= Ax + x^{T}\cdot A+b \\ &= Ax + x\cdot A^{T} + b \\ &= (A+A^{T})x + b \end{aligned}$$
2. Compute the Hessian $H_{f}(x)$:
$$H_{f}(x) = \frac{\partial \nabla f(x)}{\partial x} = A + A^{T}$$
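The quadratic-form result can likewise be sanity-checked numerically, for example with a central finite-difference Hessian of a random instance (again just a sketch; the matrix, vector, and step size are arbitrary choices of mine):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A = rng.normal(size=(n, n))
    b = rng.normal(size=n)
    c = 1.7
    f = lambda x: x @ A @ x + b @ x + c

    x0 = rng.normal(size=n)
    h = 1e-4
    I = np.eye(n)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # central difference approximation of d^2 f / dx_i dx_j
            H[i, j] = (f(x0 + h*I[i] + h*I[j]) - f(x0 + h*I[i] - h*I[j])
                       - f(x0 - h*I[i] + h*I[j]) + f(x0 - h*I[i] - h*I[j])) / (4 * h**2)

    print(np.allclose(H, A + A.T, atol=1e-4))   # True: the Hessian is A + A^T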
|
Solving eigenvalue problems in Python raises a few practical points.
Trying to find eigenvalues by directly locating zeros of the determinant is not a robust approach; you need techniques from numerical linear algebra. A basic call with NumPy looks like this (someMatrixArray is whatever matrix you are working with):

    from numpy.linalg import eig as eigenValuesAndVectors

    A = someMatrixArray
    solution = eigenValuesAndVectors(A)
    eigenValues = solution[0]
    eigenVectors = solution[1]

after which the eigenvalues can be sorted as desired (e.g. from the smallest to the biggest real part, or vice versa). For a generalized problem, something like eig(dot(inv(B), A)) from numpy.linalg turns out to be very unstable because it involves an explicit inversion; it is better to pass both matrices separately and let eig (in scipy.linalg) choose the best algorithm to solve the problem.
For large, sparse problems, SLEPc for Python (slepc4py) provides convenient access to SLEPc, which implements algorithms and tools for the numerical solution of large, sparse eigenvalue problems on parallel computers (this is the Python API; a C++ user would work with the corresponding SLEPc interface directly). Its EPS component, the Eigenvalue Problem Solver, provides all the functionality necessary to define and solve an eigenproblem, in either standard or generalized form, with real or complex arithmetic. SciPy's ARPACK interface can likewise solve standard eigenvalue problems of the form $A\mathbf{x} = \lambda\mathbf{x}$; its shift-invert mode transforms the eigenvalue problem into an equivalent problem with different eigenvalues so as to target the part of the spectrum of interest (for example, to find eigenvalues near zero one chooses sigma = 0).
Eigenvalue problems also arise constantly in applications: the time-independent Schrödinger equation is an eigenvalue problem, and in physics eigenvalues are usually related to vibrations (drums, bridges, and skyscrapers swing at certain characteristic frequencies, found from the characteristic equation). Solving eigenvalue problems is discussed in most linear algebra courses; a standard classroom exercise is: find the eigenvalues and eigenvectors of the matrix $A = \begin{pmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4 \end{pmatrix}$.
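As a minimal illustration of the "pass both matrices separately" advice, here is a sketch using scipy.linalg.eig on the generalized problem $A x = \lambda B x$; the matrices are just the classroom example above together with the identity, so nothing here is specific to any particular application:

    import numpy as np
    from scipy.linalg import eig

    A = np.array([[1., -3., 3.],
                  [3., -5., 3.],
                  [6., -6., 4.]])
    B = np.eye(3)                 # with B = I this reduces to the ordinary problem

    w, v = eig(A, B)              # pass A and B separately; no explicit inv(B) @ A
    order = np.argsort(w.real)    # e.g. sort eigenvalues by real part
    w, v = w[order], v[:, order]
    print(np.round(w.real, 6))    # [-2. -2.  4.] for this classic 3x3 example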
|
Waterfall diagrams and relative odds
Imagine a waterfall with two streams of water at the top, a red stream and a blue stream. These streams separately approach the top of the waterfall, with some of the water from both streams being diverted along the way, and the remaining water falling into a shared pool below.
Suppose that:
• At the top of the waterfall, 20 gallons/second of red water are flowing down, and 80 gallons/second of blue water are coming down.
• 90% of the red water makes it to the bottom.
• 30% of the blue water makes it to the bottom.
Of the purplish water that makes it to the bottom of the pool, how much was originally from the red stream and how much was originally from the blue stream?
if-after(Frequency diagrams: A first look at Bayes): This is structurally identical to the Diseasitis problem from before:
• 20% of the patients in the screening population start out with Diseasitis.
• Among patients with Diseasitis, 90% turn the tongue depressor black.
• 30% of the patients without Diseasitis will also turn the tongue depressor black.
!if-after(Frequency diagrams: A first look at Bayes): This is structurally similar to the following problem, such as medical students might encounter:
You are a nurse screening 100 patients for Diseasitis, using a tongue depressor which usually turns black for patients who have the sickness.
• 20% of the patients in the screening population start out with Diseasitis.
• Among patients with Diseasitis, 90% turn the tongue depressor black (true positives).
• However, 30% of the patients without Diseasitis will also turn the tongue depressor black (false positives).
What is the chance that a patient with a blackened tongue depressor has Diseasitis?
The 20% of sick patients are analogous to the 20 gallons/second of red water; the 80% of healthy patients are analogous to the 80 gallons/second of blue water:
The 90% of the sick patients turning the tongue depressor black is analogous to 90% of the red water making it to the bottom of the waterfall. 30% of the healthy patients turning the tongue depressor black is analogous to 30% of the blue water making it to the bottom pool.
Therefore, the question “what portion of water in the final pool came from the red stream?” has the same answer as the question “what portion of patients that turn the tongue depressor black are sick with Diseasitis?”
if-after(Frequency diagrams: A first look at Bayes): Now for the faster way of answering that question.
We start with 4 times as much blue water as red water at the top of the waterfall.
Then each molecule of red water is 90% likely to make it to the shared pool, and each molecule of blue water is 30% likely to make it to the pool. (90% of red water and 30% of blue water make it to the bottom.) So each molecule of red water is 3 times as likely (0.90 / 0.30 = 3) as a molecule of blue water to make it to the bottom.
So we multiply prior proportions of $$1 : 4$$ for red vs. blue by relative likelihoods of $$3 : 1$$ and end up with final proportions of $$(1 \cdot 3) : (4 \cdot 1) = 3 : 4$$, meaning that the bottom pool has 3 parts of red water to 4 parts of blue water.
To convert these relative proportions into an absolute probability that a random water molecule at the bottom is red, we calculate 3 / (3 + 4) to see that 3/7ths (roughly 43%) of the water in the shared pool came from the red stream.
This proportion is the same as the 18 : 24 sick patients with positive results, versus healthy patients with positive test results, that we would get by thinking about 100 patients.
That is, to solve the Diseasitis problem in your head, you could convert this word problem:
20% of the patients in a screening population have Diseasitis. 90% of the patients with Diseasitis turn the tongue depressor black, and 30% of the patients without Diseasitis turn the tongue depressor black. Given that a patient turned their tongue depressor black, what is the probability that they have Diseasitis?
Into this calculation:
Okay, so the initial odds are (20% : 80%) = (1 : 4), and the likelihoods are (90% : 30%) = (3 : 1). Multiplying those ratios gives final odds of (3 : 4), which converts to a probability of 3/7ths.
(You might not be able to convert 3/7 to 43% in your head, but you might be able to eyeball that it was a chunk less than 50%.)
You can try doing a similar calculation for this problem:
• 90% of widgets are good and 10% are bad.
• 12% of bad widgets emit sparks.
• Only 4% of good widgets emit sparks.
What percentage of sparking widgets are bad? If you are sufficiently comfortable with the setup, try doing this problem entirely in your head.
(You might try visualizing a waterfall with good and bad widgets at the top, and only sparking widgets making it to the bottom pool.)
todo: Have a picture of a waterfall here, with no numbers, but with the parts labeled, that can be expanded if the user wants to expand it.
• There’s (1 : 9) bad vs. good widgets.
• Bad vs. good widgets have a (12 : 4) relative likelihood to spark.
• This simplifies to (1 : 9) x (3 : 1) = (3 : 9) = (1 : 3), 1 bad sparking widget for every 3 good sparking widgets.
• Which converts to a probability of 1/(1+3) = 1/4 = 25%; that is, 25% of sparking widgets are bad.
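If you like checking such conversions mechanically, the whole odds-form update fits in a few lines of Python (a sketch, not part of the original text; the function name is made up):

    from fractions import Fraction

    def posterior_probability(prior_odds, likelihood_ratio):
        # multiply prior odds by the likelihood ratio, then convert odds to a probability
        a = Fraction(prior_odds[0] * likelihood_ratio[0])
        b = Fraction(prior_odds[1] * likelihood_ratio[1])
        return a / (a + b)

    print(posterior_probability((1, 4), (3, 1)))    # 3/7  (Diseasitis)
    print(posterior_probability((1, 9), (12, 4)))   # 1/4  (sparking widgets)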
Seeing sparks didn’t make us “believe the widget is bad”; the probability only went to 25%, which is less than 50/50. But this doesn’t mean we say, “I still believe this widget is good!” and toss out the evidence and ignore it. A bad widget is relatively more likely to emit sparks, and therefore seeing this evidence should cause us to think it relatively more likely that the widget is a bad one, even if the probability hasn’t yet gone over 50%. We increase our probability from 10% to 25%.
if-before(Introduction to Bayes’ rule: Odds form): Waterfalls are one way of visualizing the “odds form” of “Bayes’ rule”, which states that the prior odds times the likelihood ratio equals the posterior odds. In turn, this rule can be seen as formalizing the notion of “the strength of evidence” or “how much a piece of evidence should make us update our beliefs”. We’ll take a look at this more general form next.
!if-before(Introduction to Bayes’ rule: Odds form): Waterfalls are one way of visualizing the odds form of Bayes’ rule, which states that the prior odds times the likelihood ratio equals the posterior odds.
Parents:
• Title says “Relative Odds” and then the article uses “relative likelihood” to describe the same concept. That’s confusing.
• I think isomorphic is too advanced vocabulary to be assumed for Math 1. Would this be a good opportunity to use a popover with the definition?
• Agree. Could be replaced with “similar” or “similar in form”. The sentence could also be change to say something like “This problem is just like . . .”
• Do we want citation needed norms on Arbital?
(At a higher level, do we want readers to be able to flag portions of a page with a variety of labels, such as, unclear, appears to be factually incorrect, contradictory, etc?)
• This text is out of sync with the graphic—the pic actually shows black tongue depressors.
• I liked this explanation. In particular, the obvious hard way vs sneaky easy way contrast caught my attention.
Perhaps that could even serve as an introductory motivating sentence? (e.g. “In this post we’ll explore an obvious hard way and also a sneaky easy way to do calculations using Bayes’s Rule.”)
• Wording seems less clear than it could be here; what does it mean to say it “produces better problem-solving”? What about something like:
. . . that participants arrive at the correct answer more often when the problem is presented in terms of frequencies, 20 patients, rather than probabilities, 20% of patients.”
• This sentence should be written above the previous paragraph: 18/24 is 3/4, not 3/7.
• It should be clarified that “the bottom” here refers to the pool.
• I think it’d be clearer to have two different headers. The way it’s set up right now, I didn’t initially see that this one article is talking about two different (but related) approaches.
• Ah, insightful! I hadn’t seen forms of Bayes’ Rule other than the probability form before today, and this is very helpful (well, perhaps I had seen them but it hasn’t “hit me” until now).
I like that this is emphasized. To further emphasize, I think a formula should be added as a block level element underneath.
• 90% of the red water makes it to the shared pool. 30% of the blue water makes it to the shared pool.
• Question of interest.
• How did it convert to 3/7th is unclear.
• I don’t understand how the waterfall concept helps illustrate the “odds form”: the amount of each type of water reaching the pool is still expressed as a probability rather than jointly being expressed as the likelihood ratio. The fact that these likelihoods don’t matter—only their ratio—was the the critical conceptual blockage for me.
• “Likely” refers to probability, and yet the point of this essay is to explain probability. Therefore, the use of “likely” is, in a sense, circular reasoning. After all, what does “likely” mean? It’s not explained here. It suggests an outcome frequency of sorts and so this statement and others like it is an attempt to arrive at an outcome frequency (equivalent to the proportions of red and blue water that make it down through) by referring to another outcome frequency; thus the circularity.
Better to stick with the proportions themselves by explaining that, however much red water makes it down through, there will be three times as much of it as there is blue water that makes it down through. Say that some fraction, f, of the blue water molecules makes it down through; then for every 100 molecules of water, f x 80 blue molecules make it down through and 3f x 20 red molecules make it down through, making for proportions of 60f red to 80f blue. Scaling down those proportions by dividing both by f, we get 60:80, which can be further scaled down to 3:4.
Note that the factor of 3, i.e. the “likelihood ratio” (by which the initial proportions of 20:80 are multiplied) is explicit in the previous paragraph. (It’s in the statement, “3f x 20 red molecules make it down through”.) Putting it another way, the previous paragraph makes it clear that multiplying by 3 will give the same final proportions (“posterior odds”) as will, in taking a frequency approach, multiplying 20 by 0.9 and 80 by 0.3, since the latter proportions can be scaled by dividing each by 0.3: (0.9/0.3 x 20):(0.3/0.3 x 80) = (3 x 20):(1 x 80) = 60:80 = 3:4.
• has to be 18:42. 42 is the sum of 18 and 24 ( these are the proportions of water).
• I’m failing to grasp how the probability conversion works and so some further explanation may be needed
• The inverse of multiplication is division. To the mathematically steadfast this is completely obvious, but I wager this is exactly the point where most non-mathematically inclined people will become confused and give up, or will simply read on without absorbing the whole message. Maybe spell this mathematical step out more clearly?
• I can follow the calculation of diseasitis—that’s standard math that I learned in school. What I have a problem following is how you get to the “absolute probability” of 3 / (3 + 4). I think the “3 + 4” are the 3 parts red water and 4 parts blue water, but where does the other 3 come from? Wait … is that again the 3 parts red? So 3 parts of 7 parts in all? Hm … I think I have solved my question ;-)
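• For anyone else who got stuck on the same 3/7 step, here is a tiny Python sketch of the arithmetic being discussed (the 20%/80% prior and the 90%/30% likelihoods are the example’s numbers; the variable names are just my notation):

```python
# A tiny check of the odds-form arithmetic discussed above.
prior_odds = (20, 80)                      # sick : healthy (the 20% / 80% prior)
likelihood_ratio = 0.9 / 0.3               # = 3; only the ratio of 90% and 30% matters
posterior_odds = (prior_odds[0] * likelihood_ratio, prior_odds[1])   # 60 : 80, i.e. 3 : 4
p_sick = posterior_odds[0] / (posterior_odds[0] + posterior_odds[1]) # 60 / 140 = 3/7
print(posterior_odds, p_sick)              # (60.0, 80) 0.4285714...
```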
|
# Relationship between gravity and density
So I have recently found that flat earthers believe gravity is not real and that everything goes down because of densities. Obviously, if this were true, things would float up since our air is not very dense, but I'll move on. These are a few questions I have that I can't find answered online, because everywhere I look is talking about specific gravity, which says whether or not something floats is completely based on density.
Is the reason an anvil floats on mercury purely due to it being so much less dense?
How does gravity affect this and what role does it play?
If we use the flat earthers' idea that it is only about density, then would the entire anvil sit on top?
Why is a large portion of the anvil below the surface?
Even if something is only, say, 0.01% less dense than the fluid, would it float?
I know there are a lot of questions, but this seemingly simple topic has me quite confused. Maybe I'm overthinking it. If someone could explain the math and physics behind this, maybe even with a force diagram of some kind to help me visualize it, I would really appreciate it!
• Aeon, specific gravity for liquids and solids is normally defined as the weight of an object divided by the weight of an equal volume of water. To get the density of the object in question, multiply the object's specific gravity by $1000 kg/m^3$. Mar 10, 2021 at 18:25
• @PM2Ring You are right Mar 10, 2021 at 19:41
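A minimal numerical sketch of the buoyancy balance behind these questions (Archimedes' principle); the densities below are approximate round numbers chosen for illustration, not measured values:

```python
# Floating equilibrium: weight = buoyant force
#   rho_object * V * g = rho_fluid * V_submerged * g
# so the submerged fraction is rho_object / rho_fluid, and g cancels out.

RHO_MERCURY = 13_600.0   # kg/m^3, roughly
RHO_IRON = 7_870.0       # kg/m^3, roughly (an iron/steel anvil)

def submerged_fraction(rho_object, rho_fluid):
    """Fraction of the object's volume below the fluid line once it floats."""
    if rho_object >= rho_fluid:
        return 1.0  # denser than the fluid: it sinks, so the whole volume is submerged
    return rho_object / rho_fluid

print(submerged_fraction(RHO_IRON, RHO_MERCURY))  # ~0.58: most of the anvil sits below the surface
print(submerged_fraction(999.9, 1000.0))          # ~0.9999: barely less dense still floats, almost fully under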
|
I'm trying to simulate multiple (2) Parrot AR.Drone quadrotors in Gazebo. I'm using the tum_simulator package, which in turn presents the same interface as the ardrone_autonomy driver for the real-world drones. I can spawn a single drone without error, and fly it around programmatically with roscpp. I can also spawn multiple drones without error by duplicating the launch code and surrounding it with <group ns="drone{0,1}"> tags. After manipulating some tf parameters, the drones spawn without error.
The simulator subscribes to properly namespaced topics, for the most part. For example, instead of /cmd_vel, there are now two topics /drone0/cmd_vel and /drone1/cmd_vel. However, topics to control takeoff and landing are hardcoded in a gazebo plugin as /ardrone/{takeoff,landing,reset}, and rxgraph shows that these topics are not duplicated between namespaces, they remain prefixed only by /ardrone.
|
# Strict-feedback Form
In control theory, dynamical systems are in strict-feedback form when they can be expressed as
$\begin{cases} \dot{\mathbf{x}} = f_0(\mathbf{x}) + g_0(\mathbf{x}) z_1\\ \dot{z}_1 = f_1(\mathbf{x},z_1) + g_1(\mathbf{x},z_1) z_2\\ \dot{z}_2 = f_2(\mathbf{x},z_1,z_2) + g_2(\mathbf{x},z_1,z_2) z_3\\ \vdots\\ \dot{z}_i = f_i(\mathbf{x},z_1, z_2, \ldots, z_{i-1}, z_i) + g_i(\mathbf{x},z_1, z_2, \ldots, z_{i-1}, z_i) z_{i+1} \quad \text{ for } 1 \leq i < k-1\\ \vdots\\ \dot{z}_{k-1} = f_{k-1}(\mathbf{x},z_1, z_2, \ldots, z_{k-1}) + g_{k-1}(\mathbf{x},z_1, z_2, \ldots, z_{k-1}) z_k\\ \dot{z}_k = f_k(\mathbf{x},z_1, z_2, \ldots, z_{k-1}, z_k) + g_k(\mathbf{x},z_1, z_2, \dots, z_{k-1}, z_k) u\end{cases}$
where
• $\mathbf{x} \in \mathbb{R}^n$ with $n \geq 1$,
• $z_1, z_2, \ldots, z_{k-1}, z_k$ are scalars,
• $u$ is a scalar input to the system,
• $f_0, f_1, \ldots, f_{k-1}, f_k$ vanish at the origin (i.e., $f_i(\mathbf{0}, 0, \ldots, 0) = 0$),
• $g_1, g_2, \ldots, g_{k-1}, g_k$ are nonzero over the domain of interest (i.e., $g_i(\mathbf{x}, z_1, \ldots, z_i) \neq 0$ for $1 \leq i \leq k$).
Here, strict feedback refers to the fact that the nonlinear functions $f_i$ and $g_i$ in the $\dot{z}_i$ equation only depend on the states $\mathbf{x}, z_1, \ldots, z_i$ that are fed back to that subsystem. That is, the system has a kind of lower triangular form.
|
# All Questions
43 views
### RSA algorithm in this doesnt (ever) give right decryption [duplicate]
Lets consider p=47 q=57 msg=3 n=p*q=2679 ф(n)=2576 now e=11, and inverse of e is 1171 modulo ф(n) y=msg^e=333(mod n) ///Encryption y^d=1131(mod n) ///Decryption but this is not original message ...
184 views
### In a group, is it hard to calculate the base $g$ given $g^a$ and $a$?
Discrete logarithm, that is: calculate $a$ given $g$ and $g^a$, is assumed to be a hard problem in some groups. Is it also hard to calculate $g$ given $g^a$ and $a$?
145 views
### Simple multiplication as an encryption method
There was a time when I wondered about multiplication as an encryption operation. That was when I was thinking in terms of modular multiplication. But how about based around simple multiplication. ...
96 views
### Parameters for elliptic curve prime192v3
I'm looking all over the internet for prime192v3's parameters. I think I may have found them here, but it doesn't say what variable each number matches to. Is there some central place where I can find ...
76 views
### OpenSSL encrypted text length
OpenSSL block ciphers return length of the text as output of the encryption (envelope_seal()), and If I have to send the length over network, I append the length ...
145 views
### Hash function as secure as one-time pad?
We know that the one-time pad is provably secure as a cipher to encrypt some data. Is there an algorithm which does the same just as a hash function? Can we get a provably secure hash function? Maybe ...
43 views
### Boneh-Boyen like signature scheme
Full Boneh-Boyen signature scheme defines a pair of secret keys (SK) $a \in \mathbb{Z}_n$ and $b \in \mathbb{Z}_n$ and a pair of public keys (PK) $A = G^a \in \mathbb{G}$ and $B = G^b \in \mathbb{G}$. ...
201 views
I have recently been playing with Chacha20-Poly1305 with libsodium, and all of the examples state the additional data portion of the tag is stored in plaintext when encrypted. But from what I can tell ...
212 views
### Is $f(x)\oplus x$ a one-way function?
Given that $f$ is a OWF and $|f(x)|=|x|$ for all $x$, is $g(x)=f(x)\oplus x$ necessarily also a OWF?
399 views
### Trivium example
I started learning cryptography from Understanding Cryptography book of Christof Paar and Jan Pelzl. Here is the problem I have solved but I want to be sure that it is correct. Assume the ...
139 views
I have a secret message which I want to encrypt such that any of several different keys can open it independently. The keys can't know about each other and it has to be able to work completely ...
94 views
### Why is knowing M not enough to break Blum Blum Shub?
In Blum Blum Shub, the generator is $x_{n+1}={x_n}^2 \mod M$ where $M=p \cdot q$, $p \in \mathbb P$, and $q \in \mathbb P$. Supposedly, knowing $p$ and $q$ is enough to break the system. But if I know ...
195 views
### Given $g^a, g^b, g^c, g^{1/b}$, is it hard to distinguish $e(g, g)^{abc}$ from a random value?
where $g$ is a group element in bilinear group $\mathbb{G}$. I understand it is very similar to the conventional DBDH problem, but $g^{1/b}$ is also known, possibly making it easier? Does anyone know ...
122 views
### Is there any advantage on encrypting the CMAC together with the message?
I'm reading a protocol specification where the procedure is to generate a CMAC, take the first 4 bytes of it, append this authentication tag to the message and then encrypt the message + CMAC together ...
64 views
### Stream cipher key length to send
I was being argued that stream cipher's key that is the length of the message must be sent to the destination for them to be able to decrypt the message. My point is don't you only have to send the ...
80 views
### Can you exchange a shared key without any hardness assumptions?
Imagine that P=NP, and one way functions don't exist. Can two people end up with a random shared key of arbitrary length, if every exchange is public? They have true RNGs, and know who they are ...
68 views
### compression with codebook and Cryptography
a question about the difference between code book based compression method and key based cryptography. As I know Crypto != Compression. The Shannon said Crypto is try to reduce the information of ...
187 views
### Sage Vs C++(with NTL) for implementing cryptosystems
Is Sage a better alternative to C++(with NTL), for programming that involves math objects like polynomial rings in cryptosystems? I hear that Sage is an open source alternative to Magma. I have used ...
89 views
### Noisiest RF band for random number generation
I've been looking into the difference between PRNGs and proper RNG techniques. One that I particularly like is the idea of tuning a radio to a certain frequency and bandwidth and just listening to the ...
97 views
### Generating cyclic group for Ciphertext-Policy Attribute-Based Encryption [closed]
I am doing Project under the topic CP-ABE.I need to generate a symmetric bilinear group Go of prime order p and with generator g...Then how to choose random elements from Zp....kindly anyone help ...
47 views
### What does K stand for in frequency analysis
It was from frequency analysis lecture and the professor mentioned but I cannot recall. The K value for English was .067, German was .076. The lowest K was Russian which was .056. When the frequency ...
372 views
### Does using modulo (%) affect quality of randomness?
I'm writing a small script that generates random non-signed decimal integers within a certain range of values. I'm using GNU od, with the following command: ...
171 views
### How fast would a polynomial time factoring algorithm compute?
I know factoring is the chief means of breaking RSA keys. I know an algorithm that runs in polynomial time would be able to break an RSA key pair "quickly". But how quickly is "quickly"? Note, I'm not ...
205 views
### Client-Server application how can data be encrypted so only clients can read it
Requirements We are building a system that connects multiple clients through a restful API. It allows the creation of groups of trusted clients. We need a way to store group shared data encrypted ...
104 views
### Counter Mode with a sequence of zeros bits plaintext, is it secure?
I had a quiz last week in computer security course. There was a confusing question that I am still looking for a good and clear answer. First, I know that counter mode with a good block cipher is ...
48 views
### Automatic generation of secure passwords with the least inconvenience for a user
I'm working on a web site for a private company that should allow them to upload files, which will be later retrieved by their affiliates. The site will be available from a public Internet using ...
36 views
### ssh-keygen bit why not use 4095 or 4097? [duplicate]
All examples of using ssh-keygen I have seen has always been some power of 2 (2048, 4096). ssh-keygen -t rsa -b 4096 Why use those numbers? Why not use 3333? ...
70 views
### Weak Boneh-Boyen Signature in Composite order group
In the paper Ring Signatures of Sub-linear Size Without Random Oracles, authors have remarked that the scheme is instantiable in composite order setting, too. I am attaching the following reference ...
43 views
### Using Permutation polynomial to compute a MAC
Is the following MAC secure? For a block $y_i$ in a file, we defined a MAC as follows: $Mac_i:PRF(k,i) \cdot g^{y_i \cdot r_i} \bmod p$. Where $p$ is a prime number, $g \in \mathbb{G}$,$PRF(k,i)$ is ...
119 views
### PRF based on the GGM construction
What's the differences between the concepts “pseudorandom generator” and “pseudorandom number generator”? In fact, I want to implement a pseudorandom function based on GGM's construction at ...
134 views
### Python implementation of a blind signature scheme which doesn't involve RSA
RSA seems a bit creepy after the Snowden revelations and i'm looking for a simple python based blind signature library to fiddle around with. So far i've been unable to find anything. What am i ...
227 views
### Why does the DES crypto algorithm NOT use 2 rounds?
Now, if we were to go round by round, you could give a distinct reason for not using a single round since after just one round, the right half of the text comes directly, as-is, to form the left half ...
311 views
### Generation of a cyclic group of prime order
I am trying to implement a cyclic group generator in Java, but I am running into some issues. In many cryptosystems, the following phrase is expressed during the key generation stage. Let G be ...
23 views
### Kryptos K2 keyword derivation [duplicate]
So I've been doing some reading on Kryptos, and to be honest the keyword for K2, ABSCISSA, has a pretty weak derivation. (The method using the eee's). Isn't there a better way to come to that? I dont ...
66 views
### Non-repudiation and digital signature of a dishonest participant
Let's assume a dishonest Alice who sends, encrypts & digital signs a message to Bob. Bob stores the decrypted message and the digital signature in a database. However Alice is a bad girl and ...
133 views
### XOR cipher with three different ciphertexts and repeated key, key length known. How do I find the plaintexts?
Let us say we have three different plaintexts (all alphabets, A-Z): $x$, $y$ and $z$, each of length $21$. Let the key, $a$, be also of length $21$. Now, what we have is $x \oplus a$, $y \oplus a$ ...
55 views
### Why is it a quadratic equation?
In Groth-Sahai NIZK proof system, they have defined something called Quadratic Equation in $\mathbb{Z}_n$ as shown below. But, my idea of quadratic equation was a second order polynomial equation in a ...
139 views
### Parallel Pollard's Rho: Number of distinguished points
When using the parallel version of Pollard's Rho algorithm for discrete logs, each processor performs its own random walk to find distinguished points, and reports the starting point and the ...
70 views
### One-way function definition
I cannot understand why a one-way function $f$ is defined in this way $\text{Pr}(f(A(f(x))) = f(x)) < \frac{1}{p(n)}$ and not $\text{Pr}(A(f(x)) = x) < \frac{1}{p(n)}$ where $A$ is a ...
### Proof that $gcd(e, \lambda(N)) = 1 \hspace{1mm} \Longleftrightarrow \hspace{1mm} gcd(e, \varphi(N)) = 1$
What is the proof for the fact that $gcd(e, \lambda(N)) = 1 \hspace{1mm} \Longleftrightarrow \hspace{1mm} gcd(e, \varphi(N)) = 1$ Where: $N = P * Q$ where $P$ and $Q$ are both primes. $\varphi(N)$ ...
|
# A problem on generalization of a solution
Consider the following question:
Two cars A and B start simultaneously from two different cities P and Q respectively and move back and forth between the cities. (As soon as car A reaches city Q it turns and starts for city P, and as soon as it reaches city P it leaves for city Q. Similarly for car B.) The speeds of the cars A and B are in the ratio of $2:1$. Find the number of distinct meeting points at which cars A and B can meet.
Now, I know that they will meet at two points. I solved this by dividing the distance between the two cities into 3 segments and then manually finding their common points, using the fact that B will travel $1$ unit for every $2$ units A travels.
However, what if the speed ratio was something like $7:9$ or something even more intractable. Manually finding their meeting points would be too lengthy and inelegant. How do I generalize the method to find common points to unwieldy ratios?
I suspect if the ratio is m:n , that all the possible meetings will happen between the 1st trip and the lcm(m,n)th trip, so that for the ratio 9:7 , all the possible meetings will happen every 63 trips. Also interesting, if the ratio was irrational, I suspect the two will meet infinitely-often. Notice if A has completed 7 trips and B is 9/7 times faster, then B will have completed 9/7(7)=9 trips. – BFD Oct 17 '13 at 6:31
@BFD: We’re not actually counting the times that they meet, but rather the number of points at which they can meet. However, it’s true that if the ratio is irrational, this set is infinite; in fact it’s dense in $PQ$. – Brian M. Scott Oct 17 '13 at 7:45
@Brian, I agree, but what I meant is that (I think) my post suggests there is a "periodicity"; so that, say for m:n =9/7, after 9 trips, the scenario meeting-wise will be the same as it is after 0 trips. – BFD Oct 17 '13 at 7:48
@BFD: Yes, that part is fine (except that you mean $63$ trips, as in the original comment); that’s why there are only finitely many possible meeting-points when the ratio is rational. But no matter what the ratio is, they actually meet infinitely many times iff they travel forever! – Brian M. Scott Oct 17 '13 at 7:52
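For ratios like $7:9$, a brute-force check is easy to code up. The sketch below (my own throwaway names, only intended for small integer speeds) samples one joint period finely, records the positions where the cars coincide, and merges near-duplicates; the number of merged clusters is the number of distinct meeting points. Meetings that barely "touch" are caught because the gap between the cars can only change by a bounded amount per time step.

```python
# Brute-force count of distinct meeting points for integer speeds va : vb.
# Car A starts at city P (position 0), car B at city Q (position 1); the road has length 1.
from math import gcd

def tri(t, speed):
    """Distance from P of a car that starts at P and bounces back and forth at this speed."""
    x = (speed * t) % 2.0                 # a round trip has length 2
    return x if x <= 1.0 else 2.0 - x

def distinct_meeting_points(va, vb, samples=1_000_000):
    period = 2.0 / gcd(va, vb)            # after this time both cars are back at their starts
    dt = period / samples
    close = (va + vb) * dt                # the gap can change by at most this much per step
    hits = []
    for i in range(samples + 1):
        t = i * dt
        a, b = tri(t, va), 1.0 - tri(t, vb)   # car B starts from the other city, Q
        if abs(a - b) <= close:               # the cars are (numerically) at the same place
            hits.append(a)
    hits.sort()
    clusters = []                         # merge samples that belong to the same meeting point
    for p in hits:
        if clusters and p - clusters[-1][-1] <= 50 * close:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return [round(sum(c) / len(c), 4) for c in clusters]

print(distinct_meeting_points(2, 1))      # the 2:1 case from the question: two points
print(distinct_meeting_points(9, 7))      # a messier ratio
```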
|
# UCLA Statistics: Analyzing Thesis/Dissertation Lengths
September 29, 2010
(This article was first published on Byte Mining » R, and kindly contributed to R-bloggers)
As I am working on my dissertation and piecing together a mess of notes, code and output, I am wondering to myself “how long is this thing supposed to be?” I am definitely not into this to win the prize for longest dissertation. I just want to say my piece, make my point and move on. I’ve heard that the shortest dissertation in my program was 40 pages (not true). I heard from someone at another school that their dissertation was over 300 pages. I am not holding myself to a strict limit, but I wanted a rough guideline. As a disclaimer, this blog post is more “fun” than “business.” This was just an analysis that I was interested in and felt that it was worth sharing since it combined Python, web scraping, R and ggplot2. It is not meant to be a thorough analysis of dissertation lengths or academic quality of the Department.
The UCLA Department of Statistics publishes most of its M.S. theses and Ph.D. dissertations on a website. It is not complete, especially for the earlier years, but it is a good enough population for my use.
Using this web page, I was able to extract information about each thesis submitted for publishing on this website: advisor name, work title, year completed, and level (M.S. or Ph.D.). Student name was removed for some anonymity, although anyone can easily perform this analysis manually. The scraping part was easy enough but was only half the battle. I also had to somehow extract the length of each manuscript. To do this, I visited the directory for each manuscript (organized by paper ID number), downloaded it to a temporary directory, and used the Python library pyPdf to extract the number of pages in the document. I must note that the number of pages returned by pyPdf is the number of raw pages in the PDF document, not the number of pages of writing excluding references, appendices, figures, etc. I also manually corrected inconsistencies, such as name formatting, use of nicknames, and misspellings. For example, “Thomas Ferguson” was standardized to “Thomas S. Ferguson.” In the event that two advisor names were given, only the full-time Statistics professor’s name was retained. If both names were full-time Statistics faculty members, only the first one was chosen. Sorry about that.
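In case it helps anyone reproduce this, the page-count step looks roughly like the following sketch (it assumes pyPdf’s classic PdfFileReader API, and the file path is made up):

```python
# Rough sketch of the page-count extraction described above; the path is hypothetical.
import pyPdf

def page_count(path):
    # Raw number of pages in the PDF, including title pages, references, appendices, etc.
    with open(path, "rb") as fh:
        return pyPdf.PdfFileReader(fh).getNumPages()

print(page_count("/tmp/thesis_0123.pdf"))
```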
Naturally, I wanted to use a plot to see the distribution of thesis and dissertation lengths, but the one produced by base graphics was terrible:
This hideous graphic gives rise to some questions…
• What does the bar less than 50 represent? Just length less than 50? (sarcasm)
• What does the bar greater than 200 represent? Just length greater than 200? (sarcasm)
• And how do I represent the obvious difference in length of manuscript by degree objective?
Although I respect the field of visualization, I am not huge on it, and I am usually content with the basics. This is one case where I had to step up my viz a notch. I had not used ggplot2 so there was no better time to learn. I will not attempt to explain what I am doing with the graphics, as there are already plenty of tutorials and write-ups from experts on the matter. Just look and be amazed…or just look. I wanted to give ggplot2 a spin, so I whipped this up as an example.
library(ggplot2) qplot(Pages, data=these, main="Thesis/Dissertation Lengths\nUCLA Department of Statistics") + geom_histogram(aes(fill=Level))
Wow! Now it is obvious what each bar represents, and we can easily see the difference in lengths of M.S. theses and Ph.D. dissertations. We can easily see that M.S. theses were typically around 50 pages, and Ph.D. dissertations were typically about 110 pages with a long right tail. We can also see what tick labels represent, and the mesh grid gives a visual clue as to what the intermediate tick labels would be. We also see that there were two M.S. theses that were unusually long, at 135 and 140 pages respectively. Their titles were Time Series Analysis of Air Pollution in the City of Bakersfield, California and Analysis of Interstate Highway 5 Hourly Traffic via Functional Linear Models, respectively. If you are from California, you can imagine why.
We can see that there is not much variance among lengths of Master's theses and much higher variance for Ph.D. dissertations. I hypothesized that there was an advisor and year effect. Based on hearsay, I had an idea of which advisors yielded the longest and shortest dissertations. My hunch does in fact appear to be true, but I am withholding those results. What I will say is that there does not seem to be a “pattern.” It does not seem that the more accomplished professors yield longer (or shorter) dissertations. It also does not seem that certain fields, like Vision or Genetics, yield longer or shorter dissertations as a group.
The following is a boxplot of the length of Ph.D. dissertations for your entertainment.
But how has the length of dissertations changed over time? Or has it not?
qplot(Year, Pages, data=phd, main="Dissertation Lengths over Time\nUCLA Department of Statistics") + geom_smooth()
This plot is beautiful, and interesting. It seems to suggest that overall, the mean length of a dissertation fell sharply between 1996 and 2000. However, there is a sample size effect here and there is not enough information to claim that there was in fact a drop during this period. If there in fact was a decrease in dissertation length, there could be several reasons. The Department became independent from the Department of Mathematics in 1998. Perhaps the academic climate was changing and dissertations were becoming shorter. Or, it could be that the Department of Mathematics historically had longer dissertations, and once the Department split off, its requirements diverged from those of Mathematics. I bold the word mean because a better statistic here is the median since dissertation lengths do not follow a normal distribution; rather, they follow a right-skewed distribution. Still though, using the median does not account for the sample size effect.
From 2000 to 2006, dissertation lengths seemed to have leveled off. Then from 2006 to 2010, it appears that dissertation lengths increased. Not so fast though! Note that the number of dissertations filed from 2006-2010 is much larger than those submitted in other equivalent length periods of time — this bump is likely due to the number of observations. Based on my understanding of Department history, I believe that there probably was a decrease through the early years of the program as the Department established its own separate expectations. This may hold practically, but does it hold statistically?
geom_smooth() adds a curve to the plot representing a moving average over the data. It is not a trend line! geom_smooth() also adds some type of margin of error around this smoothing line (I admit that I have not looked deeply into the internals of ggplot2). If we interpret the margin of error loosely as a confidence interval, we can make a statistical conclusion from this graph. Recall that a basic one-sample confidence interval with population standard deviation known is
$\bar{x} \pm z^* \frac{\sigma}{\sqrt{n}}$
If we are a given a value $\mu_0$ and it falls within the confidence interval, we must conclude that the true parameter $\mu$ could possibly be $\mu_0$. Take $\mu_0=130$ pages. If we take the shaded region to be a confidence interval around $\mu$ then we see that it is possible that $\mu = 130$ pages throughout the time period I studied. To make a long story short, it is possible that the length of dissertations has remained constant over time.
So what is the purpose of this analysis? There is no purpose. It was just my curiosity, and thought that some of the coding was worth sharing.
With that said, after this extensive analysis, my goal is 110-115 pages.
1. How can I add the line $y = \mu_0 = 130$ to my time series plot?
2. What, in fact, does the shaded area represent (if it is not a margin of error forming a poor man’s confidence interval)?
3. Is it possible to change the measurement function in geom_smooth() from mean to median (or something else)?
4. Given 1-3 above, how can I also add jitter and alpha blending to the points? (I tried to do it but encountered errors)
5. Is there a better way to visualize this time series, given the sample size issue, without throwing out those dates?
Scraper code:
|
# What is K (Equilibrium Constant)? Geochemistry quick tips!
In principle, any chemical equilibrium reaction can be described by the mass-action law.
$\alpha A +\beta B ... \rightleftharpoons \sigma S+\tau T ...$
$K=\frac{{\{S\}} ^\sigma {\{T\}}^\tau ... } {{\{A\}}^\alpha {\{B\}}^\beta ...}$
Where
K = Thermodynamic equilibrium or dissolution constant
K is defined (and named) based on the type of reaction at hand. For example:
1. Dissolution / Precipitation reaction: K= Ks = Solubility product constant.
2. Sorption: K = Kd = Distribution constant; or K = Kx = Selectivity coefficient
3. Redox reactions; K=Stability constant
4. Complex formation: K = Complexation constant
Note: If you reverse the reaction, K(reverse) = 1/K(forward)! So always write the reaction for any K.
Note: K is dependent on Temperature. K for various temperatures can be calculated using Van’t Hoff Equation.
Note: If a process consists of subsequent reactions, the equilibrium constants are numbered in order (K1, K2, K3, …). Example: carbonate dissolution.
# Van 't Hoff equation
The Van 't Hoff equation in chemical thermodynamics relates the change in the equilibrium constant (K) to a change in temperature (T), given the enthalpy change (ΔH). The equation was first derived by Jacobus Henricus van 't Hoff.
$\frac{d \mbox{ ln K}}{dT} = \frac{\Delta H^\ominus}{RT^2}$
If the enthalpy change of reaction is assumed to be constant with temperature, the definite integral of this differential equation between temperatures T1 and T2 is given by
$\ln \left( {\frac{{K_2 }}{{K_1 }}} \right) = \frac{{ - \Delta H^\ominus }}{R}\left( {\frac{1}{{T_2 }} - \frac{1}{{T_1 }}} \right)$ ——–Equation A ( USE THIS EQUATION TO CALCULATE K at DIFFERENT TEMPERATURES)
In this equation K1 is the equilibrium constant at absolute temperature T1 and K2 is the equilibrium constant at absolute temperature T2. ΔH° is the enthalpy change and R is the gas constant.
Since
$\Delta G^\ominus = \Delta H^\ominus - T\Delta S^\ominus$
and
$\Delta G^\ominus = -RT \ln K$
it follows that
$\ln K = - \frac{{\Delta H^\ominus}}{RT}+ \frac{{\Delta S^\ominus }}{R}$
Therefore, a plot of the natural logarithm of the equilibrium constant versus the reciprocal temperature gives a straight line. The slope of the line is equal to minus the standard enthalpy change divided by the gas constant, −ΔH°/R, and the intercept is equal to the standard entropy change divided by the gas constant, ΔS°/R. Differentiation of this expression yields the van 't Hoff equation.
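As a quick numerical illustration of Equation A above (a sketch with made-up numbers, assuming ΔH° is constant over the temperature range):

```python
# Van 't Hoff, "Equation A": ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)
# Temperatures in kelvin, enthalpy in J/mol; the example numbers are illustrative only.
import math

R = 8.314  # J/(mol*K), gas constant

def k_at_T2(k1, T1, T2, dH):
    return k1 * math.exp(-(dH / R) * (1.0 / T2 - 1.0 / T1))

# An exothermic reaction (dH = -40 kJ/mol) with K = 100 at 25 degC, evaluated at 50 degC:
print(k_at_T2(100.0, 298.15, 323.15, -40_000.0))  # ~29: K drops, as expected for an exothermic reaction
```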
|
Journal article Open Access
# Species diversity and distribution of mangrove vegetation in Moalboal, Cebu Island, Philippines
Cabuenas, Anna Lou C.; Aunzo, Almira May F.; Reducto, Benveinido M.
### Dublin Core Export
<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>Cabuenas, Anna Lou C.</dc:creator>
<dc:creator>Aunzo, Almira May F.</dc:creator>
<dc:creator>Reducto, Benveinido M.</dc:creator>
<dc:date>2017-12-18</dc:date>
<dc:description>This study identifies the diversity and distribution of mangrove species in Moalboal, Cebu, Philippines. Diversity and distribution assessment were conducted through non-experimental descriptive research design. The findings of the study revealed that mangrove vegetation in Moalboal is deteriorating and has continued to degrade over time. Species diversity was also found to be very low, with the Shannon-Weiner Index (H’) registering coefficients ranging from 0.8854 to 1.2268 for the various areas in Moalboal. There were only four species belonging to three families of mangroves identified, of which Sonneratia alba was determined to be the most dominant. With these results, rehabilitation and protection of mangrove vegetation is recommended to the local management and to ensure the strict implementation, protection and conservation of mangrove management in the studied areas. There is a need to reforest the areas with emphasis on repopulating disappearing species to avoid further degradation. It is further recommended to conduct more research on the implementation of the conservation activities and its effect on the abundance of the mangroves in the area. The study of ecological adaptation of mangroves, relative density, frequency and relative dominance must be undertaken to serve as important bases in community-based management programs.</dc:description>
<dc:identifier>https://zenodo.org/record/2471217</dc:identifier>
<dc:identifier>10.5281/zenodo.2471217</dc:identifier>
<dc:identifier>oai:zenodo.org:2471217</dc:identifier>
<dc:language>ang</dc:language>
<dc:relation>doi:10.5281/zenodo.2471216</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:source>University of the Visayas - Journal of Research 11(1) 39-44</dc:source>
<dc:subject>mangroves distribution, mangroves diversity, mangroves in Cebu, mangroves vegetation</dc:subject>
<dc:title>Species diversity and distribution of mangrove vegetation in Moalboal, Cebu Island, Philippines</dc:title>
<dc:type>info:eu-repo/semantics/article</dc:type>
<dc:type>publication-article</dc:type>
</oai_dc:dc>
|
# Some citations
• B.S. = bull shit; M.S. = more shit; PhD = piled higher & deeper
Random guy on the Internet
Comment: Well, looks like I have been hard at work piling up some …
Things are easier to learn when the stakes are not too high.
Louis Rossmann
While some pressure can indeed be a great catalyst for us to do something, in some cases that same pressure, especially when the cost of a failure at said task is relatively high, can lead one to perform sub-optimally with respect to that goal. As a “solution”, he proposes to “give yourself the luxury of failures”, as a way to reduce the stress and anxiety that would induce sub-par performance.
An example that comes to mind was when playing some challenging level of Doom Eternal: when I had a few spare lives that gave me the luxury to fail without being set back too much, I would usually be more daring in my actions, while at the same time being relaxed enough to perform consistently, compared to when I had no spare lives. In that case, I would actually be very “conservative” and restrained, just trying to reach the next checkpoint without dying, which happens to be less enjoyable and interesting overall.
Don’t ever take a fence down until you know why it was put up.
Robert Frost
Comment: pending I guess.
The problem with democracy is that those who need leaders are not qualified to choose them
Michael Malice
The argument was that “competent people” do not need direction as they are more able to decide for themselves. On the other hand, “incompetent people” who need leadership do not actually have the capacity to designate the latter, even for their own good. As a counter-argument, however, no matter the degree of competence an individual might have, is it enough to work optimally at a larger scale? Maybe the hope here lies in an “ideal decentralized system”, but can such a system even exist? No matter the individual competence, it seems more efficient to delegate some processes, hence the potential need for such “leaders”. This probably requires more thought.
Every mind was made for growth, for knowledge, and its nature is sinned against when it is doomed to ignorance.
William Ellery Channing
A critic on censorship ? Or the voluntary embrace of ignorance ?
A saying for getting laid in engineering schools: “The odds are good, but the goods are odd”.
Author unknown
Comment: Personal experience: neither goods nor odds it seems.
As our island of knowledge grows, so does the shore of our ignorance.
John Wheeler
Comment: Welcome to my beach-planet resort …
What is better - to be born good, or to overcome your evil nature through great effort?
Paarthurnax “The Old One”, leader of the Greybeards — The Elder Scrolls V : Skyrim
Comment: While I would not recommend drawing life lessons from works of fiction, I could not help but make an exception for the one above. The answer, however, is relative to the perspective from which one approaches it. First, similarly to the saying “Ignorance is bliss”, being born favored by Lady Luck is not that bad of an experience. This would be better from the “quality of life” point of view. On the other hand, overcoming hardships throughout one’s life, as the dragon suggests, would result in a more “meaningful life”. From my experience, I would become aware of myself, my situation and my thoughts more accurately when going through an “ordeal” (I must however admit that the ones I had to face so far were relatively insignificant …), i.e. in troubled times. Those are the moments I would mature and improve the most, to the point that I could even notice it myself to a certain extent. But again, this could only be an illusion of the “ego”, giving us a satisfactory enough answer for the hardship that was endured, and motivating us to go even further (sometimes, even too much).
It is better to go forward without an aim than loiter without an aim, and with surety much better than to retreat without an aim.
Emiel Regis Rohellec Terzieff-Godefroy — The Witcher: Tower of Swallow, P128~129
Comment: After finishing (or more likely failing) a research project, I would sometimes have no explicit objective and scarce ideas I could expand on. In those times, this citation could be said to have kept me going more than once, along with some other, similar ones. When all your efforts and resources are depleted, one can only move forward, creating some force that will cause some reaction, and relying on serendipitous events to build upon. Otherwise, the likelihood of the situation improving is very low … which is quite an appropriate segue to the citation coming next.
If anything can go wrong, it will.
Murphy's law (commonly attributed to Edward A. Murphy)
Comment: Another interesting formulation would be that of Professor Jordan B. Peterson: “When left to themselves, things tend to go wrong” (if memory serves me correctly). While this might encourage someone to lean, and maybe even hang, off the edge of pessimism, one lesson would be to always expect the worst-case scenario.
He who has a why to live for can bear almost any how.
Friedrich Nietzsche
Assuming dirrrrect control.
The Harbinger — Mass Effect Series
|
# Would the event horizon of a black hole shrink as you approach?
While light cannot escape an event horizon, external light should still be observable from within. Would "entering" an event horizon cause it to apparently shrink away from you as you neared the singularity?
The event horizon will appear to do just the opposite. There is a radial distance at which a photon will in fact orbit the black hole. We can find this by working with the Schwarzschild metric $$ds^2~=~\left(1~-~\frac{2m}{r}\right)dt^2~-~\left(1~-~\frac{2m}{r}\right)^{-1}dr^2~-~r^2d\Omega^2.$$ where $m~=~GM/c^2$ and $d\Omega^2~=~sin^2\theta d\phi^2~+~d\theta^2$. We now consider an orbit that is circular, and so $dr~=~0$, and we put the orbit on a plane with $\theta~=~\pi/2$ so that $$ds^2~=~\left(1~-~\frac{2m}{r}\right)dt^2~-~r^2d\phi^2.$$ Before considering the orbit of a photon we can look at the circular orbit of a massive particle with the angular velocity $\omega~=~d\phi/dt$ so the metric is $$ds^2~=~\left(1~-~\frac{2m}{r}~-~r^2\omega^2\right)dt^2.$$ This can be thought of as a Lagrangian that computes the orbit, and the inclusion of the $dr$ can generalize this for non-spherical orbits. We also have that there is a generalized Lorentz gamma factor $\Gamma~=~dt/ds$, which for $m~=~0$ reduces to the gamma factor in special relativity $\gamma~=~1/\sqrt{1~-~v^2/c^2}$.
The vanishing of $1/\Gamma$ (the null limit) means that we have the angular velocity $$\omega^2~=~\left(\frac{d\phi}{dt}\right)^2~=~\frac{1}{r^2}\left(1~-~\frac{2m}{r}\right).$$ Now compute the radius for the circular orbit of a photon. For simplicity let $A~=~1~-~2m/r$ and $A'~=~dA/dr$. We now look for the radial geodesic equation with $$\Gamma^r_{tt}~=~AA'/2,~\Gamma^r_{\phi\phi}~=~-Ar,$$ again for $\theta~=~\pi/2$, and putting this in the geodesic equation gives $$\left(\frac{d\phi}{dt}\right)^2~=~\frac{A'}{2r}~=~\frac{m}{r^3}.$$ Equating the two expressions for $\omega^2$, $\frac{m}{r^3}~=~\frac{1}{r^2}\left(1~-~\frac{2m}{r}\right)$, and solving, we find the radius of the photon orbit is $r~=~3m$.
|
Running examples locally
This example and more are also available as Julia scripts and Jupyter notebooks.
# Learning a Control Policy
## Overview
In this example we walk through the process of setting up an experiment that runs Natural Policy Gradient (or more recently in this work). This is an on-policy reinforcement learning method that is comparable to TRPO, PPO, and other policy gradient methods. See the documentation for NaturalPolicyGradient for full implementation details.
## The Code
First, let's go head and grab all the dependencies:
using LinearAlgebra, Random, Statistics # From Stdlib
using LyceumAI # For the NPG controller
using LyceumMuJoCo # For the Hopper environment
using Flux # For our neural networks needs
using UniversalLogger # For logging experiment data
using Plots # For plotting the results
using LyceumBase.Tools # Miscellaneous utilities
We first instantiate a HopperV2 environment to grab useful environment-specific values, such as the size of the observation and action vectors:
env = LyceumMuJoCo.HopperV2();
dobs, dact = length(obsspace(env)), length(actionspace(env));
We'll also seed the per-thread global RNGs:
seed_threadrngs!(1)
1-element Array{Random.MersenneTwister,1}:
Random.MersenneTwister(UInt32[0x00000001], Random.DSFMT.DSFMT_state(Int32[1749029653, 1072851681, 1610647787, 1072862326, 1841712345, 1073426746, -198061126, 1073322060, -156153802, 1073567984 … 1977574422, 1073209915, 278919868, 1072835605, 1290372147, 18858467, 1815133874, -1716870370, 382, 0]), [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 … 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], UInt128[0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000 … 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000, 0x00000000000000000000000000000000], 1002, 0)
Policy Gradient methods require a policy: a function that takes in the state/observations of the agent, and outputs an action i.e. action = π(obs). In much of Deep RL, the policy takes the form of a neural network which can be built on top of the Flux.jl library. We utilize a stochastic policy in this example. Specifically, our policy is represented as a multivariate Gaussian distribution of the form:
$\pi(a | o) = \mathcal{N}(\mu_{\theta_1}(o), \Sigma_{\theta_2})$
where $\mu_{\theta_1}$ is a neural network, parameterized by $\theta_1$, that maps an observation to a mean action and $\Sigma_{\theta_2}$ is a diagonal covariance matrix parameterized by $\theta_2$, the diagonal entries of the matrix. For $\mu_{\theta_1}$ we utilize a 2-layer neural network, where each layer has a "width" of 32. We use tanh activations for each hidden layer and initialize the network weights with Glorot Uniform initializations. Rather than tracking $\Sigma_{\theta_2}$ directly, we track the log standard deviations, which are easier to learn. We initialize $\log \text{diag}(\Sigma_{\theta_2})$ as zeros(dact), i.e. a Vector of length dact, initialized to 0. Both $\theta_1$ and $\theta_2$ are learned in this example. Note that $\mu_{\theta_1}$ is a state-dependent mean while $\Sigma_{\theta_2}$ is a global covariance.
const policy = DiagGaussianPolicy(
multilayer_perceptron(
dobs,
32,
32,
dact;
σ = tanh,
initb = Flux.glorot_uniform,
initb_final = Flux.glorot_uniform,
dtype = Float32,
),
zeros(Float32, dact),
);
This NPG implementation uses Generalized Advantaged Estimation, which requires an estimate of the value function, value(state), which we represent using a 2-layer, feedforward neural network where each layer has a width of 128 and uses the ReLU activation function. The model weights are initialized using Glorot Uniform initialization as above.
const value = multilayer_perceptron(
dobs,
128,
128,
1;
σ = Flux.relu,
initb = Flux.glorot_uniform,
initb_final = Flux.glorot_uniform,
dtype = Float32,
);
Next, we set up the optimization pipeline for value. We use a mini-batch size of 64 and the ADAM optimizer. FluxTrainer is an iterator that loops on the model provided, performing a single step of gradient descent at each iteration. The result at each loop is passed to stopcb below, so you can quit after a number of epochs, convergence, or other criteria; here it's capped at two epochs. See the documentation for FluxTrainer for more information.
valueloss(bl, X, Y) = Flux.mse(vec(bl(X)), vec(Y))
stopcb(x) = x.nepochs > 2
const valuetrainer = FluxTrainer(
szbatch = 64,
lossfn = valueloss,
stopcb = stopcb
);
The NaturalPolicyGradient iterator is a type that pre-allocates all necessary data structures and performs one gradient update to policy at each iteration. We first pass in a constructor that given n returns n instances of LyceumMuJoCo.HopperV2, all sharing the same jlModel, to allow NaturalPolicyGradient to allocate per-thread environments and enable performant, parallel sampling from policy. We then pass in the policy, value, and valuetrainer instances constructed above and override a few of the default NaturalPolicyGradient parameters: gamma, gaelambda, and norm_step_size. Finally, we set the max trajectory length Hmax and total number of samples per iteration, N. Under the hood, NaturalPolicyGradient will use approximately div(N, Hmax) threads to perform the sampling.
const npg = NaturalPolicyGradient(
n -> tconstruct(LyceumMuJoCo.HopperV2, n),
policy,
value,
valuetrainer;
gamma = 0.995,
gaelambda = 0.97,
norm_step_size = 0.05,
Hmax = 1000,
N = 10240,
);
### Running Experiments
Finally, let's spin on our iterator 200 times, plotting every 20 iterations. This lets us break out of the loop if certain conditions are met, or re-start training manually if needed. We of course wish to track results, so we create a ULogger and Experiment to which we can save data. We also have useful timing information displayed every 20 iterations to better understand the performance of our algorithm and identify any potential bottlenecks. Rather than iterating on npg at the global scope, we'll do it inside of a function to avoid the performance issues associated with global variables as discussed in the Julia performance tips. Note, to keep the Markdown version of this tutorial readable, we skip the plots and performance statistics. To enable them, simply call hopper_NPG(npg, true).
function hopper_NPG(npg::NaturalPolicyGradient, plot::Bool)
exper = Experiment("/tmp/hopper_example.jlso", overwrite = true)
# Walks, talks, and acts like a Julia logger. See the UniversalLogger.jl docs for more info.
lg = ULogger()
for (i, state) in enumerate(npg)
if i > 200
# serialize some stuff and quit
exper[:policy] = npg.policy
exper[:value] = npg.value
exper[:etype] = LyceumMuJoCo.HopperV2
exper[:meanstates] = state.meanbatch
exper[:stocstates] = state.stocbatch
break
end
# log everything in state except meanbatch and stocbatch
push!(lg, :algstate, filter_nt(state, exclude = (:meanbatch, :stocbatch)))
if plot && mod(i, 20) == 0
x = lg[:algstate]
# The following are helper functions for plotting to the terminal.
# The first plot displays the geteval function for our stochastic
# and mean policy rollouts.
display(expplot(
Line(x[:stocterminal_eval], "StocLastE"),
Line(x[:meanterminal_eval], "MeanLastE"),
title = "Evaluation Score, Iter=$i", width = 60, height = 8, )) # While the second one similarly plots getreward. display(expplot( Line(x[:stoctraj_reward], "StocR"), Line(x[:meantraj_reward], "MeanR"), title = "Reward, Iter=$i",
width = 60,
height = 8,
))
# The following are timing values for various parts of the Natural Policy Gradient
# algorithm at the last iteration, useful for finding performance bottlenecks
# in the algorithm.
println("elapsed_sampled = ", state.elapsed_sampled)
println("elapsed_vpg = ", state.elapsed_vpg)
println("elapsed_cg = ", state.elapsed_cg)
println("elapsed_valuefit = ", state.elapsed_valuefit)
end
end
exper, lg
end
exper, lg = hopper_NPG(npg, false);
Let's go ahead and plot the final reward trajectory for our stochastic and mean policies to see how we did:
plot(
[lg[:algstate][:meantraj_reward] lg[:algstate][:stoctraj_reward]],
labels = ["Mean Policy" "Stochastic Policy"],
title = "HopperV2 Reward",
legend = :bottomright,
)
We'll also plot the evaluations:
plot(
[lg[:algstate][:meantraj_eval] lg[:algstate][:stoctraj_eval]],
labels = ["Mean Policy" "Stochastic Policy"],
title = "HopperV2 Eval",
legend = :bottomright,
)
Finally, we save the logged results to exper for later review:
exper[:logs] = get(lg)
finish!(exper); # flushes everything to disk
[ Info: Experiment saved to /tmp/hopper_example.jlso
|
# Statistics – the rules of the game
What is statistics about, really? It’s easy to go through a class and get the impression that it’s about manipulating intimidating formulas. But what’s the goal of them? Why did people invent them?
If you zoom out, the big picture is more conceptual than mathematical. Statistics has a crazy, grasping ambition: it wants to tell you how to best use observations to make decisions. For example, you might look at how much it rained each day in the last week, and decide if you should bring an umbrella today. Statistics converts data into ideal actions.
Here, I’ll try to explain this view. I think it’s possible to be quite precise about this while using almost no statistics background and extremely minimal math.
The two important characters that we meet are decision rules and loss functions. Informally, a decision rule is just some procedure that looks at a dataset and makes a choice. A loss function — a basic concept from decision theory– is a precise description of “how bad” a given choice is.
## Model Problem: Coinflips
Let’s say you’re confronted with a coin where the odds of heads and tails are not known ahead of time. Still, you are allowed to observe how the coin performs over a number of flips. After that, you’ll need to make a “decision” about the coin. Explicitly:
• You’ve got a coin, which comes up heads with probability $w$. You don’t know $w$.
• You flip the coin $n$ times.
• You see $k$ heads and $n-k$ tails.
• You do something, depending on $k$. (We’ll come back to this.)
Simple enough, right? Remember, $k$ is the total number of heads after $n$ flips. If you do some math, you can work out a formula for $p(k\vert w,n)$: the probability of seeing exactly $k$ heads. For our purposes, it doesn’t really matter what that formula is, just that it exists. It’s known as a Binomial distribution, and so is sometimes written $\mathrm{Binomial}(k\vert n,w)$.
Here’s an example of what this looks like with $n=21$ and $w=0.5$.
Naturally enough, if $w=.5$, with $21$ flips, you tend to see around $10-11$ heads. Here’s an example with $w=0.2$. Here, the most common value is $4$, close to $21\times.2=4.2$.
## Decisions, decisions
After observing some coin flips, what do we do next? You can imagine facing various possible situations, but we will use the following:
Our situation: After observing n coin flips, you need to guess “heads” or “tails”, for one final coin flip.
Here, you just need to “decide” what the next flip will be. You could face many other decisions, e.g. guessing the true value of w.
Now, suppose that you have a friend who seems very skilled at predicting the final coinflip. What information would you need to reproduce your friend’s skill? All you need to know is if your friend predicts heads or tails for each possible value of k. We think of this as a decision rule, which we write abstractly as
$\mathrm{Dec}(k).$
This is just a function of one integer $k$. You can think of this as just a list of what guess to make, for each possible observation, for example:
| $k$ | $\mathrm{Dec}(k)$ |
| --- | --- |
| $0$ | $\mathrm{tails}$ |
| $1$ | $\mathrm{heads}$ |
| $2$ | $\mathrm{heads}$ |
| $\vdots$ | $\vdots$ |
| $n$ | $\mathrm{tails}$ |
One simple decision rule would be to just predict heads if you saw more heads than tails, i.e. to use
$\mathrm{Dec}(k)=\begin{cases} \mathrm{heads}, & k\geq n/2 \\ \mathrm{tails}, & k < n/2 \end{cases}$
The goal of statistics is to find the best decision rule, or at least a good one. The rule above is intuitive, but not necessarily the best. And… wait a second… what does it even mean for one decision rule to be “better” than another?
## Our goal: minimize the thing that’s designed to be minimized
What happens after you make a prediction? Consider our running example. There are many possibilities, but here are two of the simplest:
• Loss A: If you predicted wrong, you lose a dollar. If you predicted correctly, nothing happens.
• Loss B: If you predict “tails” and “heads” comes up, you lose 10 dollars. If you predict “heads” and “tails” comes up, you lose 1 dollar. If you predict correctly, nothing happens.
We abstract these through a concept of a loss function. We write this as
$L(w,d)$.
The first input is the true (unknown) value $w$, while second input is the “prediction” you made. We want the loss to be small.
Now, one point might be confusing. We defined our situation as predicting the next coinflip, but now $L$ is defined comparing $d$ to $w$, not to a new coinflip. We do this because comparing to $w$ gives the most generality. To deal with our situation, just use the average amount of money you’d lose if the true value of the coin were $w$. Take loss A. If you predict “tails”, you’ll be wrong with probability $w$ and so lose $w$ dollars on average, while if you predict “heads”, you’ll be wrong with probability $1-w$ and so lose $1-w$ dollars on average. This leads to the loss
$L_{A}(w,d)=\begin{cases} w & d=\mathrm{tails}\\ 1-w & d=\mathrm{heads} \end{cases}.$
For loss B, the situation is slightly different, in that you lose 10 times as much in the first case. Thus, the loss is
$L_{B}(w,d)=\begin{cases} 10w & d=\mathrm{tails}\\ 1-w & d=\mathrm{heads} \end{cases}.$
The definition of a loss function might feel circular– we minimize the loss because we defined the loss as the thing that we want to minimize. What’s going on? Well, a statistical problem has two separate parts: a model of the data generating process, and a loss function describing your goals. Neither of these things is determined by the other.
So, the loss function is part of the problem. Statistics wants to give you what you want. But you need to tell statistics what that is.
Despite the name, a “loss” can be negative– you still just want to minimize it. Machine learning, always optimistic, favors “reward” functions that are to be maximized. Plus ça change.
## Model + Loss = Risk
OK! So, we’ve got a model of our data generating process, and we specified some loss function. For a given w, we know the distribution over k, so… I guess… we want to minimize it?
Let’s define the risk to be the average loss that a decision rule gives for a particular value of w. That is,
$R(w,\mathrm{Dec})=\sum_{k}p(k\vert w,n)L(w,\mathrm{Dec}(k)).$
Here, the second input to $R$ is a decision rule– a precise recipe of what decision to make in each possible situation.
Let’s visualize this. As a set of possible decision rules, I will just consider rules that predict “heads” if they’ve seen at least m heads, and “tails” otherwise:
$\mathrm{Dec}_{m}(k)=\begin{cases} \mathrm{heads} & k\geq m\\ \mathrm{tails} & k < m \end{cases}$
With $n=21$ there are $23$ such decision rules, corresponding to $m=0$ (always predict heads), $m=1$ (predict heads if you see at least one heads), up to $m=22$ (always predict tails). These are shown here:
These rules are intuitive: if you’d predict heads after observing 16 heads out of 21, it would be odd to predict tails after seeing 17 instead! It’s true that for losses $L_{A}$ and $L_{B}$, you don’t lose anything by restricting to this kind of decision rule. However, there are losses for which these decision rules are not enough. (Imagine you lose more when your guess is correct.)
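If you want to check these risk curves yourself, here is a small Python sketch of $R(w,\mathrm{Dec}_m)$ (the helper names `loss_a`, `loss_b`, `dec_m`, and `risk` are just my labels; the binomial pmf comes from scipy):

```python
# Numerical sketch of the risk R(w, Dec_m) for the threshold rules above (n = 21).
from scipy.stats import binom

n = 21

def loss_a(w, d):                 # lose $1 for a wrong guess
    return w if d == "tails" else 1 - w

def loss_b(w, d):                 # lose $10 for a wrong "tails", $1 for a wrong "heads"
    return 10 * w if d == "tails" else 1 - w

def dec_m(m):                     # predict heads iff we saw at least m heads
    return lambda k: "heads" if k >= m else "tails"

def risk(w, dec, loss):           # R(w, Dec) = sum_k p(k | w, n) * L(w, Dec(k))
    return sum(binom.pmf(k, n, w) * loss(w, dec(k)) for k in range(n + 1))

print(risk(0.2, dec_m(11), loss_a))   # risk at w = 0.2 of "heads iff k >= 11"
print(risk(0.4, dec_m(11), loss_b))
```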
With those decision rules in place, we can visualize what risk looks like. Here, I fix $w=0.2$, and I sweep through all the decision rules (by changing $m$) with loss $L_{A}$:
The value $R_A$ in the bottom plot is the total area of the green bars in the middle. You can do the same sweep for $w=0.4$, which is pictured here:
We can visualize the risk in one figure with various $w$ and $m$. Notice that the curves for $w=0.2$ and $w=0.4$ are exactly the same as we saw above.
Of course, we get a different risk depending on what loss function we use. If we repeat the whole process using loss $L_{B}$ we get the following:
## Dealing with risk
What’s the point of risk? It tells us how good a decision rule is. We want a decision rule where risk is as low as possible. So you might ask, why not just choose the decision rule that minimizes $R(w,\mathrm{Dec})$?
The answer is: because we don’t know $w$! How do we deal with that? Believe it or not, there isn’t a single well-agreed upon “right” thing to do, and so we meet two different schools of thought.
### Option 1 : All probability all the time
Bayesian statistics (don’t ask about the name) defines a “prior” distribution $p(w)$ over $w$. This says which values of $w$ we think are more and less likely. Then, we define the Bayesian risk as the average of $R$ over the prior:
$R_{\mathrm{Bayes}}(\mathrm{Dec})=\int_{w=0}^{1}p(w)R(w,\mathrm{Dec})dw.$
This just amounts to “averaging” over all the risk curves, weighted by how “probable” we think $w$ is. Here’s the Bayes risk corresponding to $L_{A}$ with a uniform prior $p(w)=1$:
For reference, the risk curves $R(w,\mathrm{Dec}_m)$ are shown in light grey. Naturally enough, for each value of $m$, the Bayes risk is just the average of the regular risks for each $w$.
Here’s the risk corresponding to $L_{B}$:
That’s all quite natural. But we haven’t really searched through all the decision rules, only the simple ones $\mathrm{Dec}_m$. For other losses, these simple ones might not be enough, and there are a lot of decision rules. (Even for this toy problem there are $2^{22}$, since you can output heads or tails for each of $k=0$, $k=1$, …, $k=21$.)
Fortunately, we can get a formula for the best decision rule for any loss. First, re-write the Bayes risk as
$R_{\mathrm{Bayes}}(\mathrm{Dec})=\sum_{k} \left( \int_{w=0}^{1}p(w)p(k\vert n,w)L(w,\mathrm{Dec}(k))dw \right).$
This is a sum over $k$ where each term only depends on a single value $\mathrm{Dec}(k)$. So, we just need to make the best decision for each individual value of $k$ separately. This leads to the Bayes-optimal decision rule of
$\mathrm{Dec}_{\text{Bayes}}(k)=\arg\min_{d}\int_{w=0}^{1}p(w)p(k\vert w,n)L(w,d)dw.$
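Here is a small R sketch of that rule under the uniform prior, again with a placeholder loss standing in for $L_A$; the integral over $w$ is done numerically with `integrate()`, which is my choice here rather than anything from the original derivation.

```r
## Dec_Bayes(k) = argmin over d of the integral of p(w) p(k|w,n) L(w,d) dw,
## with a uniform prior p(w) = 1.
## `loss(w, d)` must accept a vector of w values; the placeholder below does.
dec_bayes <- function(k, loss, n = 21, decisions = c("heads", "tails")) {
  expected_loss <- sapply(decisions, function(d) {
    integrate(function(w) dbinom(k, size = n, prob = w) * loss(w, d),
              lower = 0, upper = 1)$value
  })
  decisions[which.min(expected_loss)]
}

L_A_placeholder <- function(w, d) if (d == "heads") 1 - w else w
sapply(0:21, dec_bayes, loss = L_A_placeholder)
```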
With a uniform prior $p(w)=1$, here’s the optimal Bayesian decision rules with loss $L_{A}$:
And here it is for loss $L_B$:
Look at that! Just mechanically plugging the loss function into the Bayes-optimal decision rule naturally gives us the behavior we expected– for $L_{B}$, the rule is very hesitant to predict tails, since the loss is so high if you’re wrong. (Again, these happen to fit in the parameterized family $\mathrm{Dec}_{m}$ defined above, but we didn’t use this assumption in deriving the rules.)
The nice thing about the Bayesian approach is that it’s so systematic. No creativity or cleverness is required. If you specify the data generating process ($p(k\vert w,n)$), the loss function ($L(w,d)$), and the prior distribution ($p(w)$), then the optimal Bayesian decision rule is determined.
There are some disadvantages as well:
• Firstly, you need to make up the prior, and if you do a terrible job, you’ll get a poor decision rule. If you have little prior knowledge, this can feel incredibly arbitrary. (Imagine you’re trying to estimate Big G.) Different people can have different priors, and then get different results.
• Actually computing the decision rule requires doing an integral over w, which can be tricky in practice.
• Even if your prior is good, the decision rule is only optimal when averaged over the prior. Suppose, for every day for the next 10,000 years, a random coin is created with $w$ drawn from $p(w)$. Then, no decision rule will incur less loss than $\mathrm{Dec}_{\text{Bayes}}$. However, on any particular day, some other decision rule could certainly be better.
So, if you have little idea of your prior, and/or you’re only making a single decision, you might not find much comfort in the Bayesian guarantee.
Some argue that these aren’t really disadvantages. Prediction is impossible without some assumptions, and priors are upfront and explicit. And no method can be optimal for every single day. If you just can’t handle that the risk isn’t optimal for each individual trial, then… maybe go for a walk or something?
### Option 2 : Be pessimistic
Frequentist statistics (Why “frequentist”? Don’t think about it!) often takes a different path. Instead of defining a prior over w, let’s take a worst-case view. Let’s define the worst-case risk as
$R_{\mathrm{worst}}(\mathrm{Dec})=\max_{w}R(w,\mathrm{Dec}).$
Then, we’d like to choose an estimator to minimize the worst-case risk. We call this a “minimax” estimator since we minimize the max (worst-case) risk.
Let’s visualize this with our running example and $L_{A}$:
As you can see, for each individual decision rule, the worst-case risk is found by searching over the space of parameters $w$. We can visualize the risk with $L_{B}$ as:
What’s the corresponding minimax decision rule? This is a little tricky to deal with– to see why, let’s expand the worst-case risk a bit more:
$R_{\mathrm{worst}}(\mathrm{Dec})=\max_{w}\sum_{k}p(k\vert n,w)L(w,\mathrm{Dec}(k)).$
Unfortunately, we can’t interchange the max and the sum, like we did with the integral and the sum for Bayesian decision rules. This makes it more difficult to write down a closed-form solution. At least in this case, we can still find the best decision rule by searching over our simple rules $\mathrm{Dec}_m$. But be very mindful that this doesn’t work in general!
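For what it’s worth, here is a hedged R sketch of that brute-force search: it approximates the max over $w$ on a fine grid, which is an approximation I’m introducing here rather than anything exact, and it only searches over the simple threshold rules.

```r
## Brute-force minimax over the simple threshold rules Dec_m: approximate the
## max over w on a grid, then pick the m with the smallest worst-case risk.
## (As stressed above, restricting to Dec_m happens to be fine for L_A and
## L_B, but it is not safe for arbitrary losses.)
risk <- function(w, m, loss, n = 21) {
  k <- 0:n
  d <- ifelse(k >= m, "heads", "tails")
  sum(dbinom(k, size = n, prob = w) * mapply(loss, w, d))
}

minimax_m <- function(loss, n = 21, w_grid = seq(0.001, 0.999, by = 0.001)) {
  worst <- sapply(0:(n + 1), function(m) max(sapply(w_grid, risk, m = m, loss = loss)))
  (0:(n + 1))[which.min(worst)]
}
```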
For $L_{A}$ we end up with the same decision rule as when minimizing Bayesian risk:
For $L_{B}$, meanwhile, we get something slightly different:
This is even more conservative than the Bayesian decision rule. $\mathrm{Dec}_{B-\mathrm{Bayes}}(1)=\mathrm{tails}$, while $\mathrm{Dec}_{B-\mathrm{minimax}}(1)=\mathrm{heads}$. That is, the Bayesian method predicts heads only when it observes 2 or more, while the minimax rule predicts heads if it observes even one. This makes sense intuitively: The minimax decision rule proceeds as if the “worst” w (a small number) is fixed, whereas the Bayesian decision rule less pessimistically averages over all w.
Which decision rule will work better? Well, if w happens to be near the worst-case value, the minimax rule will be better. If you repeat the whole experiment many times with w drawn from the prior, the Bayesian decision rule will be.
If you do the experiment at some w far from the worst-case value, or you repeat the experiment many times with w drawn from a distribution different from your prior, then you have no guarantees.
Neither approach is “better” than the other, they just provide different guarantees. You need to choose what guarantee you want. (You can kind of think of this as a “meta” loss.)
## So what about all those formulas, then?
For real problems, the data generating process is usually much more complex than a Binomial. The “decision” is usually more complex than predicting a coinflip– the most common decision is making a guess for the value of $w$. Even calculating $R(w,\mathrm{Dec})$ for fixed $w$ and $\mathrm{Dec}$ is often computationally hard, since you need to integrate over all possible observations. In general, finding exact Bayes or minimax optimal decision rules is a huge computational challenge, and at least some degree of approximation is required. That’s the game, that’s why statistics is hard. Still, even for complex situations the rules are the same– you win by finding a decision rule with low risk.
# Personal opinions about graphical models 1: The surrogate likelihood exists and you should use it.
When talking about graphical models with people (particularly computer vision folks) I find myself advancing a few opinions over and over again. So, in an effort to stop bothering people at conferences, I thought I’d write a few entries here.
The first thing I’d like to discuss is “surrogate likelihood” training. (So far as I know, Martin Wainwright was the first person to give a name to this method.)
### Background
Suppose we want to fit a Markov random field (MRF). I’m writing this as a generative model with an MRF for simplicity– pretty much the same story holds with a Conditional Random Field in the discriminative setting.
$p({\bf x}) = \frac{1}{Z} \prod_{c} \psi({\bf x}_c) \prod_i \psi(x_i)$
Here, the first product is over all cliques/factors in the graph, and the second is over all single variables. Now, it is convenient to note that MRFs can be seen as members of the exponential family
$p({\bf x};{\boldsymbol \theta}) = \exp( {\boldsymbol \theta} \cdot {\bf f}({\bf x}) - A({\boldsymbol \theta}) )$,
where
${\bf f}({\bf X})=\{I[{\bf X}_{c}={\bf x}_{c}]|\forall c,{\bf x}_{c}\}\cup\{I[X_{i}=x_{i}]|\forall i,x_{i} \}$
is a function consisting of indicator functions for each possible configuration of each clique and variable, and the log-partition function
$A(\boldsymbol{\theta})=\log\sum_{{\bf x}}\exp\left(\boldsymbol{\theta}\cdot{\bf f}({\bf x})\right)$
ensures normalization.
Now, the log-partition function has the very important (and easy to show) property that the gradient is the expected value of $\bf f$.
$\displaystyle \frac{dA}{d{\boldsymbol \theta}} = \sum_{\bf x} p({\bf x};{\boldsymbol \theta}) {\bf f}({\bf x})$
With a graphical model, what does this mean? Well, notice that the expected value of, say, $I[X_i=x_i]$ will be exactly $p(x_i;{\boldsymbol \theta})$. Thus, the expected value of ${\bf f}$ will be a vector containing all univariate and clique-wise marginals. If we write this as ${\boldsymbol \mu}({\boldsymbol \theta})$, then we have
$\displaystyle \frac{dA}{d{\boldsymbol \theta}} = {\boldsymbol \mu}({\boldsymbol \theta})$.
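If you want to convince yourself of this numerically, here is a small R check on a made-up three-variable model; the feature vector is arbitrary and my own choice, the point is only that the finite-difference gradient of the brute-force log-partition function matches the expected features.

```r
## A quick numerical check of dA/dtheta = mu(theta) on a tiny, fully
## enumerable model: 3 binary variables with an arbitrary made-up feature
## vector f(x). Any choice of features would do here.
f <- function(x) c(x, x[1] * x[2], x[2] * x[3])

configs <- as.matrix(expand.grid(0:1, 0:1, 0:1))
Fmat    <- t(apply(configs, 1, f))      # one row of features per configuration
theta   <- rnorm(ncol(Fmat))

A  <- log(sum(exp(Fmat %*% theta)))     # brute-force log-partition function
p  <- exp(Fmat %*% theta - A)           # p(x; theta) for every configuration
mu <- as.vector(t(Fmat) %*% p)          # E[f(x)], i.e. the marginals

## finite-difference gradient of A agrees with mu
eps <- 1e-6
grad_fd <- sapply(seq_along(theta), function(i) {
  th <- theta
  th[i] <- th[i] + eps
  (log(sum(exp(Fmat %*% th))) - A) / eps
})
all.equal(grad_fd, mu, tolerance = 1e-4)
```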
### The usual story
Suppose we want to do maximum likelihood learning. This means we want to set ${\boldsymbol \theta}$ to maximize
$L( {\boldsymbol \theta} ) = \frac{1}{N}\sum_{\hat{{\bf x}}}\log p(\hat{{\bf x}};{\boldsymbol \theta})={\boldsymbol \theta}\cdot\frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}})-A({\boldsymbol \theta}).$
If we want to use gradient ascent, we would just take a small step along the gradient. This has a very intuitive form: it is the difference between the expected value of $\bf f$ under the data and the expected value of $\bf f$ under the current model.
$\displaystyle \frac{dL}{d{\boldsymbol \theta}} = \frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}}) - \sum_{\bf x} p({\bf x};{\boldsymbol \theta}) {\bf f}({\bf x})$.
$\displaystyle \frac{dL}{d{\boldsymbol \theta}} = \frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}}) - {\boldsymbol \mu}({\boldsymbol \theta})$.
Note the lovely property of moment matching here. If we have found a solution, then $dL/d{\boldsymbol \theta}=0$ and so the expected value of $\bf f$ under the current distribution will be exactly equal to that under the data.
Unfortunately, in a high-treewidth setting, we can’t compute the marginals. That’s too bad. However, we have all these lovely approximate inference algorithms (loopy belief propagation, tree-reweighted belief propagation, mean field, etc.). Suppose we write the resulting approximate marginals as $\tilde{{\boldsymbol \mu}}({\boldsymbol \theta})$. Then, instead of taking the above gradient step, why not instead just use
$\frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}}) - \tilde{{\boldsymbol \mu}}({\boldsymbol \theta})$?
That’s all fine! However, I often see people say/imply/write some or all of the following:
1. This is not guaranteed to converge.
2. There is no longer any well-defined objective function being maximized.
3. We can’t use line searches.
4. We have to use (possibly stochastic) gradient ascent.
5. This whole procedure is frightening and shouldn’t be mentioned in polite company.
I agree that we should view this procedure with some suspicion, but it gets far more than it deserves! The first four points, in my view, are simply wrong.
### What’s missing
The critical thing that is missing from the above story is this: Approximate marginals come together with an approximate partition function!
That is, if you are computing approximate marginals $\tilde{{\boldsymbol \mu}}({\boldsymbol \theta})$ using loopy belief propagation, mean-field, or tree-reweighted belief propagation, there is a well-defined approximate log-partition function $\tilde{A}({\boldsymbol \theta})$ such that
$\displaystyle \tilde{{\boldsymbol \mu}}({\boldsymbol \theta}) = \frac{d\tilde{A}}{d{\boldsymbol \theta}}$.
What this means is that you should think, not of approximating the likelihood gradient, but of approximating the likelihood itself. Specifically, what the above is really doing is optimizing the “surrogate likelihood”
$\tilde{L}({\boldsymbol \theta}) = {\boldsymbol \theta}\cdot\frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}})-\tilde{A}({\boldsymbol \theta}).$
What’s the gradient of this? It is
$\frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}}) - \tilde{{\boldsymbol \mu}}({\boldsymbol \theta}),$
or exactly the gradient that was being used above. The advantage of doing things this way is that it is a normal optimization. There is a well-defined objective. It can be plugged into a standard optimization routine, such as BFGS, which will probably be faster than gradient ascent. Line searches guarantee convergence. $\tilde{A}$ is perfectly tractable to compute. In fact, if you have already computed approximate marginals, $\tilde{A}$ has almost no cost. Life is good.
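To make the “it’s just a normal optimization” point concrete, here is a minimal R sketch. The function `approx_inference` and the names `A_tilde` and `mu_tilde` are my own placeholders for whatever approximate inference routine you use (loopy BP, TRW, mean field, ...); the only point is that the surrogate objective and its gradient drop straight into a standard quasi-Newton optimizer.

```r
## Surrogate-likelihood training as a plain optimization problem.
## mean_f_data: (1/N) * sum over the data of f(x_hat)
## approx_inference(theta): assumed to return list(A_tilde = ..., mu_tilde = ...)
fit_surrogate <- function(mean_f_data, approx_inference, theta0) {
  neg_obj <- function(theta) {
    inf <- approx_inference(theta)
    -(sum(theta * mean_f_data) - inf$A_tilde)      # negative surrogate likelihood
  }
  neg_grad <- function(theta) {
    inf <- approx_inference(theta)
    -(mean_f_data - inf$mu_tilde)                  # its gradient
  }
  optim(theta0, fn = neg_obj, gr = neg_grad, method = "L-BFGS-B")$par
}
```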
The only counterargument I can think of is that mean-field and loopy BP can have different local optima, which might mean that a no-line-search-refuse-to-look-at-the-objective-function-just-follow-the-gradient-and-pray style optimization could be more robust, though I’d like to see that argument made…
I’m not sure of the history, but I think part of the reason this procedure has such a bad reputation (even from people that use it!) might be that it predates the “modern” understanding of inference procedures as producing approximate partition functions as well as approximate marginals.
# denoise1: Total Variation Denoising for Signal In tvR: Total Variation Regularization
## Description
Given a 1-dimensional signal $f$, it solves an optimization problem of the form
$u^* = \arg\min_u E(u,f) + \lambda V(u)$
where $E(u,f)$ is the fidelity term and $V(u)$ is the total variation regularization term. The naming convention for the parameter `method` is `<problem type>` + `<name of algorithm>`. For more details, see the section below.
## Usage
```r
denoise1(signal, lambda = 1, niter = 100, method = c("TVL2.IC", "TVL2.MM"))
```
## Arguments
- `signal`: vector of noisy signal.
- `lambda`: regularization parameter (positive real number).
- `niter`: total number of iterations.
- `method`: indicating problem and algorithm combination.
## Value
a vector of the same length as the input signal.
## Algorithms for TV-L2 problem
The cost function for the TV-L2 problem is
$\min_u \frac{1}{2} \|u-f\|_2^2 + \lambda \|\nabla u\|$
where, for a given 1-dimensional vector, $\|\nabla u\| = \sum_i |u_{i+1}-u_{i}|$. The algorithms (in conjunction with the model type) for this problem are
"TVL2.IC"
Iterative Clipping algorithm.
"TVL2.MM"
Majorization-Minimization algorithm.
The codes are translated from MATLAB scripts by Ivan Selesnick.
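As a rough illustration of the cost function above (and not of the iterative clipping or MM algorithms the package actually implements), one can also minimize a smoothed version of the TV-L2 objective directly with a general-purpose optimizer. The helper name and the smoothing constant below are my own placeholders.

```r
## Minimize 0.5*||u - f||^2 + lambda * sum(|u[i+1] - u[i]|) directly, using a
## smoothed absolute value so that a generic quasi-Newton optimizer applies.
## This is only a sanity-check implementation of the cost above, not the
## (faster) algorithms used by denoise1().
tv_l2_direct <- function(f, lambda = 1, eps = 1e-8) {
  tv   <- function(u) sum(sqrt(diff(u)^2 + eps))
  cost <- function(u) 0.5 * sum((u - f)^2) + lambda * tv(u)
  grad <- function(u) {
    s <- diff(u) / sqrt(diff(u)^2 + eps)
    (u - f) + lambda * (c(0, s) - c(s, 0))
  }
  optim(par = f, fn = cost, gr = grad, method = "L-BFGS-B",
        control = list(maxit = 1000))$par
}
```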
## References
- Rudin, L. I., Osher, S., and Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Physica D.
- Selesnick, I. W., Parekh, A., and Bayram, I. (2015). Convex 1-D total variation denoising with non-convex regularization. IEEE Signal Processing Letters.
## Examples
```r
## generate a stepped signal
x = rep(sample(1:5, 10, replace = TRUE), each = 50)

## add some additive white noise
xnoised = x + rnorm(length(x), sd = 0.25)

## apply denoising process
xproc1 = denoise1(xnoised, method = "TVL2.IC")
xproc2 = denoise1(xnoised, method = "TVL2.MM")

## plot noisy and denoised signals
plot(xnoised, pch = 19, cex = 0.1, main = "Noisy signal")
lines(xproc1, col = "blue", lwd = 2)
lines(xproc2, col = "red", lwd = 2)
legend("bottomleft", legend = c("Noisy", "TVL2.IC", "TVL2.MM"),
       col = c("black", "blue", "red"),
       lty = c("solid", "solid", "solid"),
       lwd = c(0, 2, 2), pch = c(19, NA, NA),
       pt.cex = c(1, NA, NA), inset = 0.05)
```
# integral function!
• May 13th 2013, 06:37 AM
lawochekel
integral function!
show that
$B(x,y)= B(x+y,3)$
i tried to solve the problem as follows
$\frac{\Gamma(x+y)\,\Gamma(3)}{\Gamma(x+y+3)}$
kind of confused here on how to go forward, pls i need help here.
thanks
• May 13th 2013, 09:27 AM
HallsofIvy
Re: integral function!
B(x, y) is the beta function, $B(x, y)= \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+ y)}$, right?
So $B(x+ y, 3)= \frac{\Gamma(x+ y)\Gamma(3)}{\Gamma(x+ y+ 3)}$
(You need some parentheses to clarify what you are writing!)
That is what you have. Now, can you use the fact $\Gamma(x)= \int_0^\infty t^{x- 1}e^{-t}dt$, with some substitutions,
to show that?
### Machine Learning
1. Incredibly cool from deepmind: ML applied to ancient Greek fragments can generate restoration hypotheses for the missing text and locate the fragment's origin in both time and place. Paper in Nature.
2. These researchers built an AI for discovering less toxic drug compounds. Then they retrained it to do the opposite. Within six hours it generated 40,000 toxic molecules, including VX nerve agent and "many other known chemical warfare agents."
Sufficiently advanced AI alignment is indistinguishable from AI risk?
3. Fantastic Gwern theory-fiction: It Looks Like You're Trying To Take Over The World.
4. Also on LW, Brain Efficiency: Much More than You Wanted to Know:
Eventually advances in software and neuromorphic computing should reduce the energy requirement down to brain levels of 10W or so, allowing for up to a trillion brain-scale agents at near future world power supply, with at least a concomitant 100x increase in GDP. All of this without any exotic computing.
5. Also on LW, New Scaling Laws for Large Language Models.
### Forecasting
6. Karger, Atanasov & Tetlock, Improving Judgments of Existential Risk: Better Forecasts, Questions, Explanations, Policies.
7. How good are generalist forecasters vs experts, really? Gavin Leech revisits the literature and argues against the superforecasters. They still do as well or slightly better than the experts, but not by much. I feel the way the results are presented is a bit misleading.
### Metascience
8. Derek Thompson in the Atlantic on Silicon Valley science funding.
What makes my spidey sense tingle is that the objects in any such theory are (in part) a hypothetical space of possible discoveries, of possible explanations of the world. I called it a theory of discovery just above, but it might equally well be called a theory of the unknown, or theory of exploration, or theory of theories. Of course, some of the objects of any such theory would also be amenable to more standard descriptions: things like exploration strategies, or group dynamics. But some would be a lot stranger: currently unknown types of explanation, currently unknown types of theoretical entity.
### Economic History
10. WW2 Japanese internment camps? You guessed it, Good, Actually! Internment had a positive effect on long-run incomes on the order of 9-22%. And remember to burn the cities, too. h/t ADS
11. Some issues with Putterman & Weil (2010); judging by the new results, it doesn't seem all that problematic for the deep roots lit?
### Book Reviews
12. There's a new Landmark Edition out, Xenophon's Anabasis. Here's a short review.
13. ZHPL on TLP's Sadly, Porn.
14. Scott on the same (the reviews are complementary goods).
15. Vaccination Rates and COVID Outcomes across U.S. States finds that it takes about $5000 worth of vaccines to save a life. Would be interesting to see a comparison to molnupiravir in terms of dollars per life saved. 16. A report from a covid human challenge experiment. Hopefully this paves the path for a faster response against the next pandemic. ### The Rest The egotism and futility of these costly initiatives is quite mind-boggling as the human threat to biological diversity multiplies. Rather than competing with animal and plant taxonomists, mycologists should show pluck in asserting philosophical independence from the waning fields of zoology and botany. By turning our attention towards experimental questions and away from cataloguing, mycologists may escape the shackles of Linnean fundamentalism. 18. Related(?), SMTM on citrus taxonomy, "in which the Bene Gesserit attempt to breed the Kumquat Haderach". 19. Luttwak on China: The myth of Chinese supremacy Always improbable, G-2 became impossible when Xi Jinping arrived. For him only G-1 is good enough. Not because he is a megalomaniac but the opposite: he thinks, accurately, that unless the Party establishes an unchallenged global hegemony, with its rule is deemed superior to democratic governance, Communist China will collapse just as Soviet rule did. He is right. The drama intensified in February, when the Securities and Exchange Board of India released a 190-page regulatory order disclosing that Ramkrishna had sent sensitive information to an outsider described as a yogi in the Himalayas. [...] The yogi was non-corporeal, she said, but corresponded using the email address [email protected] 21. On the role of mathematics in the neolithic revolution. "The mathematical abilities of Neolithic humans advanced in concert with the new requirements of agricultural life. These needs can be summed up into three categories: Surplus, Trade, and Time." Here's wikipedia on the Rhind Mathematical Papyrus which dates to the 16thC BC. 22. From the new Institute for Progress, Progress is a Policy Choice. 23. Ed West on the coming demographic issues: 'Children of Men' is really happening (actually understates the problem imo). 24. Theses and counter-theses on sleep. Seems like one of those things where there's tons of variation and you're probably best off doing some rigorous self-experimentation? 25. Death Toll of Price Limits and Protectionism in the Russian Pharmaceutical Market. In 2012, Russia put price caps and protectionist regulations on various pharmaceuticals. The result was a decrease in supply, leading to a striking increase in mortality from diseases those drugs protect against. 26. Fluvoxamine-caffeine interaction: Just learned that fluvoxamine, a common SSRI used to treat depression and other psychiatric conditions, increases the half-life of caffeine in the bloodstream. Like, to an absurd degree: We found evidence of genetic similarity between partners for educational attainment (rg = 0.37), height (rg = 0.13), and depression (rg = 0.08). Common genetic variants associated with educational attainment correlated between siblings above 0.50 (rg = 0.68) and between siblings-in-law (rg = 0.25) and co-siblings-in-law (rg = 0.09). Comparisons between the genetic similarities of partners and siblings indicated that genetic variances were in intergenerational equilibrium. This study shows genetic similarities between extended family members and that assortative mating has taken place for several generations. 28. 
New EA GWAS with N=3 million, 12-16% variance explained. 29. "Las Pozas ("the Pools") is a surrealistic group of structures created by Edward James in a subtropical rainforest in the Sierra Gorda mountains of Mexico. It includes more than 80 acres (32 ha) of natural waterfalls and pools interlaced with towering surrealist sculptures in concrete." ## Audio-Visual 34. They found Shackleton's ship in the Antarctic, and it's perfectly preserved. 35. Kogonada's After Yang is one of my favorite new films in years. What if Roy Batty was a personal assistant, what happens to his adopted family after he dies? A poignant and wistful film about memory, death, and the legacy we leave behind us. 36. And here's DJ Shadow remixing King Gizzard & The Lizard Wizard. # What I've Been Reading • How to Think Like Shakespeare: Lessons from a Renaissance Education, by Scott L. Newstok. A Romantic old-man-yells-at-clouds tirade about modern education practices. It didn't change any of my views, but it didn't really attempt to do so in the first place: Newstok is a reformist, while I am strictly an abolitionist—and therefore far outside the target audience. I find it hard to separate mass education from the commoditization of knowledge, while Newstok believes we can have our cake and eat it too. In any case, if you want a passionate argument in favor of high-quality education interspersed with Shakespeare quotes, this is the book for you. • Dune, by Frank Herbert. Pretty great, Herbert constructs a deeply alluring world which pulls you in despite some rather hilariously implausible aspects. It's interesting how so much of the "plot" actually happens in the background. The audiobook is quite good. • Dune Messiah, by Frank Herbert. I was told the sequels get crazy, and this is a pretty good start in that direction! Can't wait to see where this nonsense ends up. This is basically a book of palace intrigue and scheming, with a rich religious/predestination/weird time loop sauce on top. • Star Maker, by Olaf Stapledon. I kept thinking that it felt like a really weird throwback to the 1920s-30s, then I looked it up and it was written in 1937. Whoops. It's a non-stop torrent of interesting science fiction ideas, but there's no continuity, no characters to latch on to, and the examination of the ideas stays at the surface level. It's just a series of "this happened, then this happened, then this happened" which I found rather boring. • The Island of Doctor Death and Other Stories and Other Stories, by Gene Wolfe. Some fantastic stories in this collection, in particular I loved Feather Tigers, Death of Dr. Island, Toy Theater, and Seven American Nights. Many of them are in that classic Wolfe style where you have to piece together what's going on from tiny hints left in the text, and it's all a bit ambiguous in the end and so on. There's a lot of focus on religion and death (with two stories, The Hero as Werwolf and The Doctor of Death Island, being fairly explicitly death-ist). • Orphans of the Sky, by Robert Heinlein. Fairly standard generation ship story. Juvenile and ham-fisted (there's a scene where the protagonist literally yells out "and yet it moves!"). Mutants and knife fights and all that. 12 year old me would've loved it. • Wittgenstein's Nephew, by Thomas Bernhard. Bernhard documents his friendship with Paul Wittgenstein (not the pianist), a black sheep of the Wittgenstein family who suffered from various mental problems. They're both rejected by Austrian society, and they both reject it. 
Bernhard's attitude toward awards (he views them as a kind of insult and punishment) really sums up his relation to his country. A bitter book, sad and pathetic and miserly. Recommended if you're in the market for a feel-bad memoir. • The Status Game: On Social Position and How We Use It, by Will Storr. There's quite a bit of overlap with The Elephant in the Brain, but Storr's book is obviously more focused on status. Also reminiscent of Goffman's Presentation of Self in Everyday Life. Lots of references to Boehm, Henrich, Kuran, Wrangham, etc. (You're probably better off going straight to the source?) If I had to choose between this and Elephant I'd go for Elephant, but they're fairly complementary so it won't be a waste of your time to read both. Parts of the book are focused on contemporary culture war issues, which felt a bit shallow and tiresome. Overall it's not bad though. • The Biology of Moral Systems, by Richard Alexander. There's a great core here, but I wouldn't recommend it. The basic idea of approaching moral systems from an evopsych perspective is useful. However, huge swathes of text are wasted on dull and low-quality academic bickering, many of the specifics (eg the arguments on the development of religion) are completely off, and the last third of the book is dedicated to a mostly fruitless discussion of nuclear war and mutually assured destruction. # The Best and Worst Books I Read in 2021 # The Best Ibn Battutah, The Travels of Ibn Battutah Also known as A Masterpiece to Those Who Contemplate the Wonders of Cities and the Marvels of Travelling, this is a wonderful travelogue from the 14th century (or, more appropriately, the 8th century of the Hegira). Battutah was born in Morocco; he was not wealthy, but he was well-educated and went into the family business of Islamic law. At age 21, he set out for the pilgrimage to Mecca. He would extend his journey for decades, however, following traders in ships and caravans, relying on generous Muslim institutions and his talent for befriending rulers. He eventually covered virtually the entire Islamic world and beyond, from North Africa to China. Battutah gets into all sorts of adventures (luckily escaping death by disease, shipwreck, pirates, bandits, and so on) and provides us with some incredible ethnographic observations. In Constantinople, he meets the Emperor. In India, he becomes a prominent and wealthy administrator under the rule of an erratic Sultan. In the Maldives, he marries six local women and lives a life of leisure under the shade of the palm trees. Yet his wanderlust compels him to keep moving. Battutah himself as a person, however, remains tantalizingly obscure. Having divorced my wives I set sail. We came to a little island in the archipelago in which there was but one house, occupied by a weaver. He had a wife and family, a few coco-palms and a small boat, with which he used to fish and to cross over to any of the islands he wished to visit. His island contained also banana bushes, but we saw no land birds on it except two crows, which came out to us on our arrival and circled above our vessel. And I swear I envied that man, and wished that the island had been mine, that I might have made it my retreat until the inevitable hour should befall me. Don DeLillo, Libra A semi-fictionalized biography of Lee Harvey Oswald in the Oliver Stone tradition, suffused with that great DeLillo style. 
There's also a kind of meta parallel story of an FBI agent trying to piece together all the evidence, meticulously going through even the tiniest element (much like DeLillo himself). It's quite Pynchonesque with all the criss-crossing conspiracies, the CIA, paranoia, axes of control and influence, a series of coincidences, taking liberty with history...and the ultimately mysterious "fate" that brought Oswald to the assassination. It lacks Pynchon's humor though. "I don't know what they want me to do." "Of course you know." "Tell me where it happens." "Miami." "That means nothing to me." "You've known for weeks." "What happens in Miami?" Ferrie took a while to finish chewing his food. "Think of two parallel lines," he said. "One is the life of Lee H. Oswald. One is the conspiracy to kill the President. What bridges the space between them? What makes a connection inevitable? There is a third line. It comes out of dreams, visions, intuitions, prayers, out of the deepest levels of the self. It's not generated by cause and effect like the other two lines. It's a line that cuts across causality, cuts across time. It has no history that we can recognize or understand. But it forces a connection. It puts a man on the path of his destiny." Christopher de Hamel, Meetings with Remarkable Manuscripts: Twelve Journeys into the Medieval World Twelve chapters, each one dedicated to a different medieval manuscript, from the 6th century Gospels of St. Augustine to the 16th century Spinola Book of Hours. The book is filled with fantastic, gorgeous, high-quality prints from these manuscripts, interspersed with history and commentary in a pleasant conversational style. It's not just about the manuscripts themselves, but also who owned them, their condition, how they've been maintained or altered, where they're housed, and the people taking care of them. Cultural differences in library regulatory practices are a virtually infinite source of comedy. Just lovely all around. Make sure you get the hardcover as the paperback is apparently printed in black & white. Confirmation that he was indeed both scribe and artist is found in the shape of the spaces left for the insertion of initials. Both scribes 2 and 3 (let us exclude 1 for the moment) left simple rectangular blank spaces where large initials were to be painted later, without thought to their shape or composition, and they added guidewords in the margins to indicate what letters were to be supplied. When Hugo came to fill them in, his flamboyantly fluid and multi-tentacled initials fitted uncomfortably into these big draughty square apertures. However, during the stint written by the last scribe from folio 185v onwards, the edges of the script are moulded line by line to fit around the curves and limbs of the painted initials, nestling together snugly like a newly married couple in bed. Text and decoration must have been executed simultaneously by the same person. In short, scribe 4 must be Hugo. Ananyo Bhattacharya, The Man From the Future: The Visionary Life of John von Neumann Short, dense, and with a great balance between accessibility and dumbing down complex subjects. Bhattacharya approaches his subject by focusing on ideas. The first chapter takes care of JvN's early life, and the rest of the book is split up based on the subjects he worked on: mathematics, quantum mechanics, the nuclear bomb, computing, game theory, RAND, and artificial life. 
Large parts of the book (I'd say about a third) are dedicated not to von Neumann but rather the work other people did based on his ideas. The game theory chapter, for example, covers Nash, Schelling, Aumann, etc. in economics, and John Maynard Smith, Price, Hamilton, etc. in evolutionary game theory. Bhattacharya is good at making all these technical subjects accessible without dumbing them down too much. The one failing point is that JvN's personality, personal life, and professional relationships don't get much attention. From 1944, meetings instigated by Norbert Wiener helped to focus von Neumann’s thinking about brains and computers. In gatherings of the short-lived ‘Teleological Society’, and later in the ‘Conferences on Cybernetics’, von Neumann was at the heart of discussions on how the brain or computing machines generate ‘purposive behaviour’. Busy with so many other things, he would whizz in, lecture for an hour or two on the links between information and entropy or circuits for logical reasoning, then whizz off again – leaving the bewildered attendees to discuss the implications of whatever he had said for the rest of the afternoon. Listening to von Neumann talk about the logic of neuro-anatomy, one scientist declared, was like ‘hanging on to the tail of a kite’. Wiener, for his part, had the discomfiting habit of falling asleep during discussions and snoring loudly, only to wake with some pertinent comment demonstrating he had somehow been listening after all. History by way of biography—Vasari tells a tale of rebirth and artistic progress as Europe emerges from the dark ages, rediscovers the ancients, and then strives to surpass them. Tons of interesting observations on competition, collaboration, the spread of technology, and the psychology of (artistic) greatness. More than 180 lives in over 2000 pages, starting with Cimabue in the 13thC and reaching a climax with Michelangelo in the 16th. Somewhat gossipy and often inaccurate, it nonetheless remains our best source of information on the art and artists of Renaissance Italy. Vasari was a fairly successful painter himself, and his personal aquaintance with both the technique and the business of painting gives us an inside view of the craft. Full review. It is clear that Leonardo, through his comprehension of art, began many things and never finished one of them, since it seemed to him that the hand was not able to attain to the perfection of art in carrying out the things which he imagined; for the reason that he conceived in idea difficulties so subtle and so marvellous, that they could never be expressed by the hands, be they ever so excellent. And so many were his caprices, that, philosophizing of natural things, he set himself to seek out the properties of herbs, going on even to observe the motions of the heavens, the path of the moon, and the courses of the sun. Arthur Schopenhauer, Essays and Aphorisms Excerpts from Parerga und Paralipomena. Unexpectedly hilarious; Arthur would've been one hell of a poaster. Surprisingly similar to the pragmatists in many respects. Spans a huge number of topics: ethics, the will, intelligence, animal welfare, religion, suicide, writing, and much more. Thus we see, for example, the Catholic clergy totally convinced of the truth of all the doctrines of its Church, and the Protestant clergy likewise convinced of the truth of all the doctrines of its Church, and both defending the doctrines of their confession with equal zeal. 
Yet this conviction depends entirely on the country in which each was born: to the South German priest the truth of the Catholic dogma is perfectly apparent, but to the North German priest it is that of Protestant dogma which is perfectly apparent. If, then, these convictions, and others like them, rest on objective grounds, these grounds must be climatic; such convictions must be like flowers, the one flourishing only here, the other only there. Thucydides, The History of the Peloponnesian War I'm a Herodotus man through and through, but I can appreciate the Thycydidean perspective as well. Though I'm not entirely sure what that perspective entails: how much of his work is prescriptive and how much of it is descriptive? He's obviously a skeptic when it comes to the supernatural, and there's very little room for morality in his history; is this an artifact of the lack of morality in the way the Athenian went about their affairs, or is this something Thuc projects onto them? In any case, while reading this, one must always keep in mind that the Athenians lost! It's interesting to read an ancient historian write about battles with 60 hoplites and 20 archers, and that kind of accounting accuracy perfectly captures Thuc's personality. "... For Athens alone of her contemporaries is found when tested to be greater than her reputation, and alone gives no occasion to her assailants to blush at the antagonist by whom they have been worsted, or to her subjects to question her title to rule by merit. Rather, the admiration of the present and succeeding ages will be ours, since we have not left our power without witness, but have shown it by mighty proofs; and far from needing a Homer for our eulogist, or other of his craft whose verses might charm for the moment only for the impression which they gave to melt at the touch of fact, we have forced every sea and land to be the highway of our daring, and everywhere, whether for evil or for good, have left imperishable monuments behind us. Such is the Athens for which these men, in the assertion of their resolve not to lose her, nobly fought and died; and well may every one of their survivors be ready to suffer in her cause." J. A. Baker, The Peregrine 10 years of obsessive, monomaniacal peregrine-watching in the East of England distilled to 200 pure, intense, astonishing pages. An incredibly rich dish that you can only eat so much of before needing to take a break. Reflects and contains nature both in its form and content. Somewhat reminiscent of Urne-Buriall in that it starts out in a dry, scientific tone and then reaches stylistic extremes later on. Famously recommended by Werner Herzog (along with Virgil and The Short Happy Life of Francis Macomber), and it is indeed extremely Herzogian. There's no green idealism here, the endless cycle of killing which sustains the peregrine is presented unapologetically. "Beauty is vapour from the pit of death", Baker writes. He hovered, and stayed still, striding on the crumbling columns of air, curved wings jerking and flexing. Five minutes he stayed there, fixed like a barb in the blue flesh of the sky. His body was still and rigid, his head turned from side to side, his tail fanned open and shut, his wings whipped and shuddered like canvas in the lash of the wind. He side-slipped to his left, paused, then glided round and down into what could only be the beginning of a tremendous stoop. There is no mistaking the menace of that first easy drifting fall. 
Smoothly, at an angle of fifty degrees, he descended; not slowly, but controlling his speed; gracefully, beautifully balanced. There was no abrupt change. The angle of his fall became gradually steeper till there was no angle left, but only a perfect arc. He curved over and slowly revolved, as though for delight, glorying in anticipation of the dive to come. His feet opened and gleamed golden, clutching up towards the sun. He rolled over, and they dulled, and turned towards the ground beneath, and closed again. For a thousand feet he fell, and curved, and slowly turned, and tilted upright. Then his speed increased, and he dropped vertically down. He had another thousand feet to fall, but now he fell sheer, shimmering down through dazzling sunlight, heart-shaped, like a heart in flames. He became smaller and darker, diving down from the sun. The partridge in the snow beneath looked up at the black heart dilating down upon him, and heard a hiss of wings rising to a roar. In ten seconds the hawk was down, and the whole splendid fabric, the arched reredos and immense fan-vaulting of his flight, was consumed and lost in the fiery maelstrom of the sky. And for the partridge there was the sun suddenly shut out, the foul flailing blackness spreading wings above, the roar ceasing, the blazing knives driving in, the terrible white face descending, hooked and masked and horned and staring-eyed. And then the back-breaking agony beginning, and snow scattering from scuffling feet, and show filling the bill’s wide silent scream, till the merciful needle of the hawk’s beak notched in the straining neck and jerked the shuddering life away. And for the hawk, resting now on the soft flaccid bulk of his prey, there was the rip and tear of choking feathers, and hot blood dripping from the hook of his beak, and rage dying slowly to a small hard core within. And for the watcher, sheltered for centuries from such hunger and such rage, such agony and such fear, there is the memory of that sabring fall from the sky, and the vicarious joy of the guiltless hunter who kills only through his familiar, and wills him to be fed. # The Worst William Hazlitt, Selected Writings I despise the style of his political writings. Puffed up, aiming to dazzle rather than illuminate. The cheap rhetoric of the ochlagogue. Actively offensive. The non-political writings are much better: they are merely unreadable and sophomoric. Hazlitt's entire aesthetic philosophy just boils down to "art should imitate nature" repeated over and over again, and I can't stand the way he expresses it. It is not denied that the people are best acquainted with their own wants, and most attached to their own interests. But then a question is started, as if the persons asking it were at a great loss for the answer,—Where are we to find the intellect of the people? Why, all the intellect that ever was is theirs. The public opinion expresses not only the collective sense of the whole people, but of all ages and nations, of all those minds that have devoted themselves to the love of truth and the good of mankind,—who have bequeathed their instructions, their hopes, and their example to posterity,—who have thought, spoke, written, acted, and suffered in the name and on the behalf of our common nature. All the greatest poets, sages, heroes, are ours originally, and by right. Carlos Ruiz Zafón, The Shadow of the Wind Just a dull airport novel. 
The coincidences pile on top of eachother as we are treated to interminable exposition dumps from improbable sources that conveniently know everything. Stylistically it tries too hard and achieves nothing. Destiny is usually just around the corner. Like a thief, a hooker, or a lottery vendor: its three most common personifications. But what destiny does not do is home visits. You have to go for it. Ada Palmer, Too Like the Lightning Love Palmer's blog but this book just wasn't for me. Even though I read plenty of older books, I found the affected faux-18thC style absolutely grating. The plot mostly seems to be based on the Star Wars prequels, with endless scenes of characters talking about the taxation of trade routes or some other similarly boring nonsense. And there's a magical boy thrown in there for good measure, as well. I could ask any contemporary here, ‘Are you a majority?’ and I know what he or she would answer: Of course not, Mycroft. I have a Hive, a race, a second language, a vocation and an avocation, hobbies of my own; add up my many strats and you will soon reduce me to a minority of one, and hence my happiness. I am unique, and proud of my uniqueness, and prouder still that, by being no majority, I ensure eternal peace. You lie, reader. There is one majority still entrenched in our commingled world, a great ‘us’ against a smaller ‘them.’ You will see it in time. I shall give only one hint—the deadliest majority is not something most of my contemporaries are, reader, it is something they are not. # Aspects of the Seeker In Averroës's Search, Borges tells the story of the Islamic philosopher Averroës trying, and failing, to understand Aristotle's writings on theater. Borges sums it up in the afterword: In the preceding tale, I have tried to narrate the process of failure, the process of defeat. I thought first of that archbishop of Canterbury who set himself the task of proving that God exists; then I thought of the alchemists who sought the philosopher’s stone; then, of he vain trisectors of the angle and squares of the circle. Then I reflected that a more poetic case than these would be a man who sets himself a goal that is not forbidden to other men, but is forbidden to him. I recalled Averroës, who, bounded within the circle of Islam, could never know the meaning of the words tragedy and comedy. History and literature offer many cases of ironically failed quests for knowledge. Some phenomena disappear immediately once someone describes them. Douglas Adams wrote of a theory "which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear". The modern world offers many such anti-inductive cases, above all in the movements of the stock market: successful trading strategies tend to stop working after they become known. On a civilizational scale, Malthusianism became irrelevant right at the time someone was able to articulate the idea, and it seems that the moment we are able to improve ourselves through genetic engineering, we will be wiped out by our artificial creations. A second type of ill-fated seeker is one who finds what he is looking for, but his goal is also a punishment. 
William Beckford, categorically rejecting Ulysses' actions at the land of the Sirens (perhaps inspired by his own life, and perhaps commenting on all attempts to comprehend the universe) created the apostate Caliph Vathek whose obsessive quest for knowledge results in his damnation, and for whom Hell is both the object of desire and the punishment for that desire. There are those who argue that the libertine Beckford only adopted this biblical attitude against the Faustian spirit as an ironic orientalist façade, but the Caliph resists all attempts at interpretation. Some seekers reach their goal, only to have it slip out of their hands. Scientists will occasionally chance on the right idea but lack the ability to prove it: Aristarchus of Samos was doomed by the apparent size of the stars and the lack of parallax. The Royal Navy discovered that lemons prevent scurvy, and then through terrible epistemic luck managed to lose that knowledge over the course of the 19th century: lemons were replaced by limes low in vitamin C, but nobody noticed because the ships were faster. The problem only reappeared when polar explorers started suffering from scurvy despite bringing lime juice with them—and the answer was only discovered by the miraculously good luck of experimenting on guinea pigs, one of very few animals that don't produce vitamin C on their own. Finally the most ironic case of them all, that of the Dalmatian archbishop and heretic Marco Antonio de Dominis: a seeker who is able to find the answer, but is condemned to believe it is false. De Dominis, a contemporary of Kepler (who wrote in favor of the lunar theory of tides) and Galileo (who mocked it), was also an amateur astronomer and wrote a book on the tides titled Euripus. The archbishop begins by presenting both empirical and theoretical arguments in favor of the thesis that the earth is a sphere. He then describes the luni-solar theory of tides: he (correctly) writes that tides are caused by the combined gravitational action of the sun and the moon, (correctly) predicts that high tide occurs simultaneously at antipodal points, and (correctly) shows that the cycle of spring and neap tides can be explained by the combined action of the sun and moon. He also (correctly) deduces that the diurnal inequality between tides will be greatest when the moon is above the tropic of Cancer or Capricorn. Finally, de Dominis explains (incorrectly) that since the two daily tides are always equal to each other, the theory must be false. The heretical archbishop died behind the bars of the Castel Sant'Angelo before his book could be published. # Links & What I've Been Reading Q4 2021 ### Metascience 1. Investigating the replicability of preclinical cancer biology: "50 experiments from 23 papers were repeated, generating data about the replicability of a total of 158 effects [...] for positive effects, the median effect size in the replications was 85% smaller than the median effect size in the original experiments" 2. A catastrophic failure of peer review in obstetrics and gynaecology: "I estimate that across these 46 articles, 346 (64%) of the 542 parametric tests (unpaired t tests, or, occasionally, ANOVA) and 151 (61%) of the 247 contingency table test (Pearson's Χ² or Fisher's exact test) that I was able to check were incorrectly reported." 3. The Business of Extracting Knowledge from Academic Publications: "Close to nothing of what makes science actually work is published as text on the web." 4. 
A large replication project in marketing, with fairly catastrophic results. Amusingly the abstract doesn't mention the rate of successful replication. 5. Increasing Politicization and Homogeneity in Scientific Funding: An Analysis of NSF Grants, 1990-2020. The methodology is somewhat questionable, but insteresting nonetheless. ### Covid 6. Scott Alexander on the Ivermectin literature and the trouble with trying to wade through a bunch of questionable papers. Alexandros Marinos responds. 7. Zvi's latest. You are probably going to get Omicron, if you haven’t had it already. The level of precaution necessary to change this assessment is very high, and you probably don’t want to pay that price. 8. ADS on the Zvi-Holden bet and taking ideas seriously. Making a blockchain game might genuinely be the best use of Zvi’s time, and he might be acting both rationality and ethically in choosing to pursue it. And so this situation is Good, but only in a very limited and local sense. The tragedy isn’t Zvi’s decision, it’s that a scenario even exists where this is the decision he has to make. 9. Omicron spreading faster than delta because of immune evasion? SARS-CoV-2 Omicron VOC Transmission in Danish Households. Plus twitter thread. ### Forecasting 10. Forecasting in the Field: academics and non-experts try to predict the effects of development interventions. the average correlation between predicted and observed effects is 0.75. Recipient types are less accurate than academics on average, but are at least as accurate for interventions and outcomes that are likely to be more familiar to them. The mean forecast of each group outperforms more than 75% of the comprising individuals, and averaging just five forecasts substantially reduces error, indicating strong “wisdom-of-crowds” effects. Three measures of academic expertise (rank, citations, and conducting research in East Africa) and two measures of confidence do not correlate with accuracy. Among recipient-types, high-accuracy “superforecasters” can be identified using observables. Small groups of these superforecasters are as accurate as academic respondents. ### Economic History 11. The United Fruit Company? Good, Actually. Using administrative census data with census-block geo-references from 1973 to 2011, we implement a geographic regression discontinuity design that exploits a land assignment that is orthogonal to our outcomes of interest. We find that the firm had a positive and persistent effect on living standards. Company documents explain that a key concern at the time was to attract and maintain a sizable workforce, which induced the firm to invest heavily in local amenities that can account for our result. ### Book Reviews 12. Reviews of Moby Dick from 1851. "This is an odd book, professing to be a novel; wantonly eccentric; outrageously bombastic; in places charmingly and vividly descriptive." I love it when modern editions of old books include their contemporary reviews, unfortunately it's not done very often. ### Crypto 14. Bloomberg report on Tether, including the story of how a French screenwriter ended up owning a Bahamian bank. 15. Vitalik Buterin on Crypto Cities. This monster was watching Ethereum for an obscure mistake deep in the process of creating a transaction: the reuse of a number while signing a transaction. I went searching for this creature, laid bait, saw it in the wild, and found unexplained tracks. To understand how this bot works, we need to begin by reviewing ECDSA and digital signatures. ### The Rest 17. 
Some answers to my questions about Borges, Browne, and Quevedo: On Borges and Quevedo. "The (sad) irony in Tlon’s ending is, therefore, not in a contrast Quevedo vs Browne, then, but in the contrast (Borges + Quevedo + Browne) vs Tlon. Or, maybe, grecolatin tradition versus modernity. With a tinge of sad resignation for the slow but unstoppable victory of the second over the first." 19. SMTM wrap up the Chemical Hunger series on the causes of obesity after 20 posts. 20. On the NIH and the challenges of funding alcohol consumption RCTs. The big alcohol study that didn't happen: My primal scream of rage. 21. RCT of health insurance in India finds few positive effects: Effect of Health Insurance in India: A Randomized Controlled Trial. 23. An interesting ACX comment on reversals in artistic "progress". it's a pattern that has repeated throughout history and around the world, one of naturalist art executed with great skill being deliberately replaced with highly abstract art not requiring as much skill. The cave paintings of Chauvet Cave in France ca 30,000 BP (before present) are more natural and technically much more sophisticated than any cave or rock paintings found after 20,000 BP (some of which are quite abstract and stylized). Reminds me of this paper on bursts of technological development 60-80kya that lasted for a few thousand years and then disappeared. Related, a great new article on the Antikythera mechanism. 24. The Browser interview with QNTM. 26. Razib Khan: Out of Africa's midlife crisis two San from different groups both living in Namibia’s Northern Kalahari desert, and speaking click languages from the same family, are more genetically distinct from one another, by a solid 20%, than a person from Stockholm is from a person from Shanghai. 27. Don't take psychedelics. "Results revealed significant shifts away from ‘physicalist’ or ‘materialist’ views, and towards panpsychism and fatalism, post use." ### Audio-Visual 29. Interface | Part II, cool animation project. 30. A project that made 999 forgeries of a Warhol drawing, then randomly mixed in the original, and sold them. 32. And here's a cool remix of Hugh Masekela's Stimela. ## What I've Been Reading ### Non-Fiction • The Man from the Future: The Visionary Life of John von Neumann by Ananyo Bhattacharya. Bhattacharya approaches his subject by focusing on ideas. The first chapter takes care of JvN's early life, and the rest of the book is split up based on the subjects he worked on: mathematics, quantum mechanics, the nuclear bomb, computing, game theory, RAND, and artificial life. Large parts of the book (I'd say about a third) are dedicated not to von Neumann but rather the work other people did based on his ideas. The game theory chapter, for example, covers Nash, Schelling, Aumann, etc. in economics, and John Maynard Smith, Price, Hamilton, etc. in evolutionary game theory. Bhattacharya is good at making all these technical subjects accessible without dumbing them down too much. JvN's personality, personal life, professional relationships, etc. on the other hand are given scant attention. Overall it felt a bit too short. In less than 300 pages we get such a wide array of ideas, and the story of how they influenced so many people, that it often feels like we're just skimming the surface in a speedboat. I'd like to take a deeper, more ponderous ride in a submarine some day. • Meetings with Remarkable Manuscripts by Christopher de Hamel. Fantastically gorgeous book, filled with high-quality prints of medieval manuscripts. 
Pleasant conversational style. Just lovely all around. Not just about the manuscripts themselves, but also who owned them, their condition, where they're housed, the librarians taking care of them, etc. • The Rings of Saturn by W. G. Sebald. A book of digressions. The frame is a walking tour of England, and on it are bolted various musings on Sir Thomas Browne, Joseph Konrad, silk manufacture, the Taiping rebellion, and so on. The subjects flow into each other so you don't know where one digression begins and the other ends. However, Sebald kind of undersells how interesting his subjects are; comparing his notes on FitzGerald to the famous Borges essay, for example, makes me wonder how Sebald managed to turn such a fascinating subject into such a dull essay. • Conquistador: Hernán Cortés, King Montezuma, and the Last Stand of the Aztecs by Buddy Levy. I didn't love the book (it felt a bit sloppy, and the style isn't great), but Cortes is an incredible character. The determination, the ingenuity, the absolute ruthlesness. When he murders his wife at the end of the book, all you can think is "well of course he did". And self-aware too: "I and my companions suffer from a disease of the heart that can be cured only with gold"! Perhaps it is the contrast against the Aztecs that, in a way, softens his image? Going to try Prescott's History of the Conquest of Mexico next. • Over the Edge of the World: Magellan's Terrifying Circumnavigation of the Globe by Laurence Bergreen. Solid narrative pop history. Feels a bit rushed after the point of Magellan's death. Exciting, adventurous stuff as you'd expect from the age of exploration. • A Man on the Moon: The Voyages of the Apollo Astronauts by Andrew Chaikin. Covers the entire thing plus a ton of backstory, very thorough (within its scope). Focused on the astronauts, and much of it is the preoduct of interviews with those astronauts, which is kind of obvious at many points as you're only getting one person's perspective on certain events. It would have been better with a broader, more objective view, in my opinion. The latter parts (after the first moon landing) include a surprising amount of geology! I read three books on the early space program this year and none of them was completely satisfying, I'm still trying to find the Richard Rhodes of Apollo... # How I Made$10k Predicting Which Studies Will Replicate
Starting in August 2019 I took part in the Replication Markets project, a part of DARPA's SCORE program whose goal is to predict which social science papers will successfully replicate. I have previously written about my views on the replication crisis after reading 2500+ papers; in this post I will explain the details of forecasting, trading, and optimizing my strategy within the rules of the game.
# The Setup
3000 papers were split up into 10 rounds of ~300 papers each. Every round began with one week of surveys, followed by two weeks of market trading, and then a one week break. The studies were sourced from all social science disciplines (economics, psychology, sociology, management, etc.) and were published between 2009 and 2018 (in other words, most of the sample came from the post-replication crisis era).
Only a subset of the papers will be replicated: ~100 papers were selected for a full replication, and another ~150 for a "data replication" in which the same methodology is applied to a different (but pre-existing) dataset. Out of the target 250 replications, only about 100 were completed by the time the prizes were paid out.
## Surveys
The surveys included a link to the paper, a brief summary of the claim selected for replication, the methodology, and a few statistical values (sample size, effect size, test statistic values, p-value). We then had to answer three questions:
1. What is the probability of the paper replicating?
2. What proportion of other forecasters do you think will answer >50% to the first question?
3. How plausible is the claim in general?
# Early Steps - A Simple Model
I didn't take the first round very seriously, and I had a horrible flu during the second round, so I only really started playing in round 3. I remembered Tetlock writing that "it is impossible to find any domain in which humans clearly outperformed crude extrapolation algorithms, less still sophisticated statistical ones", so I decided to start with a statistical model to help me out.
This felt like a perfect occasion for a centaur approach (combining human judgment with a model), as there was plenty of quantitative data, but also lots of qualitative factors that are hard to model. For example, some papers with high p-values were nevertheless obviously going to replicate, due to how plausible the hypothesis was a priori.
Luckily someone had already collected the relevant data and built a model. Altmejd et al. (2019) combine results from four different replication projects covering 131 replications (which they helpfully posted on OSF). Here are the features they used ranked by importance:
Their approach was fairly complex, however, and I wanted something simpler. On top of that I wanted to limit the number of variables I would have to collect for every paper, as I had to do 300 of them in a week—any factors that would be cumbersome to look up (eg the job title of each author) were discarded. I also transformed a bunch of the variables, for example replacing raw citation counts with log citations per year.
I ended up going with a logistic ridge regression (shrinkage tends to help with out-of-sample predictions). The Altmejd sample was limited in terms of the fields covered (they only had social/cognitive/econ), so I just pulled some parameter values out of my ass for the other fields—in retrospect they were not very good guesses.
| Parameter | Value |
| --- | --- |
| intercept | 0.40 |
| log # of pages | -0.26 |
| p value | -25.07 |
| log # of authors | -0.67 |
| % male authors | 0.90 |
| dummy for interaction effects | -0.77 |
| log citations per year | 0.37 |
| discipline: economics | 0.27 |
| discipline: social psychology | -0.77 |
| discipline: education | -0.40 |
| discipline: political science | 0.10 |
| discipline: sociology | -0.40 |
| discipline: marketing | 0.10 |
| discipline: orgbeh | 0.10 |
| discipline: criminology | -0.20 |
| discipline: other psychology | -0.20 |
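For concreteness, the prediction step is just a standard logistic link over those (transformed) features. Here is a minimal sketch of the calculation in Python; the variable scalings and the example paper are my own guesses for illustration, not the actual spreadsheet formulas.

```python
import math

# Coefficients from the table above (logistic ridge regression)
INTERCEPT = 0.40
COEFS = {
    "log_pages": -0.26,
    "p_value": -25.07,
    "log_authors": -0.67,
    "pct_male_authors": 0.90,
    "interaction_effect": -0.77,     # 1 if the selected claim is an interaction
    "log_citations_per_year": 0.37,
}
DISCIPLINE_COEFS = {
    "economics": 0.27, "social psychology": -0.77, "education": -0.40,
    "political science": 0.10, "sociology": -0.40, "marketing": 0.10,
    "orgbeh": 0.10, "criminology": -0.20, "other psychology": -0.20,
}

def predict_replication(features: dict, discipline: str) -> float:
    """Predicted replication probability via the logistic (sigmoid) link."""
    z = INTERCEPT + DISCIPLINE_COEFS.get(discipline, 0.0)
    z += sum(COEFS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical paper: p = 0.03, 12 pages, 3 authors, 60% male authors,
# no interaction effect, ~5 citations per year, social psychology
example = {
    "log_pages": math.log(12),
    "p_value": 0.03,
    "log_authors": math.log(3),
    "pct_male_authors": 0.6,
    "interaction_effect": 0,
    "log_citations_per_year": math.log(5),
}
print(round(predict_replication(example, "social psychology"), 2))
```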
This model was then implemented in a spreadsheet, so all I had to do was enter the data, and the prediction popped up:
While my model had significant coefficients on # of authors, ratio male, and # of pages, these variables were not predictive of market prices in RM. Even the relation of citations to market prices was very weak. I think the market simply ignored any data it was not given directly, even if it was important. This gave me a bit of an edge, but also made evaluating the performance of the model more difficult as the market was systematically wrong in some ways.
Collecting the additional data needed for the model was fairly cumbersome: completing the surveys took ~140 seconds per paper when I was just doing it in my head, and ~210 seconds with the extra work of data entry. It also made the process significantly more boring.
# Predictions
I will give a quick overview of the forecasting approach here; a full analysis will come in a future post, including a great new dataset I'm preparing that covers the methodology of replicated papers.
At the broadest level it comes down to: the prior, the probability of a false negative, and the probability of a false positive. One must consider these factors for both the original and the replication.
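Here is a back-of-the-envelope version of that decomposition (my own simplification with made-up numbers, not an official RM formula): treat the original result as a noisy test with some power and false-positive rate, update the prior on the effect being real, and then apply the replication's power and false-positive rate.

```python
def replication_prob(prior, power_orig, alpha_orig, power_rep, alpha_rep):
    # P(effect is real | the original study found a significant result), via Bayes
    p_true = (prior * power_orig) / (prior * power_orig + (1 - prior) * alpha_orig)
    # P(the replication comes out significant) = true positives + false positives
    return p_true * power_rep + (1 - p_true) * alpha_rep

# A 30% prior, a plausibly underpowered original (50% power, alpha = 0.05),
# and a well-powered replication (90% power) give a forecast of roughly 74%.
print(replication_prob(0.30, 0.50, 0.05, 0.90, 0.05))
```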
What does that look like in practice? I started by reading the summary of the study on the RM website (which included the abstract, a description of the selected claim, sample size, p-value, and effect size). After that I skimmed the paper itself. If I didn't understand the methodology I checked the methods and/or conclusions, but the vast majority of papers were just straight regressions, ANOVAs, or SEMs. The most important information was almost always in the table with the main statistical results.
The factors I took into account, in rough order of importance:
• p-value. Try to find the actual p-value, they are often not reported. Many papers will just give stars for <.05 and <.01, but sometimes <.01 means 0.0000001! There's a shocking number of papers that only report coefficients and asterisks—no SEs, no CIs, no t-stats.
• Power. Ideally you'll do a proper power analysis, but I just eyeballed it (a quick back-of-the-envelope approximation is sketched after this list).
• Plausibility. This is the most subjective part of the judgment and it can make an enormous difference. Some broad guidelines:
• People respond to incentives.
• Good things tend to be correlated with good things and negatively correlated with bad things.
• Subtle interventions do not have huge effects.
• Pre-registration. Huge plus. Ideally you want to check if the plan was actually followed.
• Interaction effect. They tend to be especially underpowered.
• Other research on the same/similar questions, tests, scales, methodologies—this can be difficult for non-specialists, but the track record of a theory or methodology is important. Beware publication bias.
• Methodology - RCT/RDD/DID good. IV depends, many are crap. Various natural-/quasi-experiments: some good, some bad (often hard to replicate). Lab experiments, neutral. Approaches that don't deal with causal identification depend heavily on prior plausibility.
• Robustness checks: how does the claim hold up across specifications, samples, experiments, etc.
• Signs of a fishing expedition/researcher degrees of freedom. If you see a gazillion potential outcome variables and the authors picked the one that happened to have p<0.05, that's what we in the business call a "red flag". Look out for stuff like ad hoc quadratic terms.
• Suspiciously transformed variables. Continuous variables put into arbitrary bins are a classic p-hacking technique.
• General propensity for error/inconsistency in measurements. Fluffy variables or experiments involving wrangling 9 month old babies, for example.
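On the power point above: a normal approximation is usually enough for eyeballing whether a replication at, say, 75% of the original effect size has a fighting chance. This is a generic sketch rather than anything I ran on specific RM claims; the 75% figure comes from the replication protocol described in the footnotes.

```python
from scipy.stats import norm

def two_sample_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample t-test:
    Phi(d * sqrt(n/2) - z_{1 - alpha/2}), via the normal approximation."""
    z_crit = norm.ppf(1 - alpha / 2)
    return float(norm.cdf(d * (n_per_group / 2) ** 0.5 - z_crit))

# An original effect of d = 0.4 with 50 subjects per group...
print(two_sample_power(0.4, 50))          # ...is itself only modestly powered,
print(two_sample_power(0.75 * 0.4, 50))   # and much worse at 75% of the effect size.
```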
Things that don't matter for replication but matter very much in the real world:
• Causal identification! The plausibility of a paper's causal identification strategy is generally orthogonal to its chances of replicating.
• Generalizability. Lab experiments are replicated in other labs.
Some papers were completely outside my understanding, and I didn't spend any time trying to understand them. Jargon-heavy cognitive science papers often fell into this category. I just gave a forecast close to the default and marked them as "low confidence" in my notes, then avoided trading them during the market round. On the other hand, sometimes I got the feeling that the jargon was just there to cover up bullshit (leadership studies, I'm looking at you) in which case I docked points for stuff I didn't understand. The epistemological problem of how to determine which jargon is legit and which is not, is left as an exercise to the reader.
## For Example
The data from Replication Markets are still embargoed, so I can't give you any real examples. Instead, I have selected a couple of papers that were not part of the project but are similar enough.
### Ex. 1: Criminology
My first example is a criminology paper which purports to investigate the effect of parenting styles on criminal offending. Despite using causal language throughout, the paper has no causal identification strategy whatsoever. If criminologists had better GRE scores this nonsense would never have been published. The most relevant bits of the abstract:
The present study used path analyses and prospective, longitudinal data from a sample of 318 African American men to examine the effects of eight parenting styles on adult crime. Furthermore, we investigated the extent to which significant parenting effects are mediated by criminogenic schemas, negative emotions, peer affiliations, adult transitions, and involvement with the criminal justice system. Consonant with the study hypotheses, the results indicated that [...] parenting styles low on demandingness but high on responsiveness or corporal punishment were associated with a robust increase in risk for adult crime.
The selected claim is the effect of abusive parenting (the "abusive" parenting style involves "high corporal punishment" but low "demandingness" and "responsiveness") on offending; I have highlighted the outcome in the main regression table below. While the asterisks only say p<.01, the text below indicates that the p-value is actually <.001.
Make your own guess about the probability of replication and then scroll down to mine below.
I'd give this claim 78%. The results are obviously confounded, but they're confounded in a way that is fairly intuitive, and we would expect the replication to be confounded in the exact same way. Abusive parents are clearly more likely to have kids who become criminals. Although they don't give us the exact t-stat, the p-value is very low. On the negative side the sample size (318 people spread over 8 different parenting styles) isn't that big, I'm a bit worried about variance in the classification of parenting styles, and there's a chance that the (non-causal) relation between abusive parenting and offending could be lost in the controls.
This is a classic example of "just because it replicates doesn't mean it's good", and also a prime example of why the entire field of criminology should be scrapped.
### Ex. 2: Environmental Psychology
My second example is an "environmental psychology" paper about collective guilt and how people act in response to global warming.
The present research examines whether collective guilt for an ingroup’s collective greenhouse gas emissions mediates the effects of beliefs about the causes and effects of global warming on willingness to engage in mitigation behavior.
N=72 people responded to a survey after a manipulation, on a) the causes and b) the importance of the effects of climate change. The selected claim is that "participants in the human cause-minor effect condition reported more collective guilt than did participants in the other three conditions (b* = .50, p <.05)". Again, make your own guess before scrolling down.
I'd go with 23% on this one. Large p-value, interaction effect, relatively small sample, and a result that does not seem all that plausible a priori. The lack of significance on the Cause/Effect parameters alone is also suspicious, as is the lack of significance on mitigation intentions. Lots of opportunities to find some significant effect here!
The worst part of Replication Markets was the user interface: it did not offer any way to keep track of one's survey answers, so in order to effectively navigate the market rounds I had to manually keep track of all the predictions. There was also no way to track changes in the value of one's shares, so again that had to be done manually in order to exit successful trades and find new opportunities. The initial solution was giant spreadsheets:
Since the initial prices were set depending on the claim's p-value, I knew ahead of time which claims would be most mispriced at the start of trading (and that's where the greatest opportunities were). So a second spreadsheet was used to track the best initial trades. The final column tracks how those trades worked out by the end of the market round; as you can see not all of them were successful (including some significant "overshoots"), but in general I had a good hit rate. As you can see, there were far more "longs" than "shorts" at the start: these were mostly results that were highly plausible a priori but had failed to get a p-value below 0.001.
["Final" is my estimate, "default" is the starting price, "mkt" is the final market price]
Finally, a third spreadsheet was used to track live trading during the market rounds. There was no clean way of getting the prices from the RM website to my sheet, so I copy/pasted everything, parsed it, and then inserted the values into the sheet. I usually did that a few times per day (more often at the start, since that was where most trading activity was concentrated). The claims were then ranked by the difference between my own estimate and the market. My current share positions were listed next to them so I knew what I needed to trade. The "Change" column listed the change in price since the last update, so I could easily spot big changes (which usually meant new trading opportunities).
["Live" is the current market price, "My" is my estimate, "Shares" is the current position]
# Forget the Model!
After the third round I took a look at the data to evaluate the model and there were two main problems:
• My own errors (prediction minus market price) were very similar to the errors of the model:
• The model failed badly at high-probability claims, and failed to improve overall performance. Here's the root mean square error vs market prices, grouped by p-value:
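In code, that comparison boils down to an RMSE against market prices, grouped by p-value bucket. A generic sketch (the column names are hypothetical, and this is not my original evaluation script):

```python
import numpy as np
import pandas as pd

def rmse_by_pvalue(df: pd.DataFrame) -> pd.DataFrame:
    """df has one row per claim with columns: my_forecast, model_forecast,
    market_price, p_value (all hypothetical names)."""
    def rmse(err: pd.Series) -> float:
        return float(np.sqrt(np.mean(np.square(err))))

    buckets = pd.cut(df["p_value"], [0, 0.001, 0.01, 0.05, 1.0])
    return df.groupby(buckets).apply(
        lambda g: pd.Series({
            "me_vs_market": rmse(g["my_forecast"] - g["market_price"]),
            "model_vs_market": rmse(g["model_forecast"] - g["market_price"]),
        })
    )
```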
Of course what the model was actually trying to predict was replication, not the market price. But market prices were the only guide I had to go by (we didn't even get feedback on survey performance), and I believed the market was right and the model was wrong when it came to low-p-value claims.
What would happen if everyone tried to optimize for predicting market prices? I imagine we could have gotten into weird feedback loops, causing serious disconnects between market prices and actual replication probability. In practice I don't think that was an issue though.
If I had kept going with the model, I had some improvements in mind:
• Add some sort of non-linear p-value term (or go with z-scores instead).
• Quantify my subjective judgment of "plausibility" and add it as another variable in the model.
• Use the round 3 market data of 300 papers (possibly with extremized prices) to estimate a new model, which would more than triple my N from the original 131 papers. But I wasn't sure how to combine categorical data from the previous replications and probabilities from the prices in a single model.
At this point it didn't seem worth the effort, especially given all the extra data collection work involved. So, from round 4 onward I abandoned the model completely and relied only on my own guesses.
# Playing the Game
Two basic facts dictated the trading strategy:
1. Only a small % of claims will actually be replicated and pay out.
2. Most claims are approximately correctly priced.
It follows that smart traders make many trades, move the price by a small amount (the larger your trade the larger the price impact), and have a diversified portfolio. The inverse of this rule can be used to identify bad traders: anyone moving the price by a huge amount and concentrating their portfolio in a small number of bets is almost certainly a bad trader, and one can profitably fade their trades.
Another source of profitable trades was the start of the round. Many claims were highly mispriced, but making a profit depended on getting to them first, which was not always easy since everyone more or less wanted to make the same trades. Beyond that, I focused on simply allocating most of my points toward the most-mispriced claims.
I split the trading rounds into two phases:
1. Trading based on the expected price movement.
2. At the very end of the round, trading based on my actual estimate of replication probability.
Usually these two aspects would coincide, but there were certain types of claims that I believed were systematically mispriced by other market participants. Trading those in the hope of making profits during the market round didn't work out, so I only allocated points toward them at the end.
Another factor to take into consideration was that not all claims were equally likely to be selected for replication. In some cases it was pretty obvious that a paper would be difficult or impossible to replicate directly. I was happy to trade them, but by the end of the round I excluded them from the portfolio.
Buying the most mispriced items also means you're stuck with a somewhat contrarian portfolio, which can be dangerous if you're wrong. Given the flat payout structure of the market, following the herd was not necessarily a bad idea. Sometimes if a claim traded strongly against my own forecast, I would lower the weight assigned to it or even avoid it completely. Suppose you think a study has a 30% chance of replicating, and a liquid market insists it has a 70% chance—how do you revise your forecast?
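One standard way to handle that (my own habit, not anything the market rules prescribe) is to pool the two numbers in log-odds space, with the weight reflecting how much you trust your own read relative to the crowd.

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def pool(my_p: float, market_p: float, self_weight: float = 0.5) -> float:
    """Weighted log-odds pool of my forecast and the market price."""
    z = self_weight * logit(my_p) + (1 - self_weight) * logit(market_p)
    return 1 / (1 + math.exp(-z))

# 30% me vs 70% market, trusting each side equally, lands back at 50%
print(round(pool(0.30, 0.70, self_weight=0.5), 2))
```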
# Reacting to Feedback
After every round I generated a bunch of graphs that were designed to help me understand the market and improve my own forecasts. This was complicated by the fact that there were no replication results—all I had to go by were the market prices, and they could be misleading.
Among other things, I compared means, standard deviations, and quartiles of my own predictions vs the market; looked at my means and RMSE grouped by p-value and discipline; plotted the distribution of forecasts, and error vs market price; etc.
One standard pattern of prediction markets is that extremizing the market prediction makes it better. Simplistically, you can think of the market price being determined by informed traders and uninformed/noise traders. The latter pull the price toward the middle, so the best prediction is going to be (on average) more extreme than the market's. This is made worse in the case of Replication Markets because of the LMSR algorithm which makes shares much more expensive the closer you get to 0 or 100%. So you can often improve on things by just extremizing the market forecast, and I always checked to see if my predictions were on the extremizing side vs the market.
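For reference, under Hanson's LMSR the instantaneous price of a "yes" share is a softmax of the outstanding share quantities, and the cost function below shows why pushing a price toward 0% or 100% gets expensive fast. The liquidity parameter b and the two-outcome setup here are illustrative assumptions, not RM's actual configuration.

```python
import math

def lmsr_cost(q_yes: float, q_no: float, b: float) -> float:
    """Hanson's LMSR cost function; b controls liquidity."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes: float, q_no: float, b: float) -> float:
    """Instantaneous 'yes' price (equals the implied probability)."""
    return math.exp(q_yes / b) / (math.exp(q_yes / b) + math.exp(q_no / b))

def cost_to_move(p_from: float, p_to: float, b: float = 100.0) -> float:
    """Points a trader pays to push the market price from p_from to p_to."""
    # Invert the price formula: with q_no fixed at 0, q_yes = b * logit(p)
    q1 = b * math.log(p_from / (1 - p_from))
    q2 = b * math.log(p_to / (1 - p_to))
    return lmsr_cost(q2, 0.0, b) - lmsr_cost(q1, 0.0, b)

# Moving the price by five points near 50% is far cheaper than near 90%
print(cost_to_move(0.50, 0.55))
print(cost_to_move(0.90, 0.95))
```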
Here you can see the density plots of my own vs the market forecasts, split up by p-value category. (The vertical line is the default starting price for each group.)
And here's the same data in scatterplot form:
My predictions vs the market. Difference between my forecasts and the market, by discipline. The market was more confident in results from economics, at least in round 3.
Over time my own predictions converged with the market. I'm not entirely sure how to interpret this trend. Perhaps I was influenced by the market and subtly changed my predictions based on what I saw. Did that make me more accurate or less? It's unclear, and based on the limited number of actual replication results it's impossible to tell. Another possibility is that the changing composition of forecasters over time made the market more similar to me?
I think a lot of my success was due to putting in more effort than others were willing to. And by "putting in effort" I mean automating it so I don't have to put in any effort. In round 6 the trading API was introduced; at that point I dropped the spreadsheets and quickly threw together a desktop application (using C# & WPF) that utilized the API and included both automated and manual trading. Automating things also made more frequent data updates possible: instead of copy-pasting a giant webpage a few times a day, now everything updated automatically once every 15 minutes.
The main area on the left is the current state of the market and my portfolio, with papers sorted by how mispriced they are. Mkt is the current market price, My is my forecast, Position is the number of shares owned, Liq. Value is the number of points I could get by exiting this position, WF is a weight factor for the portfolio optimization, and Hist shows the price history of that claim.
On the right we have pending orders, a list of the latest orders executed on the market, plus logging on the bottom.
I used a simple weighting algorithm with a few heuristics sprinkled on top. Below you can see the settings for the weighting, plus a graph of the portfolio weights allocated by claim (the most-mispriced claims are on the left).
To start with I simply generated weights proportional to the square of the difference between the current market price and my target price (Exponent); a sketch of the full weighting pass follows the list below. Then,
• multiplied that by a per-study weight factor (WF in the main screen),
• multiplied that by ExtremeValueMultiplier for claims with extreme prices (<8% or >96%),
• removed any claims with a difference smaller than the CutOff,
• removed any claims with weight below MinThreshold,
• limited the maximum weight to MaxPosition,
• and disallowed any trading for claims that were already close to their target weight (NoWeightChangeBandwidth).
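The whole pass looked roughly like this, sketched in Python rather than the original C#; the parameter names follow the settings screen above, but the exact formulas and default values are my reconstruction, not the real code.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    market_price: float    # current market price, 0..1
    my_forecast: float     # my estimated replication probability, 0..1
    weight_factor: float   # per-study WF from the main screen
    current_weight: float  # fraction of the portfolio currently in this claim

@dataclass
class Settings:
    exponent: float = 2.0                  # weight ~ mispricing ** exponent
    extreme_value_multiplier: float = 0.5  # dampen claims priced <8% or >96%
    cutoff: float = 0.05                   # ignore small mispricings
    min_threshold: float = 0.01            # drop tiny weights
    max_position: float = 0.10             # cap on any single claim
    no_change_bandwidth: float = 0.02      # don't trade if already near target

def target_weights(claims: list[Claim], s: Settings) -> dict[str, float]:
    raw = {}
    for c in claims:
        diff = abs(c.my_forecast - c.market_price)
        if diff < s.cutoff:
            continue
        w = (diff ** s.exponent) * c.weight_factor
        if c.market_price < 0.08 or c.market_price > 0.96:
            w *= s.extreme_value_multiplier
        if w >= s.min_threshold:
            raw[c.claim_id] = w
    # Normalize to portfolio fractions and cap each position
    total = sum(raw.values()) or 1.0
    targets = {cid: min(w / total, s.max_position) for cid, w in raw.items()}
    # Leave claims alone if they're already within the no-trade bandwidth
    for c in claims:
        t = targets.get(c.claim_id, 0.0)
        if abs(t - c.current_weight) < s.no_change_bandwidth:
            targets[c.claim_id] = c.current_weight
    return targets
```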
There was also another factor to take into consideration: the RM organizers ran some bots of their own. One simply traded randomly, while the other systematically moved prices back toward their default values. This created a predictable price pressure which had to be taken into account and potentially exploited: the DefDiffPenalizationFactor lowered the weight of claims that were expected to have adverse movements due to the bots.
Fading large price movements was automated, and I kept a certain amount of free points available so that I could take advantage of them quickly. Finally, turning the weighting algorithm into trades was fairly simple. If the free points fell below a threshold, the bot would automatically sell some shares. Most trades did not warrant a reaction however, and I had a semi-automated system for bringing the portfolio in line with the generated weights, which involved hitting a button to generate the orders and then firing them off.
When there are a) obviously profitable trades to be made and b) multiple people competing for them, it's very easy to get into a competitive spiral that pushes speeds down to the minimum allowed by the available technology. That's how a replication prediction market ended up being all about shaving milliseconds off of trading algos.
By round 9 another player (named CPM) had also automated his trades and he was faster than me so he took all my profits by reacting to profitable opportunities before I could get my orders in—we were now locked in an HFT latency race. There was only one round left so I didn't want to spend too much time on it, but I did a small rewrite of my trading app so it could run on linux (thanks, .NET Core), which involved splitting it into a client (with the UI) and a server (with the trading logic), and patching in some networking so I could control it remotely. Then, I threw it up on my VPS which had lower ping to the RM servers.
When I first ran my autotrader, I polled the API for new trades once every 15 minutes. Now it was a fight for milliseconds. Unfortunately placing the autotrader on the VPS wasn't enough: the latency was still fairly high and CPM crushed me again, though by a smaller margin this time. Sometimes I got lucky and snagged an opportunity before he could get to it though.
# The Results
In money terms, I made $6,640 from the surveys and $4,020 from the markets, for a total of $10,660 (out of a total prize pool of about $190k).
In terms of the actual replication results, the detailed outcomes are still embargoed, so we'll have to wait until next summer (at least) to get a look at them. Some broad stats can be shared however: the market predicted a 54% chance of replication on average—and 54% of the replications succeeded (the market isn't that good, it got lucky).
Of 107 claims that resolved, I have data on 31 which I made money on. For the rest I either had no shares, or had shares in the incorrect direction. Since I only have data on the successes, there's no way to judge my performance right now.
## Survey vs Market Payouts
The survey round payout scheme was top-heavy, and small variations in performance resulted in large differences in winnings. The market payout, on the other hand, was more or less communistic: everyone got the same number of points, and it was difficult to either gain or lose too many of them in the two weeks of trading. As a result, the final distribution of prizes was rather flat. At best a good forecaster might increase earnings by ~10% by exploiting mispricings, plus a bit more through intelligent trading. The Gini coefficient of the survey payouts was 0.76, while the Gini of the market payouts was 0.63 (this is confounded by different participation levels, but you get the point).
This was backwards. I think one of the most important aspects of "ideal" prediction markets is that informed traders can compound their winnings, while uninformed traders go broke. The market mechanism works well because the feedback loop weeds out those who are consistently wrong. This element was completely missing in the RM project. I think the market payout scheme should have been top-heavy, and should have allowed for compounding across rounds, while the survey round should have been flatter in order to incentivize broader participation.
# Conclusion
If the market had kept going, my next step would have been to use other people's trades to update my estimates. The idea was to look at their past trades to determine how good they were (based on the price movement following their trade), then use the magnitude of their trades to weigh their confidence in each trade, and finally incorporate that info in my own forecast. Overall it's fascinating how even a relatively simple market like this has tons of little nuances, exploitable regularities, and huge potential for modeling and trading strategies of all sorts.
In the end, are subsidized markets necessary for predicting replication? Probably not. The predictions will(?) be used to train our AI replacements, and I believe SCORE's other replication prediction project, repliCATS, successfully used (cheaper) discussion groups. It will be interesting to see how the two approaches compare. Tetlock's research shows that working as part of a team increases the accuracy of forecasters, so it wouldn't surprise me if repliCATS comes out ahead. A combination of teams (aided by ML) and markets would be the best, but at some point the marginal accuracy gains aren't really worth the extra effort and money.
I strongly believe that identifying reliable research is not the main problem in social science today. The real issue is making sure unreliable research is not produced in the first place, and if it is produced, to make sure it does not receive money and citations. And for that you have to change The Incentives.
PS. Shoot me an email if you're doing anything interesting and/or lucrative in forecasting.
PPS. CPM, rm_user, BradleyJBaker, or any other RM participant who wants to chat, hit me up!
1. For example a paper based on US GDP data might be "replicated" on German GDP data.
2. The Bayesian Truth Serum answers do not appear to be used in the scoring?
3. There were also some bonus points for continuous participation over multiple rounds.
4. There would be significant liquidity problems with a continuous double auction market.
5. I can't provide any specific examples until the embargo is lifted, sometime next year.
6. Cowen's Second Law!
7. If page count/# authors/% male variables are actually predictive, I suspect it's mostly as a proxy for discipline and/or journal. I haven't quantified it, but subjectively I felt there were large and consistent differences between fields.
8. The RM replications followed a somewhat complicated protocol: first, a replication with "90% power to detect 75% of the original effect size at the 5% level". If that fails, additional data will be collected to reach "90% power to detect 50% of the original effect size at the 5% level".
9. Scroll down to "Reconstruction of the Prior and Posterior Probabilities p0, p1, and p2 from the Market Price" in Dreber et al. 2015 for some equations.
10. In fact it's a lot lower than the .001 threshold they give.
11. In order to trade quickly at the start, I opened a tab for each claim. When the market opened, I refreshed them all and quickly put in the orders.
12. I still haven't looked into it, any suggestions? Could just estimate two different models and take a weighted average of the coefficients - caveman statistics.
13. Behavioral genetics papers for example were undervalued by the market. Also claims where the displayed p-value was inaccurate - most people wouldn't delve into the paper and calculate the p-value, they just trusted the info given on the RM interface.
### Forecasting
15. Alignment Problems With Current Forecasting Platforms. A look at some issues with GJO/CSET/Metaculus. It's not easy to incentivize people to provide their true forecasts at all times, share information, etc.
17. Hedgehog, blockchain prediction market from "Futarchy Research Limited".
### Book Reviews
18. Razib Khan has a relatively positive review of Harden's The Genetic Lottery, but the Steve Sailer review is a lot more entertaining. It's amusing that the BBEG for these people is still Charles Murray rather than, say, David Reich who has said much worse things.
### The Rest
19. George Church is bringing back the woolly mammoth.
Surveying the top Y Combinator companies, I find that around the top 50 are valued at over $1,000,000,000. They won’t all exit successfully, and the founders won’t all own enough equity to emerge with tres commas to their net worth, but this already gets us to a much more practical and optimistic heuristic to life:

1. Try very hard to get into YC
2. Conditional on acceptance, try very hard to become a billionaire

The odds really aren't that bad. Also from ADS, Does Moral Philosophy Drive Moral Progress?

21. You've probably already seen SMTM's fantastic series on the causes behind the rise in obesity. Some interesting pushback from RCA and a literal banana.

22. Felix Stocker on Will MacAskill's longtermist plans: Reflecting on the Long Reflection. "I'm struggling to see the Long Reflection as anything other than impossible and pointless: impossible in that we cannot solve all x-risks before any s-risks, or avoid race dynamics; pointless in that I don't believe that there is a great Answer for it to discover."

23. Alexey Guzey on Bloom et al's Are Ideas Getting Harder to Find? The paper has a bunch of problems, but the more general section on TFP is the most interesting:

France’s TFP in 2001 was higher than in 2019. Italy’s TFP in 1970 was higher than in 2019. Japan’s TFP in 1990 was higher than in 2009. Spain’s TFP in 1984 was higher than in 2019. Sweden’s TFP in 1973 was higher than in 1993. Switzerland’s TFP in 1974 was higher than in 1996. United Kingdom’s TFP in 2003 was higher than in 2019.

24. ACX on the FDA: Adumbrations Of Aducanumab. The Moldbuggian aspects of this are still underappreciated. Bureaucracy and bureaucrats are isolated from the consequences of their actions; the idea of equality before the law is a complete joke in the modern regulatory state, and the incentive vectors point in exactly the wrong direction. Scott ultimately blames it on the incentives of the politicians—the people seem to accept infinite costs to prevent certain bad things from happening; but if we take the people as a given, isn't the system of governance ultimately at fault? Plus ACX on missing school: Kids Can Recover From Missing Even Quite A Lot Of School.

25. Herding, Warfare, and a Culture of Honor: Global Evidence. "The culture of pre-industrial societies that relied on animal herding emphasizes violence, punishment, and revenge-taking". Highly speculative (the approach of extracting culture of honor from folklore seems doubtful for various reasons) and those scatter plots are not entirely convincing, but also intuitively appealing.

26. Exploiting an exogenous shock in birth control prices, The Children of the Missed Pill looks at the causal impact of the pill: "As children reached school age, we find lower school enrollment rates and higher participation in special education programs." The eugenic effect of abortion/contraception is both underrated and understudied.

27. A primer on olivine weathering as a cheap method of carbon capture; looks like it could sequester a tonne of CO2 for less than $20. Geoengineering is very cheap compared to most proposed "green" solutions. The OECD has 120 euros per tonne as its "central estimate" of carbon costs in 2030, implying an extremely high ROI for geoengineering.
28. DeepMind: Generally capable agents emerge from open-ended play. "We find the agent exhibits general, heuristic behaviours such as experimentation, behaviours that are widely applicable to many tasks rather than specialised to an individual task. This new approach marks an important step toward creating more general agents with the flexibility to adapt rapidly within constantly changing environments."
29. Unintentionally hilarious paper about AI spotting race in chest x-rays: "Our findings that AI can trivially predict self-reported race - even from corrupted, cropped, and noised medical images - in a setting where clinical experts cannot, creates an enormous risk for all model deployments in medical imaging: if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to."
30. "Pain Reprocessing Therapy" "centered on changing patients’ beliefs about the causes and threat value of pain" more effective than usual care for back pain, at least if you think you can trust people's responses in surveys.
31. Yet another piece of evidence against the efficacy of advertising: TV Advertising Effectiveness and Profitability: Generalizable Results From 288 Brands. "...negative ROIs at the margin for more than 80% of brands, implying over-investment in advertising by most firms. Further, the overall ROI of the observed advertising schedule is only positive for one third of all brands."
33. Matt Lakeman travels to Peru and Panama.
### Audio-Visual
35. An animated explainer of Robin Hanson's grabby aliens model: Humanity was born way ahead of its time. The reason is grabby aliens.
36. Did you know that Milla Jovovich released an album in 1994 and it's...not bad at all? Sounds like Kate Bush and Peter Gabriel. Check out Clocks. [NSFW cover art]
37. Plus some great krautrock: Et Cetera - Kabul.
### Non-Fiction
• The History of the Peloponnesian War, by Thucydides. Re-read. What was that Coleridge quip? "All men are born Herodotians or Thucydideans"? Something like that. Anyway, I was definitely born a Herodotian. Thucydides is a historian with the soul of an accountant. Still, there are things to appreciate in that attitude: while most ancient historians never saw an army smaller than 400,000, he's happy to tell you about engagements with 60 hoplites and 20 archers. And keeping track of myriad engagements, covering Asia Minor, Greece, and Italy, over the span of multiple decades is extremely impressive.
How prescriptive is Thuc's realpolitik? I'm not entirely sure; it certainly didn't do the Athenians any good. He's obviously a skeptic when it comes to the supernatural, and there's very little room for morality in his history; is this an artifact of the lack of morality in the way the Athenians went about their affairs, or is this something Thuc projects onto them? One interesting point is that his story draws on the structure of tragedy: the hubris of the Sicilian expedition is ultimately punished; the players seem to lack any ability to change course. Perhaps morality plays no role in this history because Thuc views the path taken by each polis as deterministic. (This applies to the "Thucydides trap" specifically, and also more generally.)
On the question of direct democracy as a system of government things are a bit clearer as Thuc doesn't hide his views. He's short on alternatives though; the traditional polis obviously can't cope with the environment of the 4th century, but Thuc can't really see beyond it.
There are apparently some people who think Thucydides influenced the Neoconservatives, and I find that utterly absurd. Thuc is extraordinarily cynical when it comes to "spreading freedom"-style justifications for war, and if there's any realpolitik involved in spending trillions so that Afghan women can get gender studies degrees for 20 years before the Taliban come back, I'm not seeing it.
One of the things that stand out is how bad the Greeks are at war. Reading Thuc, you're constantly thinking "well of course these guys got rolled by the Romans". How did they beat the Persians so hard? Sieges seem to be a sticking point (something Philip II turned out to be quite good at), so perhaps the open battles against the Persians played into their hands, or perhaps it was simply a matter of mismatched unit compositions. On the other hand the Athenians were extraordinarily persistent; even after the plague and the Sicilian disaster they still kept going for years, possibly only losing due to the Persian money flowing into the Spartan coffers.
If you haven't read any histories of the Peloponnesian War, this is highly recommended; just keep in mind it's very unfinished. Get the Landmark edition.
• The Swerve: How the World Became Modern, by Stephen Greenblatt. This book has an incredibly ambitious thesis: it argues that the world became modern due to the rediscovery of Lucretius' De Rerum Natura. Unfortunately the evidence presented in favor of that thesis is pretty weak, and the book suddenly ends right as it starts to get into a groove. Still, it's fairly entertaining and has a ton of interesting anecdotes from the life of Poggio Bracciolini (the man who rediscovered Lucretius).
• The Origin of Species, by Charles Darwin. A fairly dry read, its value today mainly lies in its documentation of the discovery of evolution, and in showing how Darwin could reason his way forward despite rather limited means (not even an inkling of DNA!). It was cool to see the role geology played in the development of evolutionary theory, and there's a very interesting passage (at the end of the chapter ON THE IMPERFECTION OF THE GEOLOGICAL RECORD) in which Darwin almost invents plate tectonics based on the geographical distribution of species. It's difficult to recommend: if you want to learn about evolution, pick up a modern textbook; if you're interested in the history of science you should probably read a historian; and if you just want to read something cool by Charles Darwin, pick up his Beagle adventure.
### Fiction
• A Fire Upon the Deep, by Vernor Vinge. Some pretty cool worldbuilding, with a universe divided into zones where different levels of technology are possible (the highest one is filled with Gods who quickly commit suicide). One of the main alien races is a sentient houseplant riding a roomba (seriously). But half the novel is wasted on a dull isekai story about some annoying kids stuck on a backwards planet with telepathic wolves, making the thing way overlong. And the resolution is not entirely satisfying.
• Inhibitor Phase, by Alastair Reynolds. A new novel in the Revelation Space universe, unfortunately it's also the worst novel in the Revelation Space universe. It's a bit like a horror theme park, going from one ride to the next with little to no connective tissue between them. Even worse, many of the rides are completely nonsensical given the setting (humanity has almost been completely wiped out by the inhibitors). The two main characters are completely uninteresting, their dialogue is annoying, and the revelations of their backstory are completely predictable.
• Crash, by J. G. Ballard. Holy mother of Christ, this is an experience. A blunt tool that beats you into submission through drone-like repetitiveness. Truly a novel that lives up to its reputation (one publisher's reader wrote: "This author is beyond psychiatric help. Do Not Publish!"). What images! A peerless examination of the intersection between sex and technology. The Cronenberg film gets the imagery right, but the languid, whispered tempo is completely wrong. Kermode, in a very positive review, described it as "glacial"! I feel the novel required a more in-your-face treatment.
He dreamed of ambassadorial limousines crashing into jack-knifing butane tankers, of taxis filled with celebrating children colliding head-on below the bright display windows of deserted supermarkets. He dreamed of alienated brothers and sisters, by chance meeting each other on collision courses on the access roads of petrochemical plants, their unconscious incest made explicit in this colliding metal, in the haemorrhages of their brain tissue flowering beneath the aluminized compression chambers and reaction vessels.
• Lord of the Flies, by William Golding. Somehow managed to evade this as a kid. It's compelling and effective but I can't get on board with its overwhelming cynicism.
• Don't Make Me Think, by Zero HP Lovecraft. The emoji gimmick doesn't work, but I loved the world-building.
• Flashman and the Dragon, by George MacDonald Fraser. Flashman's in China this time, right in the middle of the Taiping Rebellion. This is the 8th book in the series, and things are starting to get repetitive, but the humor, deep historical research, and memorable characters manage to overcome the familiar plotline.
• The Shadow of the Wind, by Carlos Ruiz Zafón. Bad audiobook of a bad airport novel filled with interminable exposition dumps in an awful style. Dropped it halfway through.
# Book Review: The Lives of the Most Excellent Painters, Sculptors, and Architects
I found Giorgio Vasari through Burckhardt and Barzun. The latter writes: "Vasari, impelled by the unexampled artistic outburst of his time, divided his energies between his profession of painter and builder in Florence and biographer of the modern masters in the three great arts of design. His huge collection of Lives, which is a delight to read as well as a unique source of cultural history, was an amazing performance in an age that lacked organized means of research. [...] Throughout, Vasari makes sure that his reader will appreciate the enhanced human powers shown in the works that he calls "good painting" in parallel with "good letters.""
Vasari was mainly a painter, but also worked as an architect. He was not the greatest artist in the world, but he had a knack for ingratiating himself with the rich and powerful, so his career was quite successful. Besides painting, he also cared a lot about conservation: both the physical preservation of works and the conceptual preservation of the fame and biographies of artists. He gave a kind of immortality to many lost paintings and sculptures by describing them to us in his book.
His Lives are a collection of more than 180 biographies of Italian artists, starting with Cimabue (1240-1302) and reaching a climax with Michelangelo Buonarroti (1475-1564). They're an invaluable resource, as there is very little information available about these people other than his book; his biography of Botticelli is 8 pages long, yet on Botticelli's Wikipedia page, Vasari is mentioned 36 times.
He was a straight-laced man surrounded on all sides by wild and eccentric artists. While Vasari was a sober businessman, always delivering his work on time, the people he was writing about were usually tempestuous madmen who would take commissions and leave the work unfinished, or go off on the slightest affront and start hacking apart their own works. Even of the great Leonardo he writes that "through his comprehension of art, [he] began many things and never finished one of them".
The greater part of the craftsmen who had lived up to that time had received from nature a certain element of savagery and madness, which, besides making them strange and eccentric, had brought it about that very often there was revealed in them rather the obscure darkness of vice than the brightness and splendour of those virtues that make men immortal.
Many of them were undone by their love of food, drink, and/or women:
...when his dear friend Agostino Chigi commissioned him to paint the first loggia in his palace, Raffaello was not able to give much attention to his work, on account of the love that he had for his mistress.
Gwern's review of the autobiography of Cellini (which includes the words "aside from the demonology and weather-controlling") should give you a taste of what these guys were like.
Arnold M. Ludwig, The Price of Greatness: Resolving the Creativity and Madness Controversy
Vasari's approach to the truth can be described as loose, if not gossipy. Many of the lives include fabricated elements, sometimes obviously so: I doubt anyone ever believed the story of Cimabue taking on Giotto as a pupil after seeing him scratch a painting on a stone. One of the most striking tales is the murder of Domenico Veneziano by Andrea del Castagno, but in reality Castagno actually died first. Vasari also damaged the reputation of some of his competitors, such as Jacopo da Pontormo, whom he portrayed as a paranoid recluse.
Vasari is also hilariously biased in favor of Florence: "in the practice of these rare exercises and arts—namely, in painting, in sculpture, and in architecture—the Tuscan intellects have always been exalted and raised high above all others". The story of his visit to Titian (a Venetian) is typical:
One day as Michelangelo and Vasari were going to see Titian in the Belvedere, they saw in a painting he had just completed a naked woman representing Danae with Jupiter transformed into a golden shower on her lap, and, as is done in the artisan's presence, they gave it high praise. After leaving Titian, and discussing his method, Buonarroti strongly commended him, declaring that he liked his colouring and style very much but that it was a pity artisans in Venice did not learn to draw well from the beginning and that Venetian painters did not have a better method of study.
Titian, Danae with Jupiter as a "golden shower"
I read (as usual) the Everyman edition, but would not recommend braving the entire work unless you're a Renaissance art fanatic. The collection spans over 2000 pages, and can get tiresome and repetitive when you go through the 100th similar biography of some minor painter you've never heard of. I would, however, recommend the best chapters which I have picked out below (and which are freely available online):
• Giotto (1267-1337), an early stepping stone between the medieval "Greek style" and the modern one.
• Uccello (1397-1475), the pioneer of perspective.
• Piero di Cosimo (1462-1522), an eccentric who was influenced by the Dutch and eventually fell under the sway of Savonarola.
• Raffaello (1483-1520), a brilliant talent who died young.
• Il Rosso (1495-1540), who travelled to France and painted for Francis I.
• Il Sodoma (1477-1549), the name says it all. Had a pet monkey.
## The Golden Present
Giorgio Vasari was one of the earliest philosophers of progress. Petrarch (1304-1374) invented the idea of the dark ages in order to explain the deficiencies of his own time relative to the ancients, and dreamt of a better future:
My fate is to live among varied and confusing storms. But for you perhaps, if as I hope and wish you will live long after me, there will follow a better age. This sleep of forgetfulness will not last forever. When the darkness has been dispersed, our descendants can come again in the former pure radiance.
To this scheme of ancient glory and medieval darkness, Vasari added a third—modern—age and gave it a name: rinascita. And within his rinascita, Vasari described an upward trajectory starting with Cimabue, and ending in a golden age beginning with eccentric Leonardo and crazed sex maniac Raphael, only to give way to the perfect Michelangelo in the end. It is a trajectory driven by the modern conception of the artist as an individual auteur, rather than a faceless craftsman.
The most benign Ruler of Heaven in His clemency turned His eyes to the earth, and, having perceived the infinite vanity of all those labours, the ardent studies without any fruit, and the presumptuous self-sufficiency of men, which is even further removed from truth than is darkness from light, and desiring to deliver us from such great errors, became minded to send down to earth a spirit with universal ability in every art and every profession.
This golden age was certainly no utopia, as 16th century Italy was ravaged by political turbulence, frequent plague, and incessant war. Many of the artists mentioned were at some point taken hostage by invading armies; Vasari himself had to rescue a part of Michelangelo's David when it was broken off in the battle to expel the Medici from Florence.
And yet Vasari saw greatness in his time, and the entire book is structured around a narrative of artistic progress. He documented the spread of new technologies and techniques (such as the spread of oil painting, imported from the Low Countries), which—as an artist—he had an intimate understanding of.
This story of progress is paralleled with the rediscovery (and, ultimately, surpassing) of the ancients. It would take until the 17th century for the querelle des Anciens et des Modernes to really take off in France, but in Florence Vasari had already seen enough to decide the question in favor of his contemporaries—the essence of the Enlightenment is already present in 1550. He writes about Donatello (1386-1466), who produced the first nude male sculpture of the modern era:
The talent of Donato was such, and he was so admirable in all his actions, that he may be said to have been one of the first to give light, by his practice, judgment, and knowledge, to the art of sculpture and of good design among the moderns; and he deserves all the more commendation, because in his day, apart from the columns, sarcophagi, and triumphal arches, there were no antiquities revealed above the earth. And it was through him, chiefly, that there arose in Cosimo de' Medici the desire to introduce into Florence the antiquities that were and are in the house of the Medici; all of which he restored with his own hand.
Similarly, he explains that Mino da Fiesole's (1429-1484) work was "somewhat stiff", yet it was nonetheless admired because "few antiquities had been discovered up to that time". The ancients created a new higher standard, which first created a thirst for imitation, then an impetus to outclass it.
...their successors were enabled to attain to it through seeing excavated out of the earth certain antiquities cited by Pliny as amongst the most famous, such as the Laocoon, the Hercules, the Great Torso of the Belvedere, and likewise the Venus, the Cleopatra, the Apollo, and an endless number of others, which, both with their sweetness and their severity, with their fleshy roundness copied from the greatest beauties of nature, and with certain attitudes which involve no distortion of the whole figure but only a movement of certain parts, and are revealed with a most perfect grace, brought about the disappearance of a certain dryness, hardness, and sharpness of manner...
It is curious that this competitive attitude seems to have disappeared in later eras. In the 18th century, for example, the English painter Joshua Reynolds said of the Belvedere Torso that it retained "the traces of superlative genius…on which succeeding ages can only gaze with inadequate admiration." The Italians of the Renaissance had such a civilizational confidence and such an individual lust for Glory that these fatalistic thoughts would never enter their minds. Take Raphael's School of Athens for example: imagine the self-confidence (if not presumption) necessary to paint Plato (portrayed by da Vinci) and Heraclitus (portrayed by Michelangelo) in the form of your friends and contemporaries! Imagine someone trying that today—Dennett as Plato, Gaspar Noé as...Diogenes?
What came first, the excellence or the confidence? Who can disentangle cause and effect? Braudel suggests an initial "restlessness" created the necessary conditions:
Perhaps if the door is to be opened to innovation, the source of all progress, there must be first some restlessness which may express itself in such trifles as dress, the shape of shoes and hairstyles?
Vasari certainly thought this ambition was a necessary ingredient for greatness. Commenting on Andrea del Sarto, he writes that he was excellent in all skills but "a certain timidity of spirit and a sort of humility and simplicity in his nature made it impossible that there should be seen in him that glowing ardour and that boldness which, added to his other qualities, would have made him truly divine in painting".
And one may ask: why is there no ancient Vasari? Pliny, describing the Laocoön, writes that it is "a work to be preferred to all that the arts of painting and sculpture have produced". Yet he is content to simply mention the names of the artists in passing: Agesander, Athenodorus, and Polydorus of Rhodes. There is not the slightest hint of curiosity about the lives of those most excellent men. Even worse, they were (highly-skilled) copyists, selling reproductions of Hellenistic works to wealthy Romans. The name of the original sculptor is lost to time.
## Aesthetic Value Over Time
You're probably familiar with the story of the Mona Lisa: it was unpopular until it was stolen in 1911, Apollinaire and Picasso were suspects in the case, and when it was finally returned two years later it had become the most famous painting in the world. I was surprised, then, to see that the Mona Lisa was singled out for effusive praise by Vasari. He even focuses on that famous smile:
For Francesco del Giocondo, Leonardo undertook the portrait of Mona Lisa, his wife, and after working on it for four years, he left the work unfinished, and it may be found at Fontainebleau today in the possession of King Francis. Anyone wishing to see the degree to which art can imitate Nature can easily understand this from the head, for here Leonardo reproduced all the details that can be painted with subtlety. The eyes have the lustre and moisture always seen in living people, while around them are the lashes and all the reddish tones which cannot be produced without the greatest care. The eyebrows could not be more natural, for they represent the way the hair grows in the skin—thicker in some places and thinner in others, following the pores of the skin. The nose seems lifelike with its beautiful pink and tender nostrils. The mouth, with its opening joining the red of the lips to the flesh of the face, seemed to be real flesh rather than paint. Anyone who looked very attentively at the hollow of her throat would see her pulse beating: to tell the truth, it can be said that portrait was painted in a way that would cause every brave artist to tremble and fear, whoever he might be. Since Mona Lisa was very beautiful, Leonardo employed this technique: while he was painting her portrait, he had musicians who played or sang and clowns who would always make her merry in order to drive away her melancholy, which painting often brings to portraits. And in this portrait by Leonardo, there is a smile so pleasing that it seems more divine than human, and it was considered a wondrous thing that it was as lively as the smile of the living original.
There are, however, one or two minor problems with his account. One of them is that Vasari never actually saw the Mona Lisa: he was about 6 years old when the painting was moved to France, and he never left Italy. Vasari also says that Leonardo left the painting unfinished, while the Mona Lisa is very much finished. So what's going on? Until the 20th century people simply thought he made it up based on sketches or second-hand accounts.
And then, in 1913, an art collector discovered a second Mona Lisa hanging in a house in Somerset. By all accounts it appears to be authentic, and it matches a sketch by Raphael. It's also a better match for Vasari's description, though he may not have seen this one either. If he thought this one was great, just imagine how he would have raved about the first Mona Lisa!
The Second Mona Lisa
This raises the question: do we venerate the same paintings as Vasari due to path dependency, or due to constancy in aesthetic judgment? Broadly, Vasari's taste is our own. He likes Raphael, da Vinci, and Michelangelo above all others. There are certainly those who argue that the influence and worship of Florentine artists is merely a historical accident, and if Vasari had been a Venetian the history of painting would have turned out rather different.
There are a few interesting points of difference. For example, Botticelli only gets a very short biography, and his Birth of Venus merits no more than a passing comment: Vasari says "he expressed himself with grace". Another artist who was mostly ignored by Vasari and was later "reevaluated" is the highly erotic Antonio da Correggio.
Correggio, Jupiter and Io
## Architecture?
YOU - Hold on, is architecture also art?
CONCEPTUALIZATION - Of course not, it's autism. Box-drawing. Masturbation with a ruler and a sextant or whatever they use.
Painters, naturally. Sculptors, of course. But...architects? Certainly no twenty-first century chronicler would collect the lives of painters, sculptors, and architects. In our own age architecture is little more than an exercise in applied misanthropy. It has gotten so bad even the commies can tell it sucks, and they're not exactly famous for their aesthetic discernment. And these Renaissance artists were not limited to constructing fancy villas or churches, they often got involved in military engineering as well!
Architects might try to defend themselves by appealing to specialization and saying that, as the science has progressed, an architect today requires far more specialized knowledge than they did in the 16th century. One cannot be both a painter and an architect at the same time. Yet I cannot help but notice that the Duomo and the Uffizi are still beautiful and still standing, while our contemporary concrete claptrap starts crumbling after a couple of years. Our segregation of these fields is both arbitrary and misguided. Perhaps education (and the way it commoditizes knowledge) is to blame.
## Human Capital, Power Laws, and Cluster Effects
Perhaps the reason these people were able to paint, sculpt, architect, and sometimes even do a bit of military engineering on the side, is that art was one of very few avenues available at the time for people to monetize their high human capital. A smart guy in 16th century Florence had limited options: he might go into law, try to be a scribe, or (if he had money) commerce. Science was not a profession, and there were no hedge funds or startups. Art offered a new avenue, open to all with talent.
Art was also something of a winner-take-all market, with virtually unlimited upside for the select few who could make it to the top. Like modern athletes, the superstars were drowning in money while the average painter didn't make all that much. Time seems to have confirmed this power law in artistic excellence: nobody goes to a museum for the paintings of Bartolomeo Vivarini, while da Vinci draws millions every year.
Societies are broadly defined by how they allocate status and (by extension) how they allocate the scarce biological resources they have access to. Rome rewarded military leadership, so it got a lot of great generals (and civil wars). The kleptocrats of Renaissance Italy allocated talent to art; gold and fame attracted competence. At the time Vasari was writing, there were about thirty thousand men in Florence—roughly the same as the number of male citizens in Classical Athens, and also roughly the same as today's population of Dubuque, Iowa. Yet their achievements (to borrow a phrase from Gibbon) would excuse the computation of imaginary millions.
One might ask: where are all the Shakespeares? There are about 25x more literate men in England today than in 1564, how come we aren't producing 25x more Shakespeares? The answer is that our society does not allocate much of its human capital to playwriting. There are (potentially) great authors who spend 8 hours a day writing ads for cereal, or improving trading algorithms by 0.01%. Capitalism, for all its virtues, tends to instill a preference for mere optimization in its subjects; the shadow of utility blots out the impetus for Glory.
One of the starkest lessons from Vasari is the importance of clusters in artistic production. He highlights both the spur of competition and the virtues of learning through imitation. He documents how new techniques (oil, perspective, new approaches to color) spread like wildfire, and how the newly unearthed Roman statues provided both a lesson and a stimulus for improvement. The mentor-mentee relationship was extremely important; a young artist could destroy his entire career by choosing the wrong master.
There is an extensive literature in "economics" covering the influence of agglomeration in creative industries. Hollywood is an obvious example, but there also seem to be agglomeration gains in 18-19th century classical music, while a writer who moved to London in the 18th or 19th century ended up with 12% higher productivity. Renaissance Florence certainly seems to be another one of these (which suggests an element of path dependency).
A century ago a man like Ernest Hemingway could just travel to Paris, join a flourishing artistic community, and have lunch with the world's greatest author (James Joyce). Imagine some random guy flying to New York and trying to have a meal with Thomas Pynchon today. Global connectivity has made us more insular by removing the barriers that used to act as filters. The apprenticeship opportunities of the Renaissance do not exist any more, though the wealthy patrons are still around.
Yet new possibilities for cluster formation open up on the internet: group chats, forums, perhaps even twitter. But it is not easy to cultivate the right mix of competition and imitation, or the preconditions necessary for cultural confidence and a lust for Glory. Perhaps the closest analogy in our time would be Silicon Valley; a relatively small area which attracts talent in search of money and fame. It certainly has that culture of ambition.
I leave you with a final quotation from Vasari, on the motivations behind art and how they affect the ultimate product:
And, to tell the truth of the matter, those craftsmen who have as their ultimate and principal end gain and profit, and not honour and glory, rarely become very excellent, even although they may have good and beautiful genius; besides which, labouring for a livelihood, as very many do who are weighed down by poverty and their families, and working not by inclination, when the mind and the will are drawn to it, but by necessity from morning till night, is a life not for men who have honour and glory as their aim, but for hacks, as they are called, and manual labourers, for the reason that good works do not get done without first having been well considered for a long time. And it was on that account that Rustici used to say in his more mature years that you must first think, then make your sketches, and after that your designs; which done, you must put them aside for weeks and even months without looking at them, and then, choosing the best, put them into execution; but that method cannot be followed by everyone, nor do those use it who labour only for gain.
Giorgio Vasari, Self-portrait
1. 1."And without Giorgio Vasari of Arezzo and his all-important work, we should perhaps to this day have no history of northern art, or of the art of modern Europe, at all."
2. 2.The word "renaissance" was only popularized in the 19th century by Michelet.
3. 3.There was a sculptor-biographer named Xenokrates of Sicyon but all his works are lost.
4. 4.McKenzie Wark: "Education “disciplines” knowledge, segregating it into homogenous “fields,” presided over by suitably “qualified” guardians charged with policing its representations. The production of abstraction both within these fields and across their borders is managed in the interests of preserving hierarchy and prestige. Desires that might give rise to a robust testing and challenging of new abstractions is channelled into the hankering for recognition."
5. 5.Michelangelo would undoubtedly have scored very well on the GRE.
6. 6.All will be trampled under the steady imperial advance of the SPQE—the Senatus Populusque Economicus!
The idea of a personal "carbon footprint" is an oil company psyop. About 20 years ago, British Petroleum launched an ad campaign popularizing the notion and put out a website letting you calculate your "carbon footprint". They're still at it.
It's an idea with remarkable memetic power, both for individuals and brands. Displaying your concern about your personal carbon footprint lets you show off your prosociality and marks you out as someone virtuous, someone who takes personal responsibility. The idea also feeds into people's narcissistic tendencies, reassuring them that they're actually important and that their actions matter in the world.
Marketers love the concept, and any company trying to appeal to the nature-loving demographic can use and abuse it: Outdoor Brands Get Serious About the Carbon Footprint of Adventure: The North Face and Protect Our Winters unveil an activism-oriented CO2 calculator.
The problem with all of this, and the reason BP pushed the idea in the first place, is that your personal carbon footprint doesn't matter. You're 1 of about 7.6 billion people on earth, so your effect is about 1/7,600,000,000 ≈ 0.000000013%. Your personal carbon footprint is completely irrelevant to climate change. Global warming is a collective issue that requires collective solutions; framing it as a problem that individuals can tackle (and that individuals are responsible for) distracts from the public policy changes that are necessary. Environmentalist signaling is complete nonsense but also deadly serious.
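For what it's worth, the arithmetic is as trivial as it sounds. A minimal sketch, assuming (counterfactually) that emissions are spread evenly across the population:

```python
# Share of global emissions attributable to one average person, assuming
# (counterfactually) that emissions are spread evenly across the population.
WORLD_POPULATION = 7_600_000_000

share = 1 / WORLD_POPULATION
print(f"{share:.2e} of the total, i.e. {share * 100:.9f}%")
# 1.32e-10 of the total, i.e. 0.000000013%
```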
## The Parallel
People occasionally send me shitty papers, and a year or two ago I would care a lot, enjoying the shameful thrill of getting Mad Online about some fraud, or having fun picking apart yet another terrible study. It's an attractive activity, and performing it in public shows how much you care about Good Science. Picking out a single paper to replicate operates at the same level. What's the impact of all this?
In the idealized version of science, a replication failure would raise serious doubts about the veracity of the original study and have all sorts of downstream effects. In the real version of social science, none of that matters. You have to go on active memetic warfare if you want to have any effect, and even then there's no guarantee you'll succeed. As Tal Yarkoni puts it, "the dominant response is some tut-tutting and mild expression of indignation, and then everything reverts to status quo until the next case". People keep citing retracted articles. Brian fucking Wansink has been getting over 7 citations per day in 2021. So what exactly do you think a replication is going to achieve?
### Walker
A couple years ago Alexey Guzey wrote "Matthew Walker's "Why We Sleep" Is Riddled with Scientific and Factual Errors", finding not only errors but even egregious data manipulation in Walker's book. Guzey later collaborated with Andrew Gelman on Statistics as Squid Ink: How Prominent Researchers Can Get Away with Misrepresenting Data.
What was the effect of all this? Nothing.
my piece on the book has gotten >250k views by now and still not a single neuroscientist or sleep scientist commented meaningfully on the merits of my accusations. [...] According to UC Berkeley, "there were some minor errors in the book, which Walker intends to correct". The case is closed.
The feedback loops that are supposed to reward people who seek truth and to punish charlatans are just completely broken.
...but a prominent neuroscientist did write to him in private to express his agreement.
### Implicit Bias
It is so much harder to get rid of bullshit than it is to prevent its publication in the first place. Let's take a look at some of the literature on implicit bias.
Oswald et al (2013) meta-analyze the relation between the IAT and discrimination: "IATs were poor predictors of every criterion category other than brain activity, and the IATs performed no better than simple explicit measures." Carlsson & Agerström (2016) refine the Oswald et al paper, and find that "the overall effect was close to zero and highly inconsistent across studies [...] little evidence that the IAT can meaningfully predict discrimination".
Meissner et al (2019) review the IAT literature and find that the "predictive value for behavioral criteria is weak and their incremental validity over and above self-report measures is negligible".
Forscher et al (2019) meta-analyze the effect of procedures to change implicit measures, and find that these procedures "generally produced trivial changes in behavior [...] changes in implicit measures did not mediate changes in explicit measures or behavior". Figure 8 from their paper shows the effect of changing implicit measures on actual behavior:
What was the effect of all this? Nothing.
Just within the last few days, the New Jersey Supreme Court announced implicit bias training for all employees of state courts, "Dean Health Plan in Wisconsin implemented new strategies to address health equity in maternal health, including implicit bias training for employees", the Auburn Human Rights Commission has "offered implicit bias training to supervisory personnel in Auburn city government, the Cayuga County Sheriff's Office, public schools and other local organizations", and California's Attorney General is making sure that healthcare facilities are complying with a law requiring anti-implicit bias training.
You can debunk, and (fail to) replicate all you want, but it don't mean a thing. Mitchell & Tetlock (2017) write:
once employers, health care providers, police forces, and policy-makers seek to develop real solutions to real problems and then monitor the costs and benefits of these proposed solutions, the shortcomings of implicit prejudice research will likely become apparent
But it didn't turn out that way, did it? Just as with the personal carbon footprint, the ultimate outcome is a secondary consideration at best.
## Fin
Yarkoni (the British Petroleum of social science) says "it's not the incentives, it's you" but, really, it's the incentives. Before you can run, you must walk. Before you replicate, you must have a scientific ecosystem with reliable self-correction mechanisms. And before you build that, it's a good idea to limit the publication of false positives and low-quality research in general.
One of the key insights of longtermism is that if humanity survives in the long term, the vast majority of humans will live in the future, so even a small improvement to their welfare can have a huge effect. We might make a similar argument about longtermism in social science: the vast majority of papers lie in the future. If we can do something today to improve them even by a little bit, the cumulative impact would be enormous. On the other hand, defeating one of the 10,000 bad papers that will be published this year is not going to do much at all. Effective scientific altruism is systematically improving the future by 0.01% rather than putting your energy into deboonking a single study. Every dollar wasted on replication is a dollar that could've been invested in fixing the underlying collective problems instead. The past is not going to change, but the future is still malleable.
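To put rough numbers on that comparison, here is a minimal sketch; the 10,000 papers per year comes from the paragraph above, while the time horizon and the size of the per-paper improvement are purely illustrative assumptions:

```python
# Compare two uses of reform energy under illustrative assumptions:
# (a) thoroughly debunking one existing bad paper, vs.
# (b) nudging the quality of every future paper up by a tiny fraction.
PAPERS_PER_YEAR = 10_000   # bad papers per year, the figure from the text
YEARS_AHEAD = 50           # illustrative horizon (assumption)
PER_PAPER_GAIN = 0.0001    # a 0.01% improvement per future paper (assumption)

debunking_impact = 1.0                                       # one paper corrected
systemic_impact = PAPERS_PER_YEAR * YEARS_AHEAD * PER_PAPER_GAIN

print(debunking_impact, systemic_impact)  # 1.0 vs. 50.0 "paper-equivalents"
```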
Ideally we'd proclaim the beginning of a new era, ban citations to any pre-2022 works, and start from scratch (except actually do things properly this time). Realistically that won't happen, so the second-best approach is probably a Hirschmanian Exit into parallel institutions.
1. I don't want to overstate the case here—some disciplines work pretty well, so it's not entirely hopeless. I would certainly hope that medical researchers still try to replicate the effects of drugs, and physicists replicate their particle experiments. But in the social sciences things are different.
### Metascience
1. New Science is an attempt to construct a brand new, parallel research ecosystem without all the cruft of academia. Founded by Alexey Guzey and advised (among others) by Tyler Cowen and Andrew Gelman.
2. Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Uncertainty: 73 teams study the same hypothesis with the same data. Chaos ensues. "Each model deployed to test the hypothesis was unique".
3. Tal Yarkoni and Joe Hilgard have exited academia. Some notes on The Science Reform Brain Drain. "I didn’t believe then that scientific reform would just fizzle out, given the attention and passion it elicited. Now, seeing how tenure insulates older researchers and competition weeds out those who don’t play by their rules, I understand the cynicism better."
4. Please Commit More Blatant Academic Fraud "The problem with this sort of low-key fraud is that it’s insidious, it’s subtle. In many ways, a fraudulent action is indistinguishable from a simple mistake. There is plausible deniability [...] Let’s make explicit academic fraud commonplace enough to cast doubt into the minds of every scientist reading an AI paper. Overall, science will benefit."
6. Atoms: smart contracts for science funding. "Implicit researcher duties are now made explicit with incentives. As a result, scientific roles can become both more specialized and more diverse. PIs can focus less time on writing grants and more time on conducting research. Or the PIs who enjoy and excel at raising funds can do so and even re-deploy it to the right scientists, akin to founders who become angel investors and venture capitalists."
7. Understanding and Predicting Retractions of Published Work. Based on metadata + full text. Performs surprisingly well. "Individually, SJR, abstract, country give the best performance out of all metadata features."
8. Collison, Cowen & Hsu, What We Learned Doing Fast Grants. "64% of respondents told us that the work in question wouldn’t have happened without receiving a Fast Grant."
9. Elisabeth Bik is facing legal threats. This Didier Raoult character apparently has more than 3500 publications.
10. A Retrospective on the 2014 NeurIPS Experiment: a giant post on the consistency of the review process, based on 170 papers submitted to NeurIPS. The consistency is actually fairly high, though there is "no correlation between reviewer quality scores and paper's eventual impact".
### Covid
Yet this is the same prestigious journal that published a now infamous statement early last year attacking “conspiracy theories suggesting that Covid-19 does not have a natural origin“. Clearly, this was designed to stifle debate. It was signed by 27 experts but later turned out to have been covertly drafted by Peter Daszak, the British scientist with extensive ties to Wuhan Institute of Virology. To make matters worse, The Lancet then set up a commission on the origins — and incredibly, picked Daszak to chair its 12-person task force, joined by five others who signed that statement dismissing ideas the virus was not a natural occurrence.
### Forecasting
13. Avraham Eisenberg: Tales from Prediction Markets
There was a market on how many times Souljaboy would tweet during a given week. The way these markets are set up, they subtract the total number of tweets on the account at the beginning and end, so deletions can remove tweets. Someone went on his twitch stream, tipped a couple hundred dollars, and said he'd tip more if Soulja would delete a bunch of tweets. Soulja went on a deleting spree and the market went crazy.
14. The Market Consequences of Investment Advice on Reddit's Wallstreetbets: "We find average ‘buy’ recommendations result in two-day announcement returns of 1.1%.[...] 2% over the subsequent month and nearly 5% over the subsequent quarter. [...] our findings suggest that both WSB posters and users are skilled." Or as /r/wallstreetbets put it, "a group of scientists checked our sub out and came to the conclusion that we are not complete morons".
### Book Reviews
15. On Sarah Ruden's translation of the gospels: Do you know how weird the gospels are?
Plenty of good reviews came out of the SSC book review contest. My favorites:
16. Double Fold, on librarians and preservation.
17. On The Natural Faculties, a defense of Galen.
18. Down And Out In Paris And London, on Orwell's experiences as a tramp and menial worker.
### The Rest
19. There’s no such thing as a tree (phylogenetically): a fantastic post on convergent evolution and the classification of 'trees'. "The common ancestor of a maple and a mulberry tree was not a tree. The common ancestor of a stinging nettle and a strawberry plant was a tree."
20. From the great new blog SLIME MOLD TIME MOLD: Higher than the Shoulders of Giants; Or, a Scientist’s History of Drugs. What if the productivity growth slowdown is due to the 1970s Controlled Substances Act? Come for the history of stimulants, stay for Tesla's views on chewing gum. Too many good quotes! Not entirely sure if it's serious or tongue-in-cheek, but that's part of the charm.
21. Toby Ord: The Edges of Our Universe
• Many galaxies that are currently outside the observable universe will become observable later.
• Less than 5% of the galaxies we can currently observe could ever be affected by us, and this is shrinking all the time.
• But we can affect some of the galaxies that are receding from us faster than the speed of light.
22. Scott Alexander Contra Smith On Jewish Selective Immigration. The final paragraph is absolutely spot on: if the Ashkenazi advantage is cultural, then studying it is by far the most important question in the social sciences.
23. Shocks to human capital persist, shocks to physical capital do not: BOMBS, BRAINS, AND SCIENCE: THE ROLE OF HUMAN AND PHYSICAL CAPITAL FOR THE CREATION OF SCIENTIFIC KNOWLEDGE. Also interesting for the data on Jewish contributions to German science before the war: "While 15.0% of physicists were dismissed, they published 23.8% of top journal papers before 1933, and received 64% of the citations"! h/t @cicatriz
24. Social Mobility and Political Regimes: Intergenerational Mobility in Hungary, 1949-2017. Social mobility rates ~the same during and after communism. Aristocrats still privileged after 1949. h/t @devarbol
25. No causal associations between childhood family income and subsequent psychiatric disorders, substance misuse and violent crime arrests: a nationwide Finnish study of >650 000 individuals and their siblings. A new study from Amir Sariaslan and colleagues, corroborating earlier results from Sweden. Perhaps the Scandinavian nations with their generous social spending are different from countries with greater inequality though?
26. The Lead-Crime Hypothesis: A Meta-Analysis. "When we restrict our analysis to only high-quality studies that address endogeneity the estimated mean effect size is close to zero." That's quite the funnel plot:
27. Better air is the easiest way not to die. On particles in the air, the harm they cause, and how to avoid them. "By all means, control your body-mass, eat well, and start running. Those are important, but they’re also kind of hard. You might fail to lose weight, but if you try to fix your air, you’ll succeed. You should put the stuff with the highest return on effort first, and that’s air."
By the end of the movie Mr. Fox has pillaged and salted three of the country's largest industrial farms and set a small town on fire with acorn bombs. He got symbolically castrated, lost his home, almost lost his marriage, children, and destroyed the homes and businesses of 20 people who were lucky they didn't starve to death—but he's gotten people to read his column.
29. In 1989 there was an ecoterrorist attack on California, using an invasive species of fruit fly.
30. Viral Visualizations: How Coronavirus Skeptics Use Orthodox Data Practices to Promote Unorthodox Science Online. A seemingly-Straussian (but possibly not) paper on the social epistemology of covid skepticism. "Most fundamentally, the groups we studied believe that science is a process, and not an institution. [...] Moreover, this is a subculture shaped by mistrust of established authorities and orthodox scientific viewpoints. Its members value individual initiative and ingenuity, trusting scientific analysis only insofar as they can replicate it themselves by accessing and manipulating the data firsthand."
31. Robin Hanson: Managed Competition or Competing Managers? On how attitudes toward competition influence our judgments about things like evolution and alien civilizations. "This strong norm favoring management over competition helps explain the widespread and continuing dislike for the theory of natural selection, which explicitly declares a system of competition to be the largest encompassing system."
32. The Deep History of Human Inequality. Rousseau, Darwin, and Boehm on the question of evolution and inequality. "Going further, it could be that culture was essential for reversing polygyny. That’s because practising reverse dominance requires collective action. It’s only by working together that bachelors can depose the big boys."
33. Applied Divinity Studies on Stubborn Attachments, longtermism, progress studies, and effective altruism: The Moral Foundations of Progress. "If we stagnate now, we may be able to restart growth in the future. In comparison, an existential catastrophe is by definition unrecoverable. Given the choice, we ought to focus on stability."
34. A new essay from Houellebecq: The narcissistic fall of France.
No, we are not really dealing with a “French suicide” — to evoke the title of Eric Zemmour’s book — but a Western suicide or rather a suicide of modernity, since Asian countries are not spared. What is specifically, authentically French is the awareness of this suicide. [...] By refusing all forms of immigration, Asian countries have opted for a simple suicide, without complications or disturbances. The countries of Southern Europe are in the same situation, although one wonders if they have consciously chosen it. Migrants do land in Italy, in Spain and in Greece — but they only pass through, without helping to sort out the demographic balance, although the women of these countries are often highly desirable. No, the migrants are drawn irresistibly to the biggest and fattest cheeses, the countries of Northern Europe.
35. The Borderless Welfare State, a report from the Netherlands on the costs and benefits of immigration. Summary in English on p. 19. Scroll down for some great charts.
36. Learning to Hesitate: people tend to spend too much time gathering info on low-impact choices, and too little time gathering info on high-impact choices.
37. What if humans and chimpanzees diverged because of ticks? Hair loss as defense against ticks caused babies to be unable to cling to their mothers, which caused upright walking?! Obviously speculative, but I love this kind of speculation.
38. Wikipedia: Meteor burst communications "is a radio propagation mode that exploits the ionized trails of meteors during atmospheric entry to establish brief communications paths between radio stations up to 2,250 kilometres (1,400 mi) apart."
39. Niccolo Soldo interviews Marc Andreessen(?!?!) "I predict that we — the West — are going to WEIRDify the entire world, within the next 50 years, the next two generations. We will do this not by converting non-WEIRD people to WEIRD, but by getting their kids." His interview with "Unrepentant Baguette Merchant" PEG is also entertaining.
40. Everyone with an e-reader has run into public domain ebooks with horrible formatting/OCR errors on Amazon or Project Gutenberg. Standard Ebooks produces high-quality (and free) versions of public domain books.
41. AI-designed hardware. "We believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields."
42. ETH token fights back against frontrunning bots by trapping them in the position.
### Audio-Visual
46. DeepMind's AlphaGo documentary is quite good.
48. And here's Viagra Boys with Girls & Boys from Shrimp Sessions 2.
### Non-Fiction
• The Lives of the Most Excellent Painters, Sculptors, and Architects by Giorgio Vasari. Vasari was a painter and architect who lived in the first half of the 16th century and personally knew many of the greats (including Michelangelo). In this gossipy collection of biographies he covers more than 180 artists, starting with Cimabue and Giotto in the 13thC and ending with Michelangelo and others who were still alive at the time of writing (like Titian and Jacopo Sansovino). The ideas of progress and renaissance are front and center: the great ancients, the decline in the middle ages, and finally the triumphant rebirth of art in his own era. Parts of it are excellent, but it can get a bit dry and repetitive when he describes various minor artists, so I probably wouldn't recommend the full 2000+ page unabridged version. There's a good two-part BBC documentary called Travels With Vasari. Full review forthcoming.
• Not by Genes Alone: How Culture Transformed Human Evolution by Robert Boyd & Peter Richerson. I was curious to see if there was anything in B&R that Henrich failed to capture in his work, and the answer is broadly "no", but there are a few interesting differences: while Henrich is rather triumphalist, B&R take a much more skeptical view of cultural evolution (a Nietzschean perspective, though of course they don't cite him). Unfortunately most of the book is bogged down by a series of dull arguments against various opponents of cultural evolution. My recommendation would be to read The Secret of Our Success, then read just chapter 5 ("Culture is Maladaptive") in this one.
• Great Mambo Chicken and the Transhuman Condition by Ed Regis. A fun pot-pourri of hubristic futurist ideas (cryonics, space habitats, interstellar travel, and so on), and the wild eccentrics who come up with them (Bob Truax, Hans Moravec, Freeman Dyson). The subjects are fascinating, but the book is a bit disorganized and repetitive.
• Essays and Aphorisms by Arthur Schopenhauer. Selections from Parerga und Paralipomena. Very funny, Schopenhauer would have been one hell of a twitter poaster. Surprisingly similar to the pragmatists in some respects. And a pessimistic inverse of Nietzsche in others: "Between the spirit of Graeco-Roman paganism and the spirit of Christianity the real antithesis is that of affirmation and denial of the will to live – in which regard Christianity is in the last resort fundamentally in the right." Will be tackling World as Will and Representation soon-ish.
• Selected Writings by William Hazlitt. How pathetic the petty political polemics of the past appear to the present... I despise his style, especially in the political pieces: cheap bluster that aims only to dazzle, never to illuminate. The puffed-up rhetoric of a third-rate ochlagogue. The non-political writings are much better—they are merely unreadable rather than actively offensive.
• The Literary Art of Edward Gibbon by Harold L. Bond. A fine, short overview. Not aimed at a general audience.
• Fiscal Regimes and the Political Economy of Premodern States, edited by Andrew Monson & Walter Scheidel. I read the three chapters on Rome and skimmed the rest. If an edited volume on the taxation regimes of pre-modern states sounds like an interesting topic to you, check it out. Revenue sources, coinage, debt, trade, principal-agent problems in collection, constraints to budget allocation, and so on. I should probably get to Scheidel's other works at some point.
• The Viennese Students of Civilization: The Meaning and Context of Austrian Economics Reconsidered by Erwin Dekker. On the dry/academic side of things in terms of its style. Ultimately feels a bit superficial: this guy said this, the other guy said that...but we never get a critical examination of the substance of the arguments. The key take-away is that the "Austrian school" focused mostly on humbleness before evolved institutions, and emphasized the necessity of limits in order to have practical freedom.
• Failure Is Not an Option: Mission Control From Mercury to Apollo 13 and Beyond by Gene Kranz. Fascinating subject, but written in a dry, militaristic, PR-conscious style. Even the story of Apollo 13 can become almost boring when told in this manner. Focused entirely on the mission control perspective. The most interesting aspect is how uncredentialed and inexperienced everyone was, and how quickly the space program moved. Reminiscent of Napoleon after the revolution. Feels like they really got lucky sometimes. Genius in hiring?
• The 48 Laws of Power by Robert Greene. For some reason I read a bunch of "self-help"(-adjacent) books. This one is really anodyne compared to what I was expecting. It's mostly famous as a book read by ruthless rappers, but it's just a bunch of amusing historical anecdotes plus a boatload of confirmation bias. Greene likes the history of Japanese tea ceremonies, France during the Ancien Régime and the revolution, ancient Rome and Greece, and even takes several stories from Giorgio Vasari! Above all he likes Baltasar Gracián, whose The Pocket Oracle and Art of Prudence I can heartily recommend.
• Influence: Science and Practice by Robert Cialdini. While 48 Laws of Power presents itself as a manual of manipulation and Influence presents itself more as a disinterested scientific study, the former is actually about airy stories of kings and courtiers while the latter is a cynical dark arts manual for manipulating your coworkers. Make of that what you will. Repetitive & overlong. Also, Cialdini loves to cite dubious social science papers—Milgram, Robber's Cave, etc. Still, the broad strokes are fairly convincing.
• The Presentation of Self in Everyday Life by Erving Goffman. Class-signaling behaviors, profession-signaling behaviors, and so on, viewed through the lens of theatrical presentation. Rather one-sided, I feel it misses situations that can't be boiled down to actor-audience. Nothing really surprising, I think most people will have noticed most of this stuff. Also draws on many questionable historical examples (for example he repeatedly uses the Thugs to illustrate his points).
• Impro: Improvisation and the Theatre by Keith Johnstone. The general observations on status, presentation, space, etc. are quite good, but when he gets into the specifics about theater and masks it's rather dull and fluffy. Would have preferred something a bit more solid.
### Fiction
• Uzumaki by Junji Ito. Horror manga. Starts with a simple idea: spirals are kinda creepy. From there it spins out in every direction, finally ending up in a bizarre post-apocalyptic Lovecraftian scenario. A virtuosic display of variations on a visual theme. Fantastic art, fantastically weird. Highly recommended. Lots of crazy body horror, not for the squeamish.
• The Sailor Who Fell from Grace with the Sea by Yukio Mishima. A great short novel about the sea, glory, death, and wanting to have sex with your mother. Somewhat autobiographical, in a symbolic way. Nihilism, tradition vs westernization, youth vs age, all in a lyrical and nautical style.
• Mao II by Don DeLillo. Cults, mass media, a reclusive author. Love the style, very impressionistic. Lots of great sentences and great paragraphs, unfortunately they do not combine to form a Great Novel, the ideas never coalesce into anything solid. DeLillo revisits many of his typical themes here: American foreign policy, terrorism, cults, etc. Rather presciently written in 1991, very pessimistic on the potentials of mass action. "The future belongs to crowds."
• Libra by Don DeLillo. A semi-fictionalized biography of Lee Harvey Oswald, based on the CIA/Cuban exiles conspiracy theory of the JFK assassination. Somewhat conventional in its style, and Pynchonesque in its attitude: conspiracies, axes of control and influence, strange coincidences, overeager pattern-matching, taking liberties with history. It's lacking the humor though. There's also a kind of meta parallel story of an FBI agent trying to piece together all the evidence, meticulously going through even the tiniest element (much like DeLillo). Pretty good, but The Names remains my favorite DeLillo.
• The Pussy by Delicious Tacos. A collection of autobiographical vignettes about sex and relationships. Starts out extremely vulgar and extremely funny, ends up in deep ugliness and despair. A tragedy disguised as a comedy. Pure blackpill fuel: a dystopian vision of work, love, aging, and human connection in our society. Slightly longer review.
• The Unnamable by Samuel Beckett. If you're interested in the extremes of experimental literature, this is a book for you. The novel at its most abstract and formless. Virtually no characters, plot, movement, imagery, dialogue, paragraphs, or really anything else you might normally associate with a novel. I wouldn't say it's a pleasurable read, but it's an interesting one at least. Isolation, existential loneliness, death.
• How It Is by Samuel Beckett. It can't possibly be sparser and more formless than The Unnamable, you think. But it is! Beckett does away even with coherent, full sentences in this one. Nothing but a series of roughly sketched impressions, in a halting and disjointed language. Not really my jam.
• Wasteland of Flint by Thomas Harlan. A fun space opera in a unique setting (an Aztec-Japanese space empire), focused on xenoarchaeology. Ancient aliens, some cool Solaris-like ideas, some really out-there imagery. Unfortunately it's mostly sequelbait and the sequels don't seem to be very good.
• Too Like the Lightning by Ada Palmer (dropped it half-way through). Wat. My reaction to this book is just pure bewilderment. I love Ada Palmer's blog, but wtf is going on here? Am I supposed to be laughing at the terrible narrator and his horrifically bad similes? Is it for children? The magical boy protagonist and philosophy 101 stuff certainly seems to indicate so. Or maybe "young adults"? What's with the nonsensical worldbuilding (an SF/fantasy future that worships 18th century philosophers, with absurd coincidences piled on top of each other)? And apparently none of the plot is resolved by the end of the book! The whole thing reminded me of the "taxation of trade routes" stuff from the prequels, and this image kept popping into my head:
# On the Pension Apocalypse
Aging populations, archaic pay-as-you-go systems, and undercapitalized pension funds will create huge problems for future retirees. Just how bad is it, and what should you do about it?
# The Situation
In the past there were many workers and few retirees, so it seemed like a good idea to have the workers pay for old peoples' pensions and promise them the same in return. Thus the pay-as-you-go pension system was born. But people stopped having children, started living longer, and the worker:retiree ratio has been falling and will continue to fall precipitously. These problems will be coming home to roost over the next few decades.
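The arithmetic behind a pay-as-you-go scheme is worth writing down, because it shows how directly the required contribution rate depends on the worker:retiree ratio. A minimal sketch; the 50% replacement rate and the support ratios are illustrative assumptions, not figures for any particular country:

```python
# Pay-as-you-go arithmetic: current workers' contributions fund current benefits,
#   contribution_rate * workers * avg_wage = replacement_rate * avg_wage * retirees
# which rearranges to:
#   contribution_rate = replacement_rate / (workers per retiree)
def required_contribution_rate(replacement_rate: float, workers_per_retiree: float) -> float:
    return replacement_rate / workers_per_retiree

REPLACEMENT_RATE = 0.50  # pension equal to 50% of the average wage (assumption)
for support_ratio in (4.0, 3.0, 2.0, 1.5):
    rate = required_contribution_rate(REPLACEMENT_RATE, support_ratio)
    print(f"{support_ratio:.1f} workers per retiree -> {rate:.0%} payroll contribution")
# 4.0 -> 12%, 3.0 -> 17%, 2.0 -> 25%, 1.5 -> 33%
```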
To put things into perspective: simply maintaining the current prime:aged ratio (the number of prime-age people per old person) would require 383 million additional prime-aged people by 2050. The math is clear, and even if fertility tripled tomorrow morning there's a huge lag until that actually starts affecting the economy.
How much will it cost? It's hard to say exactly, the projections depend on fertility, longevity, immigration, growth, and the actual pensions. Plus there are non-pension expenses to take into account: government-funded healthcare spending on retirees is going to increase as well. On the low end some (including the EU) project an increase in spending of just ~3% of GDP, but I find that highly implausible. My own forecast would be around 10% of GDP for the average advanced economy by 2050.
For countries with relatively low government spending and good growth prospects like the US this might not be a problem. For European countries that already have government spending in the 55%+ of GDP range however, things look dire. Raising an additional 10% of GDP through taxation would result in a 20-25% cut in disposable income for the average worker for literally nothing in return. Combine that with low/zero growth and things start looking really bad.
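The 20-25% figure is consistent with simple arithmetic. A rough sketch of the reasoning, treating whatever share of GDP the state does not already absorb as a proxy for workers' disposable income (a deliberate simplification):

```python
# Treat whatever share of GDP the state does not already absorb as a rough
# proxy for workers' disposable income (a deliberate simplification).
CURRENT_GOV_SHARE = 0.55  # government spending as a share of GDP (from the text)
EXTRA_NEEDED = 0.10       # additional taxation as a share of GDP (from the text)

disposable_before = 1 - CURRENT_GOV_SHARE            # 0.45
disposable_after = disposable_before - EXTRA_NEEDED  # 0.35
cut = EXTRA_NEEDED / disposable_before

print(f"Disposable share falls from {disposable_before:.0%} to {disposable_after:.0%}")
print(f"That is roughly a {cut:.0%} cut for the average worker")  # ~22%
```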
Anyone under the age of 40 or so should expect to receive little in return for their pay-as-you-go pension system contributions. Is it unfair that today's workers slave away, are forced to give away all their money to the boomers, only to receive virtually nothing in return? Sure. Is there anything you can do about it? No. Welcome to democracy.
## Regional Variation
There is enormous variation in pension systems both between and within countries. Places with relatively small pay-as-you-go systems and heavy reliance on private pensions are probably going to be fine. On the other hand there are municipalities in the US which have already started defaulting.
### EU
By 2050, the German workforce is expected to shrink by about 10 million people while the number of retirees will increase by about 7 million people. Most European countries should expect little to no GDP growth in the coming decades, as workforce declines will offset productivity gains. And most of Europe isn't seeing any productivity gains anyway (though some countries, such as Germany, have been growing):
Even more terrifying is the fact that nobody really seems to care about growth in Europe. There's this idea that the EU is ruled by technocrats, but these "technocrats" seem more concerned with adding annoying popups to every website than the permanent collapse of economic growth in the European Union.
Japan has had zero GDP growth since 1995 (which was also when its workforce was at its highest point), and Europe should expect a similar future. Here's what the Nikkei 225 has looked like over the past 3 decades, by the way:
The pie is no longer growing; all that's left is the fight over who gets the biggest piece. Sam Altman is right when he argues that zero-sum economics create a toxic political environment.
In a system with economic growth, things can improve for everyone. In a system without growth, or even one with very little growth, that’s not the case—if things improve for me, it has to come at the expense of things getting worse for you. Without growth, we’re voting against someone else’s interest as much as we’re voting for our own. This ends with lots of fighting and everyone feeling screwed, broken into factions, and unmotivated. Democracy does not work well in a zero-sum world.
People seem either unaware of what is to come or incapable of preparing for it. Even in prosperous countries like Germany and France, median savings are below €100k. The wealthiest German cohort, those aged 55-64, have median net wealth of €180k, and the younger generations don't seem to be in a hurry to save for retirement. 42% of Europeans have less than three months’ take-home pay saved.
### Japan

Despite being ahead of the curve on aging, Japan is actually in a pretty good position as it only spends ~10% of GDP on pensions. Compare that to 17% in Italy, 14.5% in France, and 10% in Germany even though those places have significantly smaller retired populations. How do they do it? It's a pay-as-you-go system that simply doesn't pay out very much: the average pension is only ~$2k per month for a married couple. Could you live on that budget? Despite this, they are cutting pensions, increasing the retirement age, and finding ways to get older people to keep working. It's also worth mentioning, however, that they've been running deficits for 30 years and have a debt/GDP ratio of over 230%. Total government spending has been hovering around 40% lately, so it would seem that they have room to increase taxes if it becomes necessary.

### China

China is in a nightmarish demographic position and needs to maintain rapid growth despite a declining workforce. Their age pyramid is a time bomb that's about to explode:

In 2011, every pensioner was supported by 3.1 workers. By the end of 2017, that ratio had fallen to 2.8-to-one, and the Ministry estimates that by 2050, it will be just 1.3-to-one.

In 2016 the one-child policy became the two-child policy. In 2021, the two-child policy became the three-child policy. But it's too late. How long can China keep up the "outgrow the debt" strategy with a declining workforce? And what happens when growth stalls? This seems like one of the likelier scenarios for the next global recession. Of course many have predicted this collapse before, and they were wrong. But the demographic problem is unavoidable.

The retirement age is quite low: 60 for men and 55 for women; we can probably expect this to change, which will give them a bit of breathing room. But any such changes are wildly unpopular. On top of that, pension funds are already heavily reliant on additional funding from the central government.

### USA

Given its low average age and strong growth, the US is in a decent position compared to the EU and China. But there is a large amount of variation within the country: some local governments are doing perfectly fine, while others have serious problems with defined-benefit pensions for public employees. Politicians have been promising generous pensions without bothering to fund them (with the assistance of absurd return assumptions from the funds): pensions give them the ability to offer huge payouts to special interest groups without impacting the budget immediately.

The logic of public choice is so clear that there is only one really serious question left, and that is why states haven't collapsed already.

As these pensions start taking up a larger percentage of state/local revenues, things will come to a head. In Illinois, for example, pensions took up about 4% of the budget in the 90s. Today it's 25% and growing. There are three alternatives, all painful: cut pensions, cut other services, or start raising taxes. How much of that will people tolerate before they start moving out?

If this were simply a horrific problem that we were trying to deal with, it would be bad enough. But it's a horrific problem that we are ignoring, and will continue ignoring until it blows up in our faces. In the middle of the longest bull market in US stock market history, pension deficits have ballooned:

Just imagine what a decade of weak stock market returns would do.
At the federal level, Social Security has about 15 years until they have to start cutting benefits, but it won't be that expensive to shore it up. And most importantly, the US is growing, and has a lot more room left for tax increases.

# What Governments Can Do

How will governments respond to the pension apocalypse? All the alternatives seem bad: pension cuts, big tax increases, vast borrowing, inflation, unprecedented immigration. Nobody wants to do any of these things, but the math must eventually balance out. In the end something's gotta give. This survey of Europeans captures the heart of the problem:

When it comes to the measures required, even those respondents who acknowledge the threat of demographic problems appear to be fairly reluctant to endorse them: most of the reform proposals are refused by the majority.

Everyone understands that governments either need to tax more or pay out less, but people aren't ready to accept either solution. Just 46% support a system that combines basic public pensions with private savings! Even conservatives in America hate the idea of cuts: just 15% of Republicans support Medicare spending cuts, while 10% support Social Security cuts. And when you spend $2T on "stimulus" at a time when there is no AD shortfall, how are you going to close the taps later? With such large political costs (old people are sympathetic, numerous, and politically influential), few politicians are willing to take the necessary steps. And the worse the worker:retiree ratio, the more political power the retirees have—this is not a self-balancing problem.
The example of Japan shows that these problems are not insurmountable, as long as politicians are willing to make difficult choices (and the people accept those choices). The earlier reforms are enacted, the easier things will go, but in most places I expect it will be impossible until a breaking point is reached. Maybe in the end we'll just get a little bit of everything and the math will balance out. But someone is going to have to make sacrifices.
The biggest danger comes not from the pension apocalypse itself, but rather from the stupid things politicians might do to avoid addressing the pension problem head-on. Some possible scenarios:
• Huge tax increases → mass emigration → death spiral
• Central banks monetize debt → hyperinflation → economic crash
• Central banks don't monetize debt → debt crisis → Greece 2.0
## Grow
You can think of pension liabilities like debt: you can keep growing it forever without problem as long as you also grow your economy quickly enough. We can talk about progress studies as much as we want, but the practical reality on the ground is not encouraging when it comes to growth. Especially in Europe, it is more or less a distant dream rather than a real possibility. And things are slowing down even in the US.
China has no alternative, and so far it seems to be succeeding against all expectations (though the data is fake to some extent, see this and this). We'll see how long they can keep it up.
## Pay Out Less
One possibility is, of course, a straight cut to pensions. But you have to keep in mind that old people tend to vote at higher rates than young people, and that due to demographic collapse the old people will be the most powerful voting block in these countries. People get angry when you cut spending. They get especially angry when they have paid in quite a lot of money to the pension system and will not see much in return. Even the best-managed systems (like the Dutch) will be running into trouble though.
## Raise the Retirement Age
Instead of paying out less, you can try to raise the retirement age instead. This not only decreases the total amount you need to pay, but also props up the worker:retiree ratio. It also has the benefit of not affecting current retirees: bypassing that powerful bloc makes changes easier to implement from a political perspective. But people in surveys say they expect to retire around 63, so I don't know how politically viable this plan is going to be in practice.
For example, Denmark plans to raise the retirement age in step with increases in life expectancy. Under this model, a Danish worker born in 1990 can expect "early retirement" at 70 and normal retirement at 73!
To which I say: fuck off and die.
Edit: after some conversations I have decided that raising the retirement age might not be that bad. Lots of people are still able and willing to work in their 60s and 70s. The best solution would probably be a flexible system in which people can choose when to retire, and the benefits adjust accordingly (the earlier you stop working, the less you get).
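One way such a flexible system could work (a sketch only, not a description of any existing scheme) is to scale the annual benefit so that expected lifetime payouts are roughly the same whenever you choose to retire:

```python
# Actuarially neutral benefit adjustment (sketch): scale the annual pension so
# that expected total payouts are roughly equal whenever you claim.
# All numbers are illustrative assumptions; real schemes also use discounting
# and survival probabilities, which this toy version ignores.
LIFE_EXPECTANCY = 85        # assumed average age at death
REFERENCE_AGE = 67          # assumed "normal" retirement age
REFERENCE_BENEFIT = 20_000  # assumed annual pension if claimed at the reference age

def adjusted_benefit(claim_age: int) -> float:
    years_at_reference = LIFE_EXPECTANCY - REFERENCE_AGE
    years_if_claimed = LIFE_EXPECTANCY - claim_age
    return REFERENCE_BENEFIT * years_at_reference / years_if_claimed

for age in (62, 65, 67, 70):
    print(age, round(adjusted_benefit(age)))
# 62 -> 15652, 65 -> 18000, 67 -> 20000, 70 -> 24000
```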
## Tax
Raising taxes is another possibility, but how much slack is there in income taxation given a declining base? The US (which is currently at <40% government spending/GDP) has a lot of wiggle room, and you could even say the same about China. But for Europe you have to figure that at some point they'll be hitting the downward slope of the Laffer curve. Emigration is easier than ever and the people with the greatest ability to work remotely also tend to be those who are most desirable from a fiscal perspective.
## Immigration
The sheer number of people needed makes immigration a partial solution at best, and only a few countries use immigration in a way that actually helps. Canada, New Zealand, Australia, and Switzerland for example have fairly reasonable immigration policies that select for high human capital: Canada has the smartest immigrants in the world (average PISA math scores of 527, higher than the natives' and corresponding to an IQ around 103). But despite high population growth and productive immigrants, Canada still faces a shortfall in the near future.
Needless to say, immigration policies that select for low human capital (US: average 1st gen immigrant PISA math score 437, corresponding to an IQ around 91) only make the problem worse. In Europe, non-EU migrants are less likely to be employed and earn much less than Europeans when they are employed. You can't fill a fiscal hole by adding more fiscal burdens to your society.
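The IQ figures quoted in this section look like a simple rescaling of PISA scores. A sketch of that conversion, assuming a PISA mean of 500 and standard deviation of 100 mapped onto an IQ mean of 100 and standard deviation of 15; the sources above may use slightly different norms:

```python
# Map a PISA score to an approximate IQ by matching z-scores.
# Assumes PISA ~ N(500, 100) and IQ ~ N(100, 15); the sources quoted in the
# text may use slightly different reference populations.
def pisa_to_iq(pisa_score: float, pisa_mean: float = 500.0, pisa_sd: float = 100.0) -> float:
    z = (pisa_score - pisa_mean) / pisa_sd
    return 100 + 15 * z

for score in (527, 437):
    print(score, f"{pisa_to_iq(score):.1f}")
# 527 -> ~104 (the text quotes ~103 for Canada's immigrants)
# 437 -> ~90.5 (the text quotes ~91 for the US's)
```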
There is astonishingly little international competition for productive people, but I think that is going to change in the future. This process has already started, with some countries offering digital nomad visas, sometimes with tax incentives on top. In Italy some cities will pay half your rent. I imagine there will be calls for coordination to prevent a "race to the bottom", but I doubt there will be any kind of global agreement on the matter.
## Debt
Hell, if zero rates persist you could just fund the whole thing with debt. Rising interest rate forecasts have been a complete meme for more than a decade now, maybe free money is the new normal. On the other hand, at high levels of debt/GDP it only takes a small rise in rates to create serious problems (and possibly trigger debt crises). But how long can this last? Perhaps the "solution" to rising rates will be inflation, just kicking the can even further down the road.
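To see why high debt levels make even a modest rise in rates dangerous, here is a quick sketch; the debt ratios and the one-point rate shock are illustrative, and the extra cost only materializes as existing debt rolls over:

```python
# Extra annual interest cost as a share of GDP, once outstanding debt has
# rolled over into the higher rate: debt_ratio * rate_increase.
RATE_INCREASE = 0.01  # a one-percentage-point rise in borrowing costs (assumption)

for debt_to_gdp in (0.6, 1.0, 1.5, 2.0):
    extra_cost = debt_to_gdp * RATE_INCREASE
    print(f"debt at {debt_to_gdp:.0%} of GDP -> +{extra_cost:.1%} of GDP per year in interest")
# 60% -> +0.6%, 100% -> +1.0%, 150% -> +1.5%, 200% -> +2.0%
```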
## The Andrew Dobson Gambit
Inflate!
How much willingness for debt monetization is there among independent central banks? Probably not much (who knows though—remember "no bailouts"? lol) On the other hand, how long can CBs retain their independence against mounting political pressure? What's more unpopular, high inflation or pension cuts? This seems like a fairly unlikely scenario.
## Transition to DC Plans
The countries that are best prepared have some combination of well-funded basic public pension system that makes sure old people don't starve, combined with defined-contribution pensions. Governments with large defined-benefit plans will either need to take serious pain, or start transitioning to defined-contribution plans. The problem with making this transition is that it's expensive immediately, and extremely difficult politically. They tried to do it in Illinois and it was shot down by the courts:
Under the Illinois Supreme Court’s 2015 precedent, a government worker’s pension benefits cannot be changed in any way after their first day working for the state.
If you thought pensioners were a powerful lobby, wait till you see what public employees get away with.
# What You Can Do
First of all, understand that you need to save for retirement.
After that, just follow the standard boring investing advice. Right now is not a great time (high valuations after a 12-year bull market that quintupled the S&P500, near-zero bond yields), but I'm sure there will be good opportunities in the decades to come. The safe withdrawal rate (how much you can withdraw from your investments every year without running out of money before you die) is generally held to be around 3-4%. Suppose you can get by on $30k/year, you'll need $1m in investments. That number has to be adjusted for inflation: assuming you'll retire in 2060 and 2% inflation, that's $2.2m in 2060-dollars. Getting there isn't that difficult: saving $10k/year for 40 years with 7% annual returns will get you to $2m. The earlier you start the better.
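The arithmetic in that paragraph is easy to reproduce. A minimal sketch using the figures above (a 3% withdrawal rate, 2% inflation, 7% nominal returns, roughly four decades to 2060):

```python
# Reproduce the back-of-the-envelope retirement numbers from the text.
SPENDING = 30_000        # desired annual spending in today's dollars
WITHDRAWAL_RATE = 0.03   # lower end of the 3-4% safe withdrawal rate
INFLATION = 0.02
YEARS_OF_INFLATION = 39  # roughly 2021 -> 2060
ANNUAL_SAVINGS = 10_000
NOMINAL_RETURN = 0.07
YEARS_OF_SAVING = 40

nest_egg_today = SPENDING / WITHDRAWAL_RATE                             # $1.0m
nest_egg_2060 = nest_egg_today * (1 + INFLATION) ** YEARS_OF_INFLATION  # ~$2.2m

# Future value of saving $10k/year for 40 years at 7% (ordinary annuity):
future_value = ANNUAL_SAVINGS * ((1 + NOMINAL_RETURN) ** YEARS_OF_SAVING - 1) / NOMINAL_RETURN

print(round(nest_egg_today), round(nest_egg_2060), round(future_value))
# approximately 1,000,000   2,165,000   2,000,000
```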
Where to put the money? I'd go with some sort of global equity ETF, perhaps with a tilt toward the US. Beware equity home bias, unless you're American.
What happens if the political demands generated by the collapse of pension systems end up causing a hyperinflationary scenario? As long as you have the money in real estate or equities, you'll probably be fine. German stocks actually did fine in the Weimar hyperinflation era (but only if you held them through an 80% drawdown).
It's an absolutely terrible time for bonds, don't be misled by the incredible bull market of the last 40 years. 60/40 is going to look much worse in the future. Inflation goes up, you're screwed; rates go up, you're screwed. The Greek 10y bond is currently yielding 0.824%—this is a country with a debt/GDP ratio over 200%, a GDP 35% lower than it was 10 years ago, and a recent history of default. The bond market is absolutely nuts right now.
If things get bad enough you might want to protect against expropriation, which means international diversification. But I doubt things will get that bad.
Looking beyond investments, you could move to a cheaper country, which would allow you to get away with lower savings. There are nice places in SEA or South America that are both civilized and cheap. It's pretty easy for Norteamericanos and Europeans with some savings to get retiree visas. If you were retiring now, Argentina would be an interesting choice: very cheap due to the currency situation but still a safe & pleasant country. As long as your investments are in a stable currency, you can go wherever you want. On the other hand your home country might be unwilling to pay out even your meager pension if you don't actually live there, so plan accordingly.
You could also have more kids. The society-wide dependency ratio is going to be pretty bad, but if you have enough kids your family dependency ratio could be relied on instead. It's probably a bad idea to have kids as a retirement strategy, but if you're leaning in that direction already why not read Caplan's book and pop out another one?
# What to Expect
Some countries have a political culture that allows for tough decisions to be made and accepted, but for everyone else I think the most likely course is to kick the can down the road and muddle along while the problems accumulate, until a crisis erupts.
## Worst-Case Scenario
Weimar, then war? Probably not. Societies filled with old people don't do revolution or war. The age of bangs is over; we only have whimpers to look forward to. At worst we'll see a death spiral of stagnation, brain drain, expropriation, and perhaps devaluation/inflation. Think South America. They won't let you starve, but it won't be very nice either. If the catastrophe isn't global, and you manage to keep your portfolio out of their hands, you'll be fine.
## Best-Case Scenario
Cheap fusion energy or friendly superhuman general artificial intelligence? If we get a significant increase in growth, the pension problems disappear.
1. I believe the first such system was set up in Germany in 1889.
2. They could theoretically cut spending on other things to compensate, but good luck with that.
3. In Japan 28% of the population is >65 years old. That number is 23% in Italy and 21.5% in Germany.
4. When 30%+ of the population is retirees, nobody's getting elected without their vote.
5. "Expenditure cuts carry a significant risk of increasing the frequency of riots, anti-government demonstrations, general strikes, political assassinations, and attempts at revolutionary overthrow of the established order. [...] Once unrest erupts, governments quickly reverse course and increase spending in the following year".
6. European immigration policy is such a mystery to me it might as well be a supranatural phenomenon. In the US at least you can explain it through the political motive. But what about Germany? Poor, unemployed migrants obviously don't vote CDU. If there is any intentionality at all behind European immigration policy (and there probably isn't) it must be based on a fundamental misunderstanding of how the welfare state works.
7. Or, you know...the JvNs.
|
Pre/post echoes are a disturbing artifact in transform coding. These artifacts are most noticeable when there is a sudden increase or decrease in signal energy. Because of their effect on the quality of the communication, standards such as G.729.1 have a module to reduce the echoes present in the decoded speech. Building on The Cause of Echoes in Coding, one can begin to decide how to reduce the effect of echoes.
There are two approaches to handling pre-echoes: in the encoder and/or in the decoder. Both cases rely on a detector for determining the transition regions that cause echo. G.729.1, which has reduction built into the decoder, has the advantage of the time-domain coded signal (which will not have any echo in it) as a reference signal for reducing the echo in the Modified Discrete Cosine Transform (MDCT) enhancement layers of the coder. In this system there are two levels of detection. First, two frames of the synthesized MDCT signal are concatenated. This concatenated signal is split up into 8 subframes of 5 ms. If one of the subframes has significantly more energy than its neighboring subframes, the frame has the potential to have generated pre/post echo. The next level of detection compares the signal energy of the decoded time-domain and transform-domain signals. If the signal energy of the MDCT signal is greater than that of the time-domain signal, the region can safely be claimed to contain an echo artifact. The gain used to modify the data is:
$g(n) = \left( \frac{E_{TD}}{E_{MDCT}} \right)^{\frac{1}{2}}$
where $E_{TD}$ is the energy of the time-domain signal and $E_{MDCT}$ is the energy of the transformed signal.
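As a rough illustration of this detect-and-attenuate idea (a sketch only, not the actual G.729.1 procedure; the 5 ms subframes and the energy-ratio gain follow the description above, while the frame length, sampling rate, and triggering rule are assumptions):

```r
# Toy pre/post-echo attenuation in R.
# x_mdct: decoded MDCT enhancement-layer signal for one frame (may contain echo)
# x_td:   decoded time-domain signal for the same frame (echo-free reference)
attenuate_echo <- function(x_mdct, x_td, fs = 8000, subframe_ms = 5) {
  n_sub <- round(fs * subframe_ms / 1000)       # samples per 5 ms subframe
  n     <- min(length(x_mdct), length(x_td))
  out   <- x_mdct
  for (start in seq(1, n - n_sub + 1, by = n_sub)) {
    idx    <- start:(start + n_sub - 1)
    e_mdct <- sum(x_mdct[idx]^2)                # subframe energy, transform domain
    e_td   <- sum(x_td[idx]^2)                  # subframe energy, time domain
    if (e_mdct > e_td && e_mdct > 0) {          # echo suspected in this subframe
      g <- sqrt(e_td / e_mdct)                  # g(n) = (E_TD / E_MDCT)^(1/2)
      out[idx] <- g * x_mdct[idx]               # pull the excess energy back down
    }
  }
  out
}
```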
Methods for reducing echoes in the encoder use psychoacoustic properties of the human ear and adaptive window lengths. The detector in an encoder also observes the energy in small subframes and looks ahead for significant changes in energy. More advanced detectors make use of some pre-filtering to ensure that any significant transients likely to cause echoes are detected. For example, if there is a steady low-frequency signal source and a high-frequency signal source is added to it, the high-frequency source will not get detected while the low-frequency signal energy dominates. Therefore, the signal should be high-pass filtered so that high-frequency transients can be detected.
|
A periodic decimal expansion
Let us suppose that $\{\alpha_{n}\}_{n \in \mathbb{N}}$ is a strictly increasing sequence of natural numbers and that the number obtained by concatenating the decimal representations of the elements of $\{\alpha_{n}\}_{n \in \mathbb{N}}$ after the decimal point, i.e.,
$0.\alpha_{1}\alpha_{2}\alpha_{3}\ldots$
has period $s$ (e.g., $0.121212\ldots$ has period $2$).
If $a_{k}$ denotes the number of elements in $\{\alpha_{n}\}_{n \in \mathbb{N}}$ with exactly $k$ digits in their decimal representation, does the inequality
$a_{k} \leq s$
always hold?
What would be, in your opinion, the right way to approach this question? I've tried a proof by exhaustion without much success. I'd really appreciate any (self-contained) hints you can provide me with.
-
From your question it is not fully clear whether you know if the answer is yes, or no, or if you want to prove that $\alpha_k \leq s$ actually holds. If you don't know, the best idea would seem to be to look for a counterexample. Also, it may be more accurate to say that the concatenated sequence has length $\leq s$ – gary Jul 27 '11 at 15:59
Also, if I understood the problem well, it would seem you can get a counterexample by using s=1, and finding sequences that generate irrational numbers; I think concatenating the elements of an arithmetic progression would do. – gary Jul 27 '11 at 16:06
@gary: I think you misunderstood the question; see the two answers given. – joriki Jul 27 '11 at 16:28
All numbers with exactly $k$ digits are consecutive in $\{\alpha_{n}\}_{n \in \mathbb{N}}$. If $a_k>s$, then string the first $s$ numbers with $k$ digits together. The resulting string has $ks$ digits, which is exactly $k$ periods, so you're now at the same point in the period as you were at the beginning of the first number with $k$ digits. By periodicity, the next number with $k$ digits would have to be the same as the first, which isn't possible since the sequence strictly increases.
If the period is $s$ then there are essentially $s$ starting places in the recurring decimal for a $k$-digit integer - begin at the first digit of the decimal, the second etc - beyond $s$ you get the same numbers coming round again. If you had $a_k > s$ then two of your $\alpha_n$ with $k$ digits would be the same by the pigeonhole principle.
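In symbols, the counting in the first answer reads as follows: if $a_k > s$ and the elements with exactly $k$ digits are $\beta_1 < \beta_2 < \cdots$, then

$$\underbrace{\beta_1\beta_2\cdots\beta_s}_{ks \text{ digits}} \text{ spans exactly } k \text{ full periods,}$$

so $\beta_{s+1}$ begins at the same offset within the period as $\beta_1$ and occupies the same number of digits; periodicity then forces $\beta_{s+1}=\beta_1$, contradicting the strict increase of the sequence.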
|
# polar: Polar line of point with respect to a conic In RConics: Computations on Conics
## Description
Return the polar line l of a point p with respect to a conic with matrix representation C. The polar line l is defined by l = Cp.
## Usage
1 polar(p, C)
## Arguments
p: a (3 \times 1) vector of the homogeneous coordinates of a point.
C: a (3 \times 3) matrix representation of the conic.
## Details
If the point p lies on the conic, the polar line of p is tangent to the conic at p.
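A minimal standalone illustration of l = Cp, independent of the package functions used in the Examples below (the unit-circle conic matrix is written out by hand here):

```r
# Unit circle x^2 + y^2 = 1 in matrix form, and a point on it
C <- diag(c(1, 1, -1))      # conic matrix of the unit circle
p <- c(1, 0, 1)             # homogeneous coordinates of the point (1, 0)
l <- C %*% p                # polar line l = C p
l                           # (1, 0, -1): the line x = 1, tangent to the circle at p
```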
## Value
A (3 \times 1) vector of the homogeneous representation of the polar line.
## References
Richter-Gebert, Jürgen (2011). Perspectives on Projective Geometry - A Guided Tour Through Real and Complex Geometry, Springer, Berlin, ISBN: 978-3-642-17285-4
## Examples
# Ellipse with semi-axes a=5, b=2, centered in (1,-2), with orientation angle = pi/5
C <- ellipseToConicMatrix(c(5,2),c(1,-2),pi/5)
# line
l <- c(0.25,0.85,-1)
# intersection conic C with line l:
p_Cl <- intersectConicLine(C,l)
# if p is on the conic, the polar line is tangent to the conic
l_p <- polar(p_Cl[,1],C)
# point outside the conic
p1 <- c(5,-3,1)
l_p1 <- polar(p1,C)
# point inside the conic
p2 <- c(-1,-4,1)
l_p2 <- polar(p2,C)
# plot
plot(ellipse(c(5,2),c(1,-2),pi/5),type="l",asp=1, ylim=c(-10,2))
# addLine(l,col="red")
points(t(p_Cl[,1]), pch=20,col="red")
addLine(l_p,col="red")
points(t(p1), pch=20,col="blue")
addLine(l_p1,col="blue")
points(t(p2), pch=20,col="green")
addLine(l_p2,col="green")
# DUAL CONICS
saxes <- c(5,2)
theta <- pi/7
E <- ellipse(saxes,theta=theta, n=50)
C <- ellipseToConicMatrix(saxes,c(0,0),theta)
plot(E,type="n",xlab="x", ylab="y", asp=1)
points(E,pch=20)
E <- rbind(t(E),rep(1,nrow(E)))
All_tangant <- polar(E,C)
apply(All_tangant, 2, addLine, col="blue")
RConics documentation built on May 30, 2017, 5:22 a.m.
|
# American Institute of Mathematical Sciences
October 2018, 23(8): 3275-3296. doi: 10.3934/dcdsb.2018244
## Stochastic non-autonomous Holling type-Ⅲ prey-predator model with predator's intra-specific competition
1 Department of Mathematics, Indian Institute of Engineering Science and Technology, Shibpur, Howrah, West Bengal 711103, India
2 Department of Mathematics, Vivekananda College, Thakurpukur, Kolkata-700063, India
Received: December 2017. Revised: February 2018. Published: October 2018 (early access August 2018).
The objective of this article is to study the significance of dynamical properties of non-autonomous deterministic as well as stochastic prey-predator model with Holling type-Ⅲ functional response. Firstly, uniform persistence of the deterministic model has been demonstrated. Secondly, stochastic non-autonomous prey-predator system with Holling type-Ⅲ functional response is proposed. The existence of a global positive solution has been derived. Sufficient conditions for non-persistence in mean, weakly persistence in mean, extinction have been derived. Moreover the sufficient conditions for permanence of the system have been established. The analytical results are verified by numerical simulation.
Citation: Sampurna Sengupta, Pritha Das, Debasis Mukherjee. Stochastic non-autonomous Holling type-Ⅲ prey-predator model with predator's intra-specific competition. Discrete & Continuous Dynamical Systems - B, 2018, 23 (8) : 3275-3296. doi: 10.3934/dcdsb.2018244
Numerical simulation for the deterministic system (1) with initial condition (0.2, 0.3) by $a_1(t) = 0.1+0.01 \sin t, \ a_2(t) = 0.02+0.01\sin t$ shows the stable behavior of prey and predator
Numerical simulation for the deterministic system (1) with initial condition (0.2, 0.3) by $a_1(t) = 2+0.1 \sin t,\ a_2(t) = 1+0.1\sin t$ shows the unstable behavior of prey and predator
Numerical simulation for the deterministic system (1) with (0.2, 0.3) by $r(t) = 5 + 2.5 \sin t,~b(t) = 0.22+0.02\sin t,~c(t) = 0.01+0.005\sin t,~d(t)$ $= 0.2+0.01\sin t,~a_1(t) = 0.1+0.1\sin t,~a_2(t) = 1+0.1\sin t$ shows that system is persistent
Numerical simulation for the system (5) with $\frac{\sigma_1^2}{2} = \frac{\sigma_2^2}{2} = 0.21+0.02\sin t$ shows that both prey and predator population goes to extinction
Numerical simulation for the system (5) with $\frac{\sigma_1^2}{2} = 0.19+0.02\sin t,~\frac{\sigma_2^2}{2} = 0.09+0.02\sin t$ shows weakly persistence in the mean of prey and extinction of predator
Numerical simulation for the system (5) with $r(t) = 2.2+0.01 \sin t , \ \sigma_1 = \sigma_2 = 0.02+0.01\sin t$ shows permanence of both prey and predator
|
RCHART Statement
## Constructing Range Charts
The following notation is used in this section:
σ: process standard deviation (standard deviation of the population of measurements)
Ri: range of measurements in the i-th subgroup
ni: sample size of the i-th subgroup
d2(n): expected value of the range of n independent normally distributed variables with unit standard deviation
d3(n): standard error of the range of n independent observations from a normal population with unit standard deviation
Dp(n): 100p-th percentile of the distribution of the range of n independent observations from a normal population with unit standard deviation
### Plotted Points
Each point on an R chart indicates the value of a subgroup range (Ri). For example, if the tenth subgroup contains the values 12, 15, 19, 16, and 14, the value plotted for this subgroup is R10=19-12=7.
### Central Line
By default, the central line for the i-th subgroup indicates an estimate of the expected value of Ri, which is computed as d2(ni)σ̂, where σ̂ is an estimate of σ. If you specify a known value (σ0) for σ, the central line indicates the value of d2(ni)σ0. Note that the central line varies with ni.
### Control Limits
You can compute the limits in the following ways:
• as a specified multiple (k) of the standard error of Ri above and below the central line. The default limits are computed with k=3 (these are referred to as 3σ limits).
• as probability limits defined in terms of α, a specified probability that Ri exceeds the limits
The following table provides the formulas for the limits:
Table 39.21: Limits for R Charts
Control Limits: LCL (lower limit) = max(d2(ni)σ̂ - k d3(ni)σ̂, 0), UCL (upper limit) = d2(ni)σ̂ + k d3(ni)σ̂
Probability Limits: LCL (lower limit) = Dα/2(ni)σ̂, UCL (upper limit) = D1-α/2(ni)σ̂
The formulas assume that the data are normally distributed. Note that the control limits vary with ni and that the probability limits for Ri are asymmetric around the central line. If a standard value σ0 is available for σ, replace σ̂ with σ0 in Table 39.21.
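To make the table concrete, here is a small R sketch that computes the default k = 3 limits for a single subgroup of size 5; the d2 and d3 values are the standard tabulated control-chart constants for n = 5, and sigma_hat stands for whatever estimate of the process standard deviation is in use (the numeric value below is just an assumed example):

```r
# 3-sigma R chart limits for one subgroup of size n = 5
d2 <- 2.326        # expected relative range for n = 5 (tabulated constant)
d3 <- 0.864        # standard error of the relative range for n = 5 (tabulated constant)
k  <- 3            # multiple of the standard error (SIGMAS= option)

sigma_hat <- 2.1   # assumed estimate of the process standard deviation

central <- d2 * sigma_hat                      # central line
lcl <- max(central - k * d3 * sigma_hat, 0)    # lower control limit, truncated at zero
ucl <- central + k * d3 * sigma_hat            # upper control limit
c(LCL = lcl, CL = central, UCL = ucl)
```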
You can specify parameters for the limits as follows:
• Specify k with the SIGMAS= option or with the variable _SIGMAS_ in a LIMITS= data set.
• Specify α with the ALPHA= option or with the variable _ALPHA_ in a LIMITS= data set.
• Specify a constant nominal sample size for the control limits with the LIMITN= option or with the variable _LIMITN_ in a LIMITS= data set.
• Specify σ0 with the SIGMA0= option or with the variable _STDDEV_ in a LIMITS= data set.
|
• Purpose of Statistics Package Exercises : The Probability & Statistics course focuses on the processes you use to convert data into useful information. This involves
1. Collecting data,
2. Summarizing data, and
3. Interpreting data.
• In addition to being able to apply these processes, you can learn how to use statistical software packages to help manage, summarize, and interpret data. The statistics package exercises included throughout the course provide you the opportunity to explore a dataset and answer questions based on the output using R, Statcrunch, TI Calculator, Minitab, or Excel. In each exercise, you can choose to view instructions for completing the activity in R, Statcrunch, TI Calculator, Minitab, or Excel, depending on which statistics package you choose to use.
• The statistics package exercises are an extension of activities already embedded in the course and require you to use a statistics package to generate output and answer a different set of questions.
1. To download R, a free software environment for statistical computing and graphics, go to https://www.r-project.org/ and follow the instructions provided.
• Using R
1. Throughout the statistics package exercises, you will be given commands to execute in R. You can use the following steps to avoid having to type all of these commands in by hand:
2. Highlight the command with your mouse.
3. On the browser menu, click "Edit," then "Copy."
4. Click on the R command window, then at the top of the R window, click "Edit," then "Paste."
5. You may have to press Enter to execute the command.
• R Version
1. The R instructions are current through version 3.2.5 released on April 14, 2016. Instructions in these statistics package exercises may not work with newer releases of R.
2. For help with installing R for Mac OS X or Windows, see the installation instructions on the R project website.
• The objectives of this activity are:
1. To give you guided practice in carrying out the z-test for the population proportion (p).
2. To learn how to use statistical software to help you carry out the test.
• Background: This activity is based on the results of a recent study on the safety of airplane drinking water that was conducted by the U.S. Environmental Protection Agency (EPA). A study found that out of a random sample of 316 airplanes tested, 40 had coliform bacteria in the drinking water drawn from restrooms and kitchens. As a benchmark comparison, in 2003 the EPA found that about 3.5% of the U.S. population have coliform bacteria-infected drinking water. The question of interest is whether, based on the results of this study, we can conclude that drinking water on airplanes is more contaminated than drinking water in general.
1. Explanation :
Ho: p = 0.035
Ha: p > 0.035
As usual, Ho claims that "nothing special is going on" with the drinking water in airplanes—the contamination rate is the same as the contamination rate in drinking water in general. Ha represents what we suspect, or what we want to check. In this case, we want to check whether drinking water on airplanes is more contaminated than drinking water in general.
1. Explanation :
Let's check the conditions. The sample of airplanes is random.
• n * po = 316 * 0.035 = 11.06 > 10
• n * (1 - po) = 316 * 0.965 = 304.94 > 10
Yes, it is safe to use the test.
• R Instructions
1. In R, the default command for inference for proportions is
prop.test()
2. This command does not use the traditional z-test, but instead uses a related test called a chi-square (χ2) test. Therefore, to conduct the z-test for the population proportion (p) using R, we must modify the output to acquire the z-test statistic.
3. From the background, we know that there are n=316 total airplanes, x=40 contaminated samples, and the baseline comparison (null value) is p=0.035. Here are the basic commands:
p = prop.test(x=40,n=316,p=0.035,alternative="greater",conf.level=0.95, correct=FALSE);p
4. The parameter alternative= may take on the values "greater" , "less" , or "two.sided" depending on the alternative hypothesis.
5. Notice that X-squared, referred to as the chi-square (χ2) test statistic, can be pulled from the output by entering the command
p$statistic
6. The χ2 test statistic is equivalent to z2, so we can determine the z-test statistic by calculating $$z = \sqrt{X^2}$$:
z = sqrt(p$statistic);z
7. The provided p-value is equivalent to the p-value we might find from the z-test we hand calculate for proportions.
8. Note: The χ2-test statistic will always be positive so its square root will be positive in calculation, but that does not mean that the z-test statistic is positive. If the sample proportion is greater than the null proportion then the z-test statistic is positive. If the sample proportion is less than the null proportion then the z-test statistic is negative.
1. Explanation :
A lot of information is returned; let's review it item by item. The data we entered is echoed back: we have a random sample of 316 airplanes, out of which 40 were found to have contaminated water. The null hypothesis was that the proportion of planes with contaminated water is 0.035. The chi-squared statistic for the test was 78.472, with one degree of freedom. The most important result is next: the p-value of the test was 2.2e-16, which is essentially 0. The alternative hypothesis for our test was that the proportion of planes with contaminated water was greater than 0.035.
1. Explanation :
In our case:
• n = 316
• p̂ = 40/316 ≈ 0.127
• po = 0.035
And therefore, z = 8.9 means that assuming that Ho is true (i.e., that the proportion of contaminated drinking water on airplanes is indeed 0.035, the same as drinking water in general), the results of our study provided a sample proportion that is 8.9 standard deviations above that proportion. Recall that the standard deviation rule for normal distributions tells us that 99.7% of normal values fall within 3 standard deviations of the mean. A sample proportion that falls 8.9 standard deviations above the true proportion is, therefore, extremely unlikely. As you'll see, this fact will also be expressed by the p-value.
1. Explanation :
A p-value that is so close to 0 tells us that it would be almost impossible to get a sample proportion of about 12.7% (or larger) of contaminated drinking water had the true proportion been 3.5%. In other words, the airline industry cannot claim that this just happened to be a "bad" sample that occurred by chance. A p-value that is essentially 0 tells us that it is highly unlikely that such a sample happened just by chance. Our conclusion is therefore that we have an extremely strong reason to reject Ho and conclude that the proportion of contaminated drinking water on airplanes is higher than the proportion in general. On a technical level, the p-value is smaller than any significance level that we are going to set, so Ho can be rejected. Comment: In the original study, there were 158 randomly chosen airplanes, and in 20 of them the drinking water was found to be contaminated. As we mentioned, we based the context of this activity on these results, and we simply doubled the counts (instead of 158 airplanes, we had 316; instead of 20 airplanes with contaminated drinking water, we had 40). We did that because, technically, otherwise the conditions under which this test can be used would not have been met. Practically, the results of this study are so extreme that the fact that not all the conditions were met has no effect on the actual conclusion. A worked R version of this hand calculation is given below.
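For reference, here is the hand calculation above carried out in base R (same data as the activity; only base-R functions are used):

```r
x  <- 40            # contaminated airplanes
n  <- 316           # airplanes sampled
p0 <- 0.035         # null value: contamination rate of drinking water in general

p_hat <- x / n                          # sample proportion, about 0.127
se    <- sqrt(p0 * (1 - p0) / n)        # standard error under Ho
z     <- (p_hat - p0) / se              # about 8.9
p_val <- pnorm(z, lower.tail = FALSE)   # one-sided (greater) p-value, essentially 0

c(p_hat = p_hat, z = z, p_value = p_val)
```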
• The purpose of this activity is to give you guided practice exploring the effect of sample size on the significance of sample results, and help you get a better sense of this effect. Another important goal of this activity is to help you understand the distinction between statistical significance and practical importance.
• Background: - For this activity, we will use example 1. Here is a summary of what we have found:
• The results of this study (64 defective products out of 400) were statistically significant in the sense that they provided enough evidence to conclude that the repair indeed reduced the proportion of defective products from 0.20 (the proportion prior to the repair). Even though the results—a sample proportion of defective products of 0.16—are statistically significant, it is not clear whether the results indicate that the repair was effective enough to meet the company's needs, or, in other words, whether these results have practical importance. If the company expected the repair to eliminate defective products almost entirely, then even though, statistically, the results indicate a significant reduction in the proportion of defective products, this reduction has very little practical importance, because the repair was not effective in achieving what it was supposed to. To make sure you understand this important distinction between statistical significance and practical importance, we will push this a bit further.
• Consider the same example, but suppose that when the company examined the 400 randomly selected products, they found that 78 of them were defective (instead of 64 in the original problem):
• R Instructions
1. From the background we know that there are n=400, x=78, and the null value is p=0.20.
2. Here are the basic commands:
p = prop.test(x=78,n=400,p=0.20,alternative="less",conf.level=0.95, correct=FALSE);p
3. To calculate z , enter the following command. (Since the sample proportion is less than p = 0.2 , we know that z will be negative, so we take the negative square root.)
z = -sqrt(p\$statistic);z
4. The provided p-value is equivalent to the p-value we might find from the z-test we hand calculate for proportions.
1. Explanation :
Based on the large p-value (0.401) we conclude that the results are not statistically significant. In other words, the data do not provide evidence to conclude that the proportion of defective products has been reduced.
• Consider now another variation on the same problem. Assume now that over a period of a month following the repair, the company randomly selected 20,000 products, and found that 3,900 of them were defective. Note that the sample proportion of defective products is the same as before, 0.195, which as we established before, does not indicate any practically important reduction in the proportion of defective products.
1. Explanation :
Even though the sample results are similar to what we got before (sample proportion of 0.195), since they are based on a much larger sample (20,000 compared to 400) now they are statistically significant (at the .05 level, since 0.039 is less than 0.05). In this case, we can therefore reject Ho and conclude that the repair reduced the proportion of defective products to below 0.20. Summary: This is perhaps an "extreme" example, yet it is effective in illustrating the important distinction between practical importance and statistical significance. A reduction of 0.005 (or .5%) in the proportion of defective products probably does not carry any practical importance, however, because of the large sample size, this reduction is statistically significant. In general, with a sufficiently large sample size you can make any result that has very little practical importance statistically significant. This suggests that when interpreting the results of a test, you should always think not only about the statistical significance of the results but also about their practical importance.
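The sample-size effect described above is easy to see side by side in R; both calls below test the same sample proportion of 0.195 against the null value 0.20, using the two scenarios from the activity:

```r
# Same sample proportion (0.195), two very different sample sizes
small <- prop.test(x = 78,   n = 400,   p = 0.20,
                   alternative = "less", correct = FALSE)
large <- prop.test(x = 3900, n = 20000, p = 0.20,
                   alternative = "less", correct = FALSE)

c(z = -sqrt(small$statistic), p_value = small$p.value)   # z about -0.25, p about 0.40
c(z = -sqrt(large$statistic), p_value = large$p.value)   # z about -1.77, p about 0.039
```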
|
# How to find a section start and finish
I'm working with a document that originated in Word and it has a number of Word-style "section breaks", which appear to work more like a "super page break" than a LibreOffice section.
In "Edit Sections" I see numbered sections but these don't seem to relate to anything in the document. I can't see the section breaks, they seem to have turned into something more like page breaks, which in turn just show as indistinct dotted lines.
Clearly there is still some kind of break in the document as the footer changes as it goes from one page to the next, but I cannot see the specific point where this happens, and there is a block of text that I would quite like to mark as a section distinct from the rest of the document but I cannot see how to do this.
How can I tell where Sections 1, 2 and 3 in the sections menu actually appear in the text?
Clarification: I did have formatting marks enabled.
If there are real sections you could apply a background or area color - then the section's text area will be filled with it.
Nevertheless your "document that originated in Word" is derived and converted so there can be some weird problems nobody will understand. If I were you I would erase each of the converted sections and then set them anew if required. Check the page styles (you can see the current one where the cursor is located in the status bar).
For more help upload anonymized file.
( 2018-11-21 14:57:36 +0200 )edit
If the footer changes this is a sign of changed page style. Typical for Writer.
( 2018-11-21 15:00:29 +0200 )edit
You didn't say whether you enabled View>Formatting marks, though I suppose you did, given your remark "… show as indistinct dotted lines". This helps a lot to see the structure of the document.
( 2018-11-21 16:58:19 +0200 )edit
So it could just be differences between Word and Writer at work here: as far as I can tell, a Word "section" only has one page style (two if you count the first page being different from those following), so a "section" boundary marks the transition from one page style to another. This does mean the footer can change mid-page (I can't recall which footer gets used when it splits like that).
It looks as if a Writer "section" means something totally different, more like an embedded doc.
( 2018-11-22 09:58:08 +0200 )edit
A section doesn't cause a transition of page style; only a page break can. A section is a subpart of a page, allowing you to change some "geometric" properties (like the number of columns, its main usage) but not the header or footer, which are exclusive properties of page styles.
As @Grantler noted, if you have a footer change, then you have some kind of page break associated with a page style change, unless footer content is generated from fields referring to some heading bookmark.
( 2018-11-22 10:12:36 +0200 )edit
Can you attach a reduced version of your document exhibiting the issue?
( 2018-11-22 10:13:53 +0200 )edit
@ajlittoz: The OP may need a bit of "karma" to be able to accept your suggestion.
I'm going to upvote the question therefore.
( 2018-11-22 10:52:58 +0200 )edit
I appreciate the offer but it is a bit of a monster. I think it needs to stay in Word for the immediate future, there's just too much clean-up required.
( 2018-11-22 11:54:20 +0200 )edit
Sort by » oldest newest most voted
When you are inside a section, the first Ctrl+A selects the content of the section. It won't work that way if you are in a table, though (after selecting the cell and the table, the following Ctrl+A selects the whole document - I believe this to be a bug). When you are in a nested section, the first Ctrl+A selects the inner one; the next one selects the outer one; eventually, you select the whole document.
|
[OS X TeX] separators in label names
Ross Moore ross.moore at mq.edu.au
Sun Oct 13 23:40:08 CEST 2019
Hi Murray,
On 14 Oct 2019, at 7:50 am, Murray Eisenberg <murrayeisenberg at gmail.com> wrote:
I need crossreftools in order to pull apart cross-references to theorem-like constructs (with thmtools) that produce environments with outputs beginning
Theorem (an important result of Newton)
or
Fundamental Theorem of Calculus
so as to create macros that give the parenthesized name in the first type, preserving the upper-case letters of marked names (via \NoCaseChange from textcase), and the name in the second type but with initial letters lower-cased (again, with the exception of marked names); and so that the entries from all such environments in the \listoftheorems will keep that lower-casing except for the very first letter.
So, for example, in the first example, if it has label, say \label{thm:important}, then my macro \thmnameref* has output
an important result of Newton
from \thmnameref*{thm:important}; and if the second example has label \label{thm:FTC}, then my macro \thmref* gives output
fundamental theorem of calculus
from \thmref*{thm:FTC}. And in the list of theorems created by \listoftheorems, those two theorem-like environments will produce entries:
An important result of Newton
Fundamental theorem of calculus
This is all, among other reasons, due to customary Amer Math Soc and Math Assoc America math book (and journal) styles that prefer, or insist upon, lower-casing such theorem names, at least when referred to in the body of the text.
This all sounds really good, to be able to construct extra meaningful text,
by taking hints from the \label strings.
I’m sure this will be extremely useful for accessibility; e.g.,
even if the theorem displays on-screen as: Theorem (Newton)
there can be alternative text that can be passed to a screen-reader to say:
“an important theorem of Newton” .
And of course if the visual text says ‘FTC’ there can be an internal expansion of the acronym
to pass the full phrase ‘Fundamental Theorem of Calculus’ (with or without capitalisation)
to the screen reader or Braille-based assistive technology.
These extra non-displayed strings need to be created using (La)TeX macros,
and stored in the appropriate places within the PDF being constructed.
It is then up to PDF reader software to detect their presence and access them when appropriate,
perhaps according to personalised preferences or key-strokes, provided by the (visually impaired) human reader.
This is a direction in which mathematical publishing really does need to go,
to properly support Accessibility in highly technical documents.
So when you think you are close to having a well-worked and robust set of macros,
I’d be very interested in using your package, and creating the extra coding needed
to build a fully-tagged accessible PDF from some real-world example documents.
On 13 Oct 2019, at 3:44 PM, jfbu <jfbu at free.fr> wrote:
Hi Piet,
I wonder why Murray needs this package, but my hint at the \if@safe@actives babel toggle was under such circumstances misleading, as it can't be applied without breaking expandability.
I of course did that without having read the numerous exchanges on tex.sx, and here, and belatedly understood crossreftools is a package providing expandable macros.
Seems Murray got it solved by David, anyway,
Best,
Jean-François
However, I found that I had made an error in my code. There was a r@{#1} that should be r@#1.
Yep. That’s a mistake that’s really easy to make.
There are contexts where {#1} and #1 do exactly the same thing; but not here. :-)
So the code should be:
\renewcommand{\@@crtextr@ct@ref}[2]{%
\expandafter\@@@crtextr@ct@ref\expandafter{\detokenize{#2}}{#1}%
}
\newcommand{\@@@crtextr@ct@ref}[2]{%
\expandafter\ifx\csname r@#1\endcsname\relax
\crt@refundefined%
\else
\expandafter\expandafter\csname crt@ref@splitter@#2\endcsname\csname r@#1\endcsname%
\fi
}
---
Murray Eisenberg, murrayeisenberg at gmail.com
503 King Farm Blvd #101 Home (240)-246-7240
Rockville, MD 20850-6667 Mobile (413)-427-5334
All the best.
Ross
Dr Ross Moore
Department of Mathematics and Statistics
12 Wally’s Walk, Level 7, Room 734
Macquarie University, NSW 2109, Australia
T: +61 2 9850 8955 | F: +61 2 9850 8114
M: +61 407 288 255 | E: ross.moore at mq.edu.au
http://www.maths.mq.edu.au
|
# Harmonic Functions
• March 6th 2011, 04:28 PM
mulaosmanovicben
Harmonic Functions
Show by example that a harmonic function need not have an analytic completion in a multiply connected domain. [HINT: Consider ln(|z|), z a complex number]
Well, I considered u = ln(x^2+y^2), where z = x + iy,
and I figured out it is harmonic (its second partial derivatives sum to 0), but I do not know where to go from there.
• March 6th 2011, 06:03 PM
xxp9
what is the definition of analytic completion?
• March 7th 2011, 12:39 PM
mulaosmanovicben
Quote:
Originally Posted by xxp9
what is the definition of analytic completion?
Analytic completion for a function u is when there exists a harmonic function v in a simply connected domain such that u+iv is analytic.
• March 7th 2011, 06:01 PM
xxp9
So let u = ln|z|; u is harmonic in the punctured plane $\mathbb{R}^2 \setminus \{0\}$.
Since an analytic function is determined by its values on any open subset of its domain, the analytic completion is unique (up to a constant), if it exists.
So the only possible analytic completion of u would be f = ln z = u + i arg z,
while f can only be defined on the plane with a half-line cut out (a branch cut).
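Spelling out the computation behind this example, with $u = \ln|z| = \tfrac{1}{2}\ln(x^2+y^2)$ as in the hint:

$$u_{xx}+u_{yy} = \frac{y^{2}-x^{2}}{(x^{2}+y^{2})^{2}} + \frac{x^{2}-y^{2}}{(x^{2}+y^{2})^{2}} = 0 \qquad (z \neq 0),$$

so $u$ is harmonic on the punctured plane, while the Cauchy-Riemann equations $v_x = -u_y$, $v_y = u_x$ force any harmonic conjugate to be $v = \arg z + C$, which cannot be defined continuously (single-valuedly) on all of $\mathbb{R}^2 \setminus \{0\}$.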
|
• n. [math] radian
"angle subtended at the center of a circle by an arc equal in length to the radius," 1879, from radius.
1. The curve (radian) of her graceful neck particularly captures artists' inspiration.
2. Excellent radian, beautiful decoration, the artwork of the war!
3. W is the radian frequency deviation from the center frequency.
(That is: w is the deviation of the angular frequency from the center frequency.)
4. Each vertex is located on the circumference, spaced an equal arc (radian) apart from the next.
5. Does the curvature (radian) of sunglass lenses also damage eyesight much?
|
Blue wavelengths of light reflect from a blue object, and the other wavelengths are hidden because they are absorbed. Electrodynamics is the physics of electromagnetic radiation, and electromagnetism is the physical phenomenon associated with the theory of electrodynamics. Electromagnetic radiation (EM, EMR, or simply "light") is energy that is propagated through space with electric and magnetic fields oscillating perpendicular to each other and perpendicular to the direction in which the energy travels; it does not require a medium, and it can also be described as a stream of particles called photons. The acceleration of electric charges (such as alternating current in a radio transmitter) gives rise to electromagnetic radiation, and electric and magnetic fields obey the principle of superposition, so the field due to any particular particle or time-varying source adds to the fields present in the same space from other causes. An energy source may be capable of emitting radiation, but if the energy does not propagate outward, it is not radiating.
Key terms and facts about electromagnetic radiation:
• Wavelength: the shortest distance between equivalent points on a continuous wave; wavelengths are typically written in meters with an SI prefix chosen so the number falls between 1 and 999.
• Frequency: the number of waves that pass a given point per second (hertz); electromagnetic fields can oscillate at frequencies from a few cycles per second to more than 10^20 hertz.
• Amplitude: the height of a crest (or depth of a trough) measured from the equilibrium line of the wave.
• Speed of light: c = λν, with c = 3.00 × 10^8 m/s (2.99792458 × 10^8 m/s), which is about 1.86 × 10^5 mi/s and roughly a million times faster than the speed of sound. Wavelength and frequency are inversely proportional because the speed of all EM waves is constant, and greater frequency means greater photon energy.
• Electromagnetic spectrum: the range of all possible frequencies of electromagnetic radiation, extending from the low frequencies used for modern radio communication up to gamma radiation at the short-wavelength (high-frequency) end. From shortest to longest wavelength the members are gamma rays, X-rays, ultraviolet, visible light, infrared (heat) rays, microwaves, TV and FM, shortwave radio, and longwave radio. The electromagnetic spectrum of an object has a different meaning: it is the characteristic distribution of electromagnetic radiation emitted or absorbed by that particular object. The behavior of electromagnetic radiation depends on its wavelength.
• Ultraviolet: the range of electromagnetic radiation between 70 nm and 400 nm.
• Gamma rays are produced in the disintegration of radioactive atomic nuclei and in the decay of certain subatomic particles; the commonly accepted definitions of the gamma-ray and X-ray regions overlap, with gamma-ray wavelengths generally shorter than a few tenths of an angstrom (10^-10 m). Electromagnetic radiation above 2500 × 10^6 MHz is mostly referred to as ionizing radiation, and nuclear radiation (gamma rays and X-rays) occupies the most energetic portion of the spectrum.
• Radiation is the transfer of energy through electromagnetic waves and needs no medium, whereas conduction is heat transfer between objects in direct contact and convection is heat transfer through liquids and gases.
• In a transverse wave the oscillations are perpendicular to the direction of energy transport, whereas in a longitudinal wave they are parallel to it; electromagnetic waves are transverse.
• Reflection is the change in direction of waves when they bounce off an object; refraction is the change in direction of waves as they pass from one medium to another.
• Emission (atomic) spectra are generated when an atom is excited (for example, by collision with a moving particle), electrons jump to higher-energy orbitals, and then drop back down, releasing light photons; the wavelength and frequency of the emitted photon are determined by the energy difference between the two levels. When electromagnetic radiation interacts with single atoms and molecules, its behavior also depends on the amount of energy per quantum (photon) it carries.
• Although all kinds of electromagnetic radiation are released from the Sun, our atmosphere stops some kinds from getting to us; RF, visible light, and some UV reach the surface, and astronomers can observe infrared using telescopes placed on mountains.
The same flashcard set also covers electron arrangement:
• The angular momentum quantum number tells the type (shape) of an orbital; energy level n can have n types of orbitals and holds at most 2n^2 electrons, and each orbital can hold two oppositely spinning electrons.
• No two electrons in an atom can have the same set of quantum numbers (the same "address").
• Within a sublevel with multiple orbitals (p, d, or f), electrons all spin in one direction, one per orbital, before they double up.
• Electrons occupy the lowest-energy level that will receive them, and a hand-written diagonal chart helps remember the order in which orbitals are filled.
• Orbital notation writes how many electrons fill each orbital using arrows and blanks; paired electrons are two oppositely spinning electrons sharing a blank, while unpaired electrons do not share a blank.
• Valence electrons are the electrons in the outermost energy level; all the electrons that are not in the outermost level are inner (core) electrons.
• Electron configuration notation is a quick way to write the energy level, sublevel, and number of electrons in each sublevel (for example, 1s² 2s² 2p⁶ 3s²), and noble-gas notation shortens it by writing a noble gas in brackets followed by the rest (for example, [Ne] 3s¹). Groups 13-18 sit in the p block, while helium sits in the s block even though it is a noble gas; the noble gases are helium, neon, argon, krypton, xenon, and radon.
|
The problem statement says that Ivancho is 18 years old, so when declaring the variable years we assign it an initial value of 18. We read the other variables from the console.
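A minimal sketch of what that might look like (Python is assumed here, and the names of the remaining inputs are hypothetical since the original exercise does not list them):

years = 18              # fixed by the problem statement: Ivancho is 18 years old
# the remaining inputs are read from the console; these names are placeholders
name = input()
points = int(input())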
|
# Proving $r!$ divides the product of r succesive positive integers
I have to prove the following theorem:
Prove that the product of $r$ consecutive positive integers is divisible by $r!$
I am having a hard time getting a generalization down for an arbitrary starting integer; if I start from 1 and work up to r, I have the following:
$$r!k=\prod_{i=1}^{r}n_i$$
Can easily prove the base case of this, (n=1), and then go in to prove:
$$(r+1)!k=\prod_{i=1}^{r+1}n_i$$
Expand that out and get:
$$(r+1)r!k=n(n+1)(n+2)\cdots(n+r)(n+r+1)$$
We can say that the product of the first $r$ factors is equal to $r!k$ by our base case, leaving us with:
$$(r+1)k=(n+r+1)$$
Not sure where I can go from here, n is the integer that we start at, so how can I get it to work out to be equal to our induction hypothesis?
-
Hint: ${n+r\choose r}$ is an integer. – vadim123 Feb 4 '14 at 17:23
Thanks for the hint, got it now. – Richard P Feb 7 '14 at 3:34
You can do this by simultaneous induction on $r$ and $n$. Note that
\begin{align} (n+1)\cdots(n+r)&=(n+1)\cdots(n+r-1)n\quad+\quad(n+1)\cdots(n+r-1)r\\ &=((n-1)+1)\cdots((n-1)+r)\quad+\quad(n+1)\cdots(n+(r-1))r \end{align}
(I inserted a little extra space around the central plus signs to make the key pieces easier to see.) By induction on $n$, $r!$ divides $((n-1)+1)\cdots((n-1)+r)$, and by induction on $r$, $(r-1)!$ divides $(n+1)\cdots(n+(r-1))$, hence $r!$ divides $(n+1)\cdots(n+(r-1))r$.
(Please note, I'm glossing over all the fine points of getting the inductions started.)
-
$$\frac{n(n + 1)\cdots(n + r - 1)}{r!} = {n + r - 1 \choose r} \quad\mbox{which is an integer!}$$
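For a concrete instance, with $n=5$ and $r=3$:
$$\frac{5\cdot 6\cdot 7}{3!}=\frac{210}{6}=35={7 \choose 3}.$$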
-
|
## Stream: new members
### Topic: Feedback (Heine Borel in progress)
#### Guillermo Barajas Ayuso (Sep 15 2018 at 23:43):
Hi, I have uploaded some code in the link https://github.com/ImperialCollegeLondon/xena-UROP-2018/blob/master/src/Topology/Heine-Borel%20(incomplete) , I'll leave it here in case you want to give me some feedback. Thank you for your time! :-)
#### Kevin Buzzard (Sep 16 2018 at 08:03):
theorem for_all_not_all {α : Type u} (P Q R: α → Prop):
(∀ x (H : R x), ¬ (P x ∧ Q x)) ↔ ∀ x (H : R x), P x → ¬ Q x :=
⟨λ Hnand x Hx, not_and.mp $ Hnand x Hx, λ Hton x Hx, not_and.mpr $ Hton x Hx⟩
Mathlib would prefer that kind of style to your tactic proof. I always suspect that such results are either in mathlib already or easily deducible. Looking at the proof I feel like it's one of those ones which could be shortened with some magic use of function.comp like in https://xenaproject.wordpress.com/2018/05/19/function-composition/ .
Oh wait --
theorem for_all_not_all {α : Type u} (P Q R: α → Prop):
(∀ x (H : R x), ¬ (P x ∧ Q x)) ↔ ∀ x (H : R x), P x → ¬ Q x := by simp [not_and]
#### Kevin Buzzard (Sep 16 2018 at 08:06):
The simp proof -- the proof takes 50% longer to process but the parser takes far less time parsing :-) End result is that both versions run in about the same time.
#### Kevin Buzzard (Sep 16 2018 at 08:10):
I have an error at line 431 by the way, and there are 6 sorrys. Do you need help filling them in?
#### Kevin Buzzard (Sep 16 2018 at 09:01):
Re the argument on line 431: there is already nat.lt_pow_self.
theorem le_pow (n : ℕ) : (n : ℝ) ≤ (2 : ℝ) ^ n :=
begin
show (n : ℝ) ≤ ((2 : ℕ) : ℝ) ^ n,
rw ←nat.cast_pow,
rw nat.cast_le,
exact le_of_lt (nat.lt_pow_self (dec_trivial) n),
end
#### Kevin Buzzard (Sep 16 2018 at 09:06):
notation ⟦a,b] := closed_interval a b
This is a hilarious idea. Does it work? Re-using notation which is already used is a dangerous game, but given that as far as I know in Lean every use of ] in notation comes with an [ too, so avoiding the [ in this case gives you better leeway.
#### Kevin Buzzard (Sep 16 2018 at 10:01):
theorem le_ε_to_le (Hle_ε : ∀ ε > 0, a ≤ b + ε) : a ≤ b := sorry
These things are really annoying if they're not there already. @Kenny Lau how is one supposed to prove stuff like this?
#### Kevin Buzzard (Sep 16 2018 at 10:24):
theorem le_ε_to_le (Hle_ε : ∀ ε > 0, a ≤ b + ε) : a ≤ b :=
le_of_not_gt $ λ H,
begin
  have H2 := Hle_ε ((a - b) / 2) _,
  revert H2, -- because it makes the rewriting easier
  rw [←(mul_le_mul_right (show (2 : ℝ) > 0, by norm_num)), add_mul,
    div_mul_cancel _ (show (2 : ℝ) ≠ 0, by norm_num),
    (show b * 2 + (a - b) = a + b, by ring),
    mul_two, add_le_add_iff_left],
  exact not_le_of_gt H,
  apply div_pos _ (show (0 : ℝ) < 2, by norm_num),
  exact sub_pos_of_lt H
end
This time last year there was no norm_num and no ring -- imagine how hard it was doing M1F example sheets!
#### Kenny Lau (Sep 16 2018 at 10:27):
theorem le_ε_to_le {α : Type*} [linear_ordered_field α] {a b : α}
  (Hle_ε : ∀ ε > 0, a ≤ b + ε) : a ≤ b :=
le_of_not_lt $ λ H, not_lt_of_le (Hle_ε ((a-b)/2) (half_pos $ sub_pos_of_lt H)) $
calc b+(a-b)/2
... = a : add_sub_cancel'_right b a
#### Kevin Buzzard (Sep 16 2018 at 10:28):
theorem between_shorter (H1 : b ≤ c) (H2 : c ≤ a) (H3 : b ≤ d) (H4 : d ≤ a) :
abs (c - d) ≤ abs (a - b) :=
begin
#### Kevin Buzzard (Sep 16 2018 at 10:49):
Should there be training or exercises or something for people who need stuff like this?
#### Rob Lewis (Sep 16 2018 at 11:36):
import tactic.linarith
lemma half_le_self {α : Type*} [linear_ordered_field α] {a : α}
(H : 0 ≤ a) : a / 2 ≤ a :=
by linarith
theorem between_shorter {α : Type*} [decidable_linear_ordered_comm_ring α] {a b c d : α}
(H1 : b ≤ c) (H2 : c ≤ a) (H3 : b ≤ d) (H4 : d ≤ a) :
abs (c - d) ≤ abs (a - b) :=
by unfold abs max; split_ifs; linarith
#### Kevin Buzzard (Sep 16 2018 at 11:49):
...or maybe even a tactic!
#### Scott Morrison (Sep 16 2018 at 11:49):
Next we make a wrapper for linarith that unfolds stuff that is secretly arithmetic. :-)
#### Kevin Buzzard (Sep 16 2018 at 11:58):
< b+(a-b) : [blah]
... = a : add_sub_cancel'_right b a
Kenny -- you misspelt "by ring".
Why should mathematican end users have to know that the triviality b + (a - b) = a is called add_sub_cancel'_right? Surely we should just be able to write something which generates this proof for us, and then internally replaces what we wrote with this add_sub_cancel' nonsense? I'm assuming that using ring to do this is not recommended because it might take about 10 times as long. It's all well and good people writing clever tactics which solve all goals of this nature, but then we end up in this situation where people are encouraged not to use them and instead get an encyclopedic knowledge of all this add_sub_cancel'_right nonsense, or learn how to look it up. I guess what I'm asking for is a tactic which does by ring but only takes a long time the first time -- like Scott's tidy trick. Can this be done in other cases somehow?
#### Kevin Buzzard (Sep 16 2018 at 12:00):
@Guillermo Barajas Ayuso -- linarith is a brand new tactic which Rob wrote. You might find it useful in other situations.
#### Kenny Lau (Sep 16 2018 at 12:28):
@Rob Lewis re between_shorter, my version works for objects without multiplication
#### Rob Lewis (Sep 16 2018 at 12:33):
Can this be done in other cases somehow?
It can in principle. But things like ring often don't try to produce short or pretty output, because it's way harder to write something that does that and works generally. And the output will probably still look messy on anything more complicated than that basic example.
#### Rob Lewis (Sep 16 2018 at 12:37):
I wouldn't expect ring to be unreasonably slow for examples like that, either.
#### Scott Morrison (Sep 16 2018 at 13:05):
Hi @Rob Lewis, in the interest of making linarith even easier to use, what would you think of having it automatically try exfalso if the goal doesn't look like linear arithmetic?
#### Scott Morrison (Sep 16 2018 at 13:06):
It's of course possible to achieve this by: linarith <|> (exfalso >> linarith), but I worry that this is inefficient.
#### Scott Morrison (Sep 16 2018 at 13:06):
(Actually maybe it isn't --- if the goal is something else, linarith I guess fails before doing any work already...)
#### Rob Lewis (Sep 16 2018 at 13:10):
That's completely reasonable. I actually thought it did that already, but apparently I added a check for a false goal.
#### Rob Lewis (Sep 16 2018 at 13:11):
If there are no inequality hypotheses, it'll fail immediately. If there are hypotheses it can work with, it will try, but failure is a lot quicker than success.
#### Rob Lewis (Sep 16 2018 at 13:13):
I'll add a config option for trying to prove arbitrary goals by exfalso.
#### Keeley Hoek (Sep 16 2018 at 13:42):
This sort of follows-up what Kevin was saying before: it seems to me that it'd be really great if Lean had a facility for tactics to opt-in to cache what they did on invocation, not just in the interactive lean session (memoizing there), but statically in a file in the repository.
#### Keeley Hoek (Sep 16 2018 at 13:42):
The (my?) dream is that mathlib (or just mathematics) could be filled with tactic proofs which call shiny-new maybe-expensive tactics which do all of your dirty-work for you in one line (e.g by super_ring), without any detrimental performance impact; if you change the first line of a file, the expensive tactic proofs in the file below instantly recompile; and if you change the statement of a lemma the cached proof will just silently fail to typecheck and the tactic proof will be re-run. mathlib can be distributed with these cache-files, or they could just be built the first time mathlib is built, and everyone is happy.
#### Keeley Hoek (Sep 16 2018 at 13:42):
Instead it seems like in many places people have to steer clear of the "big guns", or at least only use them to get a term/tactic-mode proof which they will replace them with. To me, this seems just like a manual way of doing the same sort of static caching, but the tools which helped generate your easy proof (e.g. ring → add_sub_cancel'_right) are lost forever (and don't auto-fix your proof when you e.g. tinker with your lemma).
#### Patrick Massot (Sep 16 2018 at 15:35):
< b+(a-b) : [blah]
... = a : add_sub_cancel'_right b a
Kenny -- you misspelt "by ring".
Or "by abel"
#### Kevin Buzzard (Sep 17 2018 at 14:38):
Or even by simp. Grr. What is the point of this abel tactic? I still haven't found an example which simp can't do.
#### Patrick Massot (Sep 17 2018 at 15:03):
Hopefully abel is a step towards the module tactic which should be more useful
|
Inequalities for the perimeter of an ellipse. (English) Zbl 0985.26009
The authors describe a method to study whether an algebraic approximation to the perimeter of an ellipse is from above or below. By the representation of the perimeter in terms of hypergeometric functions the problem boils down to establishing the sign of the error
$E\left(x\right)=F\left(1/2,-1/2;1;x\right)-A\left(x\right),$
where $A\left(x\right)$ is an algebraic function (depending on the approximation chosen) of the parameter $x\in \left(0,1\right)$ related to the eccentricity of the ellipse. This problem can be tackled by analyzing the sign of a series whose entries are all $>0$ from a sufficiently large index onward. Thus, the question is reduced to the sign of a polynomial given by the sum of a finite number of terms of the series. In the situation described its coefficients are integers, and we can apply a Sturm sequence argument with the aid of a computer algebra system performing integer arithmetic.
In this way, the authors show that several classical formulas approximate the elliptical perimeter from below, proving in particular a conjecture by Vuorinen on Muir's formula.
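As a quick numerical illustration of that claim (assuming Muir's approximation has the usual form $2\pi\big((a^{3/2}+b^{3/2})/2\big)^{2/3}$, which the review itself does not spell out), one can compare it against the exact perimeter computed from the complete elliptic integral of the second kind:

import numpy as np
from scipy.special import ellipe   # complete elliptic integral of the second kind, E(m)

a, b = 2.0, 1.0                               # semi-axes of a test ellipse
m = 1.0 - (b / a) ** 2                        # parameter m = e^2
exact = 4.0 * a * ellipe(m)                   # exact perimeter: 4 a E(e^2)
muir = 2.0 * np.pi * ((a ** 1.5 + b ** 1.5) / 2.0) ** (2.0 / 3.0)
print(exact, muir, exact - muir)              # the difference is positive: Muir approximates from below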
##### MSC:
- 26D07 Inequalities involving other types of real functions
- 33C05 Classical hypergeometric functions, ${}_{2}{F}_{1}$
- 33C75 Elliptic integrals as hypergeometric functions
- 41A30 Approximation by other special function classes
|
How to use polynomial or conformal transformation
In my research, I came across a transformation problem. The simple version: an initial circular (or spherical) region is advected by some deformational flow. After some time the circle will be deformed into other shapes.
At the beginning, I used linear transformations (rotation, shearing, translation), but I found this is not enough when the flow is extremely deformative. The circle is stretched into a long ellipse due to the linear nature of the transformation, when it should already be bent.
So I decide to try high-order polynomial transformation as shown in the figure. I am not very familiar with polynomial transformation, could it solve this problem? In addition, I also need an inverse transformation, but the high-order polynomial will add some difficulties.
Any input is appreciated!
Update:
I decided to use conformal transformation which should be easier to solve, especially when inverse the transformation, as suggested by @Shuchang. The new diagram is shown as:
If we know the coordinates of the control points and the rotation matrixes on them, how to define the transformation function $f(\mathbf{x}) = \mathbf{y}(\mathbf{x})$?
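As a quick illustration of why a nonlinear (here conformal) map can do what a linear one cannot (the map below is chosen arbitrarily for illustration and is not a solution to the inverse problem above):

import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 200)
z = np.exp(1j * theta)        # unit circle represented as complex numbers
w = z + 0.2 * z ** 2          # image under the (arbitrarily chosen) conformal map f(z) = z + 0.2 z^2
x, y = w.real, w.imag         # a bent, non-elliptical closed curve, unlike any affine image of a circle
print(x[:3], y[:3])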
• There are many nonlinear transformation to make a deformation and polynomial transformation, in my opinion, may not be a good choice. Could you specify what's the intention behind this? – Shuchang Dec 28 '13 at 8:39
• @Shuchang I am developing a Lagrangian numerical transport scheme, which uses many particles to discretise the continuous tracer. In deformative flow, the shape that the particle presents will be changed. This shape is used when interpolating the tracer mass carried by particles onto other spots (like a mesh). – Li Dong Dec 28 '13 at 8:43
• I'm not sure but suggest conformal transformation. Essential you are aligning two curves. – Shuchang Dec 28 '13 at 8:49
• Is it mandatory to use polynomial transformations? If not, you could represent the initial shape using spline curves and transform it by moving the parameters or control points. -- Second thought: You would gain some flexibility at moderate degrees by considering rational functions instead of polynomials. – Lutz Lehmann Dec 28 '13 at 15:43
• @LutzL I used to represent the shape by polygon, but it was a nightmare when the flow is extremely deformative. I would like to use transformation because it provide a convenient way to calculate the transformed coordinate. – Li Dong Dec 29 '13 at 5:48
|
# Chapter 10 - Radical Expressions and Equations - 10-1 The Pythagorean Theorem - Standardized Test Prep: 44
20
#### Work Step by Step
The area of the garden is 16·12. The area of the entire field is (16+2x)(12+2x). Write an equation showing that the area of the garden is 60% of (times) the area of the field.
$16\times12=0.6(16+2x)(12+2x)\longrightarrow$ multiply
$192=0.6(16+2x)(12+2x)\longrightarrow$ divide each side by 0.6
$192\div0.6=0.6(16+2x)(12+2x)\div0.6\longrightarrow$ divide
$320=(16+2x)(12+2x)\longrightarrow$ multiply the binomials
$320=192+56x+4x^2\longrightarrow$ subtract 320 from each side
$320-320=192+56x+4x^2-320\longrightarrow$ subtract
$0=4x^2+56x-128\longrightarrow$ multiply each side by $\frac{1}{4}$
$0\times\frac{1}{4}=\frac{1}{4}(4x^2+56x-128)\longrightarrow$ multiply using the distributive property
$0=x^2+14x-32\longrightarrow$ factor the trinomial; the factors of -32 that sum to 14 are -2 and 16
$0=(x+16)(x-2)\longrightarrow$ one of the 2 factors must be 0
$x+16=0$ or $x-2=0$
$x=-16$ or $x=2$
x cannot be negative, so x=2.
The length of the longest side is $16+2x=16+2(2)=16+4=20.$
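A quick symbolic check of the algebra above (an illustration only; the variable name is arbitrary):

import sympy as sp

x = sp.symbols('x', real=True)
# area of garden = 60% of area of field
solutions = sp.solve(sp.Eq(16 * 12, sp.Rational(3, 5) * (16 + 2 * x) * (12 + 2 * x)), x)
print(solutions)          # [-16, 2]; only x = 2 is physically meaningful
print(16 + 2 * 2)         # longest side = 20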
|
# The product rule in RNNs
I find that the product rule is always forgotten in popular blog posts (see [1] and [2]) discussing RNNs and backpropagation through time (BPTT). It is clear what is happening in those posts, but WHY exactly, in a mathematical sense, does the last output depend on all previous states? For this, let us look at the product rule [3].
Consider the following unrolled RNN.
Assume the following:
$h_t = \sigma(W * h_{t-1} + Ux_t)$
$y_t = \mathrm{softmax}(V * h_t)$
Using a mix of Leibniz's and Lagrange's notation, I now derive:
$\frac{\partial h_3}{\partial W} = \frac{\partial \sigma(Wh_2 + Ux_3)}{\partial W} =$
$\sigma' * [Wh_2 + Ux_3]' =$ // Chain rule
$\sigma' * [Wh_2]' =$
$\sigma' * [W * \sigma(Wh_1 + Ux_2)]' =$
$\sigma' * (h_2 + W * h_2') =$ // Product rule
$\sigma' * (h_2 + W * \sigma' * [Wh_1 + Ux_2]') =$
$\sigma' * (h_2 + W * \sigma' * (h_1 + W * \sigma' * (h_0 + W * h_0'))) =$
$\sigma_{h_3}' * h_2 \mathbf{+}$ $\sigma_{h_3}' * W * \sigma_{h_2}' * h_1 \mathbf{+}$ $\sigma_{h_3}' * W * \sigma_{h_2}' * W * \sigma_{h_1}' * h_0 \mathbf{+}$ $\sigma_{h_3}' * W * \sigma_{h_2}' * W * \sigma_{h_1}' * W * h_{0}'$
Chain rule happens in line 1 to 2, product rule in line 4 to 5. Line 3 is simply explained by Ux not containing W (the variable we are differentiating with respect to). Now, it can be immediately seen that each summand of the last result keeps referencing further and further into the past.
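To see the same thing numerically, here is a small, self-contained check (PyTorch is assumed here purely for its autograd; it is not part of the original derivation): the gradient of the last state with respect to W accumulates a term from every unrolled step, exactly as the sum above says.

import torch

torch.manual_seed(0)
d = 4
W = torch.randn(d, d, requires_grad=True)
U = torch.randn(d, d)
xs = [torch.randn(d) for _ in range(3)]   # x_1, x_2, x_3
h = torch.zeros(d)                        # h_0

for x in xs:                              # unroll three steps: h_t = sigmoid(W h_{t-1} + U x_t)
    h = torch.sigmoid(W @ h + U @ x)

h.sum().backward()                        # backpropagation through time
print(W.grad)                             # nonzero: every earlier state contributed via the product rule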
Lastly, since this assumes the reader is familiar with the topic, a really nice further explanation of BPTT for the interested reader can be found here.
|
# Canonical Transformation
1. Jun 10, 2012
### M. next
I have posted before this, an example in which I struggled through.
Now I am going to ask something more general, for me and for the students who suffer from studying the material alone.
If you were asked to prove that the time-independent transformation P=.. and Q=.. is canonical. And finding the generating function.
There are two methods as I know so far.
1) By applying $p\,dq - P\,dQ = dF$
2) By using $\partial F/\partial q = p$ and $\partial F/\partial Q = -P$
(in accordance with what we are asked for: $F(q,Q)$, $F(q,P)$, ...)
My questions are:
In 1), what should we be aware of? Can we face a problem in concluding F at the end?
In 2), what are the steps, one by one? Why do I see in some problems that, after the partial differentiation at the beginning, they try to manipulate coordinates (instead of q, Q they use p, q, and so on; I am not being specific)? Why? On what basis?
Do me this favor, please: are canonical transformations this hard? Or are there just steps that should be followed?
Best Regards,
2. Jun 10, 2012
### vanhees71
If you have the transformation given in explicit form
$$q=q(Q,P), \quad p=p(Q,P),$$
the most simple way to prove that this is a canonical transformation (local symplectomorphism) is to show that the Poisson brackets are the canonical ones, i.e.,
$$[q,p]=1,$$
where the Poisson bracket is defined by the partial derivatives wrt. $Q$ and $P$.
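For instance (a concrete transformation chosen here purely as an illustration): take $q = -P e^{-Q}$ and $p = e^{Q}$. Then
$$[q,p]_{Q,P}=\frac{\partial q}{\partial Q}\frac{\partial p}{\partial P}-\frac{\partial q}{\partial P}\frac{\partial p}{\partial Q}=(P e^{-Q})\cdot 0-(-e^{-Q})\,e^{Q}=1,$$
so this transformation is canonical.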
If you want to find the generating function in its original form, i.e., as a function
$$F=F(q,Q)$$
you just solve for
$$p=\frac{\partial F}{\partial q}, \quad P=-\frac{\partial F}{\partial Q}.$$
3. Jun 10, 2012
### M. next
It works everytime? And what concerning the methods I mentioned? You mentioned a new method I suppose [q,p]=1, right? Shed lights on my methods please.
Yes and what if I want in your general example, F(q,P)?
4. Jun 10, 2012
### vanhees71
Your methods 1) and 2) are fine, but as I said, if you have the transformation given explicitly, to check whether it's canonical you should check the integrability conditions in terms of the Poisson brackets.
Of course, you can write the generating function with any pair of old and new phase-space coordinates you like. The original form is that where you use $q$ and $Q$. If you want, e.g., $q$ and $P$, you make the appropriate Legendre transformation, i.e., you set
$$F(q,Q)=g(q,P)-Q P,$$
because then you get
$$\mathrm{d} q\, \partial_q F+\mathrm{d} Q\, \partial_Q F=\mathrm{d} q\, \partial_q g+(\partial_P g-Q)\,\mathrm{d} P-P\, \mathrm{d} Q.$$
Comparison of the left- and right-hand side of this equation yields
$$p=\partial_q F=\partial_q g, \quad P=-\partial_Q F, \quad Q=\partial_P g.$$
Canonical transformations are not so difficult, but one has to get used to the concepts about them. A good source is Landau/Lifschitz Vol. 1.
5. Jun 10, 2012
### M. next
Thanks, I have the book, it is a very good book, but kind of condensed. Thanks again.
|
# dg.differential geometry – Optimal lower bound on the volume of balls under a Sobolev inequality
Let $M$ be a complete non-compact $n$-dimensional ($n \geq 3$) Riemannian manifold with volume element $dv$ such that, for every smooth compactly supported function $f : M \to \mathbb{R}$,
$$\bigg(\int_M |f|^{\frac{2n}{n-2}} \, dv\bigg)^{\frac{n-2}{n}} \;\leq\; C \int_M |\nabla f|^2 \, dv$$
where $C > 0$ is the optimal constant of this Sobolev inequality in the Euclidean case $M = \mathbb{R}^n$. Is it true that
$$\mathrm{Vol}(B(x,r)) \;\geq\; \mathrm{V}(r)$$
where $B(x,r)$, $x \in M$, is a ball of radius $r > 0$ in $M$ and $\mathrm{V}(r)$ is the volume of a ball of radius $r$ in $\mathbb{R}^n$?
|
# Math Help - Series and sequences
1. ## Series and sequences
I'm having problem with this question.
Do I use the ratio test for this infinite series? if so, can you show a few steps to get me going, or if not, what should I use?
$\sum\limits_{n=1}^{\infty} \frac{2^n - n^5}{4^n+2n^2+3n}$
2. Originally Posted by tsal15
I'm having problem with this question.
Do I use the ratio test for this infinite series? if so, can you show a few steps to get me going, or if not, what should I use?
$\sum\limits_{n=1}^{\infty} \frac{2^n - n^5}{4^n+2n^2+3n}$
You have $\sum\limits_{n=1}^{\infty} \frac{2^n}{4^n+2n^2+3n} - \sum\limits_{n=1}^{\infty} \frac{n^5}{4^n+2n^2+3n}$.
Now note:
1. $\frac{2^n}{4^n+2n^2+3n} < \frac{2^n}{4^n} = \left(\frac{1}{2}\right)^n$.
So the first sum converges by the comparison test.
2. $\frac{n^5}{4^n+2n^2+3n} < \frac{n^5}{4^n}$ and $\sum\limits_{n=1}^{\infty} \frac{n^5}{4^n}$ converges by the ratio test.
So the second sum converges.
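Spelling out the ratio-test computation behind point 2: $\lim_{n\to\infty}\frac{(n+1)^5/4^{n+1}}{n^5/4^n}=\lim_{n\to\infty}\frac{1}{4}\left(1+\frac{1}{n}\right)^5=\frac{1}{4}<1$, so $\sum\limits_{n=1}^{\infty} \frac{n^5}{4^n}$ converges.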
3. Originally Posted by mr fantastic
You have $\sum\limits_{n=1}^{\infty} \frac{2^n}{4^n+2n^2+3n} - \sum\limits_{n=1}^{\infty} \frac{n^5}{4^n+2n^2+3n}$.
Now note:
1. $\frac{2^n}{4^n+2n^2+3n} < \frac{2^n}{4^n} = \left(\frac{1}{2}\right)^n$.
So the first sum converges by the comparison test.
2. $\frac{n^5}{4^n+2n^2+3n} < \frac{n^5}{4^n}$ and $\sum\limits_{n=1}^{\infty} \frac{n^5}{4^n}$ converges by the ratio test.
So the second sum converges.
Thank you for the quick reply Mr. Fantastic. You truly are fantastic .
4. Now note:
1. $\frac{2^n}{4^n+2n^2+3n} < \frac{2^n}{4^n} = \left(\frac{1}{2}\right)^n$.
So the first sum converges by the comparison test.
Oh Mr. Fantastic I should've asked you this earlier...I was just soo excited someone actually replied to my post... you see i haven't had much help for a while on mathhelp...but any who. how did you know to compare that sum with $\frac{2^n}{4^n}$?
2. $\frac{n^5}{4^n+2n^2+3n} < \frac{n^5}{4^n}$ and $\sum\limits_{n=1}^{\infty} \frac{n^5}{4^n}$ converges by the ratio test.
So the second sum converges.
I believe you've used the comparison test, again, after finding out $\frac{n^5}{4^n}$ converges by the ratio test? Similarly, how did you know to use $\frac{n^5}{4^n}$
Thank you Mr. Fantastic
5. Originally Posted by tsal15
Oh Mr. Fantastic I should've asked you this earlier...I was just soo excited someone actually replied to my post... you see i haven't had much help for a while on mathhelp...but any who. how did you know to compare that sum with $\frac{2^n}{4^n}$?
I believe you've used the comparison test, again, after finding out $\frac{n^5}{4^n}$ converges by the ratio test? Similarly, how did you know to use $\frac{n^5}{4^n}$ Mr F says: Yes, I did.
Thank you Mr. Fantastic
In both cases it's a matter of guessing whether the series is convergent or divergent and then trying to construct an appropriate series to use in the comparison. The key is a lot of practice to develop experience - after a while you get to 'see' (sometimes) what the series will be that you need to use.
6. Originally Posted by mr fantastic
In both cases it's a matter of guessing whether the series is convergent or divergent and then trying to construct an appropriate series to use in the comparison. The key is a lot of practice to develop experience - after a while you get to 'see' (sometimes) what the series will be that you need to use.
hmmm, guessing i can get used to that...hehehe. but yes i do agree with, practice will help.
thanks Mr. Fantastic
|
Stream: new members
Topic: mul_left_cancel for integral domains
Damiano Testa (Sep 21 2020 at 11:46):
Dear All,
I would like to use the fact that in an integral domain, every non-zero element is left-cancellable (see the #mwe below). Is this lemma already in mathlib and I have simply been unable to find it?
Thank you!
import tactic
lemma xxv {R : Type*} [comm_ring R] [is_integral_domain R] {x y z : R} {h : x ≠ 0} : x*y=x*z → y=z :=
begin
sorry,
end
Johan Commelin (Sep 21 2020 at 11:47):
mul_left_cancel hopefully?
Johan Commelin (Sep 21 2020 at 11:48):
Maybe with a ', if the other version is for groups
Damiano Testa (Sep 21 2020 at 11:50):
I tried, but I was unable to make it work...
Kevin Buzzard (Sep 21 2020 at 11:51):
I'll give it a go
Shing Tak Lam (Sep 21 2020 at 11:52):
If you change is_integral_domain to integral_domain, then mul_left_cancel' works btw
Kevin Buzzard (Sep 21 2020 at 11:52):
Aah, that's the issue: is_integral_domain isn't the way to say an integral domain.
Damiano Testa (Sep 21 2020 at 11:52):
Shing Tak Lam said:
If you change is_integral_domain to integral_domain, then mul_left_cancel' works btw
Ah, this might fix it for me, then!
Kevin Buzzard (Sep 21 2020 at 11:52):
You should delete comm_ring R and use integral_domain R
Let me try!
Damiano Testa (Sep 21 2020 at 11:56):
I am trying
lemma xxv [integral_domain R] {x y z : R} {h : x ≠ 0} : x*y=x*z → y=z :=
begin
apply mul_left_cancel',
exact h,
end
but I get this error:
invalid type ascription, term has type
x ≠ 0
but is expected to have type
x ≠ 0
state:
R : Type u_1,
_inst_1 : semiring R,
_inst_2 : integral_domain R,
x y z : R,
h : x ≠ 0
⊢ x ≠ 0
Kevin Buzzard (Sep 21 2020 at 11:58):
import tactic
lemma xxv (R : Type) [integral_domain R] {x y z : R} {h : x ≠ 0} : x*y=x*z → y=z :=
begin
apply mul_left_cancel',
exact h,
end
works for me.
Shing Tak Lam (Sep 21 2020 at 11:59):
I'm not sure why, but I think the semiring R instance is affecting things
Shing Tak Lam (Sep 21 2020 at 11:59):
you probably have variable [semiring R] somewhere? I can reproduce the error if I add a semiring R typeclass
import tactic
lemma xxv {R : Type*} [semiring R] [integral_domain R] {x y z : R} {h : x ≠ 0} : x*y=x*z → y=z :=
begin
apply mul_left_cancel',
exact h, -- Same error
end
Damiano Testa (Sep 21 2020 at 12:00):
Yes, my standing assumption is that R is a semiring
Interesting
Reid Barton (Sep 21 2020 at 12:01):
I wonder if it's worth trying to improve these error messages, or we should just wait for :four_leaf_clover:
Kevin Buzzard (Sep 21 2020 at 12:01):
But an integral domain is a semiring so you should maybe just start afresh with some new variable which is an integral domain
Damiano Testa (Sep 21 2020 at 12:01):
I will remove this assumption now. In case someone is further interested, here is what happens if I try to convert h. Lean wants me to prove:
⊢ cancel_monoid_with_zero.to_monoid_with_zero R = semiring.to_monoid_with_zero R
Johan Commelin (Sep 21 2020 at 12:01):
Yup, but those are two completely different 0s (-;
Kevin Buzzard (Sep 21 2020 at 12:02):
You can't prove this, because one has come from [semiring R] and one has come from [integral_domain R]. That's exactly the difference between integral_domain and is_integral_domain
Johan Commelin (Sep 21 2020 at 12:02):
You assumed two semiring instances (one coming from the integral domain) but Lean doesn't know any compatibility between them.
Damiano Testa (Sep 21 2020 at 12:02):
Ok, I will remove the semiring assumption
Kevin Buzzard (Sep 21 2020 at 12:02):
You should remove the [semiring R] assumption because type class inference knows that an integral domain is a semiring
Kevin Buzzard (Sep 21 2020 at 12:03):
so all the stuff you just proved about semirings will all work fine when you're rewriting it in some lemmas about integral domains
Damiano Testa (Sep 21 2020 at 12:04):
Thank you all for the help! For the record, this now is a working lemma:
lemma xxv {RR : Type*} [integral_domain RR] {x y z : RR} {h : x ≠ 0} : x*y=x*z → y=z :=
begin
apply mul_left_cancel' h,
end
Kevin Buzzard (Sep 21 2020 at 12:04):
now see if you can do it in term mode :-)
Damiano Testa (Sep 21 2020 at 12:06):
I would love to, but I would need to learn how to communicate to Lean in term mode... Ahahaha
Reid Barton (Sep 21 2020 at 12:07):
hint: the term-mode proof is a substring of the tactic proof
Damiano Testa (Sep 21 2020 at 12:10):
Ok, I am making the guess that I should remove characters from apply , since I imagine that the name of the lemmas in mathlib appear identically in term mode!
Johan Commelin (Sep 21 2020 at 12:10):
begin means entering tactic mode
Johan Commelin (Sep 21 2020 at 12:10):
and end stops it.
Johan Commelin (Sep 21 2020 at 12:11):
Indeed, apply takes a little bit of term mode as argument, and does something useful with it.
Damiano Testa (Sep 21 2020 at 12:11):
ah, so i should remove from begin apply [] end
I will keep by! Ahahaha
Johan Commelin (Sep 21 2020 at 12:11):
No by also enters tactic mode
Reid Barton (Sep 21 2020 at 12:12):
To be more specific, a consecutive substring :upside_down:
Kevin Buzzard (Sep 21 2020 at 12:12):
by {...} and begin ... end are the two ways to enter tactic mode from term mode. exact ... is the way to enter term mode from tactic mode.
Johan Commelin (Sep 21 2020 at 12:13):
@Damiano Testa Did you know that mul_left_cancel' is a function?
Johan Commelin (Sep 21 2020 at 12:13):
Your lemma xxv is also a function
Reid Barton (Sep 21 2020 at 12:21):
Regarding the error, if you set_option pp.numerals false, then you get an error which shows the issue:
error: invalid type ascription, term has type
x ≠
@has_zero.zero R
(@mul_zero_class.to_has_zero R (@monoid_with_zero.to_mul_zero_class R (@semiring.to_monoid_with_zero R _inst_1)))
but is expected to have type
x ≠
@has_zero.zero R
(@mul_zero_class.to_has_zero R
(@monoid_with_zero.to_mul_zero_class R
(@cancel_monoid_with_zero.to_monoid_with_zero R
(@domain.to_cancel_monoid_with_zero R (@integral_domain.to_domain R _inst_2)))))
Reid Barton (Sep 21 2020 at 12:22):
showing that Lean already has some logic to help identify the issue,
but to make it work you have to (1) guess it's an issue with 0, (2) know (or guess) that pp.numerals exists, (3) turn it on and understand the resulting message
Damiano Testa (Sep 21 2020 at 12:23):
I managed the term mode proof!
mul_left_cancel' h
I got stuck this whole time with the comma at the end...
Damiano Testa (Sep 21 2020 at 12:25):
Reid Barton said:
Regarding the error, if you set_option pp.numerals false, then you get an error which shows the issue:
error: invalid type ascription, term has type
x ≠
@has_zero.zero R
(@mul_zero_class.to_has_zero R
(@monoid_with_zero.to_mul_zero_class R
(@semiring.to_monoid_with_zero R _inst_1)))
but is expected to have type
x ≠
@has_zero.zero R
(@mul_zero_class.to_has_zero R
(@monoid_with_zero.to_mul_zero_class R
(@cancel_monoid_with_zero.to_monoid_with_zero R
(@domain.to_cancel_monoid_with_zero R (@integral_domain.to_domain R _inst_2)))))
I see: one appears to go by semiring, the other by cancel_monoid.
Thanks for the explanation!
Johan Commelin (Sep 21 2020 at 12:26):
Nasty commas are nasty
Reid Barton (Sep 21 2020 at 12:26):
The main point is buried though: one is ultimately built from _inst_1 and the other from _inst_2. So the error message could still use some work.
Kevin Buzzard (Sep 21 2020 at 12:35):
"Don't forget the comma!"
|
# What do we mean by the notation $\mathbf{x}_{p} \in \mathbb{R}^{N \times\left(P^{2} \cdot C\right)}$?
I was going through this ViT paper; what would it look like in torch if we were trying to write this expression?
• This itself isn't really an expression but a description of what $x_p$ looks like. Specifically, $x_p$ is a real-valued tensor with the shape [N, P^2 * C]. Of course, that data comes from your dataset, but as an example torch.ones(N, P**2 * C, dtype=torch.float32) will give you a tensor with the same shape. Note that this will be a float32 tensor, which makes it real-valued ($x_p \in \mathbb{R}$). Jun 22 at 7:11
• @Chillston You could write that as an answer instead of a comment. Jun 22 at 8:15
• Of course, sorry Jun 22 at 8:22
This itself isn't really an expression but a description of what $$x_p$$ looks like. Specifically, $$x_p$$ is a real-valued tensor with the shape [N, P^2 * C]. Of course, that data comes from your dataset, but as an example
torch.ones(N, P**2 * C, dtype=torch.float32)
will give you a tensor with the desired shape. Note that this will be a float32 tensor, which makes it real-valued ($$x_p \in \mathbb{R}$$).
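For completeness, a small sketch of where a tensor of that shape typically comes from (an illustration with assumed dimensions, not code from the paper): each image is cut into N = HW/P^2 patches of P x P x C values, and each patch is flattened.

import torch

C, H, W, P = 3, 224, 224, 16                      # channels, image size, patch size (assumed values)
img = torch.randn(C, H, W)

N = (H // P) * (W // P)                           # number of patches, N = HW / P^2
patches = img.unfold(1, P, P).unfold(2, P, P)     # shape: C x H/P x W/P x P x P
x_p = patches.permute(1, 2, 0, 3, 4).reshape(N, P * P * C)
print(x_p.shape)                                  # torch.Size([196, 768])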
|
Subarray Replacement
CUET Intra University Jun...
Limits 4s, 256 MB
You are one of the greatest programmers of the decade. Ahasan heard that you can solve range query problems in a minute. He came to you with the following problem.
Given two arrays $A$ and $B$ both of $N$ integers, your will process $Q$ queries of the following $3$ types:
1. Find the maximum value in the range $[L,R]$ in array $A$
2. Update $A[I]$ with $X$
3. Replace the subarray $A[L_1… R_1]$ with subarray $B[L_2… R_2]$. That means set $A[L_1]=B[L_2], A[L_1+1]=B[L_2+1], A[L_1+2]=B[L_2+2],..., A[R_1]=B[R_2]$.
Input
The first line contains two integers $N (1\leq N\leq 10^6)$ and $Q (1\leq Q\leq 10^6)$, the number of elements in the arrays and the number of queries respectively.
The second line contains $N$ integers $A_1, A_2, …,A_N (1\leq A_i\leq 10^9)$, the elements of the array $A$.
The third line contains $N$ integers $B_1, B_2, …, B_N (1\leq B_i\leq 10^9)$, the elements of the array $B$.
The next $Q$ lines describe the queries. Each line contains a query in the following form:
• $1\ L\ R$ $-$ find the maximum value in the range $[L, R]$ $(1\leq L\leq R\leq N)$ in array $A$
• $2\ I\ X$ $-$ update $A[I]$ $(1\leq I\leq N)$ with $X$ $(1\leq X\leq 10^9)$
• $3\ L_1\ R_1\ L_2\ R_2$ $-$ replace the subarray $A[L_1…R_1]$ with subarray $B[L_2…R_2]$, where $(1\leq L_1\leq R_1\leq N), (1\leq L_2\leq R_2\leq N)$ and $(R_1-L_1+1=R_2-L_2+1)$
Output
Print the output of each query of type $1$.
Sample
Input
6 5
3 7 9 2 4 1
9 2 5 6 1 5
1 2 5
3 1 4 3 6
1 2 5
2 2 4
1 1 6
Output
9
6
5
|
## Description
Title: Influence of Soluble Surface-Active Organic Material on Droplet Activation
Author(s): Li, Zhidong
Doctoral Committee Chair(s): Rood, Mark J.
Department / Program: Civil and Environmental Engineering
Discipline: Civil and Environmental Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Physics, Atmospheric Science
Abstract: Both theoretical and experimental approaches also show that despite the $a$ depression, $S_{\rm c}$ of a particle that contains SDS is always higher than that of a pure NaCl particle with the same dry size. The degree of this deviation increases with increasing SDS% in the mixtures, indicating an increase in hydrophobicity with increasing SDS% in the initially dry particles. The lowering of the Raoult effect due to the large molecular weight of SDS is attributable to this trend. It appears that in the atmosphere, only those particles that contain soluble surfactants, whose molecular weight is comparable to $(NH_4)_2SO_4$ and which meanwhile are very surface active, can achieve low $S_{\rm c}$ comparable to that of $(NH_4)_2SO_4$ particles.
Issue Date: 1997
Type: Text
Language: English
Description: 125 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1997.
URI: http://hdl.handle.net/2142/83426
Other Identifier(s): (MiAaPQ)AAI9717301
Date Available in IDEALS: 2015-09-25
Date Deposited: 1997
|
# Methylation analysis with Methyl-IT. Part 2
## An example of methylation analysis with simulated datasets
Part 2: Potential DMPs from the methylation signal
Methylation analysis with Methyl-IT is illustrated on simulated datasets of methylated and unmethylated read counts with relatively high average of methylation levels: 0.15 and 0.286 for control and treatment groups, respectively. In this part, potential differentially methylated positions are estimated following different approaches.
## 1. Background
Only a signal detection approach can detect real DMPs with high probability. Any statistical test (e.g. Fisher's exact test) not based on signal detection requires further analysis to distinguish DMPs that can occur naturally in the control group from those induced by a treatment. The analysis here is a continuation of Part 1.
## 2. Potential DMPs from the methylation signal using empirical distribution
As suggested from the empirical density graphics (above), the critical values $H_{\alpha=0.05}$ and $TV_{d_{\alpha=0.05}}$ can be used as cutpoints to select potential DMPs. After setting $dist.name = “ECDF”$ and $tv.cut = 0.926$ in Methyl-IT function getPotentialDIMP, potential DMPs are estimated using the empirical cummulative distribution function (ECDF) and the critical value $TV_{d_{\alpha=0.05}}=0.926$.
DMP.ecdf <- getPotentialDIMP(LR = divs, div.col = 9L, tv.cut = 0.926, tv.col = 7,
alpha = 0.05, dist.name = "ECDF")
## 3. Potential DMPs detected with Fisher’s exact test
In Methyl-IT Fisher’s exact test (FT) is implemented in function FisherTest. In the current case, a pairwise group application of FT to each cytosine site is performed. The differences between the group means of read counts of methylated and unmethylated cytosines at each site are used for testing (pooling.stat=”mean”). Notice that only cytosine sites with critical values $TV_d$> 0.926 are tested (tv.cut = 0.926).
ft = FisherTest(LR = divs, tv.cut = 0.926,
pAdjustMethod = "BH", pooling.stat = "mean",
pvalCutOff = 0.05, num.cores = 4L,
verbose = FALSE, saveAll = FALSE)
ft.tv <- getPotentialDIMP(LR = ft, div.col = 9L, dist.name = "None",
tv.cut = 0.926, tv.col = 7, alpha = 0.05)
There is not a one-to-one mapping between $TV$ and $HD$. However, at each cytosine site $i$, these information divergences satisfy the inequality:
$TV(p^{tt}_i,p^{ct}_i)\leq \sqrt{2}H_d(p^{tt}_i,p^{ct}_i)$ [1].
where $H_d(p^{tt}_i,p^{ct}_i) = \sqrt{\frac{H(p^{tt}_i,p^{ct}_i)}{w_i}}$ is the Hellinger distance and $H(p^{tt}_i,p^{ct}_i)$ is given by Eq. 1 in Part 1.
So, potential DMPs detected with FT can be constrained with the critical value $H^{TT}_{\alpha=0.05}\geq114.5$
## 4. Potential DMPs detected with Weibull 2-parameters model
Potential DMPs can be estimated using the critical values derived from the fitted Weibull 2-parameter models, which are obtained after the non-linear fit of the theoretical model on the genome-wide $HD$ values for each individual sample using the Methyl-IT function nonlinearFitDist [2]. As before, only cytosine sites with critical values $TV>0.926$ are considered DMPs. Notice that it is always possible to use other values of $HD$ and $TV$ as critical values, but whatever value is chosen will affect the final accuracy of the classification of DMPs into two groups, DMPs from control and DMPs from treatment (see below). So, it is important to make a good choice of the critical values.
nlms.wb <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L)
# Potential DMPs from 'Weibull2P' model
DMPs.wb <- getPotentialDIMP(LR = divs, nlms = nlms.wb, div.col = 9L,
tv.cut = 0.926, tv.col = 7, alpha = 0.05,
dist.name = "Weibull2P")
nlms.wb$T1
## Estimate Std. Error t value Pr(>|t|)) Adj.R.Square
## shape 0.5413711 0.0003964435 1365.570 0 0.991666592250838
## scale 19.4097502 0.0155797315 1245.833 0
## rho R.Cross.val DEV
## shape 0.991666258901194 0.996595712743823 34.7217494754823
## scale
## AIC BIC COV.shape COV.scale
## shape -221720.747067975 -221694.287733122 1.571674e-07 -1.165129e-06
## scale -1.165129e-06 2.427280e-04
## COV.mu n
## shape NA 50000
## scale NA 50000
## 5. Potential DMPs detected with Gamma 2-parameters model
As in the case of the Weibull 2-parameters model, potential DMPs can be estimated using the critical values derived from the fitted Gamma 2-parameters models, and only cytosine sites with critical values $TV_d > 0.926$ are considered DMPs.
nlms.g2p <- nonlinearFitDist(divs, column = 9L, verbose = FALSE, num.cores = 6L,
dist.name = "Gamma2P")
# Potential DMPs from 'Gamma2P' model
DMPs.g2p <- getPotentialDIMP(LR = divs, nlms = nlms.g2p, div.col = 9L,
tv.cut = 0.926, tv.col = 7, alpha = 0.05,
dist.name = "Gamma2P")
nlms.g2p$T1
## Estimate Std. Error t value Pr(>|t|)) Adj.R.Square
## shape 0.3866249 0.0001480347 2611.717 0 0.999998194156282
## scale 76.1580083 0.0642929555 1184.547 0
## rho R.Cross.val DEV
## shape 0.999998194084045 0.998331895911125 0.00752417919133131
## scale
## AIC BIC COV.alpha COV.scale
## shape -265404.29138371 -265369.012270572 2.191429e-08 -8.581717e-06
## scale -8.581717e-06 4.133584e-03
## COV.mu df
## shape NA 49998
## scale NA 49998
Summary table:
data.frame(ft = unlist(lapply(ft, length)), ft.hd = unlist(lapply(ft.hd, length)),
ecdf = unlist(lapply(DMPs.hd, length)), Weibull = unlist(lapply(DMPs.wb, length)),
Gamma = unlist(lapply(DMPs.g2p, length)))
## ft ft.hd ecdf Weibull Gamma
## C1 1253 773 63 756 935
## C2 1221 776 62 755 925
## C3 1280 786 64 768 947
## T1 2504 1554 126 924 1346
## T2 2464 1532 124 942 1379
## T3 2408 1477 121 979 1354
## 6. Density graphic with a new critical value
The graphics for the empirical (in black) and Gamma (in blue) density distributions of the Hellinger divergence of methylation levels for sample T1 are shown below. The 2-parameter gamma model is built by using the parameters estimated in the non-linear fit of $H$ values from sample T1. The critical value estimated from the 2-parameter gamma distribution, $H^{\Gamma}_{\alpha=0.05}=124$, is more 'conservative' than the critical value based on the empirical distribution, $H^{Emp}_{\alpha=0.05}=114.5$. That is, in accordance with the empirical distribution, for a methylation change to be considered a signal its $H$ value must be $H\geq114.5$, while according to the 2-parameter gamma model any cytosine carrying a signal must hold $H\geq124$.
suppressMessages(library(ggplot2))
# Some information for graphic
dt <- data[data$sample == "T1", ]
coef <- nlms.g2p$T1$Estimate # Coefficients from the non-linear fit
dgamma2p <- function(x) dgamma(x, shape = coef[1], scale = coef[2])
qgamma2p <- function(x) qgamma(x, shape = coef[1], scale = coef[2])
# 95% quantiles
q95 <- qgamma2p(0.95) # Gamma model based quantile
emp.q95 = quantile(divs$T1$hdiv, 0.95) # Empirical quantile
# Density plot with ggplot
ggplot(dt, aes(x = HD)) +
geom_density(alpha = 0.05, bw = 0.2, position = "identity", na.rm = TRUE,
size = 0.4) + xlim(c(0, 150)) +
stat_function(fun = dgamma2p, colour = "blue") +
xlab(expression(bolditalic("Hellinger divergence (HD)"))) +
ylab(expression(bolditalic("Density"))) +
ggtitle("Empirical and Gamma densities distributions of Hellinger divergence (T1)") +
geom_vline(xintercept = emp.q95, color = "black", linetype = "dashed", size = 0.4) +
annotate(geom = "text", x = emp.q95 - 20, y = 0.16, size = 5,
label = 'bolditalic(HD[alpha == 0.05]^Emp==114.5)',
family = "serif", color = "black", parse = TRUE) +
geom_vline(xintercept = q95, color = "blue", linetype = "dashed", size = 0.4) +
annotate(geom = "text", x = q95 + 9, y = 0.14, size = 5,
label = 'bolditalic(HD[alpha == 0.05]^Gamma==124)',
family = "serif", color = "blue", parse = TRUE) +
theme(
axis.text.x = element_text( face = "bold", size = 12, color="black",
margin = margin(1,0,1,0, unit = "pt" )),
axis.text.y = element_text( face = "bold", size = 12, color="black",
margin = margin( 0,0.1,0,0, unit = "mm")),
axis.title.x = element_text(face = "bold", size = 13,
color="black", vjust = 0 ),
axis.title.y = element_text(face = "bold", size = 13,
color="black", vjust = 0 ),
legend.title = element_blank(),
legend.margin = margin(c(0.3, 0.3, 0.3, 0.3), unit = 'mm'),
legend.box.spacing = unit(0.5, "lines"),
legend.text = element_text(face = "bold", size = 12, family = "serif")
)
## References
1. Steerneman, Ton, K. Behnen, G. Neuhaus, Julius R. Blum, Pramod K. Pathak, Wassily Hoeffding, J. Wolfowitz, et al. 1983. "On the total variation and Hellinger distance between signed measures; an application to product measures." Proceedings of the American Mathematical Society 88 (4). Springer-Verlag, Berlin-New York: 684-84. doi:10.1090/S0002-9939-1983-0702299-0.
2. Sanchez, Robersy, and Sally A. Mackenzie. 2016. "Information Thermodynamics of Cytosine DNA Methylation." Edited by Barbara Bardoni. PLOS ONE 11 (3). Public Library of Science: e0150427. doi:10.1371/journal.pone.0150427.
# Methylation analysis with Methyl-IT. Part 1
## An example of methylation analysis with simulated datasets
Part 1: Methylation signal
Methylation analysis with Methyl-IT is illustrated on simulated datasets of methylated and unmethylated read counts with relatively high average methylation levels: 0.15 and 0.286 for control and treatment groups, respectively. The main Methyl-IT downstream analysis is presented alongside the application of Fisher's exact test. The importance of a signal detection step is shown.
## 1. Background
The Methyl-IT R package offers a methylome analysis approach based on information thermodynamics (IT) and signal detection. The Methyl-IT approach treats the detection of differentially methylated cytosines as a signal detection problem. This approach was designed to discriminate methylation regulatory signal from background noise induced by molecular stochastic fluctuations. The Methyl-IT R package is not limited to the IT approach but also includes Fisher's exact test (FT), the root-mean-square statistic (RMST) and the Hellinger divergence (HDT) tests.
Herein, we will show that a signal detection step is also required for FT, RMST, and HDT.
## 2. Data generation
For the current example on methylation analysis with Methyl-IT we will use simulated data. Read count matrices of methylated and unmethylated cytosines are generated with the Methyl-IT function simulateCounts. Function simulateCounts randomly generates prior methylation levels using the Beta distribution function. The expected mean of methylation levels that we would like to have can be estimated using the auxiliary function:
bmean <- function(alpha, beta) alpha/(alpha + beta)
alpha.ct <- 0.09
alpha.tt <- 0.2
c(control.group = bmean(alpha.ct, 0.5), treatment.group = bmean(alpha.tt, 0.5),
mean.diff = bmean(alpha.tt, 0.5) - bmean(alpha.ct, 0.5))
## control.group treatment.group mean.diff
## 0.1525424 0.2857143 0.1331719
This simple function uses the α and β (shape2) parameters from the Beta distribution function to compute the expected value of methylation levels. In the current case, we expect to have a difference of methylation levels of about 0.133 between the control and the treatment.
### 2.1. Simulation
The Methyl-IT function simulateCounts will be used to generate the datasets, which will include three groups of samples: reference, control, and treatment.
suppressMessages(library(MethylIT))
# The number of cytosine sites to generate
sites = 50000
# Set a seed for pseudo-random number generation
set.seed(124)
control.nam <- c("C1", "C2", "C3")
treatment.nam <- c("T1", "T2", "T3")
# Reference group
ref0 = simulateCounts(num.samples = 4, sites = sites, alpha = alpha.ct, beta = 0.5,
size = 50, theta = 4.5, sample.ids = c("R1", "R2", "R3"))
# Control group
ctrl = simulateCounts(num.samples = 3, sites = sites, alpha = alpha.ct, beta = 0.5,
size = 50, theta = 4.5, sample.ids = control.nam)
# Treatment group
treat = simulateCounts(num.samples = 3, sites = sites, alpha = alpha.tt, beta = 0.5,
size = 50, theta = 4.5, sample.ids = treatment.nam)
Notice that the reference and control groups of samples are not identical but belong to the same population.
### 2.2. Divergences of methylation levels
The estimation of the divergences of methylation levels is required to proceed with the application of the basic signal detection approach. The information divergence is estimated here using the function estimateDivergence. For each cytosine site, methylation levels are estimated according to the formula: $p_i={n_i}^{mC_j}/({n_i}^{mC_j}+{n_i}^{C_j})$, where ${n_i}^{mC_j}$ and ${n_i}^{C_j}$ are the number of methylated and unmethylated cytosines at site $i$. If a Bayesian correction of counts is selected in function estimateDivergence, then methylated read counts are modeled by a beta-binomial distribution in a Bayesian framework, which accounts for the biological and sampling variations [1,2,3]. In our case we adopted the Bayesian approach suggested in reference [4] (Chapter 3). Two types of information divergences are estimated: TV, the total variation (absolute value of the difference of methylation levels), and the Hellinger divergence (H). TV is computed according to the formula $TV=|p_{tt}-p_{ct}|$ and H as:
$H(\hat p_{ij},\hat p_{ir}) = w_i[(\sqrt{\hat p_{ij}} - \sqrt{\hat p_{ir}})^2+(\sqrt{1-\hat p_{ij}} - \sqrt{1-\hat p_{ir}})^2]$ (1)
where $w_i = 2 \frac{m_{ij} m_{ir}}{m_{ij} + m_{ir}}$, $m_{ij} = {n_i}^{mC_j}+{n_i}^{uC_j}+1$, $m_{ir} = {n_i}^{mC_r}+{n_i}^{uC_r}+1$ and $j \in \{c,t\}$. The equation for the Hellinger divergence is given in reference [5], but any other information theoretical divergence could be used as well.
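To make Eq. (1) concrete, here is a minimal per-site sketch in Python (the package itself is R; the Bayesian correction of counts mentioned above is omitted and the read counts below are made up):

import math

def hellinger_site(mC_j, uC_j, mC_r, uC_r):
    # methylation levels p = n^mC / (n^mC + n^uC), as defined above
    p_j = mC_j / (mC_j + uC_j)
    p_r = mC_r / (mC_r + uC_r)
    # m terms and weight w_i from Eq. (1)
    m_j = mC_j + uC_j + 1
    m_r = mC_r + uC_r + 1
    w = 2 * m_j * m_r / (m_j + m_r)
    return w * ((math.sqrt(p_j) - math.sqrt(p_r)) ** 2 +
                (math.sqrt(1 - p_j) - math.sqrt(1 - p_r)) ** 2)

# e.g. 20 methylated / 5 unmethylated reads in a sample vs 4 / 21 in the reference
print(hellinger_site(20, 5, 4, 21))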
Divergences are estimated for the control and treatment groups with respect to a virtual sample, which is created by applying the function poolFromGRlist on the reference group.
# Reference sample
ref = poolFromGRlist(ref0, stat = "mean", num.cores = 4L, verbose = FALSE)
# Methylation level divergences
DIVs <- estimateDivergence(ref = ref, indiv = c(ctrl, treat), Bayesian = TRUE,
num.cores = 6L, percentile = 1, verbose = FALSE)
The mean of the methylation level differences is:
unlist(lapply(DIVs, function(x) mean(mcols(x[, 7])[,1])))
## C1 C2 C3 T1 T2
## -0.0009820776 -0.0014922009 -0.0022257725 0.1358867135 0.1359160219
## T3
## 0.1309217360
## 3. Methylation signal
As for any other signal in nature, the analysis of the methylation signal requires knowledge of its probability distribution. In the current case, the signal is represented in terms of the Hellinger divergence of methylation levels (H).
divs = DIVs[order(names(DIVs))]
# Remove hd == 0 from the estimation; the methylation signal is only given for abs(hdiv) > 0
divs = lapply(divs, function(div) div[ abs(div$hdiv) > 0 ])
names(divs) <- names(DIVs)
# Data frame with the Hellinger divergences from both groups of samples
l = c(); for (k in 1:length(divs)) l = c(l, length(divs[[k]]))
data <- data.frame(H = c(abs(divs$C1$hdiv), abs(divs$C2$hdiv), abs(divs$C3$hdiv),
abs(divs$T1$hdiv), abs(divs$T2$hdiv), abs(divs$T3$hdiv)),
sample = c(rep("C1", l[1]), rep("C2", l[2]), rep("C3", l[3]),
rep("T1", l[4]), rep("T2", l[5]), rep("T3", l[6]))
)
Empirical critical values for the probability distribution of H and TV can be obtained using quantile function from the R package stats.
critical.val <- do.call(rbind, lapply(divs, function(x) {
hd.95 = quantile(x$hdiv, 0.95) tv.95 = quantile(x$TV, 0.95)
return(c(tv = tv.95, hd = hd.95))
}))
critical.val
## tv.95% hd.95%
## C1 0.7893927 81.47256
## C2 0.7870469 80.95873
## C3 0.7950869 81.27145
## T1 0.9261629 113.73798
## T2 0.9240506 114.45228
## T3 0.9212163 111.54258
### 3.1. Density estimation
The kernel density estimation yields the empirical density shown in the graphics:
suppressMessages(library(ggplot2))
# Some information for graphic
crit.val.ct <- max(critical.val[c("C1", "C2", "C3"), 2]) # 81.5
crit.val.tt <- min(critical.val[c("T1", "T2", "T3"), 2]) # 111.5426
# Density plot with ggplot
ggplot(data, aes(x = H, colour = sample, fill = sample)) +
geom_density(alpha = 0.05, bw = 0.2, position = "identity", na.rm = TRUE,
size = 0.4) + xlim(c(0, 125)) +
xlab(expression(bolditalic("Hellinger divergence (H)"))) +
ylab(expression(bolditalic("Density"))) +
ggtitle("Density distribution for control and treatment") +
geom_vline(xintercept = crit.val.ct, color = "red", linetype = "dashed", size = 0.4) +
annotate(geom = "text", x = crit.val.ct-2, y = 0.3, size = 5,
label = 'bolditalic(H[alpha == 0.05]^CT==81.5)',
family = "serif", color = "red", parse = TRUE) +
geom_vline(xintercept = crit.val.tt, color = "blue", linetype = "dashed", size = 0.4) +
annotate(geom = "text", x = crit.val.tt -2, y = 0.2, size = 5,
label = 'bolditalic(H[alpha == 0.05]^TT==114.5)',
family = "serif", color = "blue", parse = TRUE) +
theme(
axis.text.x = element_text( face = "bold", size = 12, color="black",
margin = margin(1,0,1,0, unit = "pt" )),
axis.text.y = element_text( face = "bold", size = 12, color="black",
margin = margin( 0,0.1,0,0, unit = "mm")),
axis.title.x = element_text(face = "bold", size = 13,
color="black", vjust = 0 ),
axis.title.y = element_text(face = "bold", size = 13,
color="black", vjust = 0 ),
legend.title = element_blank(),
legend.margin = margin(c(0.3, 0.3, 0.3, 0.3), unit = 'mm'),
legend.box.spacing = unit(0.5, "lines"),
legend.text = element_text(face = "bold", size = 12, family = "serif")
)
The above graphic shows that, with high probability, the methylation signal induced by the treatment has H values $H^{TT}_{\alpha=0.05}\geq114.5$. According to the critical value estimated for the differences of methylation levels, the methylation signal holds $TV^{TT}_{\alpha=0.05}\geq0.926$. Notice that most of the methylation changes are not signal but noise (they are found to the left of the critical values). This situation is typical of natural and technologically generated signals. Assuming that the background methylation variation is consistent with a Poisson process and that methylation changes conform to the second law of thermodynamics, the Hellinger divergence of methylation levels follows a Weibull probability distribution or some other member of the generalized gamma distribution family [6].
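As a rough, illustrative check of that claim (this is not Methyl-IT's own distribution-fitting step), a two-parameter Weibull can be fitted to the control H values by maximum likelihood with MASS::fitdistr:

suppressMessages(library(MASS))
h.ct <- data$H[data$sample %in% c("C1", "C2", "C3")]   # control H values from above
fitdistr(h.ct[h.ct > 0], densfun = "weibull")          # ML estimates of shape and scale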
## References
1. Hebestreit, Katja, Martin Dugas, and Hans-Ulrich Klein. 2013. “Detection of significantly differentially methylated regions in targeted bisulfite sequencing data.” Bioinformatics (Oxford, England) 29 (13): 1647–53. doi:10.1093/bioinformatics/btt263.
2. Hebestreit, Katja, Martin Dugas, and Hans-Ulrich Klein. 2013. “Detection of significantly differentially methylated regions in targeted bisulfite sequencing data.” Bioinformatics (Oxford, England) 29 (13): 1647–53. doi:10.1093/bioinformatics/btt263.
3. Dolzhenko, Egor, and Andrew D Smith. 2014. “Using beta-binomial regression for high-precision differential methylation analysis in multifactor whole-genome bisulfite sequencing experiments.” BMC Bioinformatics 15 (1). BioMed Central: 215. doi:10.1186/1471-2105-15-215.
4. Baldi, Pierre, and Soren Brunak. 2001. Bioinformatics: the machine learning approach. Second. Cambridge: MIT Press.
5. Basu, A., A. Mandal, and L. Pardo. 2010. “Hypothesis testing for two discrete populations based on the Hellinger distance.” Statistics & Probability Letters 80 (3-4). Elsevier B.V.: 206–14. doi:10.1016/j.spl.2009.10.008.
6. Sanchez, Robersy, and Sally A. Mackenzie. 2016. "Information Thermodynamics of Cytosine DNA Methylation." PLoS One 11: e0150427.
|
Plotting a Piecewise function that returns implicit equations using ContourPlot
I am using ContourPlot to plot implicit equations. Plotting separate implicit equations seems to work fine, however I can't get ContourPlot to plot a piecewise function that returns implicit equations.
ClearAll["Global*"];
a[p1_,p2_]:=(p1-30)^2+(2p2-60)^2==250
b[p1_,p2_]:=(p1-20)^2+(5p2-40)^2==300
pw[p1_,p2_]:=Piecewise[{{a[p1,p2],p1<p2}},b[p1,p2]]
Grid[{{
ContourPlot[Evaluate@a[p1,p2],{p1,0,50},{p2,0,50}],
ContourPlot[Evaluate@b[p1,p2],{p1,0,50},{p2,0,50}],
ContourPlot[Evaluate@pw[p1,p2],{p1,0,50},{p2,0,50}]
}}]
I am expecting the third figure to have 2 diagonal cut-off circles, like so:
• Use RegionFunction for a workaround: Show[ ContourPlot[Evaluate@a[p1, p2], {p1, 0, 50}, {p2, 0, 50}, RegionFunction -> (#1 < #2 &)], ContourPlot[Evaluate@b[p1, p2], {p1, 0, 50}, {p2, 0, 50}, RegionFunction -> (#1 >= #2 &)]] – Bob Hanlon Jan 6 at 23:37
Try these two changes. First, use Boole instead of Piecewise to combine the two branches into a single equation, maybe like this:
pw[p1_, p2_] := With[{b = Boole[p1 < p2]}, (* b is 1 when p1 < p2 and 0 otherwise *)
  b ((p1 - 30)^2 + (2 p2 - 60)^2 - 250) + (1 - b) ((p1 - 20)^2 + (5 p2 - 40)^2 - 300) == 0]
Second, use pw[p1,p2] (without the underscores) in the ContourPlot command.
|
## Creating Fractals III: Making Your Own
Last week, we laid down some of the mathematical foundation needed to generate fractal images. In this third and final post about creating fractals, we’ll discuss in some detail Python code you can adapt to making your own designs. Follow along in this Sage notebook.
In order to produce fractal images iteratively, we need a function which returns the highest power of 2 within a positive integer (as discussed last week). It is not difficult to write a recursive routine to do this, as is seen in the notebook. This is really all we need to get started. The rest involves creating the graphics. I usually use PostScript for my images, like the one below discovered by Matthieu Pluntz. There isn’t time to go into that level of detail here, though.
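Here is a minimal recursive sketch of such a routine (the notebook's version may differ in details); it returns the exponent of the highest power of 2 dividing a positive integer:

def highestpowerof2(n):
    # exponent of the highest power of 2 dividing n (e.g. 8 -> 3, 12 -> 2, odd n -> 0)
    if n % 2 == 1:
        return 0
    return 1 + highestpowerof2(n // 2)

print([highestpowerof2(n) for n in range(1, 16)])
# [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0]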
As far as the graphics are concerned, it would be nice to have an easily described color palette. You might look here to find a wide range of predefined colors, which are available once we execute the "import matplotlib" command (see Line 20). These names are used in the "colors" variable. Since each motif has four segments, I'll color each one differently (though you may choose a different color scheme if you want to).
The loop is fairly straightforward. On each iteration, first find the correct angle to turn using the highestpowerof2 function. Then the segment to add on to the end of the path is
$({\rm len}\cdot\cos(\theta), {\rm len}\cdot\sin(\theta)),$
which represents converting from polar to rectangular coordinates. This is standard fare in a typical high school precalculus course. Note the color of the segment is determined by i % 4, since 0 is the index of the first element of any list in Python.
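Here is a bare-bones sketch of that loop (not the actual notebook code; matplotlib stands in for the notebook's graphics, and the angles 90 and -210 are just the example used below):

import math
import matplotlib.pyplot as plt

def highestpowerof2(n):                    # same routine as above
    return 0 if n % 2 else 1 + highestpowerof2(n // 2)

angle1, angle2 = 90, -210                  # counterclockwise and clockwise turns
steps, seg_len = 1024, 1.0
colors = ['deepskyblue', 'coral', 'gold', 'seagreen']

x, y, theta = [0.0], [0.0], 0.0
for i in range(1, steps + 1):
    rad = math.radians(theta)
    x.append(x[-1] + seg_len * math.cos(rad))          # polar to rectangular
    y.append(y[-1] + seg_len * math.sin(rad))
    plt.plot(x[-2:], y[-2:], color=colors[i % 4], linewidth=0.7)
    # the parity of the highest power of 2 in i decides which way to turn next
    theta += angle1 if highestpowerof2(i) % 2 == 0 else angle2

plt.axis('equal')
plt.show()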
All there is left to do is output to the screen. We’re done! You can try it yourself. But note that the way I defined the function, you have to make the second argument negative (compare the image in the notebook with last week’s post). Be patient: some of these images may take a few moments to generate. It would definitely help the speed issue if you downloaded Sage on your own computer.
To create the image shown above, you need to use angles of 90 and -210 (I took the liberty of rotating mine 15 degrees to make it look more symmetrical). To create the image below, angles of 90 and -250 are used. However, 26,624 steps are needed to create the entire image! It is not practical to create an image this complex in the online Sage environment.
How do you know what angles to use? This is still an open question — there is no complete answer that I am aware of. After my first post on October 4, Matthieu Pluntz commented that he found a way to create an infinite variety of fractal images which “close up.” I asked him how he discovered these, and he responded that he used a recursive algorithm. It would take an entire post just to discuss the mathematics of this in detail — so for now, we’ll limit our discussion to how to use this algorithm. I’ve encoded it in the function “checkangles.”
To use this function, see the examples in the Sage notebook. Be careful to enter angles as negative when appropriate! Also, you need to enter a maximum depth to search, since perhaps the angles do not result in an image which "closes up," such as with 11 and -169. But here's the difficult part mathematically — just because our algorithm doesn't find where the curve for 11 and -169 closes up does not mean that the fractal doesn't close. And further, just because our algorithm produced a positive result does not mean the figure must close up. Sure, we've found something that produces many results with interesting images — which suggests we're on the right track. But corroboration by a computer program is not a proof.
At the end of the notebook, I wrote a simple loop illustrating how you can look for many possibilities to try at once. The general rule of thumb is that the more levels required in the algorithm to produce a pair of angles (which is output to the screen), the more segments needed to draw it. I just looked for an example which only required 5 levels, and it was fairly easy to produce.
So where do we go from here? Personally, I’ve found this investigation fascinating — and all beginning from a question by a student who is interested in learning more about fractals. I’ve tried to give you an idea of how mathematics is done in the “real world” — there is a lot of exploration involved. Proofs will come later, but it is helpful to look at lots of examples first to figure out what to prove. When I find out something significant, I’ll post about it.
And I will admit a recent encounter with the bane of a programmer’s existence — the dreaded sign error. Yes, I had a minus sign where I should have had a plus sign. This resulted in my looking at lots of images which did not close up (instead of closing up, as originally intended). Some wonderful images resulted, though, like the one below with angles of 11 and -169. Note that since the figure does not close up (as far as I know), I needed to stop the iteration when I found a sufficiently pleasing result.
If I hadn’t made this mistake, I might have never looked at this pair of angles, and never created this image. So in my mind, this wasn’t really a “mistake,” but rather a temporary diversion along an equally interesting path.
I’ve been posting images regularly to my Twitter feed, @cre8math. I haven’t even touched on the aesthetic qualities of these images — but suffice it to say that it has been a wonderful challenge to create interesting textures and color effects for a particular pair of angles. Frankly, I am still amazed that such a simple algorithm — changing the two angle parameters used to create the Koch snowflake — produces such a wide range of intriguing mathematical and artistic objects. You can be sure that I’m not finished exploring this amazing fractal world quite yet….
## Creating Fractals II: Recursion vs. Iteration
There was such a positive response to last week’s post, I thought I’d write more about creating fractal images. In the spirit of this blog, what follows is a mathematical “stream of consciousness” — that is, my thoughts as they occurred to me and I pursued them. Or at least a close approximation — thoughts tend to jump very nonlinearly, and I do want the reader to be able to follow along….
Let’s begin at the beginning, with one of my first experiments. Here, the counterclockwise turns are 80 degrees, and the clockwise turns are 140 degrees.
One observation I had made in watching PostScript generate such images was that there was “overlap”: the recursive algorithm kept going even if the image was completely generated. Now the number of segments drawn by the recursive algorithm is a power of 4, since each segment is replaced by 4 others in the recursive process. So if the number of segments needed to complete a figure is not a power of 4, the image generation has to be stopped in the middle of a recursive call.
This reminded me of something I had investigated years ago — the Tower of Hanoi problem. This is a well-known example of a problem which can be solved recursively, but there is also an iterative solution. So I was confident there had to be an iterative way to generate these fractal images as well.
I needed to know — at any step along the iteration — whether to turn counterclockwise or clockwise. If I could figure this out, the rest would be easy. So I wrote a snippet of code which implemented the recursive routine, and output a 0 if there was a counterclockwise turn, and a 1 if there was a clockwise turn. For 2 levels of recursion, this sequence is
0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0.
The ones occur in positions 2, 6, 8, 10, and 14.
I actually looked at 1024 steps in the iteration, and noticed that the ones occur in exactly those positions whose largest power of 2 is odd. Each of 2, 6, 10, and 14 has one power of 2, and 8 has three.
You might be wondering, “How did you notice that?” Well, the iterative solution of the Tower of Hanoi does involve looking at the powers of 2 within numbers, so past experience suggested looking along those lines. This is a nice example of how learning neat math can enlarge your mathematical “toolbox.” You never know when something might come in handy….
There was other interesting behavior as well — but it’s simpler if you just watch the video to see what’s happening.
First, you probably noticed that each of the 18 star arms takes 32 steps to create. And that some of the star arms — eight of them — were traversed twice. This means that 18 + 8 = 26 arms were drawn before the figure was complete, for a total of 832 steps. Note that the recursive algorithm would need 1024 steps to make sure that all 832 steps were traversed — but that means an overlap of 192 steps.
Now let’s see why some of the arms were traversed twice. The 32nd step of the first arm is produced after 31 turns, and so the 32nd turn dictates what happens here. Now the highest power of 2 in 32 is 5, which is odd – so a clockwise turn of 140 degrees is made. You can see by looking at the first 33 steps that this is exactly what happens. The 32nd step takes you back to the center, and then a 140 degree clockwise turn is made.
Now after the next arm is drawn, the turn is determined by the 64th angle. But 6 is the highest power of two here, and so an 80 degree counterclockwise turn is made — but this takes you over the same arm again!
Note that we can’t keep traversing the same arm over and over. If we add 32 to a number whose highest power of 2 is 6:
$2^6m+2^5=2^5(2m+1),$
we get a number whose highest power of 2 is 5 again (since 2m + 1 must be odd). Since this power is odd, a clockwise turn will be made.
So when do we repeat an arm? This will happen when we have a counterclockwise turn of 80 degrees, which will happen when the highest power of 2 is even (since an odd power takes you clockwise) — when looking at every 32nd turn, that is. So, we need to look at turns
32, 64, 96, 128, 160, 192, 224, etc.
But observe that this is just
32 x (1, 2, 3, 4, 5, 6, 7, etc.).
Since 32 is an odd power of two, the even powers of two must occur when there is an odd power of 2 in 1, 2, 3, 4, 5, 6, 7, etc. In other words, in positions 2, 6, 8, 10, 14, etc.
To summarize this behavior, we can state the following simple rule: arms move seven points counterclockwise around the circle, except in the case of the 2nd, 6th, 8th, 10th, 14th, etc., arms, which repeat before moving seven points around. Might be worth taking a minute to watch the video again….
We can use this rule to recreate the order in which the star arms are traversed. Start with 1. The next arm is 1 + 7 = 8. But 8 is the 2nd arm, so it is repeated — and so 8 is also the third arm. The fourth arm is 8 + 7 = 15, and the fifth is seven positions past 15, which is 4. Mathematically, we say 15 + 7 = 4 modulo 18, meaning we add 15 and 7, and then take the remainder upon dividing by 18. This is known as modular arithmetic, and is one of the first things you learn when studying a branch of mathematics called number theory.
The sixth arm is 4 + 7 = 11, which is repeated as the seventh arm. You can go on from here….
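Here is a small sketch that reproduces this traversal order from the rule (arm numbers run from 1 to 18 around the circle; the arms drawn on the 2nd, 6th, 8th, 10th, 14th, and so on traversals, the ones whose index contains an odd power of 2, are each drawn twice):

def highestpowerof2(n):
    return 0 if n % 2 else 1 + highestpowerof2(n // 2)

arm, order = 1, [1]
for k in range(1, 26):                    # 26 traversals in all: 18 arms, 8 repeats
    if highestpowerof2(k) % 2 == 1:
        order.append(arm)                 # this traversal retraces the same arm
    else:
        arm = (arm - 1 + 7) % 18 + 1      # move seven positions around the circle
        order.append(arm)
print(order)                              # starts 1, 8, 8, 15, 4, 11, 11, ...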
There are still some questions which remain. Why 32 steps to complete an arm? Why skip every seventh arm? Why are the arms 20 degrees apart? These questions remain to be investigated more thoroughly. But I can’t stress what we’re doing strongly enough — using the computer to make observations which can be stated mathematically very precisely, and then looking at a well-defined algorithm to (hopefully!) prove that our observations are accurate. More and more — with the advent of technology — mathematics is becoming an experimental science.
I’ll leave you with one more video, which shows PostScript creating a fractal image. But laying a mathematical foundation was important — so next week, we can look at how you can make your own fractals in Python using an iterative procedure. This way, you can explore this fascinating world all on your own….
There are 10 levels of recursion here, and so 1,048,576 segments to draw. To see the final image, visit my Twitter feed for October 9. Enjoy!
## Creating Fractals
Recently, I’ve been working with a psychology student interested in how our brains perceive fractal images in nature (trees, clouds, landscapes, etc.). I dug up some old PostScript programs which reproduced images from The Algorithmic Beauty of Plants, which describes L-systems and how they are used to model images of plants. (Don’t worry if you don’t have the book or aren’t familiar with L-systems — I’ll tell you everything you need to know.)
To make matters concrete, I changed a few parameters in my program to produce part of a Koch snowflake.
The classical way of creating a Koch snowflake is to begin with the four-segment path at the top, and then replace each of the four segments with a smaller copy of this path. Now replace each of the segments with an even smaller copy, and recurse until the copies are so small, no new detail is added.
Algorithmically, we might represent this as
F +60 F -120 F +60 F,
where “F” represents moving forward, and the numbers represent how much we turn left or right (with the usual convention that positive angles move counter-clockwise). If you start off moving to the right from the red dot, you should be able to follow these instructions and see how the initial iteration is produced.
The recursion comes in as follows: now replace each occurrence of F with a copy of these instructions, yielding
F +60 F -120 F +60 F +60
F +60 F -120 F +60 F -120
F +60 F -120 F +60 F +60
F +60 F -120 F +60 F
If you look carefully, you’ll see four copies of the initial algorithm separated by turning instructions. If F now represents moving forward by 1/3 of the original segment length, when you execute these instructions, you’ll get the second image from the top. Try it! Recursing again gives the third image, and one more level of recursion results in the last image.
Thomas thought this pretty interesting, and proceeded to ask what would happen if we changed the angles. This wasn't hard to do, naturally, since the program was already written. He suggested a steeper climb of about 80 degrees, so I changed the angles to +80 and -140.
Surprise! You’ll easily recognize the first two iterations above, but after five iterations, the image closes up on itself and creates an elegant star-shaped pattern.
I was so intrigued by stumbling upon this symmetry, I decided to explore further over the upcoming weekend. My next experiment was to try +80 and -150.
The results weren’t as symmetrical, but after six levels of recursion, an interesting figure with bilateral symmetry emerged. You can see how close the end point is to the starting point — curious. The figure is oriented so that the starting points (red dots) line up, and the first step is directly to the right.
Another question Thomas posed was what would happen if the lengths of the segments weren’t all the same. This was a natural next step, and so I created an image using angles of +72 and -152 (staying relatively close to what I’d tried before), and using 1 and 0.618 for side lengths, since the pentagonal motifs suggested the golden ratio. Seven iterations produced the following remarkable image.
I did rotate this for aesthetic reasons (-24.7 degrees, to be precise). There is just so much to look at — and all produced by changing a few parameters in a straightforward recursive routine.
My purpose in writing about these "fractal" images this week is to illustrate the creative process in doing mathematics. This just happened a few days ago (as I am writing this), and so the process is quite fresh in my mind — a question by a student, some explorations, further experimentation, small steps taken one at a time until something truly wonderful emerges. The purist will note that the star-shaped images are not truly fractals, but since they're created with an algorithm designed to produce a fractal (the Koch snowflake), I'm taking a liberty here….
This is just a beginning! Why do some parameters result in symmetry? How can you tell? When there is bilateral symmetry, what is the “tilt” angle? Where did the -24.7 come from? Each new image raises new questions — and not always easy to answer.
Two weeks ago, this algorithm was collecting digital dust in a subdirectory on my hard drive. A simple question resurrected it — and resulted in a living, breathing mathematical exploration into an intensely intriguing fractal world. This is how mathematics happens.
## The Problem with Calculus Textbooks
Simply put, most calculus textbooks are written in the wrong order.
Unfortunately, this includes the most popular textbooks used in colleges and universities today.
This problem has a long history, and will not be quickly solved for a variety of reasons. I think the solution lies ultimately with high quality, open source e-modules (that is, stand-alone tutorials on all calculus-related topics), but that discussion is for another time. Today, I want to address a more pressing issue: since many of us (including myself) must teach from such textbooks — now, long before the publishing revolution — how might we provide students a more engaging, productive calculus experience?
To be specific, I’ll describe some strategies I’ve used in calculus over the past several years. Once you get the idea, you’ll be able to look through your syllabus and find ways to make similar adaptations. There are so many different versions of calculus taught, there is no “one size fits all” solution. So here goes.
1. I now teach differentiation before limits. The reason is that very little intuition about limits is needed to differentiate quadratics, for example — but the idea of limits is naturally introduced in terms of slopes of secant lines. Once students have the general idea, I give them a list of the usual functions to differentiate. Now they generate the limits we need to study — completely opposite of introducing various limits out of context that “they will need later.”
Students routinely ask, “When am I ever going to use this?” At one time, I dismissed the question as irrelevant — surely students should know that the learning process is not one of immediate gratification. But when I really understood what they were asking — “How do I make sense of what you’re telling me when I have nothing to relate it to except the promise of some unknown future problem?” — I started to rethink how I presented concepts in calculus.
I also didn’t want to write my own calculus textbook from scratch — so I looked for ways to use the resources I already had. Simply doing the introductory section on differentiation before the chapter on limits takes no additional time in the classroom, and not much preparation on the part of the teacher. This point is crucial for the typical teacher — time is precious. What I’m advocating is just a reshuffling of the topics we (have to) teach anyway.
2. I no longer teach the chapter on techniques of integration as a “chapter.” In the typical textbook, nothing in this chapter is sufficiently motivated. So here’s what I do.
I teach the section on integration by parts when I discuss volumes. Finding volumes using cylindrical shells naturally gives rise to using integration by parts, so why wait? Incidentally, I also bring center of mass and Pappus’ theorem into play, as they also fit naturally here. The one-variable formulation of the center of mass gives rise to squares of functions, so I introduce integrating powers of trigonometric functions here. (Though I omit topics such as using integration by parts to integrate unfriendly powers of tangent and secant — I do not feel this is necessary given any mathematician I know would jump to Mathematica or similar software to evaluate such integrals.)
I teach trigonometric substitution (hyperbolic as well — that for another blog post) when I cover arc length and surface area — again, since integrals involving square roots arise naturally here.
Partial fractions can either be introduced when covering telescoping series, or when solving the logistic equation. (A colleague recommended doing series in the middle of the course rather than at the end (where it would naturally have fallen given the order of chapters in our text), since she found that students' minds were fresher then — so I introduced partial fractions when doing telescoping series. I found this rearrangement to be a good suggestion, by the way. Thanks, Cornelia!)
3. I no longer begin Taylor series by introducing sequences and series in the conventional way. First, I motivate the idea by considering limits like
$\displaystyle\lim_{x\to0}\dfrac{\sin x-x}{x^3}=-\dfrac16.$
This essentially means that near 0, we can approximate $\sin(x)$ by the cubic polynomial
$\sin(x)\approx x-\dfrac{x^3}6.$
In other words, the limits we often encounter while studying L’Hopital’s rule provide a good motivation for polynomial approximations. Once the idea is introduced, higher-order — eventually “infinite-order” — approximations can be brought in. Some algorithms approximate transcendental functions with polynomials — this provides food for thought as well. Natural questions arise: How far do we need to go to get a given desired accuracy? Will the process always work?
I won’t say more about this approach here, since I’ve written up a complete set of Taylor series notes. They were written for an Honors-level class, so some sections won’t be appropriate for a typical calculus course. They were also intended for use in an inquiry-based learning environment, and so are not in the usual “text, examples, exercise” order. But I hope they at least convey an approach to the subject, which I have adapted to a more traditional university setting as well. For the interested instructor, I also have compiled a complete Solutions Manual.
I think this is enough to give you the idea of my approach to using a traditional textbook. Every calculus teacher has their own way of thinking about the subject — as it should be. There is no reason to think that every teacher should teach calculus in the same way — but there is every reason to think that calculus teachers should be contemplating how to make this beautiful subject more accessible to their students.
## Cryptarithms
One of the goals of creating this blog was to show you some cool math stuff you might not have seen before. So I thought I’d create a puzzle about this:
Just replace each letter with a digit from 0–9 so that the sum is correct. No number begins with a 0. One more thing: M + T = A. Good luck!
Perhaps you’ve never seen puzzles like this before — they’re called cryptarithms. At first they look impossible to solve — almost any assignment of numbers to letters seems possible. But not really. You’re welcome to try on your own first — but feel free to read on for some helpful advice.
Look at the last column (the units). If L + H + G ends in H, then L + G must end in a 0. Since L and G are distinct digits (so they cannot both be 0), L + G = 10. This doesn't completely determine L or G, but once you know one of the numbers, you know the other.
As a result, you also know there’s a carry over to the third column (tens). What does this mean? That 1 + O + T + O ends in C. Since O + O is an even number, this means that if T is even, then C is odd, and if T is odd, then C is even. We even know a bit more about T from looking at the sum: T is either 1 or 2, since adding three numbers less than 10,000 gives a sum less than 30,000.
What about the second column? O + A + L ends in A. We might be tempted to think that O + L must end in a 0, but that would mean that O + L is 10. This can’t be, since that would mean that G = O (remember that L + G = 10). Therefore there has to be a carry from the third column over to the second column, meaning O + L = 9. So O is one less than G.
Get the idea? There’s a lot of information you can figure out by looking at the structure of the letters in the sum. But it turns out that without the condition M + T = A, there are 12 solutions to this puzzle! Multiple solutions must occur here, for if you can solve this puzzle, you can also solve
The M and B occur just once, and at the beginning of numbers. This means if M = 7 and B = 9 in a solution, then putting M = 9 and B = 7 — with all other letters staying the same — will also produce a solution. In looking at all the solutions, I found that giving M + T = A results in a unique solution without giving values for specific letters.
That’s all the help you get! Sometimes you might just guess well and stumble onto a solution — but take the additional challenge and prove you’ve got the only solution to the cryptarithm.
How do you create a cryptarithm? There was a time I did so by hand — but those days are gone. Read more if you’d like to see how you can use programming to help you create these neat puzzles. (Of course there are online cryptarithm solvers, but that takes all the fun out of it!)
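Since the puzzle above appears only as an image, here is an illustration of the programming idea with the classic cryptarithm SEND + MORE = MONEY; the same brute-force approach, trying digit assignments and rejecting leading zeros, works for checking puzzles of your own:

from itertools import permutations

def value(word, d):
    return int("".join(str(d[c]) for c in word))

letters = "SENDMORY"                         # the 8 distinct letters in the puzzle
for digits in permutations(range(10), len(letters)):
    d = dict(zip(letters, digits))
    if d["S"] == 0 or d["M"] == 0:           # no number may begin with a 0
        continue
    if value("SEND", d) + value("MORE", d) == value("MONEY", d):
        print(value("SEND", d), "+", value("MORE", d), "=", value("MONEY", d))
# prints 9567 + 1085 = 10652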
## Hexominoes and Cube Nets
I have always been fascinated by polyominoes — geometrical shapes made by connecting unit squares edge to edge. (There’s a lot about polyominoes online, so take a few moments to familiarize yourself with them if they’re new to you.)
Today I’ll talk about hexominoes (using six unit squares), since I use them in the design of my current website. There are a total of 35 hexominoes — but I didn’t want all of them on my home page, since that seemed too cluttered. But there are just 11 hexominoes which can be folded into a cube — I did want my choice to have some geometrical significance! These are called nets for a cube, and formed a reasonable subset of the hexominoes to work with. Note that the count of 11 nets means that rotating or turning over a net counts as the same one. (And if you want an additional puzzle — show that aside from rotating or reflecting, there are just 11 nets for a cube.)
Now how should I arrange them? I also wanted to use the hexominoes for a background for other pages, so I thought that if I made a 6 by 11 rectangle with them, that would be ideal — I could just tile the background with rectangles.
This is not possible, however — I wrote a computer program to check (more later). But if you imagine shifting a row of the 6 by 11 rectangle one or two squares, or perhaps a column — you would still occupy 66 square units, and the resulting figure would still tile the plane. This would still be true if you made multiple row/column shifts.
So I wrote a program which did exactly that — made random row and column shifts of a 6 by 11 rectangle, and then checked if the 11 hexominoes tiled that figure. After several hours of running, I found one — the one you see on my home page. If you look carefully, you can see the row and column shifts for yourself.
Is this the only possibility? I’m not sure, but it’s the only one I found — and I liked the arrangement enough to use it on my website. If you look at some of the other pages — like one of my course websites — you’ll see a smaller version of this image tiling the background. However, to repeat the pattern in the background, I needed to make a “rectangular” version of the image:
The colors are muted since I didn’t want the background to stand out too much. And you’ll notice that some of the hexominoes leave one edge of the rectangle and “wrap around” the opposite edge. But if you look closely, you can definitely find all 11 hexominoes in this 6 by 11 rectangle.
This wasn’t my first adventure with hexominoes — a few years ago, I created a flag of Thailand since I was doing some workshops there. Flags are generally rectangular in shape.
But you can’t create a rectangle with the 35 hexominoes! Let’s see why not. Imagine a rectangle on a checkerboard or chessboard. When you place a hexomino, it will cover some black squares and some white squares.
Now some hexominoes will always cover an odd number of black and white squares — let’s call those odd hexominoes. The others — even hexominoes — cover an even number of black and white squares. As it turns out, there are 24 odd hexominoes and 11 even hexominoes. This means that any placement of all the hexominoes on a checkerboard will cover an even number of white squares and an even number of black squares.
However, any rectangle made from all 35 hexominoes contains 210 = 6 x 35 unit squares and so must cover 105 white squares and 105 black squares — both odd numbers of squares. But we just saw that's not possible — an even number of each must be covered. So no rectangles. This is an example of a parity argument, by the way, and is a standard tool when proving results about covering figures with polyominoes.
To overcome this difficulty, I threw in 6 additional unit squares so I could make a 12 x 18 rectangle — and to my surprise, I found out that the flag of Thailand has dimensions 2:3 as well. You can read more about this by clicking on “the flag of thailand” on the page referenced above — and see that the tiling problem can be solved with a little wiggle room. But no computer here — I cut out a set of paper hexominoes and designed the flag of Thailand by hand….
## CrossNumber Puzzles
This week, we’ll look at one of my favorite types of puzzles — CrossNumber Puzzles. These are like crossword puzzles, except that the clues describe numbers instead of words. The only rule is that no entry in a CrossNumber Puzzle can start with a “0.” You can try this one — but don’t worry if you get stuck. We’ll look at different ways you can go about solving it in just a moment.
How would you go about solving this puzzle? Try to look for the clues which give you the most information. For example, look at 1 Across and 3 Down. Now 1 Across is the cube of a two digit number, and its third digit is actually the first digit of the cube root. So we might want to print out a chart of all four-digit cubes of two-digit numbers:
10 1000    16 4096
11 1331    17 4913
12 1728    18 5832
13 2197    19 6859
14 2744    20 8000
15 3375    21 9261
You can see that the only possibility is that 1 Across is 4913 and 3 Down is 17.
Looking at 5 Across doesn’t help much, since there are too many possibilities.
But looking at 5 Down is a good next choice. Note that 9 Across has to start with 1 or 3 in order to fit four odd digits in the grid, but no perfect squares end in 3, and so no perfect fourth powers end in 3, either. This means that 9 Across has to start with 1 so that 5 Down ends in 1. To help figure out 5 Down, below is a list of four-digit fourth powers:
6 1296    8 4096
7 2401    9 6561
So 5 Down must be either 2401 or 6561. If it were 2401, then 6 Across would begin with a “0,” so that leaves 6561 as the only option for 5 Down.
I’ll leave it to you to complete the puzzle. I won’t post a solution so that you’re not tempted to peek — but if you add 2 Down and 6 Across when you’re done, you’ll get 157,991.
How can you make your own CrossNumber puzzle? Start by making a grid, and shade in some of the squares. Usually the pattern of shaded squares is symmetric, but it doesn’t have to be. Fill in some of the entries with numbers which have specific properties, like being a perfect square or cube. Or perhaps make one of the entries the product or sum of two others. The only limit is your imagination! It might help to continue reading below, since then you could print different charts and look at the numbers for something interesting. (And get another puzzle to solve, too.)
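For example, a few lines of Python will print charts like the ones above:

cubes   = [(n, n**3) for n in range(10, 100) if 1000 <= n**3 <= 9999]
fourths = [(n, n**4) for n in range(2, 100) if 1000 <= n**4 <= 9999]
print("four-digit cubes:        ", cubes)     # (10, 1000) up to (21, 9261)
print("four-digit fourth powers:", fourths)   # (6, 1296), (7, 2401), (8, 4096), (9, 6561)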
## Josef Albers and Interaction of Color
My first post on art is about Josef Albers and his use of color. An idea central to his work is that we do not see “individual” colors — but colors are always perceived in relationship to the surrounding colors.
Look at this image for a moment — what do you notice about the smaller rectangles? If you look carefully, you’ll notice that they are all the same color. It may look like some are darker than others — but that’s only because you’re seeing them against different background colors. Albers explored this idea in depth in his famous book, Interaction of Color.
So how can you create this image? Begin with a color specified as RGB — in this case, (0.7,0.6,0.3) — to use as the color of the smaller rectangles. Remember that (0,0,0) is black and (1,1,1) is white; using values for R, G, and B between 0 and 1 will allow you to produce millions of different color combinations.
Now use your computer’s random number generating ability to create three random numbers between -0.3 and 0.3. For this example, I’ll use -0.2, -0.1, and 0.3. Subtract these values from (0.7,0.6,0.3) to get the RGB values for the left larger rectangle — you get (0.9,0.7,0.0). Then add these random numbers to (0.7,0.6,0.3) to get (0.5,0.5,0.6) — these are the RGB values for the right larger rectangle. Notice that this procedure assigns RGB values to the smaller rectangles which are the numerical averages of the RGB values of the colors of the surrounding rectangles.
Keep in mind that this is an arithmetic mixing of colors. If you actually had paints which were the colors of the larger rectangles, mixing them would likely not give you the color of the smaller rectangles. Also keep in mind that Interaction of Color was published in 1963, so that Albers’ ideas were developed with paper and pigment. What I’ve done is adapted Albers’ ideas for use on a computer — and interpreted his ideas as you see here. Color theory is a very complex field, and is widely written about on the internet — so if these ideas intrigue you, start searching!
Yin Yang IV, 8” x 8”
As a final example, here is a piece incorporating these ideas about color with an abstract yin/yang motif. Visit my art website if you’d like to see some additional images with commentary.
Now we’ll get to the specifics of actually implementing the ideas mentioned above. I’ve decided to use Python for these examples since it is a language growing in popularity and is open source. Also, I’m using Python in a Sage environment because it’s open source, too — and you don’t need to download or install anything. You can just open a Sage worksheet in your browser. Sometimes it’s a little slow, so be patient. You can download Sage onto your own computer if you’d like to speed things up.
Here is the link to the interactive color demo. It’s fairly self-explanatory — and you don’t need to know any Python to use the sliders. Just hit shift+enter as explained in the instructions.
One thing to be careful about, though. If you choose a red value of 0.8, and then choose to vary this value by 0.4 — you’ll get red values of 0.4 and 1.2. But since RGB values are between 0 and 1, the 1.2 is “truncated” to 1.0, so you’re really working with 0.4 and 1.0. This means that 0.8 is no longer the average of the two red values — so the “Albers” effect won’t be so pronounced, and may be absent if your values are too far off. Select your values carefully!
I’ve tried to make the code fairly straightforward, so if you know a little about Python or programming in general, you should be able to make some of your own changes. You’ll have to make a Sage account and copy my code to one of your own projects in order to make changes.
This blog is designed to give you ideas to think about, not be a tutorial. So I won’t be teaching you Python. If you need to understand the basics of the RGB color space — well, just look it up. There’s also plenty online about Josef Albers. Go in any direction you like.
But if you are going to play with the Python code to make more complex images, here’s a suggestion. Whenever you are about to type in RGB values, pause for a moment and ask why you’re choosing those particular values. Use color deliberately. This will make all the difference in the world — and may well be the difference between making a digital image, and creating digital art.
## What Is Mathematics?
Mathematics is creative.
Unfortunately, this is lost upon many — if not most — students of mathematics, in large part because their teachers may not understand mathematical creativity, either. One way to address this issue is to have students write and solve their own original mathematics problems. This seems daunting at first, until students realize they are more creative than they were led to believe. (I’ll discuss this more in a later post.)
The difficulty is that the creative dimension of mathematics is a bit elusive. Give a child crayons and ask her to draw a picture, sure — but give a student some ideas and ask him to create a new one? To appreciate mathematical creativity, you need some understanding of the abstract nature of mathematics itself. To create mathematics, you need imagination much like you do in any of the arts — or other sciences, for that matter.
Over the years, I’ve created my fair share of mathematics. How much of it is really new is hard to determine — how do you know if any of the billions of other people in the world already created something you did? (Proof by internet search notwithstanding.)
This blog is about sharing some of my ideas, problems, and puzzles. Some were created years ago, some are new — and I will consider myself lucky if some are entirely original. I truly did have fun creating them, and I enjoy writing about them now.
I’m hoping to convey an enthusiasm for mathematics and its related fields — in other words, all human knowledge — and to share something of the creative process as well. The creation of mathematics is not a mystical process, and needs no explanation to a mathematician. But we can surely do more to make this enlivening process accessible to all in a time when it is certainly necessary.
As you follow, you’ll notice a heavy emphasis on programming. Every student should learn to program — and in more than one language. Perhaps this should be an axiom in the 21st century, but we’re not even close. So many of the tools I use are virtual — the ability to write code to perform various tasks is essential to my creative process, as you’ll see. In fact, many posts will have links to Python programs in the Sage platform (don’t worry if you don’t know what these are yet). These tools are all open source, and available to anyone with internet access.
Finally, blog posts will usually have a “Continue reading…” section. Some posts (like this one) will be essays on teaching, creativity, or a related topic. Since not everyone may be so philosophically minded, the “Continue reading…” sections of these essays will be a puzzle or game. Enjoy!
|
# Data types part 4: Logical class
November 30, 2012
By
(This article was first published on R for Public Health, and kindly contributed to R-bloggers)
First, an update: A commentator has asked me to post my code so that it is easier to practice the examples I show here. It will take me a little bit of time to get all of my code for past posts well-documented and readable, but I have uploaded the code and data for the last 4 posts, including this one, here:
Unfortunately, I could not find a way to attach it to blogger, so sorry for the extra step.
_________________________________________________________________________
Ok, now on to Data types part 4: Logical
I started this series of posts on data types by saying that when you have a dataframe like this called mydata:
you can't do this in R:
Age<25
Because Age does not exist as an object in R, and you get the error below:
But then what happens when I do,
mydata$Age<25

This is perfectly legal to do in R, but it's not going to drop observations. With this kind of statement, you are asking R to evaluate the logical question "Is it true that mydata$Age is less than 25?". Well, that depends on which element of the Age vector, of course. Which is why this is what you get when you run that code:
On first glance, this looks like a character vector. There is a string of entries using character letters after all. But it's not character class, it's the logical class. If you save this string of TRUE and FALSE entries into an object and print its class, this is what you get:
The logical class can only take on two values, TRUE or FALSE. We've seen evaluations of logical operations already, first in subsetting, like this:
mysubset<-mydata[mydata$Age<40,]

Check out my post on subsetting if this syntax is confusing. In a nutshell, R evaluates all rows and keeps only those that meet the criteria: here, only the rows where Age is under 40, along with all columns. Or here, in ifelse() statements:

mydata$Young<-ifelse(mydata$Age<25,1,0)

More on ifelse() statements here. The ifelse() function is really useful, but is actually overkill when you're just creating a binary variable. This can be done faster by taking advantage of the fact that logical values of TRUE always have a numeric value of 1, while logical values of FALSE always have a numeric value of 0. That means all I need to do to create a binary variable of under age 25 is to convert my logical mydata$Ageunder25 vector into numeric. This is very easy with R's as.numeric() function. I do it like this:
mydata$Ageunder25_num<-as.numeric(mydata$Ageunder25)
or directly without that intermediate step like this:
mydata$Ageunder25_num<-as.numeric(mydata$Age<25)
Let's check out the relevant columns in our dataframe:
We can see that the Ageunder25_num variable is an indicator of whether the Age variable is under 25.
Now the really, really useful part of this is that you can use this feature to turn on and off a variable depending on its value. For example, say you got your data and realized that some of the height values were in inches and some were in centimeters, like this:
Those heights of 152 and 170 are in centimeters while everything else is inches. There are various ways to fix it, but one way is to check which values are less than, say 90, which is probably a safe cutoff and create a new column that keeps those values under 90 but converts the values over 90. We can do this in this way:
mydata$Height_fixed_in<- as.numeric(mydata$Height_wrong<90)*mydata$Height_wrong + as.numeric(mydata$Height_wrong>=90)*mydata$Height_wrong/2.54

So the first half of the calculation (the part before the +) is "turned on" when Height_wrong is less than 90, because the value of the logical statement is a numeric TRUE, i.e. a 1, and this value of 1 is multiplied by the original Height column. The second part of the statement (after the +) is FALSE and so is just 0 times something, so it's 0. If the Height_wrong column is 90 or greater, then the first half is just 0 and the second half is turned on, and thus the Height_wrong variable is divided by 2.54, converting centimeters into inches. We get the result below:

Another useful way to use the as.numeric() and logical classes to your advantage is a situation like this: I have in my dataset the age of the last child born (and probably other characteristics of this child not shown), and then just the number of other children for each woman. I want to get a total number of children variable. I can do it simply in the following way.

First, a note about the is.na() function. If you want to check if a variable is missing in R, you don't use syntax like "if variable==NA" or "if variable==.". This is not going to indicate a missing value. What you want to use instead is is.na(variable), like this:

is.na(newdata$Child1age)
Which gives you a logical vector that looks like this:
If you want to check if a variable is not missing, you use the ! sign (meaning "Not") in front and check it like this:
We've seen this kind of thing before! Now we can translate this logical vector into numeric and add it to the number of other children, like this:
newdata$Totalnumchildren<-as.numeric(!is.na(newdata$Child1age))+newdata$Numotherchildren

We get the following: If we want to get those NAs to be 0, we can again use the is.na() function and replace wherever Totalnumchildren is missing with a 0, like this:

newdata$Totalnumchildren[is.na(newdata$Totalnumchildren)]<-0
|
# Checking logarithm inequality.
Which one of the following is true.
$(a.)\ \log_{17} 298=\log_{19} 375 \quad \quad \quad \quad (b.)\ \log_{17} 298<\log_{19} 375\\ (c.)\ \log_{17} 298>\log_{19} 375 \quad \quad \quad \quad (d.)\ \text{cannot be determined}$
$17^{2}=289$, which differs from $298$ by $9$, and $19^{2}=361$, which differs from $375$ by $14$.
I am not aware of any method, if there is one, to check such problems.
I would also prefer a method without calculus unless necessary.
I am looking for a short and simple way.
I have studied maths up to $12$th grade.
• I wonder if a typo happened here. Might someone have intended $\log_{17}289$? Since $17^2=289$, we get $\log_{17}289=2$. Then we could say that since $19^2=361$ we have $2 = \log_{19}361 < \log_{19} 375$. ${}\qquad{}$ – Michael Hardy Jul 26 '15 at 23:07
• No typo there . – R K Jul 26 '15 at 23:09
• $\log_{18} 335$ and $\log_{18} 336$ are between the two numbers. Does this help? – peterwhy Jul 26 '15 at 23:13
• This can of course be done with a calculator, but I can't help suspecting that some intelligent method was intended to be used instead. – Michael Hardy Jul 26 '15 at 23:22
• @peterwhy : Did you conclude that $\log_{18}335$ is between these two simply by numerical computation (in which case, why bother with so indirect an approach?) or do you have some intelligent reason to draw that conclusion? ${}\qquad{}$ – Michael Hardy Jul 26 '15 at 23:31
Let $x=\log_{17}{298}, y=\log_{19}{375}$.
By definition of logarithms,
$17^x = 298$ and $19^y=375$
So
$17^{x-2} = \dfrac{298}{289} = 1 + \dfrac{9}{289} \tag{1}$ and $19^{y-2}=\dfrac{375}{361} = 1 + \dfrac{14}{361} \tag{2}$.
Now take natural logarithms
$(x-2)\ln{17} = \ln(1+\dfrac{9}{289}) \approx \dfrac{9}{289} \tag{3}$ and $(y-2)\ln{19} = \ln(1+\dfrac{14}{361}) \approx \dfrac{14}{361} \tag{4}$
From $\ln{19} < \ln{17}\left(1+\frac{2}{17}\right)$ and $\dfrac{14}{361} \times \dfrac{17}{19} > \dfrac{9}{289}$
we can say $\dfrac{\frac{14}{361}}{\ln{19}} > \dfrac{\frac{9}{289}}{\ln{17}}$
Then by equations (3), (4) we have $y-2 > x-2$ or $\boxed{\log_{19}{375} > \log_{17}{298}}$.
The following method does not use approximate calculations.
First of all note that $\ln 19 <\frac{7}{6}\ln 17$.
$$\log_{17} 298 \vee \log_{19} 375$$ $$\frac{\ln 298}{\ln 17}\vee \frac{\ln 375}{\ln 19}$$ $$\frac{\ln 298}{\ln 17}-2\vee \frac{\ln 375}{\ln 19}-2$$ $$\frac{\ln \frac{298}{17^2}}{\ln 17}\vee \frac{\ln \frac{375}{19^2}}{\ln 19}$$ $$\frac{\ln 17}{\ln\frac{298}{17^2}} \overline{\vee} \frac{\ln 19}{\ln\frac{375}{19^2}}$$ Now we use $\ln 19 <\frac{7}{6}\ln 17$ (we will prove that the left number is bigger than the right number) $$\frac{1}{\ln\frac{298}{17^2}} \overline{\vee} \frac{\frac{7}{6}}{\ln\frac{375}{19^2}}$$ $$\frac{6}{\ln\frac{298}{17^2}} \overline{\vee} \frac{7}{\ln\frac{375}{19^2}}$$ $$6\ln\frac{375}{19^2}\overline{\vee} 7\ln\frac{298}{17^2}$$ $$\left(\frac{375}{19^2} \right )^6\overline{\vee} \left(\frac{298}{17^2} \right )^7$$ It is painful but possible to calculate without a calculator that $\left(\frac{375}{19^2} \right )^6> \left(\frac{298}{17^2} \right )^7.$ Therefore we have $\log_{17} 298 < \log_{19} 375.$
• You might want to explain why $\ln 19 <\frac{7}{6}\ln 17$ without using the $\ln$ function directly. – Marconius Aug 7 '15 at 13:20
• It is not a problem. $$\ln 19 \vee \frac{7}{6}\ln 17$$ $$6\ln 19 \vee 7\ln 17$$ $$\ln 19^6 \vee \ln 17^7$$ $$19^6<17^7$$ – Tzara_T'hong Aug 7 '15 at 13:29
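For reference, here is a quick numerical check of these claims (not part of the original answers, which avoid a calculator on purpose):

import math
print(math.log(298, 17), math.log(375, 19))    # about 2.011 vs 2.013, so (b.) holds
print(19**6 < 17**7)                           # True: 47045881 < 410338673
print((375/361)**6 > (298/289)**7)             # True: about 1.256 vs 1.239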
|
# GMAT Diagnostic Test Question 42
Founder, posted 29 Sep 2013, 22:00
GMAT Diagnostic Test Question 42
Field: Algebra
Difficulty: 700
If x is an integer and 9 < x^2 < 99, then what is the maximum possible value of x minus the minimum possible value of x?
A. 5
B. 6
C. 7
D. 18
E. 20
Founder, posted 29 Sep 2013, 22:01
Explanation
Notice that $$x$$ can take positive as well as negative values to satisfy $$9<x^2<99$$, hence $$x$$ can be: -9, -8, -7, -6, -5, -4, 4, 5, 6, 7, 8, or 9. We are asked to find the value of $$x_{max}-x_{min}$$, and since $$x_{max}=9$$ and $$x_{min}=-9$$, then $$x_{max}-x_{min}=9-(-9)=18$$. Answer: D.
Manager, posted 18 Aug 2014, 21:38
bb wrote:
Explanation
Notice that $$x$$ can take positive as well as negative values to satisfy $$9<x^2<99$$, hence $$x$$ can be: -9, -8, -7, -6, -5, -4, 4, 5, 6, 7, 8, or 9. We are asked to find the value of $$x_{max}-x_{min}$$, and since $$x_{max}=9$$ and $$x_{min}=-9$$, then $$x_{max}-x_{min}=9-(-9)=18$$.
Can you explain how Xmin = (-9)? IMO Xmin = (-2). Where did I go wrong?
Manager, posted 18 Aug 2014, 21:45
Ashishmathew01081987 wrote:
Can you explain how Xmin = (-9)? IMO Xmin = (-2). Where did I go wrong?
My understanding is that since 9 < x^2, it implies that (+/-) 3 < x and since x has to be an integer x = (+/-) 2.
Also, since X^2 < 99, it implies that x < (+/-) 9 but x cannot be less than (-9) since (-3) < x
So Xmax = 9 and Xmin = -2
therefore Xmax - Xmin = 9 -(-2) = 11
Math Expert, posted 19 Aug 2014, 02:49
Ashishmathew01081987 wrote:
My understanding is that since 9 < x^2, it implies that (+/-) 3 < x and since x has to be an integer x = (+/-) 2.
Also, since X^2 < 99, it implies that x < (+/-) 9 but x cannot be less than (-9) since (-3) < x
So Xmax = 9 and Xmin = -2
therefore Xmax - Xmin = 9 -(-2) = 11
The easiest way to check your reasoning is to plug -9 in and see whether the inequality holds: 9 < (-9)^2 = 81 < 99 (x cannot be -10 because 10^2 = 100 > 99). So, the least value is -9, not -2 (notice that -9 is less than -2).
Also:
$$9 < x^2$$ means that $$x < -3$$ or $$x > 3.$$
$$x^2 < 99$$ means that $$-\sqrt{99}<x<\sqrt{99}$$
It seems that you need to brush up on the fundamentals of inequalities.
Hope this helps.