text
stringlengths 104
605k
|
---|
Where results make sense
About us | Why use us? | Reviews | PR | Contact us
# Topic: 73 (number)
73 (number) - Wikipedia, the free encyclopedia 73 (seventy-three) is the natural number following 72 and preceding 74. 73 is a repdigit in base 8 (111). The Saros number of the solar eclipse series which began on 717 BC July 16 and ended on 582 September 3. en.wikipedia.org /wiki/73_(number) (376 words)
Prime Curios!: 73 The number formed by the concatenation of odd numbers from 73 down to 1 is prime. The prime number 73 is the repunit 111 in octal (base 8) and the palindrome 1001001 in binary (base 2). 73 is the smaller prime factor of 10001. primes.utm.edu /curios/page.php/73.html (570 words)
JEPTER, Volume 73, Number 5, 2000 A. V. LUIKOV'S SCIENTIFIC LEGACY (ON THE 90TH ANNIVERSARY OF HIS BIRTH) (via ... (Site not responding. Last check: 2007-10-09) In their works, A. Luikov and his disciples showed that a rigorous formulation of problems of convective heat exchange in the interaction of the bodies' surface with the environment corresponds to the boundary conditions not of the third kind, as was usually assumed earlier, but of the fourth one. Based on the experimental material on the dependence of the Rebinder number on moisture content, approximate methods were developed for calculating the average integral temperature of the material, knowledge of which is necessary to create a drying technology since the temperature of the material is in many cases a determining factor. Furthermore, it is considered that the temperature of the moisture in the capillaries of the body equals the temperature of the capillaries' walls during the entire process of heat and mass transfer, which is true only for diffusion transfer. www.itmo.by.cob-web.org:8888 /jepter/732000e/730869.html (4083 words)
The Number 37 - The Heart of Wisdom The Number 37 is geometrically integrated with the Number 73 - it is the Hexagonal heart of 73 as Star, so 37 / 73 is a Hexagon / Star pair. The Number 37 is the only such number less than a thousand, and this is the Number found in the very heart of Genesis 1.1. The numbers immediately surrounding it - 36 and 38 - coincide precisely with the expected value of about 3.4 multiples whereas the power set contains 23 combinations that are multiples of 37. www.biblewheel.com /GR/GR_37.asp (661 words)
The Number 73 The prime Star Number 73 - with its palindromic Hexagonal pair 37 - forms the basis of the Creation Holograph: The Numbers 37 and 73 are the Ordinal and Standard values of Wisdom: The Number 73 - along with the closely related prime 373 - forms the basis of Logos Holograph, where its meaning shines forth with particular clarity in such identies as: www.biblewheel.com /GR/GR_73.asp (249 words)
The Prime Glossary: Sierpinski number In 1960 Sierpinski showed that there were infinitely many such numbers k (all solutions to a family of congruences), but he did not explicitly give a numerical example. The congruences provided a sufficient, but not necessary, condition for an integer to be a Sierpinski number. It is conjectured that 78557 is the smallest Sierpinski number becase for most of the smaller numbers we can easily find a prime (in fact, for about 2/3rds of the numbers k there is a prime with n less than 9). primes.utm.edu /glossary/page.php?sort=SierpinskiNumber (470 words)
500 (number) - Wikipedia, the free encyclopedia (via CobWeb/3.1 planetlab2.cs.umd.edu) (Site not responding. Last check: 2007-10-09) This number is the magic constant of n×n magic square and n-Queens Problem for n = 10. The number of keyboard sonatas written by Domenico Scarlatti, according to the catalog by Ralph Kirkpatrick. The sum of the largest prime factors of the first 558 is itself divisible by 558 (the previous such number is 62, the next is 993). en.wikipedia.org.cob-web.org:8888 /wiki/500_(number) (1677 words)
Volume 73, Number 1, January 1999 Synopsis of Articles The three-year lectionary is one of the hermeneutical techniques developed by the church to assist the faithful in probing the mystery of the Scriptures. Ramshaw expresses concern that when a parish reduces the number of readings from three to one, the resulting scriptural minimalism opens the door to fundamentalist-type preaching and simplistic thematic liturgies. Even when all three readings are proclaimed but only one is considered in the homily, the temptation remains for planners and preachers to find in the Bible warrant for whatever they wanted to say anyway. www.saintjohnsabbey.org /worship/worship/jan99a.html (1658 words)
Campaign Finance Reform — Reference Shelf - Volume 73, Number 1 - Preface During the 2000 election year, more than $3 billion were raised by national and local campaigns, and the two major parties spent record-breaking amounts on their presidential candidates. Although one may be tempted to characterize the steady decrease in the number of Americans who exercise their right to vote as evidence of an apathetic populace, many of those who abstain suffer instead from a sense of hopelessness in the face of widespread corruption in politics. There is a general perception that monied interests have acquired so much power to sway politicians that the concerns of individual citizens have been forgotten. www.hwwilson.com /print/refshelf73_1preface.html (628 words) Number 73 TV theme lyrics (Site not responding. Last check: 2007-10-09) Move your feet, down the street, come inside and see a-73. When it's through that door with the 73. Series 7 and 8 of No. 73 had a remix of the familiar theme (by Ray Shulman). www.cfhf.net /lyrics/number73.htm (196 words) Quota Notes Number 73 Having marked the ballot-paper as they would for the system of preferential voting that has been well established in Federal, State and Municipal elections in Australia since the 1920s, and noting that the poll is conducted by the Commission, voters could reasonably expect the results to be determined in a customary Australian manner. "The preference votes to the number of vacancies to be filled shall be termed primary votes, and shall have equal value in the first count and be credited to the candidate for whom they are cast, whether marked 1, 2, 3, etc., according to the number of vacancies;". Nearly all councils have been restructured so that they are not subdivided into wards, resulting in a noticeable increase in the number of candidates and a big decrease in uncontested polls. www.cs.mu.oz.au /~lee/prsa/qn/73.html (3121 words) number 30/73 on Flickr - Photo Sharing! The edition number was determined by the number of sheets the ink bled through from the possible 500. The numbering of each sheet corresponds to the position it was within the stack and also determined its value. The final sheet the ink reached, (furthest from the top) was numbered 1 / 73 and valued at £1, the one above numbered 2 / 73 and valued at £2 etc. The top sheet (the sheet the pens rested on) was numbered 73 / 73 and valued at £73 www.flickr.com /photos/frauclouds/291615636 (168 words) 73bus I could tell you so many stories relating to route 73 that I think a book could be written. All the filmmakers will be at the event and a number of Routemaster drivers and conductors will also be invited as special guests. I thought you might be interested in a song mourning the demise the number 38 bus (it's the last weekend of this routemaster!). 73bus.typepad.com (2007 words) MCOM - Volume 73, Number 245 Class numbers of some abelian extensions of rational function fields Chebyshev's bias for composite numbers with restricted prime divisors All numbers whose positive divisors have integral harmonic mean up to$\mathbf{300}\$ www.ams.org /mcom/2004-73-245 (231 words)
ESPN.com: MLB - More power to him: Bonds wraps season with 73 homers (Site not responding. Last check: 2007-10-09) SAN FRANCISCO -- On the final day of the season, Barry Bonds made an odd number a remarkable one -- 73. He finished the season with a slugging percentage of.863, easily surpassing the record of.847 set by Babe Ruth in 1920. His primary motivation is to win, and he doesn't want to saddle the Giants with a number that would prohibit us from having that chance. espn.go.com /mlb/news/2001/1007/1260805.html (1004 words)
73 Camaro Number - Team Camaro Tech Is there anyway to tell if a 73 Camaro is a true Z28 from these numbers? What is the PPG number for the correct underhood fl - 1967 Camaro GM parts number for 68 camaro quarters and door panel www.camaros.net /forums/showthread.php?p=621268#post621268 (113 words)
Walking the Line: Activities for the TI-73 Number Line App at Academic Superstore The teacher notes that accompany each activity provide the teacher with instructions for using the number line application, as well as providing sample responses and solutions. We realize that there is a lot of reading for the busy classroom teacher, but sometimes we just had so much to say and didn't know what to edit out. The activities do not even begin to address all of the capabilities of the Number Line application for addressing important upper elementary and middle school mathematics topics. www.academicsuperstore.com /market/marketdisp.html?PartNo=711285 (355 words)
Number 73 (via CobWeb/3.1 planetlab2.cs.umd.edu) (Site not responding. Last check: 2007-10-09) The first link I wrote was for Railway Modeller magazine ("Great Britain's leading model railway magazine") then after a while I changed it to Ferret Central, a website that's absolutely essential reading for those with a fascination for those delightful little creatures. On the advice of a friend who has a Ph.D in Applied Trivia, I changed it once again to a site that aims to promote interest in the London Routemaster bus in general, and those that ply the number 73 route in particular. I agree that this is a much neglected area of interest that deserves promotion to a wider public. www.obm.pwp.blueyonder.co.uk.cob-web.org:8888 /73.htm (220 words)
Number 73 (Site not responding. Last check: 2007-10-09) Welcome to the web's premier shrine to TVS's fondly remembered 1980's Saturday morning show. Over its seven year run, No. 73 provided a welcome change from the normal magazine-programme style of Saturday tv. Although it featured much the same kind of stuff - cartoons, competitions, celebrities and bands - it was framed within a comical narrative. www.paulmorris.co.uk /73 (78 words)
Math Tool: Walking the Line: Activities for the TI-73 Number Line App (Site not responding. Last check: 2007-10-09) These activities are intended to help students use the number line and the fraction line to develop both an operations and number sense. Several activities focus on operations with the integers, such as the mystifying process of subtracting negative integers, providing students a tool for interpreting processes and understanding results. Other activities focus on using the number and fraction line for various types of skip counting; it's not just for whole numbers anymore! mathforum.org /mathtools/tool/5854 (183 words)
J Am Acad Relig -- Table of Contents (June 2005, 73 [2]) J Am Acad Relig 2005 73: 307-327; doi:10.1093/jaarel/lfi038 J Am Acad Relig 2005 73: 329-359; doi:10.1093/jaarel/lfi039 J Am Acad Relig 2005 73: 361-393; doi:10.1093/jaarel/lfi040 jaar.oxfordjournals.org /content/vol73/issue2 (694 words)
J Am Acad Relig -- Table of Contents (September 2005, 73 [3]) J Am Acad Relig 2005 73: 615-635; doi:10.1093/jaarel/lfi072 J Am Acad Relig 2005 73: 637-657; doi:10.1093/jaarel/lfi073 J Am Acad Relig 2005 73: 659-684; doi:10.1093/jaarel/lfi074 jaar.oxfordjournals.org /content/vol73/issue3 (774 words)
PBY-5 Catalina Dutch Number 28-5MNE Y#73 Number 44 PBY-5 Catalina Dutch Number 28-5MNE Y#73 Number 44 This PBY was an ex-Dutch Model, 28-5MNE Catalina, Y # 73. Thanks to Lou Dorny / The Baltic Group Archive for this information. www.pacificwrecks.com /aircraft/pby/28-5MNE-73.html (212 words)
USC-MSA Compendium of Muslim Texts Allah's Apostle never proceeded (for the prayer) on the Day of 'Id-ul-Fitr unless he had eaten some dates. Anas also narrated: The Prophet used to eat odd number of dates. The Prophet said, "Whoever slaughtered (his sacrifice) before the 'Id prayer, should slaughter again." A man stood up and said, "This is the day on which one has desire for meat," and he mentioned something about his neighbors. www.usc.edu /dept/MSA/fundamentals/hadithsunnah/bukhari/015.sbt.html (2766 words)
Storm Prediction Center PDS Tornado Watch 73 (Site not responding. Last check: 2007-10-09) Initial List of Counties in Watch 73 (WOU) Note: The expiration time in the watch graphic is amended if the watch is replaced, cancelled or extended. SEL3 URGENT - IMMEDIATE BROADCAST REQUESTED TORNADO WATCH NUMBER 73...CORRECTED NWS STORM PREDICTION CENTER NORMAN OK 1155 AM CST SUN MAR 12 2006 CORRECTED FOR WATCH REPLACEMENTS THE NWS STORM PREDICTION CENTER HAS ISSUED A TORNADO WATCH FOR PORTIONS OF Top of Page/Status Messages for this watch/All Current Watches/Forecast Products/Home www.spc.noaa.gov /products/watch/ww0073.html (106 words)
MCOM - Volume 73, Number 246 The holomorphic flow of the Riemann zeta function On the multidimensional distribution of the subset sum generator of pseudorandom numbers An estimate for the number of integers without large prime factors www.ams.org /mcom/2004-73-246 (297 words)
MCOM - Volume 73, Number 248 Reducing the construction cost of the component-by-component construction of good lattice rules Canonical vector heights on K3 surfaces with Picard number three--- An argument for nonexistence Obstacles to the torsion-subgroup attack on the decision Diffie-Hellman Problem www.ams.org /mcom/2004-73-248 (216 words)
Children's TV - The Never Ending Story to Number 73 (via CobWeb/3.1 planetlab2.cs.umd.edu) (Site not responding. Last check: 2007-10-09) Children's TV - The Never Ending Story to Number 73 (via CobWeb/3.1 planetlab2.cs.umd.edu) N - THE NEVER ENDING STORY to NUMBER 73 Early eighties Saturday morning entertainment ala TISWAS but supposedly set in a terraced house (the number 73 of the title). www.memorabletv.com.cob-web.org:8888 /kidstvn.htm (1232 words)
SAHIH BUKHARI, BOOK 73: Good Manners and Form (Al-Adab) (Site not responding. Last check: 2007-10-09) SAHIH BUKHARI, BOOK 73: Good Manners and Form (Al-Adab) A man came to Allah's Apostle and said, "O Allah's Apostle! The Prophet got up to accompany her, and when they reached the gate of the mosque opposite the dwelling place of Um Sa www.islamicity.com /mosque/sunnah/bukhari/073.sbt.html (14027 words)
Try your search on: Qwika (all wikis)
About us | Why use us? | Reviews | Press | Contact us |
# Why do we use dB to represent the difference between two voltages?
For example, the passband of a LC resonant circuit is the differences of frequency at +3db and -3db.
Why do we prefer dB?
• the "3 dB point" is actually 10⋅log10(1/2) = -3.0102999566398... dB. It's chosen because 1/2 power is exactly where the asymptotes meet if you plot it on a log-log plot (I believe). – endolith May 29 '11 at 18:16
• dB doesn't represent the difference, but rather the ratio. It is another way to write percentage. For an attenuator, "power reduced to 50%" and "power reduced by 3dB" mean the same thing, but put two attenuators in series and 3dB + 3dB is easier to computer than 50% * 50%. – markrages Jun 2 '11 at 22:33
• Neper (Np) is pretty common too, especially in RF engineering. Neper's are like dB's, though based on ln(value) instead of 20.log(value). – jippie Oct 18 '12 at 19:16
Many processes in nature are either of logarithmic nature (like human senses) or have a great dynamic range.
Describing them on a logarithmic scale and expressing differences in dB has several advantages:
• often the absolute difference doesn't matter, but the ratio (that's what dB is used for) does (e.g. signal-to-noise ratio)
• smaller numbers can be used
• there's an approximately linear relation between measurement and perceived sensation
• chained attenuations or amplifications can be expressed by addition instead of multiplication (easier to calculate in the head)
In many cases, voltage ratios are expressed in terms of dB rather than absolute numbers because there are many relationships which end up being linear when expressed in terms of dB. It is simpler, for example, to say that an N-stage low-pass filter will attenuate frequencies above the cutoff by $(6 \times N) \frac{dB}{octave}$ than it is to say that it will attenuate frequencies above the cutoff by a ratio of $({\frac{f_c}{f}})^N$.
• Right idea, but mixing two concepts. First, dB is a ratio relative to some reference, not an absolute value. Second, as you point out, it's a logarithmic representation of that ratio rather than a linear one. – Chris Stratton May 30 '11 at 18:18
• @Chris Stratton: By "absolute number" I didn't mean an absolute quantity, but rather a "bare" number without a dB suffix, as distinct from one with such a suffix. I should also have mentioned that it's easier to compare things that attenuate by 40, 50, 60, and 120dB than things which scale a signal by 0.01, 0.0033, and 0.001, and 0.000001. – supercat May 31 '11 at 16:21
• the word you want is 'linear' not 'absolute' – Chris Stratton May 31 '11 at 16:31
dB is useful since it is a relative expression. +/-3dB is a doubling or halving of power.
dB are often used because the human senses have a logarithmic response, to increase the dynamic range.
Around 3dB gives a sensation of doubling or halving the stimulus, as well as doubling or halving the physical value. That value seems to apply to all human senses, and is one reason why 3dB is so ubiquitous. Psychophysics, a branch of experimental psychology, has a long history of investigating this stuff. The minimal amount of change that can be detected is around 1dB (the Just Noticeable Difference or JND). 0dB is the absolute threshold, below which the stimulus isn't detected.
• do you mean the change of voice? – Jichao May 29 '11 at 18:14
• "0dB is the absolute threshold, below which the stimulus isn't detected." What do you mean by this? 0 dB is by definition a ratio of 1 - ie, no change. Do you perhaps mean 0 dB relative to some reference power? – Chris Stratton May 30 '11 at 7:44
• @Chris - it's not 0 dB relative to some reference, but 0 dB as a reference. We talk about sound levels of 90 dB, but that's always compared to some other level. Which is the 0 dB level. There are several dB scales each with its own reference. On the dBm scale for instance 0 dB is 775mV in 600 Ohm, or 1 mW. – stevenvh May 30 '11 at 9:50
• @stevenh If you talk aboout 0dB as a reference, you are mis-speaking. dB is always a ratio, relative to some reference. "x dBm" is a power stated by its ratio to a milliwat, but "x dB" is only a ratio, since no reference is given. You must give a reference to state a power logarithmically. – Chris Stratton May 30 '11 at 15:29
• @stevenh "Everybody working with dBm knows what this reference is" yes, they know what the reference is because the 'm' in 'dBm' refers to the milliwatt reference. But if someone just says 'dB' there is no reference. You can perhaps argue that 'dB SPL' encodes a reference power in the definition of 'SPL' - but 'dB' by itself is a ratio, and it's improper to use it as a power. – Chris Stratton May 30 '11 at 18:11 |
weird alignment results for RNAseq data
0
0
Entering edit mode
9 months ago
Sara ▴ 220
I have RNAseq data for 2 samples and trying to do paired end alignment. I tried hisat2 and star for the alignment but the results are weird to me. First I used star (using the command that I used many times before) and sam file seems to be normal but when I converted to bam file and loaded it to IGV there was no read. Looking at the bam file also I did not find any reads. Then I tried hisat2 and I got the same problem for the bam file. Do you know what could be the problem. Here is the stat from hisat2:
14631020 reads; of these: 14631020 (100.00%) were paired; of these:
424695 (2.90%) aligned concordantly 0 times
13320596 (91.04%) aligned concordantly exactly 1 time
885729 (6.05%) aligned concordantly >1 times
----
424695 pairs aligned concordantly 0 times; of these:
11156 (2.63%) aligned discordantly 1 time
----
413539 pairs aligned 0 times concordantly or discordantly; of these:
827078 mates make up the pairs; of these:
412619 (49.89%) aligned 0 times
380325 (45.98%) aligned exactly 1 time
34134 (4.13%) aligned >1 times
98.59% overall alignment rate
here is the flagstat results:
samtools flagstat sample.bam
3800628 + 0 secondary
0 + 0 supplementary
0 + 0 duplicates
32650049 + 0 mapped (98.75% : N/A)
29262040 + 0 paired in sequencing
28412650 + 0 properly paired (97.10% : N/A)
28508696 + 0 with itself and mate mapped
340725 + 0 singletons (1.16% : N/A)
62868 + 0 with mate mapped to a different chr
47800 + 0 with mate mapped to a different chr (mapQ>=5)
samtools flagstat sample.sam
3800628 + 0 secondary
0 + 0 supplementary
0 + 0 duplicates
32650049 + 0 mapped (98.75% : N/A)
29262040 + 0 paired in sequencing
28412650 + 0 properly paired (97.10% : N/A)
28508696 + 0 with itself and mate mapped
340725 + 0 singletons (1.16% : N/A)
62868 + 0 with mate mapped to a different chr
47800 + 0 with mate mapped to a different chr (mapQ>=5)
hisat2 star • 618 views
0
Entering edit mode
What command did you use to convert from sam to bam ?
0
Entering edit mode
@Carlo Yague : I tried 2 commands . once:
1- samtools view -bS sample.sam > sample.bam
2-samtools view -u sample.sam | samtools sort -o sample_sorted.bam
in both cases I got weird bam file
0
Entering edit mode
Always use -o to save the output, depending on how you run the command, the output might be mangled with other output. Try this:
samtools view -bh -o sample.bam sample.sam
0
Entering edit mode
There might have been an error during conversion to bam, but that could be almost anything, from a broken library to a full harddisk. Do some more very simple checks, f.e. what do samtools view and flagstats yield? Technically, bam files might be in gzip format, so try gzip -t, may yield a CRC error. What is the size and type of the bam file using ls and file? Is your disk full (not a joke, I assume this is the most common source of error: df -h .). What are the chromosome names in the bam file, etc.?
0
Entering edit mode
Michael Dondrup question is updated.
0
Entering edit mode
probably it's a mismatch of contig/chromosome names between ref used in IGV and ref used in bam. Genome version used in IGV should be same as genome version used in alignment.
0
Entering edit mode
cpad0112 1- I used the same genome version indeed. in addition to IGV, I also looked into the bam file. that also does not have reads information!
1
Entering edit mode
Please do the basic sanity checks first, the file might simply be truncated.
0
Entering edit mode
can you check file sizes of sam and bam?
0
Entering edit mode
1605304 sample.bam
18105768 sample.sam
0
Entering edit mode
can you try this:
$samtools view sample.bam | wc$ samtools view sample.sam | wc
or you can run samtools quickcheck to check the integrity of bam fille. If it is not still not working, can you post a subset bam here to see what's going on?
0
Entering edit mode
Hmm, looks good to me. So, there is seemingly nothing wrong with your file. Possibly, you may simply need to zoom in on a region in IGV where there are reads, and there might be regions that don't have any. Also, in IGV you have to zoom in to a certain level first before it shows anything. Also, the bam files should be sorted and indexed. After that, try to view the first few reads from the bam file, check where they are aligned and then zoom in to that position in the reference. |
# 10.1. Introduction
NVNMD stands for non-von Neumann molecular dynamics.
This is the training code we used to generate the results in our paper entitled “Accurate and Efficient Molecular Dynamics based on Machine Learning and Non Von Neumann Architecture”, which has been accepted by npj Computational Materials (DOI: 10.1038/s41524-022-00773-z).
Any user can follow two consecutive steps to run molecular dynamics (MD) on the proposed NVNMD computer, which has been released online: (i) to train a machine learning (ML) model that can decently reproduce the potential energy surface (PES); and (ii) to deploy the trained ML model on the proposed NVNMD computer, then run MD there to obtain the atomistic trajectories.
# 10.2. Training
Our training procedure consists of not only the continuous neural network (CNN) training, but also the quantized neural network (QNN) training which uses the results of CNN as inputs. It is performed on CPU or GPU by using the training codes we open-sourced online.
To train a ML model that can decently reproduce the PES, training and testing data set should be prepared first. This can be done by using either the state-of-the-art active learning tools, or the outdated (i.e., less efficient) brute-force density functional theory (DFT)-based ab-initio molecular dynamics (AIMD) sampling.
If you just want to simply test the training function, you can use the example in the $deepmd_source_dir/examples/nvnmd directory. If you want to fully experience training and running MD functions, you can download the complete example from the website. Then, copy the data set to working directory mkdir -p$workspace
cd $workspace mkdir -p data cp -r$dataset data
where $dataset is the path to the data set and $workspace is the path to working directory.
## 10.2.1. Input script
Create and go to the training directory.
mkdir train
cd train
Then copy the input script train_cnn.json and train_qnn.json to the directory train
cp -r $deepmd_source_dir/examples/nvnmd/train/train_cnn.json train_cnn.json cp -r$deepmd_source_dir/examples/nvnmd/train/train_qnn.json train_qnn.json
The structure of the input script is as follows
{
"nvnmd" : {},
"learning_rate" : {},
"loss" : {},
"training": {}
}
### 10.2.1.1. nvnmd
The “nvnmd” section is defined as
{
"net_size":128,
"sel":[60, 60],
"rcut":6.0,
"rcut_smth":0.5
}
where items are defined as:
Item
Mean
Optional Value
net_size
the size of nueral network
128
sel
the number of neighbors
integer list of lengths 1 to 4 are acceptable
rcut
(0, 8.0]
rcut_smth
the smooth cutoff parameter
(0, 8.0]
### 10.2.1.2. learning_rate
The “learning_rate” section is defined as
{
"type":"exp",
"start_lr": 1e-3,
"stop_lr": 3e-8,
"decay_steps": 5000
}
where items are defined as:
Item
Mean
Optional Value
type
learning rate variant type
exp
start_lr
the learning rate at the beginning of the training
a positive real number
stop_lr
the desired learning rate at the end of the training
a positive real number
decay_stops
the learning rate is decaying every {decay_stops} training steps
a positive integer
### 10.2.1.3. loss
The “loss” section is defined as
{
"start_pref_e": 0.02,
"limit_pref_e": 2,
"start_pref_f": 1000,
"limit_pref_f": 1,
"start_pref_v": 0,
"limit_pref_v": 0
}
where items are defined as:
Item
Mean
Optional Value
start_pref_e
the loss factor of energy at the beginning of the training
zero or positive real number
limit_pref_e
the loss factor of energy at the end of the training
zero or positive real number
start_pref_f
the loss factor of force at the beginning of the training
zero or positive real number
limit_pref_f
the loss factor of force at the end of the training
zero or positive real number
start_pref_v
the loss factor of virial at the beginning of the training
zero or positive real number
limit_pref_v
the loss factor of virial at the end of the training
zero or positive real number
### 10.2.1.4. training
The “training” section is defined as
{
"seed": 1,
"stop_batch": 1000000,
"numb_test": 1,
"disp_file": "lcurve.out",
"disp_freq": 1000,
"save_ckpt": "model.ckpt",
"save_freq": 10000,
"training_data":{
"systems":["system1_path", "system2_path", "..."],
"set_prefix": "set",
"batch_size": ["batch_size_of_system1", "batch_size_of_system2", "..."]
}
}
where items are defined as:
Item
Mean
Optional Value
seed
the randome seed
a integer
stop_batch
the total training steps
a positive integer
numb_test
the accuracy is test by using {numb_test} sample
a positive integer
disp_file
the log file where the training message display
a string
disp_freq
display frequency
a positive integer
save_ckpt
check point file
a string
save_freq
save frequency
a positive integer
systems
a list of data directory which contains the dataset
string list
set_prefix
the prefix of dataset
a string
batch_size
a list of batch size of corresponding dataset
a integer list
## 10.2.2. Training
Training can be invoked by
# step1: train CNN
dp train-nvnmd train_cnn.json -s s1
# step2: train QNN
dp train-nvnmd train_qnn.json -s s2
After training process, you will get two folders: nvnmd_cnn and nvnmd_qnn. The nvnmd_cnn contains the model after continuous neural network (CNN) training. The nvnmd_qnn contains the model after quantized neural network (QNN) training. The binary file nvnmd_qnn/model.pb is the model file which is used to performs NVNMD in server [http://nvnmd.picp.vip]
# 10.3. Testing
The frozen model can be used in many ways. The most straightforward testing can be invoked by
mkdir test
dp test -m ./nvnmd_qnn/frozen_model.pb -s path/to/system -d ./test/detail -n 99999 -l test/output.log
where the frozen model file to import is given via the -m command line flag, the path to the testing data set is given via the -s command line flag, the file containing details of energy, force and virial accuracy is given via the -d command line flag, the amount of data for testing is given via the -n command line flag.
# 10.4. Running MD
After CNN and QNN training, you can upload the ML model to our online NVNMD system and run MD there.
## 10.4.1. Account application
The server website of NVNMD is available at http://nvnmd.picp.vip. You can visit the URL and enter the login interface (Figure.1).
To obtain an account, please send your application to the email ([email protected], [email protected]). The username and password will be sent to you by email.
Figure.2 The homepage
The homepage displays the remaining calculation time and all calculation records not deleted. Click Add a new task to enter the interface for adding a new task (Figure.3).
• Upload mode: two modes of uploading results to online data storage, including Manual upload and Automatic upload. Results need to be uploaded manually to online data storage with Manual upload mode, and will be uploaded automatically with Automatic upload mode.
• Input script: input file of the MD simulation.
In the input script, one needs to specify the pair style as follows
pair_style nvnmd model.pb
pair_coeff * *
• Model file: the ML model named model.pb obtained by QNN training.
• Data files: data files containing information required for running an MD simulation (e.g., coord.lmp containing initial atom coordinates).
Next, you can click Submit to submit the task and then automatically return to the homepage (Figure.4).
Figure.4 The homepage with a new record
Then, click Refresh to view the latest status of all calculation tasks.
## 10.4.3. Cancelling calculation
For the task whose calculation status is Pending and Running, you can click the corresponding Cancel on the homepage to stop the calculation (Figure.5).
Figure.5 The homepage with a cancelled task
For the task whose calculation status is Completed, Failed and Cancelled, you can click the corresponding Package or Separate files in the Download results bar on the homepage to download results.
Click Package to download a zipped package of all files including input files and output results (Figure.6).
Click Separate files to download the required separate files (Figure.7).
If Manual upload mode is selected or the file has expired, click Upload on the download interface to upload manually.
## 10.4.5. Deleting record
For the task no longer needed, you can click the corresponding Delete on the homepage to delete the record.
Records cannot be retrieved after deletion.
## 10.4.6. Clearing records
Click Clear calculation records on the homepage to clear all records.
Records cannot be retrieved after clearing. |
1. Mar 23, 2013
### uperkurk
I don't really understand this formula but if light has no mass, then how comes a blackhole can pull it in?
$F=G\frac{MassLight\times MassWormhole}{WormholeRadius^2} = G\frac{0}{WormholeRadius^2}= 0$
My question is if light experiences no gravitational force wherever it is in the universe, why can a wormhole pull it in?
I know you guys probably get stupid questions like this all the time but my mind often wonders into things I don't understand.
Hope someone can clear up my ill thinking.
2. Mar 23, 2013
### ZapperZ
Staff Emeritus
Last edited by a moderator: May 6, 2017
3. Mar 23, 2013
### uperkurk
Nevermind, I just found on the forums it's due to GR and the fact that the space-time itself is bent so light isn't actually being pulled due to the sheer gravitational force of the wormhole but because space-time is curved.
Pretty neat really
4. Mar 23, 2013
### DrStupid
There is a gravitational deflection of light in classical mechanics:
$F = G \cdot \frac{{M \cdot m }}{{r^2 }} = m \cdot a$
$a = G \cdot \frac{M}{{r^2 }}$
As light is not massless in classical mechanics this works for photons without problems and due to
$\mathop {\lim }\limits_{m \to 0} \frac{m}{m} = 1$
it could also be used for massless objects.
However, the results does not fit to reality. (e.g. the deflection of light in the gravitational field of the Sun is double as high) You can't use Newton's law of gravity for light or black holes. General relativity must be used to get the correct results.
Last edited: Mar 23, 2013
5. Mar 23, 2013
### uperkurk
How is it possible that in one field of physics light is massless but in another it isn't? How can you guys just chop and change things like that?
6. Mar 23, 2013
### DrStupid
1. There are different theories for light.
2. There are different definitions of mass.
7. Mar 23, 2013
### Staff: Mentor
According to Newton's second law, how much force is required to accelerate a massless object?
Of course, the real answer requires relativity. Newtonian physics doesn't treat massless particles correctly. But the point is that you need to think about your premise a bit and see if it makes sense.
8. Mar 25, 2013
### Lsos
I think the answer is that one theory is correct in all instances that we're discussing (Relativity), while another is correct in only some instances (Newton). Ideally we would just use Relativity for everything, but Newton's theory is much simpler and easier to use...so we only use Relativity when we really really have to. The key is knowing when that is.
As an example, we all know that the earth is round. Nevertheless, for everyday basic tasks such as walking around, throwing a ball, etc, thinking of the earth as being flat is good enough, because accounting for the curvature of the earth will give you practically the same result, but with a much larger headache. |
# Shouldn't some stars behave as black holes?
Some of the "smaller" black holes have a mass of 4-15 suns. But still, they are black holes. Thus their gravity is so big, even light cannot escape.
Shouldn't this happen to some stars, that are even more massive? (mass of around 100 suns) If their mass is so much bigger, shouldn't their gravity be also bigger? (So they would behave like a black hole). Or does gravity depend on the density of the object as well?
• just a thought: galaxy has much higher mass than sun. Should not galaxy behave like a black hole? – Umaxo Nov 26 '20 at 12:26
• You need an energy gradient. The early universe had an energy density much larger than e.g. a star just before turning into a black hole - but it was nearly the same everywhere, so the gradient was nearly zero, and the spacetime curvature was also nearly flat. – Luaan Nov 27 '20 at 9:16
• @Luaan While you definitely need a gradient for black hole behavior, I think the early universe comment is misleading. The (idealized) early universe did not have a flat spacetime, even though it had essentially no gradients. It was spatially flat in the sense that the Riemannian constant-time slices were flat (even this is not immediate from homogeneity and isotropy, i.e. the lack of gradients-- it is inferred from observation), but the spacetime was very much curved, as manifest in the growth of the scale factor, which encodes all of the curvature of the "flat" FLRW metrics. – jawheele Nov 29 '20 at 17:59
• Comments made by Vilenkin about the Borde-Guth-Vilenkin Theorem (which specifies that universes that are on average expanding cannot be eternal to the past) suggest that contraction is considered to precede expansion, in the deSitter spacetime on which inflationary cosmologies are generally based, simply because the contracting phase would leave no evidence of its having happened, and would consequently leave no observable phenomena allowing scientific verification: Consequently, I'd agree with jawheele that "early" is only an idealization. – Edouard Dec 1 '20 at 20:30
• The OP may have an impression that black holes originate only from the collapse of individual stars: One clear instance of them having formed from the collapse of material scattered through vastly larger regions has been observed, in Sagitarrius A, and is discussed at astronomy.stackexchange.com/questions/25466/… . "Dust" is sometimes used in a very inclusive sense in discussions of such observations, just as stars are, in some contexts, referred to as "particles". – Edouard Dec 1 '20 at 20:39
The true answer lies in General Relativity, but we can make a simple Newtonian argument.
From the outside, a uniform sphere attracts test masses exactly as if all of its mass was concentrated in the center (part of the famous Shell theorem).
Gravitational attraction also increases the closer you are to the source of gravitation, but if you go inside the sphere, some of the mass of the sphere will form a shell surrounding you, hence you will experience no gravitational attraction from it, again because of the Shell theorem. This is because while the near side of the shell is pulling you towards it, so is the far side, and the forces cancel out, and the only gravitational forces remaining are from the smaller sphere in front of you.
Once you get near the center of the sphere, you will experience almost no gravitational pull at all, as pretty much all of the mass is pulling you radially away from the center.
This means that if you can get very close to the center of the sphere without going inside the sphere, you will experience much stronger gravitational attraction, as there is no exterior shell of mass to compensate the center of mass attraction. Hence, density plays a role: a relatively small mass concentrated in a very small radius will allow you to get incredibly close to the center and experience incredible gravitational forces, while if the same mass occupies a larger space, to get very close to the center you will have to get inside the mass, and some of the attraction will cancel out.
The conclusion is that a small mass can be a black hole if it is concentrated inside a small enough radius. The largest such radius is called the Schwarzschild radius. As a matter of fact our own Sun would be a black hole if it had a radius of less than $$3$$ km and the same mass, and the Earth would be a black hole if it had a radius of less than $$9$$ mm and the same mass.
• @StephenG Thanks for the comment, I'm not trying to explain why large stars don't form black holes, just why a small, light object can be a black hole while a large, heavy object isn't, the point being that the entirety of the object must be inside the Schwarzschild radius. I'm not commenting on the stability of very massive very large object, all the statement I made are about a test mass at various points. – user2723984 Nov 26 '20 at 15:47
• A wrong but amusing argument: if you set $\tfrac12mc^2=\tfrac{GMm}{r}$ in Newtonian physics to discover the radius of a spherical mass $M$ with surface escape velocity $c$, you get $r=\tfrac{2GM}{c^2}$, which is just the Schwarzschild radius. – J.G. Nov 26 '20 at 21:06
• @user2723984 Well, it wouldn't be surprising if it were wrong by a factor, but thanks to dimensional analysis that's the worst that could happen. I'm trying to remember the name of an effect in atomic physics that's famously twice as large when you take special relativity into account. – J.G. Nov 27 '20 at 8:31
• @J.G. Are you thinking of the gyromagnetic ratio, which is 1 for a classical electron, 2 (Dirac) for relativistic, and 2.00........ when QED has thrown all its corrections at it. – Neil_UK Nov 27 '20 at 11:48
• @RobJeffries There's no gravitational field inside a spherical shell because the gravity from mass behind you and the mass in front of you cancel out. Which is what the answer says. – Carmeister Nov 29 '20 at 0:42
Stars generate a great deal of energy through fusion at the core. Basically the more massive a star is, the more pressure the core is under (due to the star's own gravity) and the more energy it can generate (somewhat simplified).
That energy of course radiates outward and heats everything outside the core making it a something like a pressure cooker, with heat creating pressure and the outer regions of the star being kept in place by it's own gravity. Stars would collapse into more dense objects (like white dwarfs and neutron stars and black holes) if this outward heat driven pressure did not exist.
Black holes are created when the fusion process can no longer generate enough energy to produce that pressure to prevent collapse and the star is massive enough so that it's gravitational field can compress itself so far it becomes dense enough to be a black hole.
• Also, when the pressure is high enough, it cannot prevent collapse because pressure is a component of the stress-energy tensor which is the source of spacetime curvature. So increased pressure leads to increased gravity, which leads to further increased pressure, etc, and that feedback loop causes runaway collapse of the star core. – PM 2Ring Nov 26 '20 at 17:13
Roughly speaking, for a star to become a black hole, its physical radius has to become smaller than its Schwarzschild radius. So even the Earth could be a black hole if it shrinks to below 9 milimiters. It is not precise to say that a black hole depends on the density of the object, since a Schwarzschild metric is a vacuum solution of Einstein's field equations.
• Exactly. Those stars simply occupy too much space to be black holes. You can weight the mass inside any sphere within a star, and you will never have enough mass in there for the sphere to be an even horizon. It's only when the stars compress down their core later in their life that the core may become smaller than its Schwarzschild radius, causing it to become a black hole. – cmaster - reinstate monica Nov 29 '20 at 14:51
• The extremely complex Kerr metric is also a vacuum solution, but the Kerr-Newman metric is not, I guess because it contains electrons, which are material. – Edouard Dec 1 '20 at 21:02
Or does gravity depend on the density of the object as well?
The problem with this question is that it's rather ambiguous as to what you mean by "gravity". An object doesn't have a single number that is its "gravity". If a ship is near a star, the gravitational force that the ship feels depends on the mass of the star, the mass of the ship, and the distance between them. If we consider the acceleration, rather than the force, then we can divide out by the mass of the ship. So rather than saying "gravity", I will talk about the gravitational acceleration. We can take the mass of the star as being fixed, but that still leaves the variable of the distance between them.
So the question is whether this distance is measured from the center of the object, or from the surface of the object. If the distance is measured from the center, then gravitational acceleration does not depend on the density of the object. If the Sun were to contract and become more dense, the orbit of the Earth would not be affected.
However, the less dense the object is (for a fixed mass), the further the surface will be from the center. So decreasing the density of an object decreases its surface gravitational acceleration. If the Earth were to expand in volume, but not increase in mass, then the gravitational acceleration at its new surface would be lower.
Also, it's more the escape velocity, rather than the gravitational acceleration, that determines whether something is a black hole. However, the escape velocity follows the same pattern as gravitational acceleration: the escape velocity relative to the center of an object does not depend on the density, but the surface escape velocity does. As a star collapses, its surface escape velocity increases, and once the surface escape velocity reaches the speed of light, it is a black hole.
If the visible matter became enough dense to be concentrated inside its Schwarzschild radius, it becomes a BH. Until their inner pressure withstand the gravitation they stay being stars. |
# Find a pair of nodes with maximum sum of distances in k given trees
For k edge-weighted trees $T_1,T_2...T_k$ which contain the same set of nodes $\{1,2,... n \}$, I want to find a pair of nodes $(x,y)$ which maxifies $$\sum_{i=1}^k d_i(x,y)$$ where $d_i(x,y)$ denotes the sum of weights of the edges in the path between x and y in $T_i$.
Recently I came across the $k \leq 3$ version of this problem, which can be solved in $O(n\log(n))$. But for general k, is there any way to get a complexity lower than $O(n^2)$ (assuming k is a constant)?
• Is there any relation between the trees other than that they contain the same nodes? – John Dvorak Feb 26 '18 at 10:18
• I assume there's nope. I forgot to mention n in the question :/ my fault. – newbie Feb 26 '18 at 11:21
• Is there a reference for the $k\leq 3$ algorithm? – Chao Xu Feb 26 '18 at 20:20
• @ChaoXu uoj.ac/problem/347 solution: drive.google.com/file/d/1J2vOg9UEHVpHquhMTkCu5iQh3bU0JZpO/… (in chinese) – newbie Feb 27 '18 at 0:04
• @daniello in your example, take k1=k2=1, v1 is the point with maximum k1*x+k2*y, and u1 is the point with minimum. We're not taking extremal points along axis, but for every vector of length k consisted of {-1,1}, taking scalar products with these points and check the maximum and minimum. – newbie Mar 1 '18 at 10:20 |
# Category Archives: math
## WAGS @ UCLA, 14–15 October 2017
WAGS returns to UCLA.
The Fall 2017 edition of the Western Algebraic Geometry Symposium (WAGS) will take place the weekend of 14–15 October 2017 at IPAM on the UCLA campus, hosted by the UCLA Mathematics Department. Details are now on the conference website.
If you plan to attend but haven’t yet registered, please register. It’s free, and knowing who’s coming will allow us to ensure that:
• We have enough space.
• We have enough coffee.
• We have enough food.
• We have a name tag ready for you, so that the conference is successful in helping you meet fellow mathematicians and helping other mathematicians meet you.
• We can help our funder to demonstrate — with evidence — that they’re supporting a thriving enterprise.
Send questions to [email protected]. [UPDATE: I previously gave the wrong email address (starting fall17).]
Photo of the Powell Cat from the Daily Bruin. More about the Powell Cat on twitter.
Filed under math, travel
## New (review) paper: Recent progress on the Tate conjecture
My paper about the Tate conjecture for Bull. AMS is now available to view.
In it I survey the history of the Tate conjecture on algebraic cycles. The conjecture is closely intertwined with other big problems in arithmetic and algebraic geometry, including the Hodge and Birch–Swinnerton-Dyer conjectures. I conclude by discussing the recent proof of the Tate conjecture for K3 surfaces over finite fields.
After returning the proofs to the AMS, it occurred to me that it could be helpful to readers if I recommended some available related videos. I was too slow for the AMS’s speedy production, however, so I make the recommendations here.
Videos
Photo is from foldedspace.org.
1 Comment
Filed under math
## Now it’s about stacks (new paper)
I’ve posted an updated version of a paper on the arXiv, Hodge theory of classifying stacks (initial version posted 10 March 2017). We compute the Hodge and de Rham cohomology of the classifying space BG (defined as sheaf cohomology on the algebraic stack BG) for reductive groups G over many fields, including fields of small characteristic. These calculations have a direct relation with representation theory, yielding new results there. The calculations are closely analogous to, but not always the same as, the cohomology of classifying spaces in topology. This is part of the grand theme of bringing topological ideas into algebraic geometry to build analogous organizing structures and machines.
In the first version of the paper, I defined Hodge and de Rham cohomology of classifying spaces by thinking of BG as a simplicial scheme. In the current version, I followed Bhargav Bhatt‘s suggestion to make the definitions in terms of the stack BG, with simplicial schemes as just a computational tool. Although I sometimes resist the language of algebraic stacks as being too abstract, it is powerful, especially now that so much cohomological machinery has been developed for stacks (notably in the Stacks Project). (In short, stacks are a flexible language for talking about quotients in algebraic geometry. There is a “quotient stack” [X/G] for any action of an algebraic group G on a scheme X.)
Using the language of stacks led to several improvements in the paper. For one thing, my earlier definition gave what should be considered wrong answers for non-smooth groups, such as the group scheme μp of pth roots of unity in characteristic p. Also, the paper now includes a description of equivariant Hodge cohomology for group actions on affine schemes, generalizing work of Simpson and Teleman in characteristic zero. There is a lot of room here for further generalizations (and calculations).
Photo: Susie the cat in Cambridge, November 2000.
Filed under math, Susie
## Very old paper: The cohomology ring of the space of rational functions
Thanks to Claudio Gonzales, who requested it, and to MSRI Librarian Linda Riewe, who found and scanned it, my 1990 MSRI preprint “The cohomology ring of the space of rational functions” is available on my webpage.
Abstract: We consider three spaces which can be viewed as finite-dimensional approximations to the 2-fold loop space of the 2-sphere, Ω2S2. These are Ratk(CP1), the space of based holomorphic maps S2→S2; Bβ2k, the classifying space of the braid group on 2k strings; and Ck(R2, S1), a space of configurations of k points in R2 with labels in S1. Cohen, Cohen, Mann, and Milgram showed that these three spaces are all stably homotopy equivalent. We show that these spaces are in general not homotopy equivalent. In particular, for all positive integers k with k+1 not a power of 2, the mod 2 cohomology ring of Ratk is not isomorphic to that of Bβ2k or Ck. There remain intriguing questions about the relation among these three spaces.
Since 1990, a few papers have built on this preprint, including:
J. Havlicek. The cohomology of holomorphic self-maps of the Riemann sphere. Math. Z. 218 (1995), 179–190.
D. Deshpande. The cohomology ring of the space of rational functions (2009). arXiv:0907.4412
Photo: Susie the cat in Cambridge c. 2001.
Filed under math, Susie
## Why I like the spin group / New paper: Essential dimension of the spin groups in characteristic 2
Mathematics is about rich objects as well as big theories. This post is about one of my favorite rich objects, the spin group, inspired by my new paper Essential dimension of the spin groups in characteristic 2. What I mean by “rich” is being simple enough to be tractable yet complicated enough to exhibit interesting behavior and retaining this characteristic when viewed from many different theoretical angles.
Other objects in mathematics are rich in this way. In algebraic geometry, K3 surfaces come to mind, and rich objects live at various levels of sophistication: the Leech lattice, the symmetric groups, E8, the complex projective plane,…. I’d guess other people have other favorites.
Back to spin. The orthogonal group is a fundamental example in mathematics: much of Euclidean geometry amounts to studying the orthogonal group O(3) of linear isometries of R3, or its connected component, the rotation group SO(3). The 19th century revealed the striking new phenomenon that the group SO(n) has a double covering space which is also a connected group, the spin group Spin(n). That story probably started with Hamilton’s discovery of quaternions (where Spin(3) is the group S3 of unit quaternions), followed by Clifford’s construction of Clifford algebras. (A vivid illustration of this double covering is the Balinese cup trick.)
In the 20th century, the spin groups became central to quantum mechanics and the properties of elementary particles. In this post, though, I want to focus on the spin groups in algebra and topology. In terms of the general classification of Lie groups or algebraic groups, the spin groups seem straightforward: they are the simply connected groups of type B and D, just as the groups SL(n) are the simply connected groups of type A. In many ways, however, the spin groups are more complex and mysterious.
One basic reason for the richness of the spin groups is that their smallest faithful representations are very high dimensional. Namely, whereas SO(n) has a faithful representation of dimension n, the smallest faithful representation of its double cover Spin(n) is the spin representation, of dimension about 2n/2. As a result, it can be hard to get a clear view of the spin groups.
For example, to understand a group G (and the corresponding principal G-bundles), topologists want to compute the cohomology of the classifying space BG. Quillen computed the mod 2 cohomology ring of the classifying space BSpin(n) for all n. These rings become more and more complicated as n increases, and the complete answer was an impressive achievement. For other cohomology theories such as complex cobordism MU, MU*BSpin(n) is known only for n at most 10, by Kono and Yagita.
In the theory of algebraic groups, it is especially important to study principal G-bundles over fields. One measure of the complexity of such bundles is the essential dimension of G. For the spin groups, a remarkable discovery by Brosnan, Reichstein, and Vistoli was that the essential dimension of Spin(n) is reasonably small for n at most 14 but then increases exponentially in n. Later, Chernousov and Merkurjev computed the essential dimension of Spin(n) exactly for all n, over a field of characteristic zero.
Even after those results, there are still mysteries about how the spin groups are changing around n = 15. Merkurjev has suggested the possible explanation that the quotient of a vector space by a generically free action of Spin(n) is a rational variety for small n, but not for n at least 15. Karpenko’s paper gives some evidence for this view, but it remains a fascinating open question. The spin groups are far from yielding up all their secrets.
Image is a still from The Aristocats (Disney, 1970). Recommended soundtrack: Cowcube’s Ye Olde Skool.
Filed under math, opinions
## WAGS @ Colorado State, 15–16 October
The Fall 2016 edition of the Western Algebraic Geometry Symposium (WAGS) will be held at Colorado State University on the weekend of 15–16 October. In addition to an excellent program of talks, there will be a lively poster session.
Speakers are:
Enrico Arbarello, Sapienza Universita di Roma/Stony Brook
Emily Clader, San Francisco State University
Luis Garcia, University of Toronto
Diane Maclagan, University of Warwick
Sandra Di Rocco, KTH
Brooke Ullery, University of Utah
For more information and to register, see the WAGS Fall 2016 site.
Filed under math, travel
## Our friend the Tate elliptic curve
Rigid analytic spaces are all the rage these days, thanks to the work of Peter Scholze and his collaborators on perfectoid spaces. In this post, I want to briefly describe the example that inspired the whole subject of rigid analytic spaces: the Tate elliptic curve. Tate’s original 1959 notes were not published until 1995. (My thanks to Martin Gallauer for his explanations of the theory.)
Let ${\bf C}_p$ be the completion of the algebraic closure of the p-adic numbers ${\bf Q}_p$. The difficulty in defining analytic spaces over ${\bf C}_p$, by analogy with complex analytic spaces, is that ${\bf C}_p$ is totally disconnected, and so there are too many locally analytic (or even locally constant) functions. Tate became convinced that it should be possible to get around this problem by his discovery of the Tate elliptic curve. Namely, by explicit power series, he argued that some elliptic curves $X$ over ${\bf Q}_p$ could be viewed as a quotient of the affine line minus the origin as an analytic space: ${\bf Q}_p^*/\langle q^{\bf Z}\rangle \cong X({\bf Q}_p).$
Trying to make sense of the formulas led Tate to his definition of rigid analytic spaces. In short, one has to view a rigid analytic space not just as a topological space, but as a space with a Grothendieck topology — that is, a space with a specified class of admissible coverings. So, for example, the closed unit disc $\{ z: |z| \leq 1\}$ acts as though it is connected, because its covering by the two disjoint open subsets $\{ z: |z| < 1\}$ and $\{ z: |z| = 1\}$ is not an admissible covering. (“Affinoids,” playing the role of compact open sets, include closed balls such as $|z|\leq a$ for any real number $a$, but not the open ball $|z|<1$. An admissible covering of an affinoid such as $\{ z: |z| \leq 1\}$ is required to have a refinement by finitely many affinoids.)
Tate’s formulas for the p-adic analytic map $G_m \rightarrow X$, modeled on similar formulas for the Weierstrass $p$-function, are as follows.
Theorem. Let $K$ be a complete field with respect to a non-archimedean absolute value, and let $q \in K^*$ have $0<|q|<1$. Then the following power series define a isomorphism of abelian groups $K^*/q^{\bf Z}\cong X(K)$, for the elliptic curve $X$ below:
$x(w)=\sum_{m\in {\bf Z}}\frac{q^m w}{(1-q^mw)^2} -2s_1$
$y(w)=\sum_{m\in {\bf Z}}\frac{q^{2m} w}{(1-q^mw)^2} +s_1,$
where $s_l=\sum_{m\geq 1}\frac{m^lq^m}{1-q^m}$ for positive integers $l$. The corresponding elliptic curve $X$ in ${\bf P}^2$ is defined in affine coordinates by $y^2+xy=x^3+Bx+C,$ where $B=-5s_3$ and $C=(5s_3+7s_5)/12$. Its $j$-invariant is $j(q)=1/q+744+196884q+\cdots.$ For every element $j\in K$ with $|j|>1$ (corresponding to an elliptic curve over $K$ that does not have potentially good reduction), there is a unique $q\in K$ with $j(q)=j$.
It is worth contemplating why the formulas for $x(w)$ and $y(w)$ make sense, for $w\in K^*$. The series both have poles when $w$ is an integer power of ${q}$, just because these points map to the origin of the elliptic curve, which is at infinity in affine coordinates. More important, these formulas make it formally clear that $x(qw)=x(w)$ and $y(qw)=y(w)$, but the series do not obviously converge; the terms are small for $m \rightarrow \infty$, but they are large for $m\rightarrow -\infty$.
To make sense of the formulas, one has to use the identity of rational functions $\frac{w}{(1-w)^2} = \frac{w^{-1}}{(1-w^{-1})^2}.$ As a result, the series for $x(w)$ (for example) can be written as
$x(w)=\frac{w}{(1-w)^2}+\sum_{m\geq 1}\big(\frac{q^mw}{(1-q^mw)^2}+\frac{q^mw^{-1}}{(1-q^mw^{-1})^2} -2\frac{q^m}{(1-q^m)^2}\big),$
which manifestly converges. One checks from this description that the series $x(w)$ satisfies $x(qw)=x(w)$, as we want.
References:
S. Bosch, U. Güntzer, R. Remmert. Non-Archimedean Analysis. Springer (1984).
B. Conrad. Several approaches to non-Archimedean geometry. P-adic Geometry, 9–63, Amer. Math. Soc. (2008).
W. Lütkebohmert. From Tate’s elliptic curve to abeloid varieties. Pure and Applied Mathematics Quarterly 5 (2009), 1385–1427.
J. Tate. A review of non-Archimedean elliptic functions. Elliptic Curves, Modular Forms, & Fermat’s Last Theorem (Hong Kong, 1993), 162–184. Int. Press (1995). |
## Set-Up Instructions
Please have exactly one person in your group create project repo through the GitHub Classroom link here and create a team. Please name your team project2-groupX, where X is your group number.
Then, the other team members should use the same link and join the existing group.
Each person should then create a project in RStudio which is initialized using the GitHub URL that you are provided when you joined the project on GitHub Classroom.
## Goals and New Requirements
The end deliverable of this project is the same as Project 1: Create a “blog-post” style writeup in the style of FiveThirtyEight or the NYTimes “The Upshot” column, centered on a related set of informative, accurate, and aesthetically pleasing data graphics illustrating something about a topic of your choice.
Now that we have more data-wrangling tools in our toolbox, however, we can be more flexible about the kinds of questions we can ask and the kinds of visualizations we can create. Specifically, this project adds the following requirements to those of Project 1:
• At least one of your visualizations needs to involve a dataset that is constructed by combining data from two or more data files (that is, it should be created by using a join operation of some kind)
• Your visualizations should collectively examine the data at multiple levels of granularity. That is, there should be some examination of variables at both the individual case level and at the level of a summary of multiple cases that share a characteristic (that is, you should use at least one group_by() and summarize() combo).
Beyond those requirements, you should use whatever tools are useful for examining the things you want to examine. Likely you will end up wanting to use a number of other data-wrangling tools, but you won’t necessarily need every tool we’ve learned in this class.
There may be things you want to do that aren’t easily done using the tools we’ve specifically covered — if that’s the case, you are welcome and encouraged to look into other tools that suit your purposes, but you should only do this if it would help you tell a more interesting story about the thing you’re studying, and only if you feel up to it.
In general, I will take the ambitiousness of your investigation into account when grading — an investigation that minimally meets the requirements will be held to a higher standard of technical precision than an investigation that goes somewhat beyond them — but please try to stay within the length guidelines.
## Required skills
• Proficiency with the ggplot2 package
• Proficiency with the “five main verbs” of the dplyr package
• Proficiency with join operations from the dplyr package
• Proficiency with the restructuring verbs gather and spread from the tidyr package
• Proficiency with the GitHub workflow
## Structure of the Write-Up (This section is identical to project 1 but is reproduced for convenience)
Your group will work together to write a blog post about a topic of your choice, with insights informed by data and illustrated via at least three related data graphics.
The following are some examples of the kind of structure I have in mind (though most of these are longer than your post will be).
Conciseness is of value here: aim for a post in the 700-1000 word range (not including the Methodology section at the end). A suggested structure is as follows (though, apart from the inclusion of the Methodology section at the end, you do not have to adhere to this exactly):
### Introduction
Sets up the context and introduces the dataset and its source. Tell the reader what the cases in the data are, and what the relevant variables are. However, don’t just list these things: work them into one or more paragraphs that inform the reader about your data as though you were writing an article for a blog.
### Headings describing each aspect of the topic you’re focused on
For each graphic you include, a paragraph or two discussing what the graphic shows, including a concise “takehome” message in one or two sentences. Again, don’t just show graphic, paragraph, graphic paragraph, …, connect your text and graphics in a coherent narrative.
### Discussion
The last section of the main writeup should tie together the insights from the various views of the data you have created, and suggest open questions that were not possible to answer in the scope of this project (either because the relevant data was not available, or because of a technical hurdle that we have not yet learned enough to overcome)
### Appendix: Methodology
This should be separate from the main narrative and should explain the technical details of your project for a reader interested in data visualization. Explain the choices you made in your graphic: why did you choose the types of graphs (geometries) that you did; why did you choose the aesthetic mappings you did, why did you choose the color schemes you did, etc.
## Turning in your project (Identical to Project 1)
Collaboration on your project should take place via GitHub commits. Your .Rmd source should be part of a GitHub repo from its inception, and changes should be recorded via commits from the account of the person who made the edit. Everyone in the group should be making commits to the repo.
Your final submission will consist of
• The .Rmd source
• The compiled .html (or .pdf) file
• Any other files needed for the .Rmd to compile successfully.
If your data is available on the web, prefer to read it directly from the web in your R code. If you needed to download and “clean up” the data outside of RStudio, and thus need to read it from a .csv file stored locally (that is, in your RStudioPro server account), commit this file if it is relatively small (no more than a few MB in size), and make sure that you are using a relative path to the file when you read in the data. If you have a local data file which is larger than a few MB, you can instead share it via Slack and include instructions in your GitHub README.md file that indicate where it should be placed.
Whatever state the files in your GitHub repo are in at the deadline is what I will grade.
## Data
You can use any data sources you want, but you will need to combine data from at least two sources, so find datasets that have shared variables.
Some possible sources for data are:
• The federal government’s Data.gov site
• The American Psychological Association
• The data science competition Kaggle
• The UC Irvine machine learning repository
• The Economics Network
• Data provided by an R package, such as
• nycflights13: data about flights leaving from the three major NYC airports in 2013
• Lahman: comprehensive historical archive of major league baseball data
• fueleconomy: fuel economy data from the EPA, 1985–2015
• fivethirtyeight: provides access to data sets that drive many articles on FiveThirtyEight
You can find data anywhere else you like. But don’t use a dataset we’ve used in class or homework, and this time, at least some of your data must not come from an R package.
## Caution!
Sometimes when a lot of people are reading in datasets and leave their RStudio sessions open, it can eat up a lot of memory on the server and slow things down. To minimize this issue, please close your RStudio project and sign out from the server (by clicking Sign Out in the upper right, not just closing your browser tab) after each session you spend working on it, so that the memory used by your session can be released.
## Tips for git and GitHub
• Each time you sit down to work on the project, pull before you do anything else. This will save you headaches.
• Whenever you make an edit to any file and want to save it, pull first, then stage (add) and commit. If you’re ready to share it with your group, then push.
• If you get an error upon pulling, committing or pushing likely it is because a file you have edited was changed by someone else, and GitHub couldn’t figure out how to reconcile the changes. Most of the time this can be prevented by pulling every time you sit down to work on it, but if not, you may need to go into the file and manually resolve the changes by finding the markup added by GitHub (look for >>>> and <<<<) and editing the file to keep what you want from each version, then commit to merge them in the repo and push. If this happens, notify your group members that you are undertaking a manual merge, so they do not continue to make edits in the mean time!
• Make sure you have coordinated who is doing what when with your group, to minimize the above sorts of problems.
A suggested division of labor is that each group member is individually responsible for
• at least one graphic
• the part of the writeup and Methodology section directly pertaining to that graph
and the group as a whole works jointly on
• the general Introduction and Discussion
• any components of the Methodology section that pertain to the project as a whole.
Your group may choose to divide the work differently, but be sure that each person is involved in
• the topic selection and planning stage
• the coding component
• the “general audience” writing element
• the “technical writing” element.
### Relevant SLOs
#### Data Science Workflow
• A1: Demonstrate basic fluency with programming fundamentals
• A2: Create clean, reproducible reports
• The graphics are generated by the code embedded in the .Rmd (not included from an external file)
• The .Rmd compiles successfully
• Code, unnecessary messages, and raw R output (other than the plots) are suppressed from the .html output
• A3: Use a version control system for collaboration and documentation
• There is a GitHub record of commits
• The commit messages are concise and informative
• A4: Produce clean, readable code
• Variable names are descriptive
• Line breaks and indentation are used to highlight the structure of the code
#### Understanding Visualization
• B1: Identify variables, visual cues, and mappings between them
• The choices of aesthetic mappings and visual elements is motivated well in the Methodology section
• B2: Identify key patterns revealed by a visualization
• Concise summaries of each individual visualization are included
• B3: Identify strengths and weaknesses of particular visualizations
• The summaries highlight what the visualization shows clearly, what it doesn’t, some improvements that could be made with additional data or technical skills
#### Creating Visualizations
• C1: Choose appropriate and effective graphical representations
• The visualizations chosen fit together to illustrate interesting features about the data
• The choices made for your visualizations are effective and allow information to be conveyed clearly and efficiently
• C2: Employ informative annotation and visual cues to guide the reader
• C3: Write clean, readable visualization code
• Pipe syntax is used to promote readability
• Line breaks and indentation are used to highlight the structure of the visualization code
#### Translating between the qualitative and the quantitative
• D1: Choose suitable datasets to address questions of interest
• D2: Describe what data suggests in language suitable for a general audience
• D3: Extract “takehome messages” across multiple visualizations
• There is a description of “big picture” insights gained from considering the visualizations as a set
• The graphics used collectively convey aspects of the data that would have been difficult to notice with any single view
#### Data Wrangling
• E1: Master “slicing and dicing” data to access needed elements (e.g., with filter(), select(), slice_max(), etc)
• E2: Create new variables from old to serve a purpose (e.g., with mutate(), possibly involving other wrangling or cleaning functions within the definition of the new variables)
• E3: Aggregate and summarize data within subgroups (e.g., with group_by() and summarize())
• E4: Join data from multiple sources to examine relationships (with join() operations, and potentially with pivot_longer() or pivot_wider()
#### Intermediate Data Science Tools (Optional: you might not have a need for these, but if you do use them you can get an extra crack at showing mastery of them)
• F1: Modularize repetitive tasks (e.g., by writing your own functions and/or using iteration constructs like lapply() or do()) |
# Vasicek Model Calibration Python
Definition at line 42 of file vasicek. Nature of risk and risk measures. The models implemented in this library are widely used by practitionners. cir: Yields and maturities simulated from the CIR model. In this paper a review of short rate’s stochastic properties relevant to the derivation of the closed-form solution of the bond price within the Vasicek framework is presented. Last year's course homepage. This platform is the best place to trade knowledge between individuals or businesses. Wade is a South African male, primarily contracting for EY in the Actuarial, Quants and Data Science space. 2) Require the user to do a full calculation (Ctrl-Alt-F9) once after the workbook is open. For the moment, the ShortRateModels::Model class defines the short-rate dynamics with stochastic equations of the type. In financial mathematics, the Ho-Lee model is a short-rate model widely used in the pricing of bond options, swaptions and other interest rate derivatives, and in modeling future interest rates. I now that in the 1-factor Vasicek model the dynamics of the SDE are constants. cir: Simulates the values and yields of zero-coupon bonds when the bond. Everything is in Matlab. papers like Vasicek (1977) and Cox, Ingersoll, and Ross (1985). This is done by model calibration (choice the model parameter so that the model give the same premium for the quoted swaptions). plot (x, y) plt. I tried to translate a code from MATLAB to Python but I'm running into some errors. However, discrimination and calibration will only provide information on how well a model fits the data. 57\%$for an QAE model with a depth of 4 and compression 1. Apr 16, 2020 · The task of learning in Gaussian processes simplifies to determining suitable properties for the covariance and mean function, which will determine the calibration of our model. • Created multiple 'python' and 'R' test scripts for VaR and time series analysis models for internal automated. Everything is in Matlab. The parameter θ ( t) is chosen in order to fit the input term. The R code for this post, complete with documented functions, is located on my GitHub here. dr = a ( b − r ) dt + σ dW (4. Vasicek): Evaluation ofBermudanswitha tree: piecewiseconstantHW shortrate volatilitiesneeded - modified Excel (*. This is done by model calibration (choice the model parameter so that the model give the same premium for the quoted swaptions). Calibration. 31/32 Equation 62 Exponential Vasicek Model and Paper 3 p. The same set of parameter values and initial conditions will lead to an ensemble of different outputs. Course Abstract. After two decades of studying one- and two-factor models, itwas clear by the 1990s that more is needed—3 factors at least. Calibration of Short Rate Models - Calibrated Vasicek, Ho-Lee and the one factor Hull&White (HW1F) model using Caps and Swaption volatilities. Probability of Default is the one of the key metric used to identify the creditworthiness of a customer. • Risk indicators calculation. • The volatility structure is given by the market. Development of interest-rate sensitivity and liquidity models for nonmaturing (checking) accounts. Calibrated models are simulated and counterparty credit risk measures are computed for a portfolio of interest rate instruments. - Tools: R, QuantLib Python. Journal of Computational and Applied Mathematics 375 , 112796. Download it once and read it on your Kindle device, PC, phones or tablets. QuantLib_HestonModel (3) - Heston model for the stochastic volatility of an asset. An investigation into rates modelling: PCA and Vasicek models. In particular the Least Squares Method, the Maximum Likelihood Method and the Long. Also, this process is a diffusion process, hence Markovian, which will lead to some nice closed form formulas. 7 (Zero-coupon bond in the calibrated G2++ model). Vasicek model. In Mean Reversion in Finance: definitions I added a python notebook that explains the nomenclature and the API usage (under 'Calibration'). Main challenges. We introduce the well-known Vasicek model, the large homogeneous portfolios or Vasicek distribution and their corresponding generalizations. It also has to adapt their PD model calibration to ensure these models provide. plot (x, y) plt. The direct link to the colaboratory is [mean reversion] The long answer: I strongly recommend the following explanation: Calibrating the Ornstein-Uhlenbeck (Vasicek) model [COU]. Filled with expert advice, valuable insights, and advanced modeling techniques, Interest Rate, Term Structure, and Valuation Modeling is a book that all institutional investors, portfolio managers, and risk professionals should have. Calibration of the Vasicek Model: An Step by Step Guide Victor Bernal A. Nov 2011 - Dec 20132 years 2 months. The calibration. Vasicek Model, Monte Carlo Simulation is used and in each run of the Monte Car lo. The final module focuses on real-world model calibration techniques used by practitioners to estimate interest rate processes and derive prices of different financial products. 4 Analytical approximation by some beta distribution 1. I expect that these objectives may shift or expand as I continue working on the library. i) interface Implementation November 30th, 2017 25 Next stepscouldbe:. These conditions include samples of data longer than those that will be available this decade. Rough volatility with Python As we will see, even without proper calibration (i. For the moment, the ShortRateModels::Model class defines the short-rate dynamics with stochastic equations of the type. Vasicek model. Nov 01, 2019 · Subsequently, Cox et al. SPX smiles in the rBergomi model¶ In Figures 9 and 10, we show how well a rBergomi model simulation with guessed parameters fits the SPX option market as of February 4. Structural models including Merton model and mKMV, CreditMetrics and Gaussian copula, Vasicek model and Hull-White model. Implied zero coupon yield curve from the parameters estimated by our calibration procedure. First we introduce the model and the way we can price a swaption under this speci c short-rate model. The earliest attempt to model interest rates was published by Vasicek (1977), whereby the short rate was used as the factor driving the entire yield curve. represented by the Vasicek or CIR model. We start by reviewing the Basel and IFRS 9 regulation. This is followed by an overview of variable selection and profit driven performance evaluation. Algorithms behind Term Structure Models II: Hull-White Model. R-code for Vasicek estimation; more commented than usual. My thanks to everyone in the QuantLib team who have been supporting and extending this library now for most of 20. Vasicek model’s tractability property in bond pricing and the model’s interesting stochastic characteristics make this classical model quite pop-ular. Yield curve-Wikipedia. Vasicek’s model and its descendants Application: ED / FRA convexity corrections Other short rate models Modeling mean reversion of rates A special feature of Vasicek’s model is that the stochastic differential equation (2) has a closed form solution. 1 The Merton Model (1974) The Merton model takes an overly simple debt structure, and assumes that the total value A t of a firm's assets follows a geometric Brownian motion under the physical measure dA t = µA tdt+σA tdW t,A 0 > 0, (4. That makes fitting the volatility structure. The C++ implementation of the Hull-White model roughly follows the two-stage procedure for. This is needed to determine a, b, and sigma in the model. sample selection; variable types; missing values (imputation schemes) outlier detection and treatment (box plots, z-scores, truncation, etc. In this paper we calibrate the Vasicek interest rate model under the risk neutral measure by learning the model parameters using Gaussian processes for machine learning regression. The final module focuses on real-world model calibration techniques used by practitioners to estimate interest rate processes and derive prices of different financial products. LSM cannot be used to estimate the parameters in the CIR model. Bayesian Finance I - Stochastic Process Calibration using Bayesian Inference & Probabilistic Programs. Last year's course homepage. The effect of partitioning the available market data into sub-samples with an appropriately chosen probability distribution is twofold: (1) to improve the calibration of the Vasicek/CIR model's parameters in order to capture all the statistically significant changes of variance in market spot. 2 Binding of term structures by expected long-term interest rate interval 56. This paper led to the development of various alternative models (e. We then discuss how to leverage alternative data sources for credit risk modeling and do feature engineering. En effet quand les taux d'intérêts sont élevés, les agents anticipent une évolution à la baisse des taux et inversement. exp(-kappatau)) / kappa A = np. I have the overall form below. After two decades of studying one- and two-factor models, itwas clear by the 1990s that more is needed—3 factors at least. Translating MATLAB to Python. Single-factor Hull-White (extended Vasicek) model class. In this course, students learn how to do advanced credit risk modeling. A good example of this is a chart on the Wikipedia page for the Vasicek model. By Zvi Wiener. Introduction to Python and Subversion. feller: Estimates the parameters of the Feller process. The outputPeak makes the calibrate to provide the peaks table, which are the position for the peaks for each tube. To calibrate the model, analysts typically perform a simple ordinary least squares (OLS) regression using actual daily interest rate data. a stock, an equity index an interest rate). Source: Moody's Analytics. Yield curve-Wikipedia. - Tools: R, QuantLib Python. Markov Chain Approximations For Term Structure Models. I have the overall form below. Throughout the course, the instructor (s) extensively report on their recent scientific findings and international consulting experience. The OU mean reverting model described in (1) is a gaussian model in the sense that, given X0, the time t value of the process X(t) is normally distributed with E[x(t)jx0] = x +(x0 x )exp[ t] and Var[x(t)jx0] = ˙2 2 (1 exp[ 2 t]): Appendix A explains this based on the solution of the SDE (1). | 125 Kontakte | Vollständiges Profil von Georgos auf LinkedIn anzeigen und vernetzen. 1 Calibration of Heston's Model. Use features like bookmarks, note taking and highlighting while reading Interest Rate Models - Theory and Practice: With Smile, Inflation and Credit (Springer Finance). pdf - MScFE xxx[Course Name Module X Collaborative Review Task Revised \u00a9 2019 WorldQuant University \u2013 All rights. Single factor and multifactor models are calibrated to both historical data and current market data using optimization solvers. Filled with expert advice, valuable insights, and advanced modeling techniques, Interest Rate, Term Structure, and Valuation Modeling is a book that all institutional investors, portfolio managers, and risk professionals should have. node --experimental-modules test. Summary of the LSMC approach to 1-year VaR implementation. By Zvi Wiener. Zero coupon bonds, forward ZCB's, forward rates, short rate, forward instantaneous rate, formulas The SDE for the Vasicek model. calibration and hedging issues and the pricing of the most common structured products. The theory is finally applied to some real-world data, and the aforementioned models are calibrated to fit as closely as possible to this data, with the goal of replicating the. , N = 10, seed = 777): np. In the present technical report we examine the main theoretical aspects in some models used in Portfolio credit risk. The relationship between the linear fit and the model parameters is given by rewriting these equations gives. About Press Copyright Contact us Creators Advertise Developers Terms Privacy Policy & Safety How YouTube works Test new features Press Copyright Contact us Creators. An illustrative example considering factors following a logistic distribution is presented. 27 Equation 47 is terming it Vasicek Model and on p. A trinomial interest rate tree is a discrete representation of the stochastic process for the short rate15. Hi all, I have to calibrate with matlab a model that calculates the prices of a down and out digital barrier option written on an underlying that follows a geometric brownian motion dynamics. Extensions of the Ho and Lee interest-rate model to the multinomial case. The above setup is nice if the user wants a little more control over the calculation event. The model is described and the sensitivity analysis with respect to changes in the parameters is performed. We place ourselves in the context of the Vasiček model, which is a famous affine model, see Filipović ( 2009 ) and Keller-Ressel et al. The square root diffusion process is widely used for modeling interest rates behaviour. The measurement and transition equations so obtained represent the state space formulation of our model and open the way for us to present the set of Kalman filter equations associated with the. It takes into consideration few parameters (strike and volatility). 2 Unit 1: Model Calibration Unit 1: Video Transcript. A good example of this is a chart on the Wikipedia page for the Vasicek model. GeometricBrownianMotion (mu = 0. For the moment, the ShortRateModels::Model class defines the short-rate dynamics with stochastic equations of the type. Effectively, the model becomes a more. Designed to implement the Vasicek interest rate model. Detailed Description. QuantLib_HestonModel (3) - Heston model for the stochastic volatility of an asset. If you read it, the. That makes fitting the volatility structure. • In deterministic models, the output of the model is fully determined by the parameter values and the initial conditions. Single-factor Hull-White (extended Vasicek) model class. Financial Engineering Masters degree obtained at the University of Pretoria (cum. append (rates [-1] + dr) return range (N + 1), rates: if __name__ == "__main__": x, y = vasicek (0. Versions latest Downloads pdf html epub On Read the Docs Project Home Builds Free document hosting provided by Read the Docs. Option sur obligation zéro-coupon dans le modèle de Vasicek. This course covers the Theory and principles behind Fixed Income Securities and Fixed Income Derivatives modeling. This concept of averaging out independent errors using regression is powerful, particularly when the liability is a function of many risk factors (in statistical jargon, when the fitting space has high dimension). Everything is in Matlab. The theory is finally applied to some real-world data, and the aforementioned models are calibrated to fit as closely as possible to this data, with the goal of replicating the. The same set of parameter values and initial conditions will lead to an ensemble of different outputs. We then discuss how to leverage alternative data sources for credit risk modeling and do feature engineering. 2 A scenarios of a the Ornstein-Uhlenbeck process. This paper explores the ability of the Machine Learning (ML) techniques to calibrate models that replicate the outputs of the Vasicek credit risk model. It can have either a standard or a non-standard structure. This paper does not develop a new method but shows how to implement the algorithm behind the Hull- White interest rate model. It has been a while since my last post series, today is the first post in a mini-series on the fantastic QuantLib-Python library, where I will present an investigation of various instruments, pricing models and calibration choices, along with the code to generate them yourselves. modèle de Vasicek est le plus classique, il repose sur la dynamique suivante : t t t dr a b r dt dW. To this end, we analysed how these will be affected by the switch from EONIA to ESTER discounting under different market scenarios. Vasicek calibration. My assignment was to estimate the parameters in the CIR model using the historical data collected by Rabobank. I am currently studying about Vasicek model and I am trying to understand how one can calibrate the model in order to fit to the reality. Vasicek approach to allow the models to fit the initial term structure. Note that the notes use a Python minimization function. However, discrimination and calibration will only provide information on how well a model fits the data. model become a forward-looking, practical and dynamic method compared the conventional methods. The final module focuses on real-world model calibration techniques used by practitioners to estimate interest rate processes and derive prices of different financial products. We will illustrate several regression techniques used for interest rate model calibration and end the module by covering the Vasicek and CIR model for pricing fixed. The model is described as: d r t = ( θ ( t) − a r t) d t + σ d W t. The Cox-Ingersoll-Ross, CIR, interest rate model is a one-factor, equilibrium interest rate model. Vasicek and BeyondStochastic Interest RatesInterest Rate Derivatives Stochastic Analysis PerspectiveReal Options ValuationMastering Python for particularly useful for its description of real-life model usage and for its expansive discussion of model calibration, approximation theory, and numerical. vasicek: Simulates the values and yields of zero-coupon bonds when the data. It takes into consideration few parameters (strike and volatility). Python Vasicek model calibration using scipy optimize. They are based on Calibrating the Ornstein-Uhlenbeck (Vasicek) model at www. Moreover, for obtaining these values one way is to fit to the model the current zero coupon bond curve and. Calibration of short rate models in Excel with C#, Solver Foundation and Excel-DNA. Hull and White (1990) introduce time-inhomogeneous extensions capable of fitting any given initial forward rate curve and similar extensions for short rate models are in Bjork and Hyll (2000) and Brigo and. • The Vasicek model is the same as the intensity model with a Gaussian copula, identical default probabilities and a large number of names. vasicek: Simulates the values and yields of zero-coupon bonds when the data. normalvariate(0, 1). 2 (Short rate in the Vasicek model). , J generates the above matrix and η k is normally distributed with E(η k) = 0 and. Extensions of the Ho and Lee interest-rate model to the multinomial case. 3 Points, Thursdays, 5:10-7:00PM, Jonathan Goodman. HJM Model for Interest Rates and Credit Interest Rate Models Advanced. modèle de Vasicek est le plus classique, il repose sur la dynamique suivante : t t t dr a b r dt dW. An illustrative example considering factors following a logistic distribution is presented. 57\%$ for an QAE model with a depth of 4 and compression 1. If the aluev of the assets are less than the outstanding debt at time T, then a default is deemed. and Stein (2006)). ( Numeric_Finance is the name of the docker container) If you are not using docker, just copy in the shell the following: cd functions/cython python setup. Markov Chain Approximations For Term Structure Models. Designed to implement the Vasicek interest rate model. By change of notation, we have for j = 1,2,. %% Simulating Vasicek Euler Scheme S 0 = 0. This model measures the loss distribution of a portfolio made up of loans that can be exposed to multiple systemic factors and it is widely used in the financial sector and by regulators. Sep 19, 2019 · 安装步骤如下: (1)解压下载的工具箱,将其复制到matlab的toolbox文件夹下 (2)建立搜索路径,matlab - >设置路径 - >添加并包含子文件夹 - >找到在toolbox目录下的时频分析工具箱 - >保存 - >关闭 第二步为安装EMD工具箱,这个就简单一些了,下载完毕直接运行. [email protected] We implement PCA and a Vasicek short-rate model for swap rates, treasury rates and the spread between these two. This paper explores the ability of the Machine Learning (ML) techniques to calibrate models that replicate the outputs of the Vasicek credit risk model. Rabobank encountered this problem. Vasicek Model Definition 4. It has extensive use to determine bond, option,. Vasicek model class. The path simulation is based on the the Euler Maruyana Scheme for Vasicek model which follows. Combining theory with practice, this course walks you through the fundamentals of credit risk modeling and shows you how to implement these concepts using both R and Python software, with helpful code provided. (1999) show that generalized Vasicek model captures the hump in the volatility of forward rates, leads to signi-cant improvements on pricing. Nature of risk and risk measures. 05, sigma=0. In finance, the Vasicek model is a mathematical model describing the evolution of interest rates. Calibrated models are simulated and counterparty credit risk measures are computed for a portfolio of interest rate instruments. The drift factor, (), is exactly the same as in the Vasicek model. Florence has 4 jobs listed on their profile. Let 0 ≤ s ≤ t. 2) Require the user to do a full calculation (Ctrl-Alt-F9) once after the workbook is open. papers like Vasicek (1977) and Cox, Ingersoll, and Ross (1985). Processus d'Ornstein Uhlenbeck. The theory is finally applied to some real-world data, and the aforementioned models are calibrated to fit as closely as possible to this data, with the goal of replicating the. This comprehensive guide covers various aspects of model building for fixed income securities and derivatives. Bayesian Finance - Notebook PyMC3 implementation. • A variant of this approach exists where TTC PDs are estimated at loan level but exclude macroeconomic variables. Yield curve-Wikipedia. 5 Correlation induced by an underlying factor 1. The model is described and the sensitivity analysis with respect to changes in the parameters is performed. Vasicek Model Project P a g e 18 4. 4 Vasicek Model. where a is the mean reversion constant, σ is the volatility parameter. 1 (Short-rate dynamics in the Vasicek model). example is the Merton's structural model, where assets are modelled as a geometric Brownian motion and debt as a single outstanding bond with a certain face aluev at a given maturity time 1. julia bayesian-inference sde mcmc stochastic-differential-equations diffusion ornstein-uhlenbeck brownian-motion levy-process vasicek diffusion-processes simulating-diffusion-bridges gamma-process. Reduced form models including Hazard rates and calibration, Exponential models of defaults and Contagion models. calibration and hedging issues and the pricing of the most common structured products. Main challenges. Links The official desription of the course. Furthermore, by using our previous framework, we solve the closed-form asymptotic for the model's rst passage time;. This is the first post in a multipart series on credit risk models. Impacts of Brexit on UK and France Wholesale credit portfolios: - Contribution to redevelopment of the French Corporate and SME EAD model: creation of SAS macros for automatic selection of drivers, analysis of features of the portfolio, segmentation and calibration of the model and various other ad-hoc analyses. after that, paste the following code into the shell: docker exec -it Numeric_Finance bash cd work/functions/cython python setup. It is an open-sourced library that can be used in a variety of financial applications, such as modeling, trading, evaluation and risk management. , N = 10, seed = 777): np. In the present technical report we examine the main theoretical aspects in some models used in Portfolio credit risk. The effect of partitioning the available market data into sub-samples with an appropriately chosen probability distribution is twofold: (1) to improve the calibration of the Vasicek/CIR model's parameters in order to capture all the statistically significant changes of variance in market spot. This framework (corresponding to the ql/ShortRateModels directory) implements some single-factor and two-factor short rate models. An investigation into rates modelling: PCA and Vasicek models. Complete Algorithm of Calibration with Vasicek Model using Term-Structure Dynamics over Time. She was able to grasp many very complex valuation concepts within a short amount of time, and because of this (and her natural charm/wit) I greatly enjoyed mentoring her. This is needed to determine a, b, and sigma in the model. Ross as an extension of the Vasicek model. We investigate maximum likelihood estimation of the square root process (CIR process) for interest rate time series. The user can choose between 4 di˙erent parametric models: Vasicek model (Vasicek(1977)), CIR model (Cox, Ingersoll Jr, and Ross(2005)), Nelson & Siegel model (Nelson and Siegel(1987)) or Svensson model (Svensson(1994)). 05, sigma=0. Translating MATLAB to Python. [email protected] The Vasicek model is a popular one-factor model that derives the limiting form of the portfolio loss. Option sur obligation à coupon. View Florence Fita Yabana's profile on LinkedIn, the world's largest professional community. The user can download the fitted spot and zero-coupon bond term structures. Calibration of short rate models in Excel with C#, Solver Foundation and Excel-DNA. We implement PCA and a Vasicek short-rate model for swap rates, treasury rates and the spread between these two. previous contents next. This paper does not develop a new method but shows how to implement the algorithm behind the Hull- White interest rate model. • Under this approach, TTC PDs are determined by utilizing macroeconomic variables and non -cyclical risk drivers to predict default rates over an economic cycle. One factor in that it models the short - term interest rate and equilibrium in that it uses assumptions about various economic variables (e. Versions latest Downloads pdf html epub On Read the Docs Project Home Builds Free document hosting provided by Read the Docs. Background information Calibration framework. rvasicek returns a (n, m+1) matrix of n path of the Vasicek process. 2 A scenarios of a the Ornstein-Uhlenbeck process. ( sqrt ( d e l t a _ t ). The models implemented in this library are widely used by practitionners. the Hull-White model can be characterized as an extension of the Vasicek model with a time-dependent reversion level of a q (t) at rate a14. 6 Correlated processes of obligor's asset value log-returns. Vasicek is a mean reverting short term interest rate model. Added missing Hong Kong holiday (thanks to GitHub user CarrieMY). For calibration I used dt (delta in time between two observations) = 1/252. Real Estate Property Fraud - Unsupervised fraud detection model that can identify likely candidates of fraud. cir: Yields and maturities simulated from the CIR model. Journal of Computational and Applied Mathematics 375 , 112796. Montreal, Canada Area. ˇ Affine term structure models were then and remain the workhorse model classes thanks to their richness and tractability. For the moment, the ShortRateModels::Model class defines the short-rate dynamics with stochastic equations of the type. In the Vasicek model, the short rate is assumed to satisfy the stochastic differential equation dr(t)=k(θ −r(t))dt+σdW(t), where k,θ,σ >0andW is a Brownian motion under the risk-neutral measure. Examples EquityOption. 05, sigma = 0. University of Essex, CCFEA (Centre for Computational Finance and Economic Agents), Graduate Student. In financial mathematics, the Ho-Lee model is a short-rate model widely used in the pricing of bond options, swaptions and other interest rate derivatives, and in modeling future interest rates. Spotafile Supplier. The example below, with 10,000 daily scenarios (2,520,000 values) took just 160 milliseconds to run!. Behavioural Economics - Behavioural Economics and Finance Python Notebooks. ) exploratory data analysis. 3) Configure cell dependencies and triggers such that after subsequent updates an incremental calculation (F9) is sufficient. scenarios method. after installing with npm, pass --experimental-modules to node to use ESM javascript file. Nature of risk and risk measures. We place ourselves in the context of the Vasiček model, which is a famous affine model, see Filipović ( 2009 ) and Keller-Ressel et al. n_steps: the number of timesteps for each scenario. \\ A relatively large fidelity measure was acquired with the developed model, through testing on the Iris data set; A mean fidelity topping at $97. 3 An empirical portfolio loss distribution 1. By the time we are done with this series, you should be able to calculate the probability of default for Barclays Bank (and if you really want, to calculate it for 4 other banks in the BBA USD LIBOR Panel). First, we analytically approximate standard errors for value-at-risk and expected. Definition at line 42 of file vasicek. Vasicek model class. The measurement and transition equations so obtained represent the state space formulation of our model and open the way for us to present the set of Kalman filter equations associated with the. mean 0, variance Δ t ). 2 A scenarios of a the Ornstein-Uhlenbeck process. 2) Require the user to do a full calculation (Ctrl-Alt-F9) once after the workbook is open. - Simulation of the short rate and built Yield Curves. with After then, Vasicek and McQuown improved the Merton Model in 1984(Vasicek, 1984). Calibration of the Vasicek Model: An Step by Step Guide Victor Bernal A. Effectively, the model becomes a more. HJM Model for Interest Rates and Credit Interest Rate Models Advanced. It was introduced in 1985 by John C. Dullmann and Uhrig‐Homburg (2000) use the Nelson‐Siegel model to describe the yield curves of Deutsche Mark‐. The square root diffusion process is widely used for modeling interest rates behaviour. The final module focuses on real-world model calibration techniques used by practitioners to estimate interest rate processes and derive prices of different financial products. Vasicek model calibration. We introduce the well-known Vasicek model, the large homogeneous portfolios or Vasicek distribution and their corresponding generalizations. The model is described as: d r t = ( θ ( t) − a r t) d t + σ d W t. In order to find it we utilize the method of variations of constants. That could be explain by the fact that the Vasicek Model is a simple model, with only three parameters and one driving factor ( ). cir: Yields and maturities simulated from the CIR model. Profils de la courbe des taux dans le modèle de Vasicek. CHAPTER 4 One-Factor Short-Rate Models 4. The final module focuses on real-world model calibration techniques used by practitioners to estimate interest rate processes and derive prices of different financial products. G2++ MODEL 35 Theorem 6. We start by reviewing the Basel and IFRS 9 regulation. 2: Zero curves. In finance, the Vasicek model is a mathematical model describing the evolution of interest rates. It helps us to estimate the chances that customer would make payments on time or would remain solvent during the period of mortgage. n_scenarios: the number of scenarios you want to generate. It is a type of one-factor short-rate model as it describes interest rate movements as driven by only one source of market risk. Background information Calibration framework. An illustrative example considering factors following a logistic distribution is presented. with After then, Vasicek and McQuown improved the Merton Model in 1984(Vasicek, 1984). • The volatility structure is given by the market. Added control variate based on asymptotic expansion for the Heston model (thanks to Klaus Spanderen). Of course, discrimination and calibration can also be used to validate PD models for loans to individuals. - Model Overlay and Trade Capture Overlay - IRC: Recovery Rate Parameter Calibration and Simulation - Option Based Security Financing (Structured Asset Collateral) Stress Exposure Model • Developed codes in Python, C#/C++ and Excel VBA for replication, benchmarking, stress testing and sensitivity analyses. Hull-White model was one of the first practical exogenous models that attempted to fit to the market interest rate term structures. That could be explain by the fact that the Vasicek Model is a simple model, with only three parameters and one driving factor ( ). Tube Calibration Examples. make install. Wade is a South African male, primarily contracting for EY in the Actuarial, Quants and Data Science space. - Calibration of the Vasicek model and interest rate simulation via VBA (Euler's Method) for Interest Rate Risk in the Banking Book purposes (IRRBB) In Quantitative Model Validation: - Validating a Probability of Default model for a retail portfolio using SAS Show more Show less. HJM model, or strictly speaking the HJM framework, is a general model en-vironment and incorporates many previously developed models like the a-V sicek model ( 1977) (Vasicek 1977) or the Hull-White model ( 1990) (Hull & White1990). By change of notation, we have for j = 1,2,. 2043-001 Scientific Computing. Macroeconomic time series modeling (ARIMA) for systemic risk under ASRF (Merton-Vasicek. QuantLib_HestonModel (3) - Heston model for the stochastic volatility of an asset. That makes fitting the volatility structure. This paper explores the ability of the Machine Learning (ML) techniques to calibrate models that replicate the outputs of the Vasicek credit risk model. Effectively, the model becomes a more. Of course, discrimination and calibration can also be used to validate PD models for loans to individuals. The model is described and the sensitivity analysis with respect to changes in the parameters is performed. The measurement and transition equations so obtained represent the state space formulation of our model and open the way for us to present the set of Kalman filter equations associated with the. 387 Equation 4 is terming the process Geometric Mean Reversion (GMR) and Equation 5 the exponential version Exponential Mean Reversion (EMR), Paper 2 p. In finance, the Vasicek model is a mathematical model describing the evolution of interest rates. An investigation into rates modelling: PCA and Vasicek models. 1 documentation. Combining theory with practice, this course walks you through the fundamentals of credit risk modeling and shows you how to implement these concepts using both R and Python software, with helpful code provided. Some Python scripts for analyzing. The Hull-White model is an interest rate derivatives pricing model. Interpretation of the mean reversion. Usually, N increases with the dimension of ϑ. for example here we make 6 scenarios because we have 6 months. LSM cannot be used to estimate the parameters in the CIR model. My thanks to everyone in the QuantLib team who have been supporting and extending this library now for most of 20. Definition at line 42 of file vasicek. My assignment was to estimate the parameters in the CIR model using the historical data collected by Rabobank. Vasicek Model | Python Fiddle. This model is simple enough to be understood quite easily, and thanks to properties of the normal distribution and log-normal distributions it relies on, easily manageable. Pricing and Simulating in Python Zero Coupon Bonds with Vasicek and Cox Ingersoll Ross short term interest rate modes - GitHub - dpicone1/Vasicek_CIR_HoLee_HullWhite_Models_Python: Pricing and Simulating in Python Zero Coupon Bonds with Vasicek and Cox Ingersoll Ross short term interest rate modes. This is done by rst calibrating a Vasicek short rate model and then deriving models for the bank's deposit volume and deposit rate using multiple regression. Calcul des taux zéro-coupon et forward dans le modèle de Vasicek. The effect of partitioning the available market data into sub-samples with an appropriately chosen probability distribution is twofold: (1) to improve the calibration of the Vasicek/CIR model's parameters in order to capture all the statistically significant changes of variance in market spot. Python Vasicek model calibration using scipy optimize. The final module focuses on real-world model calibration techniques used by practitioners to estimate interest rate processes and derive prices of different financial products. The earliest attempt to model interest rates was published by Vasicek (1977), whereby the short rate was used as the factor driving the entire yield curve. Apr 16, 2020 · The task of learning in Gaussian processes simplifies to determining suitable properties for the covariance and mean function, which will determine the calibration of our model. cir: Simulates the values and yields of zero-coupon bonds when the bond. Note that the first value has no density. Updated on Oct 16, 2020. See full list on pypi. Calibration. Vasicek): Evaluation ofBermudanswitha tree: piecewiseconstantHW shortrate volatilitiesneeded - modified Excel (*. In an ideal setting, we. The Two-Factor Hull-White Model : Pricing and Calibration of Interest Rates Derivatives Arnaud Blanchard Under the supervision of Filip Lindskog. It was introduced in 1985 by John C. Vasicek Model Project P a g e 18 4. It was recently adopted to model nitrous oxide emission from soil by Pedersen and to model the evolutionary rate variation across sites in molecular evolution. The 1952-2004 US data. Modèle de Cox. The relationship between the linear fit and the model parameters is given by rewriting these equations gives. In finance, the Vasicek model is a mathematical model describing the evolution of interest rates. Reduced form models including Hazard rates and calibration, Exponential models of defaults and Contagion models. Vasicek Model | Python Fiddle. In the Vasicek model, the short rate is assumed to satisfy the stochastic differential equation dr(t)=k(θ −r(t))dt+σdW(t), where k,θ,σ >0andW is a Brownian motion under the risk-neutral measure. They are based on Calibrating the Ornstein-Uhlenbeck (Vasicek) model at www. Calibration of a local volatility surface to a sparse grid of options. The Two-Factor Hull-White Model : Pricing and Calibration of Interest Rates Derivatives Arnaud Blanchard Under the supervision of Filip Lindskog. Wade is a South African male, primarily contracting for EY in the Actuarial, Quants and Data Science space. This model will allow calculating different risk measures such as, for example, the expected loss (EL), the value at risk (VaR) and the Expected Shortfall (ES). An investigation into rates modelling: PCA and Vasicek models. The following models are available: Throughout this. The scenarios start at S(0)=3 and reverting to a long term mean of 1. In this course, students learn how to do advanced credit risk modeling. Since introduced, the standard models have been Vasicek (1977) and Hull-White (1990), until the introduction of the SABR model (Hagan et al. In this paper we calibrate the Vasicek interest rate model under the risk neutral measure by learning the model parameters using Gaussian processes for machine learning regression. LSM cannot be used to estimate the parameters in the CIR model. - Simulation of the short rate and built Yield Curves. We will illustrate several regression techniques used for interest rate model calibration and end the module by covering the Vasicek and CIR model for pricing fixed. Nature of risk and risk measures. They improved the formula of the distance to default, and built a default information database which includes more than 3400 listed compa-. Vasicek is a mean reverting short term interest rate model. (1999) show that generalized Vasicek model captures the hump in the volatility of forward rates, leads to signi-cant improvements on pricing. • A variant of this approach exists where TTC PDs are estimated at loan level but exclude macroeconomic variables. Translating GARCH (1,1) model from MATLAB to Python using scipy package. DepartmentofMathematicsandStatistics. vasicek: Yields and maturities simulated from the Vasicek model. Usually, N increases with the dimension of ϑ. In particular the Least Squares Method, the Maximum Likelihood Method and the Long. Quantitative projects implementation (Java) in Webfolio (portfolio management platform). • Risk indicators calculation. Sep 19, 2019 · 安装步骤如下: (1)解压下载的工具箱,将其复制到matlab的toolbox文件夹下 (2)建立搜索路径,matlab - >设置路径 - >添加并包含子文件夹 - >找到在toolbox目录下的时频分析工具箱 - >保存 - >关闭 第二步为安装EMD工具箱,这个就简单一些了,下载完毕直接运行. we will be going through a calibration for the Vasicek and the Heston model. Note that the notes use a Python minimization function. Then of course, you will be ask about the draw-backs of the model, how it behaves on the limit, how is the PDF of the simulated rate, and how do you check that the simulation is correct. Model Fitting to Market Term Structure Tutorial File: Term Structure Fitting Tutorial. Rabobank encountered this problem. cir: Yields and maturities simulated from the CIR model. 27 Equation 47 is terming it Vasicek Model and on p. Interest Rate Term Structure Models: Introductory ConceptsParameter estimation of Vasicek interest rate model and its limitation Bond Pricing with Hull White Model in Python Parameter Calibration for Cox Ingersoll Ross Model Interest-rate Risk for Banks Part 1/2 Managing Interest Rate Risk - Income Gap Analysis 24. This time, I wanted to present one possible solution for calibrating one-factor short interest rate model to market data. feller: Estimates the parameters of the Feller process. Things will be pretty much the same this year. (1985) eliminate the main drawback of the Vasicek model, that is a non-null probability of negative values. Vasicek model’s tractability property in bond pricing and the model’s interesting stochastic characteristics make this classical model quite pop-ular. Interest rates provide a fairly good standard for applying PCA and Vasicek stochastic modelling, and getting a good feel for the characteristics of these models. 1) where µ is the mean rate of return on the assets and σ is the asset volatility. 7 (Zero-coupon bond in the calibrated G2++ model). Calibration of a local volatility surface to a sparse grid of options. We then discuss how to leverage alternative data sources for credit risk modeling and do feature engineering. Main challenges. 3 An empirical portfolio loss distribution 1. About An associate with a demonstrated growth in the financial services industry (quantitative research). pdf Excel Template File: Term Structure Fitting Dataset. By Zvi Wiener. I thought best to use scipy. In this paper we calibrate the Vasicek interest rate model under the risk neutral measure by learning the model parameters using Gaussian processes for machine learning regression. Various statistical techniques can be employed to develop PD models. Throughout the course, the instructor (s) extensively report on their recent scientific findings and international consulting experience. randn ( 1 ) ) ; i = i + 1 ; end 2 Least Squares Calibration. This time, I wanted to present one possible solution for calibrating one-factor short interest rate model to market data. One factor in that it models the short - term interest rate and equilibrium in that it uses assumptions about various economic variables (e. She was able to grasp many very complex valuation concepts within a short amount of time, and because of this (and her natural charm/wit) I greatly enjoyed mentoring her. previous contents next. Impacts of Brexit on UK and France Wholesale credit portfolios: - Contribution to redevelopment of the French Corporate and SME EAD model: creation of SAS macros for automatic selection of drivers, analysis of features of the portfolio, segmentation and calibration of the model and various other ad-hoc analyses. This is followed by an overview of variable selection and profit driven performance evaluation. Spotafile is one of its kind marketplace for business tools, documents, videos or any file format. Named after the Brownian Bridge. The Hull-White model is an interest rate derivatives pricing model. The example below, with 10,000 daily scenarios (2,520,000 values) took just 160 milliseconds to run!. It takes into consideration few parameters (strike and volatility). scenarios method. This model is simple enough to be understood quite easily, and thanks to properties of the normal distribution and log-normal distributions it relies on, easily manageable. Pricing Of Non-Callable And Callable Cashflows. A year's cDR is drawn from the Vasicek Distribution. All these options are explored and used inside the examples. This concept of averaging out independent errors using regression is powerful, particularly when the liability is a function of many risk factors (in statistical jargon, when the fitting space has high dimension). Vasicek model. Calibrated models are simulated and counterparty credit risk measures are computed for a portfolio of interest rate instruments. Montreal, Canada Area. The one-factor Hull-White model is: t t HW drt =(θ 1(t) −ar)dt +σdB (1. the formula is as follows: r i + 1 = a ( b − r i) Δ t + σ Δ t z i. where a is the mean reversion constant, σ is the volatility parameter. I expect that these objectives may shift or expand as I continue working on the library. NTRODUCTION. The volume model and the deposit rate model are used to determine the liquidity and interest rate risk, which is done separately. cd to the Python-Heston-Option-Pricer directory, type following command into terminal. Parameter estimation of Vasicek interest rate model and its limitation Bond Pricing with Hull White Model in Python Parameter Calibration for Cox Ingersoll Ross Model Interest-rate Risk for Banks Part 1/2 Managing Interest Rate Risk - Income Gap Analysis 24. Extensions of the Ho and Lee interest-rate model to the multinomial case. In the case of the Hull-White model, there are only a few pieces of information required: a discount factor, a local volatility and a term volatility. Here's a python implementation written by Pong et al The mathematical model for Vasicek's work was given by an Ornstein-Uhlenbeck process, but has since been discredited because the model predicts a positive probability that the short rate becomes negative and is inflexible in creating yield curves of different shapes. io Find an R package R language docs Run R in your browser. Implied zero coupon yield curve from the parameters estimated by our calibration procedure. We implement PCA and a Vasicek short-rate model for swap rates, treasury rates and the spread between these two. Efficient Calibration of Trinomial Trees for One-Factor Short Rate Models. This class implements the Vasicek model defined by $dr_t = a(b - r_t)dt + \sigma dW_t ,$ where $$a$$, $$b$$ and $$\sigma$$ are constants; a risk premium $$\lambda$$ can also be specified. Source: Moody's Analytics. cir: Estimates the parameters of the CIR model. One factor in that it models the short - term interest rate and equilibrium in that it uses assumptions about various economic variables (e. I am currently studying about Vasicek model and I am trying to understand how one can calibrate the model in order to fit to the reality. The Hull-White model is an interest rate derivatives pricing model. Rough volatility with Python As we will see, even without proper calibration (i. X(t k) = Φ(ψ)X(t k − 1) +η(t k). Follow @python_fiddle. interpret model outcomes; Nevertheless, there is still work to be done. We will illustrate several regression techniques used for interest rate model calibration and end the module by covering the Vasicek and CIR model for pricing fixed. 3 Points, Thursdays, 5:10-7:00PM, Jonathan Goodman. Not pretty code. The role of a credit risk model is to take as input the conditions of the general economy and those of the specific firm in question, and generate as output a credit spread. This platform is the best place to trade knowledge between individuals or businesses. Summary of the LSMC approach to 1-year VaR implementation. If the aluev of the assets are less than the outstanding debt at time T, then a default is deemed. The drift factor, (), is exactly the same as in the Vasicek model. Course Abstract. The C++ implementation of the Hull-White model roughly follows the two-stage procedure for. pyplot as plt: plt. PD models are broadly […]. My assignment was to estimate the parameters in the CIR model using the historical data collected by Rabobank. 2: Zero curves. - Machine Learning model for credit scoring, cash collection, decision-making, time-series forecasting • Excellent programming skill (Python, R, MatLab and VBA) • Keen on how real life problems can be solved with mathematical models. Implied zero coupon yield curve from the parameters estimated by our calibration procedure. Vasicek Model Definition 4. A good example of this is a chart on the Wikipedia page for the Vasicek model. In this paper a review of short rate’s stochastic properties relevant to the derivation of the closed-form solution of the bond price within the Vasicek framework is presented. This is followed by an overview of variable selection and profit driven performance evaluation. It is a type of one-factor short-rate model as it describes interest rate movements as driven by only one source of market risk. They are based on Calibrating the Ornstein-Uhlenbeck (Vasicek) model at www. Introduction to Python and Subversion. previous contents next. In the Vasicek model, the short rate is assumed to satisfy the stochastic differential equation dr(t)=k(θ −r(t))dt+σdW(t), where k,θ,σ >0andW is a Brownian motion under the risk-neutral measure. An appropriate model to evaluate bonds, and options on interest (like options on interest rate swap, or swaptions) should incorporate the dynamic of the yield curve term structure. This is needed to determine a, b, and sigma in the model. Complete Algorithm of Calibration with Vasicek Model using Term-Structure Dynamics over Time. Writing Python prototypes to apply Machine Learning option prices and can be used for calibration of a Libor market. The earliest attempt to model interest rates was published by Vasicek (1977), whereby the short rate was used as the factor driving the entire yield curve. Probability of Default is the one of the key metric used to identify the creditworthiness of a customer. (Vasicek) model By Thijs van den Berg | Published: May 28, 2011 We will use this data to explain the model calibration steps. \\ A relatively large fidelity measure was acquired with the developed model, through testing on the Iris data set; A mean fidelity topping at$97. My thanks to everyone in the QuantLib team who have been supporting and extending this library now for most of 20. cir: Yields and maturities simulated from the CIR model. Using several short-rate models such as the Vasicek, Hull-White one-factor and the G2++ model in a multi-curve setup, we simulated short rate paths in order to price interest rate swaps and swaptions. Calibrated models are simulated and counterparty credit risk measures are computed for a portfolio of interest rate instruments. It is an underlying process of the well-known Cox-Ingersoll-Ross term structure model (1985). To estimate my model parameters I am fitting a regression on the discrete data that gives me the following model: r t + 1 − r t = ( a − b r t) Δ t + σ Z t. exp((theta-(sigma2)/(2(kappa2))) * (B-tau) - (sigma2)/(4*kappa)(B2)) Vasicek = Anp. Added control variate based on asymptotic expansion for the Heston model (thanks to Klaus Spanderen). Single factor and multifactor models are calibrated to both historical data and current market data using optimization solvers. CIR Interest Rate Model. The R code for this post, complete with documented functions, is located on my GitHub here. Bayesian Finance - Notebook PyMC3 implementation. Python Fiddle Python Cloud IDE. The Gaussian asymptotic single factor model of portfolio credit losses (ASFM), developed by Vasicek (1987), Finger (1999), Schönbucher (2001), Gordy (2003), and others, provides an approximation for the loss rate distribution for a credit portfolio in which the. The R code for this post, complete with documented functions, is located on my GitHub here. the Vasicek loan portfolio value model that is used by firms in their own stress testing and is the basis of the Basel II risk weight formula. This model measures the loss distribution of a portfolio made up of loans that can be exposed to multiple systemic factors and it is widely used in the financial sector and by regulators. Implied zero coupon yield curve from the parameters estimated by our calibration procedure. [email protected] Introduction to Python and Subversion. An investigation into rates modelling: PCA and Vasicek models. This is needed to determine a, b, and sigma in the model. aBlack, Derman, and Toy (BDT) (1990), but. SPX smiles in the rBergomi model¶ In Figures 9 and 10, we show how well a rBergomi model simulation with guessed parameters fits the SPX option market as of February 4. They are used to represent the fundamental risk factors driving uncertainty (e. Background information Calibration framework. (ii)There is one volatility parameter only available for calibration (two, if you count the mean reversion rate). In this short post, we give the code snippets for both the least-square method (LS) and the maximum likelihood estimation (MLE). n_scenarios: the number of scenarios you want to generate. QuantLib_HimalayaOption (3) - Himalaya option. model become a forward-looking, practical and dynamic method compared the conventional methods. • A variant of this approach exists where TTC PDs are estimated at loan level but exclude macroeconomic variables. R-code for Vasicek estimation; more commented than usual. The form of the model I am using is: d r t = ( a − b r t) d t + σ d W. Profils de la courbe des taux dans le modèle de Vasicek. The same set of parameter values and initial conditions will lead to an ensemble of different outputs. where a is the mean reversion constant, σ is the volatility parameter. We start by reviewing the Basel and IFRS 9 regulation. A Generalized Single Factor Model of Portfolio Credit Risk. Iterative procedure for calibration of the LGM model;. after that, paste the following code into the shell: docker exec -it Numeric_Finance bash cd work/functions/cython python setup. Finally, the future value of the interest rate is normally distributed with the distribution. ( 2018 ) for a guide to the. I also tried fmin from the same package. Effectively, the model becomes a more. 57\%\$ for an QAE model with a depth of 4 and compression 1. This paper is devoted to the parameterization of correlations in the Vasicek credit portfolio model. 3890 Table 1 Data used for the model calibration in the example. 2 The Portfolio Loss Distribution 1. The same set of parameter values and initial conditions will lead to an ensemble of different outputs. Often you can generate 10,000 scenarios in fractions of a second. We then discuss how to leverage alternative data sources for credit risk modeling and do feature engineering. Detailed Description. Macroeconomic time series modeling (ARIMA) for systemic risk under ASRF (Merton-Vasicek. Moreover, for obtaining these values one way is to fit to the model the current zero coupon bond curve and. Vasicek interest rate model is quite popular among the practitioners due to the interpretability of its parameters and the parsimonious setup. 05, sigma=0. Vasicek Model Definition 4. The R code for this post, complete with documented functions, is located on my GitHub here. In particular the Least Squares Method, the Maximum Likelihood Method and the Long. , the specific simulated parameter ϑ n that minimizes the distance between model produced prices. Rabobank encountered this problem. Note that the first value has no density. G2++ MODEL 35 Theorem 6. |
# Math Help - Assistance is needed.
1. ## Assistance is needed.
n
2=n+1
can you help me solve this?
2. Originally Posted by gedprep
n
2=n+1
can you help me solve this?
This is an addition equation because a value has been added to the variable n.
To isolate the variable (get the variable by itself on one side of the equation),
you must perform the inverse of addition (subtraction) on the value.
$2=n+1$
Subtract 1 from both sides
$2-1=n+1-1$
$1=n$
$\boxed{n=1}$
3. ## I made a mistake.
I am sorry but tis was 2 over n equals n plus 1
4. Originally Posted by gedprep
I am sorry but tis was 2 over n equals n plus 1
$\frac{2}{n}=n+1$
First, multiply each term by n
$2=n^2+n$
Now, set the equation = 0
$n^2+n-2=0$
Now, factor the trinomial
$(n+2)(n-1)=0$
Using the zero product property that says "If ab=0, then a=0 or b=0",
$n+2=0 \ \ or \ \ n-1=0$
$n=-2 \ \ or \ \ n=1$
5. ## That error is my fault and I corrected it already. Check it out
I am sorry but it is n over 2 equals n plus 1
not 2 over n
6. Originally Posted by gedprep
I am sorry but it is n over 2 equals n plus 1
not 2 over n
Third time's the charm...
$\frac{n}{2}=n+1$
Multiply all terms by 2 (in order to eliminate the fraction)
$n=2n+2$
Subtract 2n from each side.
$-n=2$
Multiply everything by -1.
$n=-2$
7. ## Re: Assistance is needed
why must you multiply the 1 as well as the number n?
8. Originally Posted by gedprep
why must you multiply the 1 as well as the number n?
One of the 10 commandants of math is:
Thou shalt do unto one side of an equation what thou doest to the other.
If I multiply 2 times one term in an equation, I must multiply 2 times every term in the equation.
Example:
$\frac{x}{2}+3=2x-1$
Multiple all terms by 2.
${\color{red}2}\left(\frac{x}{2}\right)+{\color{red }2}(3)={\color{red}2}(2x)-{\color{red}2}(1)$ |
# Tree¶
class Tree
A phylogenetic tree.
Some instance variables
• fName, if the Tree was read from a file
• name, the name of the tree.
• root, the root node
• nodes, a list of nodes.
• preOrder and postOrder, lists of node numbers
• recipWeight, the weight, if it exists, is usually 1/something, so the reciprocal looks nicer ...
• nexusSets, if it exists, a NexusSets object.
Properties
• taxNames, a list of names. Usually the order is important!
• data, a Data.Data object
• model, a Model.Model object
• nTax, the number of taxa
• nInternalNodes, the number of non-leaf nodes
The node method
You often will want to refer to a node in a tree. This can be done via its name, or its nodeNum, or as an object, via the method node(). For example, from a Tree t you can get node number 3 via:
n = t.node(3)
and you can get the node that is the parent of the Mastodon via:
n = t.node('Mastodon').parent
For many methods that require specifying a node, the method argument is nodeSpecifier, eg:
t.reRoot(23)
reRoots‘s the tree to node number 23.
Describe, draw, and get information about the tree
dump Print rubbish about self. draw Draw the tree to the screen. textDrawList tv Tree Viewer. btv Big Tree Viewer. isFullyBifurcating Returns True if the tree is fully bifurcating. taxSetIsASplit Asks whether a nexus taxset is a split in the tree. getAllLeafNames Returns a list of the leaf names of all children getChildrenNums Returns a list of nodeNums of children of the specified node. getDegree getLen Return the sum of all br.len’s. getNodeNumsAbove Gets a list of nodeNums, in postOrder, above nodeSpecifier. getPreAndPostOrderAbove Returns 2 lists of node numbers, preOrder and postOrder. getPreAndPostOrderAboveRoot Sets self.preOrder and self.postOrder. getSeqNumsAbove Gets a list of seqNums above nodeSpecifier. subTreeIsFullyBifurcating Is theNode and everything above it (or below it) bifurcating? summarizeModelThingsNNodes Summarize nNodes for all modelThings if isHet verifyIdentityWith For MCMC debugging.
Write
write This writes out the Newick tree description to sys.stdout. writeNewick Write the tree in Newick, aka Phylip, format. writeNexus Write the tree out in Nexus format, in a trees block. writePhylip Write the tree in Phylip or Newick format. tPickle Pickle self to a file with a ‘p4_tPickle’ suffix.
See also Trees methods p4.trees.Trees.writeNexus() and p4.trees.Trees.writeNewick() for doing trees by the bunch.
Iteration over the nodes
Sometimes you don’t want to just iterate over the self.nodes list, because after some manipulations a node might be in self.nodes but not actually in the tree; using these ‘iter’ methods takes care of that, skipping such nodes.
iterInternals Internal node generator. iterInternalsNoRoot Internal node generator, skipping the root. iterInternalsNoRootPostOrder Internal post order node generator, skipping the root. iterInternalsNoRootPreOrder Internal post order node generator, skipping the root. iterInternalsPostOrder Internal post order node generator. iterLeavesNoRoot Leaf node generator, skipping the root. iterLeavesPostOrder iterLeavesPreOrder iterNodes Node generator, in preOrder. iterNodesNoRoot Node generator, skipping the root. iterPostOrder Node generator. iterPreOrder Node generator. nextNode Get next node cycling around a hub node.
See also Node methods that do similar things starting from a given node.
Copy
dupe Duplicates self, but with no c-pointers. copyToTree dupeSubTree Makes and returns a new Tree object, duping part of self.
In combination with Data and Model
calcLogLike Calculate the likelihood of the tree, without optimization. optLogLike Calculate the likelihood of the tree, with optimization. simulate Simulate into the attached data. getSiteLikes Likelihoods, not log likes. getSiteRates Get posterior mean site rate, and gamma category. bigXSquaredSubM Calculate the X^2_m stat compStatFromCharFreqs Calculate a statistic from observed and model character frequencies. compoTestUsingSimulations Compositional homogeneity test using a null distribution from simulations. modelFitTests Do model fit tests on the data. modelSanityCheck Check that the tree, data, and model specs are good to go. simsForModelFitTests Do simulations for model fit tests. getEuclideanDistanceFromSelfDataToExpectedComposition Calculate the c_E stat between self.data and model expected composition.
Setting a model
newComp Make, attach, and return a new Comp object. newRMatrix Make, attach, and return a new RMatrix instance. newGdasrv setPInvar setRelRate setModelThing setModelThingsRandomly Place model things (semi-)randomly on the tree. setModelThingsNNodes Set nNodes for all modelThings summarizeModelThingsNNodes Summarize nNodes for all modelThings if isHet setNGammaCat setTextDrawSymbol
Tree manipulation
addLeaf Add a leaf to a tree. addNodeBetweenNodes Add a node between 2 exisiting nodes, which should be parent-child. addSibLeaf Add a leaf to a tree as a sibling, by specifying its parent. addSubTree Add a subtree to a tree. allBiRootedTrees Returns a Trees object containing all possible bi-rootings of self. collapseNode Collapse the specified node to make a polytomy, and remove it from the tree. ladderize Rotate nodes for a staircase effect. lineUpLeaves Make the leaves line up, as in a cladogram. nni Simple nearest-neighbor interchange. pruneSubTreeWithoutParent Remove and return a node, together with everything above it. randomSpr Do a random spr move. randomizeTopology reRoot Re-root the tree to the node described by the specifier. reconnectSubTreeWithoutParent Attach subtree stNode to the rest of the tree at newParent. removeEverythingExceptCladeAtNode Like it says. removeNode Remove a node, together with everything above it. removeAboveNode Remove everything above an internal node, making it a leaf, and so needing a new name. removeRoot Removes the root if self.root is mono- or bifurcating. renameForPhylip Rename with phylip-friendly short boring names. restoreDupeTaxa Restore previously removed duplicate taxa from a dict file. restoreNamesFromRenameForPhylip Given the dictionary file, restore proper names. rotateAround Rotate a clade around a node. spr Subtree pruning and reconnection. stripBrLens Sets all node.br.len’s to 0.1, the default in p4.
Misc
checkDupedTaxonNames checkSplitKeys checkTaxNames Check that all taxNames are in the tree, and vice versa. checkThatAllSelfNodesAreInTheTree Check that all self.nodes are actually part of the tree. inputTreesToSuperTreeDistances Return the topology distance between input trees and pruned self. makeSplitKeys Make long integer-valued split keys. readBipartitionsFromPaupLogFile Assigns support to the tree, from the PAUP bipartitions table. recalculateSplitKeysOfNodeFromChildren setNexusSets Set self.nexusSets from var.nexusSets. topologyDistance Compares the topology of self with tree2. tvTopologyCompare Graphically show topology differences. patristicDistanceMatrix Matrix of distances along tree path. inputTreesToSuperTreeDistances Return the topology distance between input trees and pruned self.
addLeaf(attachmentNode, taxName)
Add a leaf to a tree.
The leaf is added to the branch leading from the specified node. A new node is made on that branch, so actually 2 nodes are added to the tree. The new leaf node is returned.
addNodeBetweenNodes(specifier1, specifier2)
Add a node between 2 exisiting nodes, which should be parent-child.
The specifier can be a nodeNum, name, or node object.
Returns the new node object.
addSibLeaf(attachmentNode, taxName)
Add a leaf to a tree as a sibling, by specifying its parent.
The leaf is added so that its parent is the specified node (ie attachmentNode), adding the node as a rightmost child to that parent. The attachmentNode should not be a leaf – it must have children nodes, to which the new leaf can be added as a sibling.
The new node is returned.
addSubTree(selfNode, theSubTree, subTreeTaxNames=None)
Add a subtree to a tree.
The nodes from theSubTree are added to self.nodes, and theSubTree is deleted.
If subTreeTaxNames is provided, fine, but if not this method can find them. Providing them saves a bit of time, I assume.
allBiRootedTrees()
Returns a Trees object containing all possible bi-rootings of self.
Self should have a root node of degree > 2, but need not be fully resolved.
Self needs a taxNames.
bigXSquaredSubM(verbose=False)
Calculate the X^2_m stat
This can handle gaps and ambiguities.
Column zeros in the observed is not a problem with this stat, as we are dividing by the expected composition, and that comes from the model, which does not allow compositions with values of zero.
btv()
Big Tree Viewer. Show the tree in a gui window.
This is for looking at big trees. The viewer has 2 panels – one for an overall view of the whole tree, and one for a zoomed view, controlled by a selection rectangle on the whole tree view.
Needs Tkinter.
If you have nexus taxsets defined, you can show them.
calcLogLike(verbose=1, resetEmpiricalComps=True)
Calculate the likelihood of the tree, without optimization.
checkDupedTaxonNames()
checkSplitKeys(useOldName=False, glitch=True, verbose=True)
checkTaxNames()
Check that all taxNames are in the tree, and vice versa.
checkThatAllSelfNodesAreInTheTree(verbose=False, andRemoveThem=False)
Check that all self.nodes are actually part of the tree.
Arg andRemoveThem will remove those nodes, renumber the nodes, and reset pre- and postOrder, and return None
If andRemoveThem is not set (the default is not set) then this method returns the list of nodes that are in self.nodes but not in the tree.
collapseNode(specifier)
Collapse the specified node to make a polytomy, and remove it from the tree.
Arg specifier, as usual, can be a node, node number, or node name.
The specified node remains in self.nodes, and is returned.
compStatFromCharFreqs(verbose=False)
Calculate a statistic from observed and model character frequencies.
Call it c_m, little c sub m.
It is calculated from observed character frequencies and character frequencies expected from the (possibly tree-heterogeneous) model.
It would be the sum of abs(obs-exp)/exp
compoTestUsingSimulations(nSims=100, doIndividualSequences=0, doChiSquare=0, verbose=1)
Compositional homogeneity test using a null distribution from simulations.
This does a compositional homogeneity test on each data partition. The statistic used here is X^2, obtained via Data.compoChiSquaredTest().
The null distribution of the stat is made using simulations, so of course you need to provide a tree with a model, with optimized branch lengths and model parameters. This is a comp homogeneity test, so the model should be tree-homogeneous.
The analysis usually tests all sequences in the data partition together (like paup), but you can also ‘doIndividualSequences’ (like puzzle). Beware that the latter is a multiple simultaneous stats test, and so the power may be compromized.
For purposes of comparison, this test can also do compo tests in the style of PAUP and puzzle, using chi-square to assess significance. Do this by turning ‘doChiSquare’ on. The compo test in PAUP tests all sequences together, while the compo test in puzzle tests all sequences separately. There are advantages and disadvantages to the latter– doing all sequences separately allows you to identify the worst offenders, but suffers due to the problems of multiple simultaneous stats tests. There are slight differences between the computation of the Chi-square in PAUP and puzzle and the p4 version. The compo test in PAUP (basefreq) does the chi-squared test, but if sequences are blank it still counts them in the degrees of freedom; p4 does not count blank sequences in the degrees of freedom. Puzzle simply uses the row sums, ie the contributions of each sequence to the total X-squared, and assesses significance with chi-squared using the number of symbols minus 1 as the degrees of freedom. Ie for DNA dof=3, for protein dof=19. Puzzle correctly gets the composition from sequences with gaps, but does not do the right thing for sequences with ambiguities like r, y, and so on. P4 does calculate the composition correctly when there are such ambiguities. So p4 will give you the same numbers as paup and puzzle for the chi-squared part as long as you don’t have blank sequences or ambiguities like r and y.
This uses the Data.compoChiSquaredTest() method to get the stats. See the doc string for that method, where it describes how zero column sums (ie some character is absent) can be dealt with. Here, when that method is invoked, ‘skipColumnZeros’ is turned on, so that the analysis is robust against data with zero or low values for some characters.
For example:
# First, do a homog opt, and pickle the optimized tree.
# Here I use a bionj tree, but you could use whatever.
a = var.alignments[0]
dm = a.pDistances()
t = dm.bionj()
d = Data()
t.data = d
t.newComp(free=1, spec='empirical')
t.newRMatrix(free=1, spec='ones')
t.setNGammaCat(nGammaCat=4)
t.newGdasrv(free=1, val=0.5)
t.setPInvar(free=0, val=0.0)
t.optLogLike()
t.name = 'homogOpt'
t.tPickle()
# Then, do the test ...
t = var.trees[0]
d = Data()
t.data = d
t.compoTestUsingSimulations()
# Output would be something like ...
# Composition homogeneity test using simulations.
# P-values are shown.
# Part Num 0
# Part Name all
# -------------------- --------
# All Sequences 0.0000
# Or using more sims for more precision, and also doing the
# Chi-square test for contrast ...
t.compoTestUsingSimulations(nSims=1000, doChiSquare=True)
# Output might be something like ...
# Composition homogeneity test using simulations.
# P-values are shown.
# (P-values from Chi-Square are shown in parens.)
# Part Num 0
# Part Name all
# -------------------- --------
# All Sequences 0.0140
# (Chi-Squared Prob) (0.9933)
It is often the case, as above, that this test will show significance while the Chi-square test does not.
copyToTree(otherTree)
data
deleteCStuff()
Deletes c-pointers from nodes, self, and model, but not the data.
draw(showInternalNodeNames=1, addToBrLen=0.2, width=None, showNodeNums=1, partNum=0, model=None)
Draw the tree to the screen.
This method makes a text drawing of the tree and writes it to sys.stdout.
Arg addToBrLen adds, by default 0.2, to each branch length, to make the tree more legible. If you want the branch lengths more realistic, you can set it to zero, or better, use vector graphics for drawing the trees.
Setting arg model aids in drawing trees with tree-hetero models. If the model characteristic (usually composition or rMatrix) differs over the tree, this method can draw it for you.
See also Tree.Tree.textDrawList(), which returns the drawing as a list of strings.
See the method Tree.Tree.setTextDrawSymbol(), which facilitates drawing different branches with different symbols.
dump(tree=0, node=0, model=0, all=0)
tree
is the default, showing basic info about the tree.
node
shows info about all the nodes.
model
shows which modelThing number goes on which node. (which you can also get by drawing the tree)
(If you want the info about the model itself, do a aTree.model.dump() instead.)
dupe()
Duplicates self, but with no c-pointers. And no data object.
If there is a model, it is duped.
Returns a copy of self.
dupeSubTree(dupeNodeSpecifier, up, doBrLens=True, doSupport=True)
Makes and returns a new Tree object, duping part of self.
The dupeNodeSpecifier can be a node name, node number, or node object.
Arg ‘up’ should be True or False.
The returned subtree has a root-on-a-stick.
BrLens are not duped – brLens are default in the new subtree.
So if the tree is like this:
+---------2:A
+--------1
| +---------3:B
|
0--------4:C
|
| +---------6:D
+--------5:dupeNode
| +--------8:E
+---------7
+--------9:F
Then the subtree from node 5 up is:
+---------2:D
subTreeRoot:0--------1:dupeNode
| +--------4:E
+---------3
+--------5:F
and the subtree from node 5 down is:
+--------2:A
+---------1
dupeNode:0--------5 +--------3:B
|
+---------4:C
eps(fName=None, width=500, putInternalNodeNamesOnBranches=0)
Make a basic eps drawing of self.
The ‘width’ is in postscript points.
By default, internal node names label the node, where the node name goes on the right of the node. You can make the node name label the branch by setting ‘putInternalNodeNamesOnBranches’.
getAllLeafNames(specifier)
Returns a list of the leaf names of all children
getChildrenNums(specifier)
Returns a list of nodeNums of children of the specified node.
getDegree(nodeSpecifier)
getEuclideanDistanceFromSelfDataToExpectedComposition()
Calculate the c_E stat between self.data and model expected composition.
The expected composition comes from the current tree (self) and model. There is an expected composition of each sequence in each part, and is obtained via pf.p4_expectedComposition(cTree). In non-stationary evolution, the expected composition of sequences approach the model composition asymptotically as the branch increases.
I am calling the Euclidean distance from the actual sequence composition to the expected composition c_E.
Returns: A list of lists — the c_E for each sequence, for each part. Order of the sequences is as in the Data.
getLen()
Return the sum of all br.len’s.
getNodeNumsAbove(nodeSpecifier, leavesOnly=0)
Gets a list of nodeNums, in postOrder, above nodeSpecifier.
The node specified is not included.
getPreAndPostOrderAbove(nodeSpecifier)
Returns 2 lists of node numbers, preOrder and postOrder.
This uses a stack, not recursion, so it should work for large trees without bumping into the recursion limit. The 2 lists are relative to the node specified, and include the node specified. PreOrder starts from theNode and goes to the tips; postOrder starts from the tips and goes to theNode.
getPreAndPostOrderAboveRoot()
Sets self.preOrder and self.postOrder.
This uses a stack, not recursion, so it should work for large trees without bumping into the recursion limit. PreOrder starts from the root and goes to the tips; postOrder starts from the tips and goes to the root.
getSeqNumsAbove(nodeSpecifier)
Gets a list of seqNums above nodeSpecifier.
getSiteLikes()
Likelihoods, not log likes. Placed in self.siteLikes, a list.
getSiteRates()
Get posterior mean site rate, and gamma category.
This says two things – 1. The posterior mean site rate, calculated like PAML 2. Which GDASRV category contributes most to the likelihood.
The posterior mean site rate calculation requires that there be only one gdasrv over the tree, which will usually be the case.
For placement in categories, if its a tie score, then it is placed in the first one.
The list of site rates, and the list of categories, both with one value for each site, are put into separate numpy arrays, returned as a list, ie [siteRatesArray, categoriesArray]
There is one of these lists for each data partition, and the results as a whole are returned as a list. So if you only have one data partition, then you get a 1-item list, and that single item is a list with 2 numpy arrays. Ie [[siteRatesArray, categoriesArray]]
If nGammaCat for a partition is 1, it will give that partition an array of ones for the site rates and zeros for the categories.
getWeightCommandComment(tok)
inputTreesToSuperTreeDistances(inputTrees, doSd=True, doScqdist=True)
Return the topology distance between input trees and pruned self.
Either or both of two distances, RF (‘sd’) and quartet (‘scqdist’) distances, are returned.
See also the Trees method of the same name that does a bunch of them.
isFullyBifurcating(verbose=False)
Returns True if the tree is fully bifurcating. Else False.
iterInternals()
Internal node generator. PreOrder. Including the root, if it is internal.
iterInternalsNoRoot()
Internal node generator, skipping the root. PreOrder
iterInternalsNoRootPostOrder()
Internal post order node generator, skipping the root. Assumes preAndPostOrderAreValid.
iterInternalsNoRootPreOrder()
Internal post order node generator, skipping the root. Assumes preAndPostOrderAreValid.
iterInternalsPostOrder()
Internal post order node generator. Assumes preAndPostOrderAreValid.
iterLeavesNoRoot()
Leaf node generator, skipping the root. PreOrder.
iterLeavesPostOrder()
iterLeavesPreOrder()
iterNodes()
Node generator, in preOrder. Assumes preAndPostOrderAreValid.
iterNodesNoRoot()
Node generator, skipping the root. PreOrder.
iterPostOrder()
Node generator. Assumes preAndPostOrderAreValid.
iterPreOrder()
Node generator. Assumes preAndPostOrderAreValid.
ladderize(biggerGroupsOnBottom=True)
Rotate nodes for a staircase effect.
This method, in its default biggerGroupsOnBottom way, will take a tree like this:
+---------4:A
+--------3
+---------2 +---------5:B
| |
| +--------6:C
+--------1
| | +--------8:D
| +---------7
| +--------9:E
0
|--------10:F
|
| +---------12:G
+--------11
+---------13:H
and rearranges it so that it is like ...
+--------10:F
|
| +---------12:G
|--------11
| +---------13:H
0
| +--------8:D
| +---------7
| | +--------9:E
+--------1
| +--------6:C
+---------2
| +---------4:A
+--------3
+---------5:B
Note that for each node, the more populated group is on the bottom, the secondmost populated second, and so on.
To get it with the bigger groups on top, set biggerGroupsOnBottom=False. I made the default with the bigger groups on the bottom so that it often makes room for a scale bar.
The setting biggerGroupsOnBottom, the default here, would equivalent to set torder=right in paup; torder=left puts the bigger groups on the top.
lineUpLeaves(rootToLeaf=1.0, overWriteBrLens=True)
Make the leaves line up, as in a cladogram.
This makes the rootToLeaf distance the same for all leaves.
If overWriteBrLens is set, then the newly calculated br.lens replace the original br.lens. If it is not set, then the new br.lens are placed in br.lenL, and does not over-write the original br.lens.
makeSplitKeys(makeNodeForSplitKeyDict=False)
Make long integer-valued split keys.
This needs to have self.taxNames set.
We make 2 kinds of split keys– rawSplitKeys and splitKeys. Both are attributes of node.br, so we have eg node.br.splitKey.
Raw split keys for terminal nodes are 2**n, where n is the index of the taxon name. Eg for the first taxon, the rawSplitKey will be 1, for the 3rd taxon the rawSplitKey will be 4.
RawSplitKeys for internal nodes are the rawSplitKey’s for the children, bitwise OR’ed together.
SplitKeys, cf rawSplitKeys, are in ‘standard form’, where the numbers are even, ie do not contain the 1-bit. Having it in standard form means that you can compare splits among trees. If the rawSplitKey is even, then the splitKey is simply that, unchanged. If, however, the rawSplitKey is odd, then the splitKey is the rawSplitKey bit-flipped. For example, if there are 5 taxa, and one of the rawSplitKeys is 9 (odd), we can calculate the splitKey by bit-flipping, as:
01001 = 9 rawSplitKey
10110 = 22 splitKey
(Bit-flipping is done by exclusive-or’ing (xor) with 11111.)
The splitKey is readily converted to a splitString for display, as 22 becomes ‘.**.*’ (note the ‘1’ bit is now on the left). It is conventional that the first taxon, on the left, is always a dot. (I don’t know where the convention comes from.)
The root has no rawSplitKey or splitKey.
For example, the tree:
+-------2:B (rawSplitKey = 2)
+---1
| +---------3:C (rawSplitKey = 4)
|
0-------------4:E (rawSplitKey = 16)
|
| +-----6:A (rawSplitKey = 1)
+----5
+-----------7:D (rawSplitKey = 8)
has 2 internal splits, on nodes 1 and 5.
Node n.br.rawSplitKey n.br.splitKey
1 6 6
5 9 22
There should be no duplicated rawSplitKeys, but if the tree has a bifurcating root then there will be a duped splitKey.
This method will fail for trees with internal nodes that have only one child, because that will make duplicated splits.
If arg makeNodeForSplitKeyDict is set, then it will make a dictionary nodeForSplitKeyDict where the keys are the splitKeys and the values are the corresponding nodes.
model
modelFitTests(fName='model_fit_tests_out', writeRawStats=0)
Do model fit tests on the data.
The two tests are the Goldman-Cox test, and the tree- and model- based composition fit test. Both require simulations with optimizations in order to get a null distribution, and those simulations need to be done before this method. The simulations should be done with the simsForModelFitTests() method.
Self should have a data and a model attached, and be optimized.
The Goldman-Cox test (Goldman 1993. Statistical tests of models of DNA substitution. J Mol Evol 36: 182-198.) is a test for overall fit of the model to the data. It does not work if the data have gaps or ambiguities.
The tree- and model-based composition test asks the question: ‘Does the composition implied by the model fit the data?’ If the model is homogeneous and empirical comp is used, then this is the same as the chi-square test except that the null distribution comes from simulations, not from the chi-square distribution. In that case only the question is, additionally, ‘Are the data homogeneous in composition?’, ie the same question asked by the chi-square test. However, the data might be heterogeneous, and the model might be heterogeneous over the tree; the tree- and model-based composition fit test can ask whether the heterogeneous model fits the heterogeneous data. The composition is tested in each data partition, separately. The test is done both overall, ie for all the sequences together, and for individual sequences.
If you just want a compo homogeneity test with empirical homogeneous comp, try the compoTestUsingSimulations() method– its way faster, because there are not optimizations in the sims part.
Output is verbose, to a file.
modelSanityCheck(resetEmpiricalComps=True)
Check that the tree, data, and model specs are good to go.
Complain and exit if there is anything wrong that might prevent a likelihood evaluation from being done. We are assuming that a data object exists and is attached, and that model stuff has been set.
Check that each part has at least 1 each from comps, rMatrices, and gdasrvs (if nGammaCat is > 1).
If it is not a mixture model for a particular part, check that each node has a comp, rMatrix, and gdasr. Check that all comps, rMatrices, gdasrvs are used on a node somewhere.
Here relRate, ie the relative rate of each data partition, is adjusted based on the size of the data partitions.
newRelRate_p = oldRelRate_p * (Sum_p[oldRelRate_i * partLen_i] / Sum[partLen_i])
That ensures that Sum(relRate_i * partLen_i) = totalDataLength, ie that the weighted mean of the rates is 1.0.
This method also tallies up the number of free prams in the whole model, and sets self.model.nFreePrams.
nInternalNodes
nTax
newComp(partNum=0, free=0, spec='empirical', val=None, symbol=None)
Make, attach, and return a new Comp object.
The arg spec should be a string, one of:
'equal' no val
'empirical' no val
'specified' val=[aList]
'wag', etc no val
(ie one of the empirical protein models, including
cpREV, d78, jtt, mtREV24, mtmam, wag, etc)
If spec=’specified’, then you specify dim or dim-1 values in a list as the ‘val’ arg.
This method returns a Comp object, which you can ignore if it is a tree-homogeneous model. However, if it is a tree-hetero model then you may want to get that Comp object so that you can place it on the tree explicitly with setModelThing(), like this:
c0 = newComp(partNum=0, free=1, spec='empirical')
c1 = newComp(partNum=0, free=1, spec='empirical')
Alternatively, you can simply let p4 place them randomly:
newComp(partNum=0, free=1, spec='empirical')
newComp(partNum=0, free=1, spec='empirical')
myTree.setModelThingsRandomly()
Calculation of probability matrices for likelihood calcs etc are wrong when there are any comp values that are zero, so that is not allowed. Any zeros are converted to var.PIVEC_MIN, which is 1e-18 this week. Hopefully close enough to zero for you.
newGdasrv(partNum=0, free=0, val=None, symbol=None)
newRMatrix(partNum=0, free=0, spec='ones', val=None, symbol=None)
Make, attach, and return a new RMatrix instance.
spec should be one of:
• ‘ones’ - for JC, poisson, F81
• ‘2p’ - for k2p and hky
• ‘specified’
• ‘cpREV’
• ‘d78’
• ‘jtt’
• ‘mtREV24’
• ‘mtmam’
• ‘wag’
• ‘rtRev’
• ‘tmjtt94’
• ‘tmlg99’
• ‘lg’
• ‘blosum62’
• ‘hivb’
• ‘mtart’
• ‘mtzoa’
You do not set the ‘val’ arg unless the spec is ‘specified’ or ‘2p’. If spec=‘2p’, then you set val to kappa.
If the spec is ‘specified’, you specify all the numerical values in a list given as the ‘val’ arg. The length of that list will be (((dim * dim) - dim) / 2) - 1, so for DNA, where dim=4, you would specify a list containing 5 numbers.
nextNode(spokeSpecifier, hubSpecifier)
Get next node cycling around a hub node.
A bit of a hack to make a p4 Node behave sorta like a Felsenstein node. Imagine cycling around the branches emanating from a node like spokes on a hub, starting from anywhere, with no end.
The hub node would usually be the parent of the spoke, or the spoke would be the hub itself. Usually self.nextNode(spoke,hub) delivers spoke.sibling. What happens when the siblings run out is that self.nextNode(rightmostSibling, hub) delivers hub itself, and of course its branch (spoke) points toward the hub.parent. (Unless hub is the root, of course, in which case self.nextNode(rightmostSibling, hub) delivers hub.leftChild.) In the usual case of the hub not being the root, the next node to be delivered by nextNode(spokeIsHub, hub) is usually the leftChild of the hub. Round and round, clockwise.
nni(upperNodeSpec=None)
Simple nearest-neighbor interchange.
You specify an ‘upper’ node, via an upperNodeSpec, which as usual can be a node name, node number, or node object. If you don’t specify something, a random node will be chosen for you. (This latter option might be a little slow if you are doing many of them, as it uses iterInternalsNoRoot(), but mostly it should be fast enough).
The upper node has a parent – the ‘lower’ node. One subtree from the upper node and one subtree from the lower node are exchanged. Both subtrees are chosen randomly.
This works on biRooted trees also, preserving the biRoot.
node(specifier)
Get a node based on a specifier.
The specifier can be a nodeNum, name, or node object.
optLogLike(verbose=1, newtAndBrentPowell=1, allBrentPowell=0, simplex=0)
Calculate the likelihood of the tree, with optimization.
There are 3 optimization methods– choose one. I’ve made ‘newtAndBrentPowell’ the default, as it is fast and seems to be working. The ‘allBrentPowell’ optimizer used to be the default, as it seems to be the most robust, although it is slow. It would be good for checking important calculations. The simplex optimizer is the slowest, and will sometimes find better optima for difficult data, but often fails to optimize (with no warning).
optTest()
parseNewick(flob, translationHash, doModelComments=0)
Parse Newick tree descriptions.
This is stack-based, and does not use recursion.
parseNexus(flob, translationHash=None, doModelComments=0)
Start parsing nexus format newick tree description.
From just after the command word ‘tree’, to the first paren of the Newick part of the tree.
Parameters: flob – an open file or file-like object translationHash (dict) – associates short names or numbers with long proper taxon names doModelComments (bool) – whether to parse p4-specific model command comments in the tree description None
patristicDistanceMatrix()
Matrix of distances along tree path.
This method sums the branch lengths between each pair of taxa, and puts the result in a DistanceMatrix object, which is returned.
Self.taxNames is required.
pruneSubTreeWithoutParent(specifier, allowSingleChildNode=False)
Remove and return a node, together with everything above it.
Arg specifier can be a nodeNum, name, or node object.
By default, the arg allowSingleChildNode is turned off, and is for those cases where the parent of the node has more than 2 children. So when the subTree is removed, the parent node that is left behind has more than one child.
The stuff that is removed is returned. The nodes are left in self; the idea being that the subTree will be added back to the tree again (via reconnectSubTreeWithoutParent()).
randomSpr()
Do a random spr move.
randomizeTopology(randomBrLens=True)
reRoot(specifier, moveInternalName=True, stompRootName=True, checkBiRoot=True, fixRawSplitKeys=False)
Re-root the tree to the node described by the specifier.
The specifier can be a node.nodeNum, node.name, or node object.
Here is a potential problem. Lets say you start with this tree, with split support as shown:
(((A, B)99, C)70, D, E);
+--------3:A
+---------2:99
+--------1:70 +--------4:B
| |
| +---------5:C
0
|--------6:D
|
+--------7:E
Now we want to reRoot it to node 2. So we do that. Often, node.name’s are really there to label the branch, not the node; you may be using node.name’s to label your branches (splits), eg with support values. If that is the case, then you want to keep the node name with the branch (not the node) as you reRoot(). Ie you want node labels to behave like branch labels. If that is the case, we set moveInternalName=True; that is the default. When that is done in the example above, we get:
(A, B, (C, (D, E)70)99);
+--------3:A
|
|--------4:B
2
| +---------5:C
+--------1:99
| +--------6:D
+---------0:70
+--------7:E
Another possibility is that the node names actually are there to name the node, not the branch, and you want to keep the node name with the node during the re-rooting process. That can be done by setting moveInternalName=False, and the tree below is what happens when you do that. Each internal node.name stays with its node in the re-rooting process:
(A, B, (C, (D, E))70)99;
+--------3:A
|
|--------4:B
99:2
| +---------5:C
+--------1:70
| +--------6:D
+---------0
+--------7:E
Now if you had the default moveInternalName=True and you had a node name on the root, that would not work (draw it out to convince yourself ...). So in that case you probably want stompRootName=True as well — Both are default. If you set stompRootName=True, it gives you a little warning as it does it. If you set stompRootName=2, it will do it silently. If moveInternalName is not set, then stompRootName is not used.
You probably do not want to reRoot() if the current root is bifurcating. If you do that, you will get a node in the tree with a single child, which is strictly ok, but useless and confusing. So by default I checkBiRoot=True, and throw a P4Error if there is one. If you want to draw such a pathological tree with a node with a single child, set checkBiRoot=False, and it will allow it.
readBipartitionsFromPaupLogFile(thePaupLogFileName)
Assigns support to the tree, from the PAUP bipartitions table.
This needs to have self.taxNames set.
This is useful if you want to make a consensus tree using PAUP, and get the support values. When you make a cons tree with PAUP, the support values, usually bootstrap values, are unfortunately not saved with the tree. That information is in the Bipartitions table, which can be saved to a PAUP log file so that p4 can get it. This method will read thePaupLogFileName and extract the split (tree bipartition) supports, and assign those supports to self as node.br.support’s (as a float, not a string).
It also returns a hash with the split strings as keys and the split support as values, if you need it.
recalculateSplitKeysOfNodeFromChildren(aNode, allOnes)
reconnectSubTreeWithoutParent(stNode, newParent, beforeNode=None)
Attach subtree stNode to the rest of the tree at newParent.
The beforeNode is by default None, and then the subtree is reconnected as the rightmost child of the new parent. However, if you want it somewhere else, for example as the leftmost child, or between two existing child nodes, specify a beforeNode (specified as usual as a node, nodeNumber, or node name) and the subtree will be inserted there.
removeAboveNode(specifier, newName)
Remove everything above an internal node, making it a leaf, and so needing a new name.
removeEverythingExceptCladeAtNode(specifier)
Like it says. Leaves a tree with a root-on-a-stick.
removeNode(specifier, alsoRemoveSingleChildParentNode=True, alsoRemoveBiRoot=True, alsoRemoveSingleChildRoot=True)
Remove a node, together with everything above it.
Arg specifier can be a nodeNum, name, or node object.
So lets say that we have a tree like this:
+-------1:A
0
| +--------3:B
+-------2
+--------4:C
and we remove node 4. When it is removed, node 2 ends up having only one child. Generally you would want to remove it as well (so that the parent of node 3 is node 0), so the option alsoRemoveSingleChildParentNode is turned on by default. If alsoRemoveSingleChildParentNode is turned off, nodes like node 2 get left in the tree.
Removal of a node might cause the creation of a bifurcating root. I assume that is not desired, so alsoRemoveBiRoot is turned on by default.
In the example above, if I were to remove node 4, by default node 2 would also disappear, but by default node 0 would also disappear because it would then be a tree with a bifurcating root node. So starting with a 5-node tree, by removing 1 node you would end up with a 2-node tree, with 1 branch.
In the example here, if I were to simply remove node 1:
+--------1:A
|
0 +---------3:B
+--------2
| +--------5:C
+---------4
+--------6:D
Then node 0 would remain, as:
+---------2:B
0--------1
| +--------4:C
+---------3
+--------5:D
Presumably that is not wanted, so the arg alsoRemoveSingleChildRoot is turned on by default. When the root is removed, we are left with a bi-root, which (if the arg alsoRemoveBiRoot is set) would also be removed. The resulting tree would be:
+-------0:B
|
1-------2:C
|
+-------3:D
The deleted nodes are really deleted, and do not remain in self.nodes.
removeRoot()
Removes the root if self.root is mono- or bifurcating.
This removes the root node if the tree is rooted on a terminal node, or if the tree is rooted on a bifurcating node. Otherwise, it refuses to do anything.
In the usual case of removing a bifurcating root, the branch length of one fork of the bifurcation is added to the other fork, so the tree length is preserved.
In the unusual case of removing a monofurcating root (a root that is a terminal node, a tree-on-a-stick) then its branch length disappears.
renameForPhylip(dictFName='p4_renameForPhylip_dict.py')
Rename with phylip-friendly short boring names.
It saves the old names (together with the new) in a python dictionary, in a file by default named p4_renameForPhylip_dict.py
If self does not have taxNames set, it does not write originalNames to that file– which may cause problems restoring names. If you want to avoid that, be sure to set self.taxNames before you do this method.
This method does not deal with internal node names, at all. They are silently ignored. If they are too long for phylip, they are still silently ignored, which might cause problems.
restoreDupeTaxa(dictFileName='p4DupeSeqRenameDict.py', asMultiNames=True)
Restore previously removed duplicate taxa from a dict file.
The usual story would be like this: You read in your alignment and p4 tells you that you have duplicate sequences. So you use the Alignment method checkForDuplicateSequences() to remove them, which makes a dictionary file, by default ‘p4DupeSeqRenameDict.py’, to facilitate restoration of the names. You do your analysis on the reduced alignment, and get a tree. Then you use the dictionary file with this method to restore all the taxon names.
If asMultiNames is turned on, the default, then the leaf nodes are not replicated, and the name is changed to be a long multi-name.
If asMultiNames is turned off, then the restored taxa are made to be siblings, and the branch lengths are set to zero.
restoreNamesFromRenameForPhylip(dictFName='p4_renameForPhylip_dict.py')
Given the dictionary file, restore proper names.
The renaming is done by the Alignment method renameForPhylip(), which makes the dictionary file. The dictionary file is by default named p4_renameForPhylip_dict.py
rotateAround(specifier)
Rotate a clade around a node.
The specifier can be a nodeNum, name, or node object.
setCStuff()
Transfer info about self to c-language stuff.
Transfer relationships among nodes, the root position, branch lengths, model usage info (ie what model attributes apply to what nodes), and pre- and post-order.
setEmpiricalComps()
Set any empirical model comps to the comp of the data.
This is done by self.modelSanityCheck(), but sometimes you may want to do it at other times. For example, do this after exchanging Data objects, or after simulating. In those cases there does not seem to be a reasonable way to do it automatically.
setModelThing(theModelThing, node=None, clade=1)
setModelThingsNNodes()
Set nNodes for all modelThings
setModelThingsRandomly(forceRepresentation=2)
Place model things (semi-)randomly on the tree.
For example, if there are 2 compositions in model part partNum, this method will decorate each node of the tree with zeros and ones, randomly. The actual thing set is node.parts[partNum].compNum. If the model thing is homogeneous, it will just put zeros in all the nodes.
We want to have each model thing on the tree somewhere, and so it is not really randomly set. If the model thing numbers were assigned randomly on the tree, it may occur that some model thing numbers by chance would not be represented. This is not allowed, and you can set forceRepresentation to some positive integer, 1 or more. That number will be the lower limit allowed on the number of nodes that get assigned the model thing number. For example, if forceRepresentation is set to 2, then each model thing must get assigned to at least 2 nodes.
setNGammaCat(partNum=0, nGammaCat=1)
setNexusSets()
Set self.nexusSets from var.nexusSets.
A deepcopy is made of var.nexusSets, only if it exists. If var.nexusSets does not yet exist, a new blank one is not made (cf this method in Alignment class, where it would be made).
Important! This method depends on a correct taxNames.
setPInvar(partNum=0, free=0, val=0.0)
setPreAndPostOrder()
Sets or re-sets self.preOrder and self.postOrder lists of node numbers.
PreOrder starts from the root and goes to the tips; postOrder starts from the tips and goes to the root.
setRelRate(partNum=0, val=0.0)
setRjComp(partNum=0, val=True)
setRjRMatrix(partNum=0, val=True)
setTextDrawSymbol(theSymbol='-', node=None, clade=1)
simsForModelFitTests(reps=10, seed=None)
Do simulations for model fit tests.
The model fit tests are the Goldman-Cox test, and the tree- and model-based composition fit test. Both of those tests require simulations, optimization of the tree and model parameters on the simulated data, and extraction of statistics for use in the null distribution. So might as well do them together. The Goldman-Cox test is not possible if there are any gaps or ambiguities, and in that case Goldman-Cox simulation stats are not collected.
Doing the simulations is therefore the time-consuming part, and so this method facilitates doing that job in sections. If you do that, set the random number seed to different numbers. If the seed is not set, the process id is used. (So obviously you should explicitly set the seed if you are doing several runs in the same process.) Perhaps you may want to do the simulations on different machines in a cluster. The stats are saved to files. The output files have the seed number attached to the end, so that different runs of this method will have different output file names. Hopefully.
When your model uses empirical comps, simulation uses the empirical comp of the original data for simulation (good), then the optimization part uses the empirical comp of the newly-simulated data (also good, I think). In that case, if it is tree-homogeneous, the X^2_m statistic would be identical to the X^2 statistic.
You would follow this method with the modelFitTests() method, which uses all the stats files to make null distributions to assess significance of the same stats from self.
simulate(calculatePatterns=True, resetSequences=True, resetNexusSetsConstantMask=True, refTree=None)
Simulate into the attached data.
The tree self needs to have a data and model attached.
This week, generation of random numbers uses the C language random function, which is in stdlib on Linux. It will use the same series of random numbers over and over, unless you tell it otherwise. That means that (unless you tell it otherwise) it will generate the same simulated data if you run it twice. To reset the randomizer, you can use func.reseedCRandomizer(), eg
func.reseedCRandomizer(os.getpid())
The usual way to simulate does not use reference data. An unsual way to simulate comes from (inspired by?) PhyloBayes, where the simulation is conditional on the original data. It uses conditional likelihoods of that reference data at the root. To turn that on, set refTree to the tree+model+data that you would like to use. Calculate a likelihood with that refTree before using it, so that conditional likelihoods are set. The tree and model for refTree should be identical to the tree and model for self.
Parameters: calculatePatterns (bool) – True by default. Whether to “compress” the newly simulated data to facilitate a faster likelihood calculation. resetSequences (bool) – True by default. whether to bring the simulated sequences in C back into Python resetNexusSetsConstantMask (bool) – True by default. When simulations are made, the constant mask in any associated nexus sets will get out of sync. Setting this to True makes a new mask and sets it. refTree (Tree) – None by default. If supplied, a tree+model+data which has had its likelihood calculated, where the tree+model is identical to self.
spr(pruneNode=None, above=True, graftNode=None)
Subtree pruning and reconnection.
See also the Tree.randomSpr() method. It uses this method to do a random spr move.
This only works on fully bifurcating trees. Doing spr moves would tend to break up polytomies anyway; pruning subtrees from a polytomy would require creation of new nodes.
The subtree to be pruned might be pointing up or pointing down from a specified node. If the subtree is pointing up, the subtree to be pruned is specified by the appropriate child of the root of the subtree; the subtree would have a root-on-a-stick (Is monofurcating a proper word?) with the subtree root’s single child being the specified node. If the subtree is pointing down, then the tree is re-rooted to the specified node to allow pruning of the subtree, now above the specified node, with the specified node as the root, including the subtree with the pre-re-rooting parent of the specified node.
I’ll draw that out. Lets say we want to prune the subtree below node 2 in this tree. That would include nodes 0, 1, 2, and 7.
+--------1:A
|
| +---------3:B
|--------2
0 | +--------5:C
| +---------4
| +--------6:D
|
+--------7:E
The way it is done in this method is to re-root at node 2, which is the specified node. Then the subtree including the pre-re-rooting parent of the specified node, ie node 0, is pruned.
+--------3:B
|
| +--------5:C
2--------4
| +--------6:D
|
| +--------1:A
+--------0
+--------7:E
stripBrLens()
Sets all node.br.len’s to 0.1, the default in p4.
Then, if you were to write it out in Nexus or Newick format, no branch lengths would be printed.
subTreeIsFullyBifurcating(theNode, up=True)
Is theNode and everything above it (or below it) bifurcating?
Arg up says whether its above or below.
summarizeModelThingsNNodes()
Summarize nNodes for all modelThings if isHet
tPickle(fName=None)
Pickle self to a file with a ‘p4_tPickle’ suffix.
If there is an attached Data object, it is not pickled. If there is an attached model object, it is pickled. Pointers to c-structs are not pickled.
If fName is supplied, the file name becomes fName.p4_tPickle, unless fName already ends with .p4_tPickle. If fName is not supplied, self.name is used in fName’s place. If neither is supplied, the pid is used as fName.
If a file with the chosen name already exists, it is silently over-written!
p4 can read a p4_tPickle file from the command line or using the read() function, as usual.
(This would not be a good option for long-term storage, because if you upgrade p4 and the p4 Classes change a lot then it may become impossible to unpickle it. If that happens, you can use the old version of p4 to unpickle.)
taxNames
taxSetIsASplit(taxSetName)
Asks whether a nexus taxset is a split in the tree.
Parameters: taxSetName (str) – The name of the taxset. Case does not matter. the node in self if the taxset is a split, or else None Node
textDrawList(showInternalNodeNames=1, addToBrLen=0.2, width=None, autoIncreaseWidth=True, showNodeNums=1, partNum=0, model=False)
topologyDistance(tree2, metric='sd', resetSplitKeySet=False)
Compares the topology of self with tree2.
The complete list of metrics is given in var.topologyDistanceMetrics
For most metrics using this method, taxNames needs to be set, to the same in the two trees. If the taxa differ, this method simply returns -1
The ‘metric’ can be one of ‘sd’ (symmetric difference), ‘wrf’ (weighted Robinson-Foulds), ‘bld’ (Felsenstein’s branch- length distance), or ‘diffs’. The unweighted Robinson-Foulds metric would be the same as the symmetric difference.
There is also an experimental scqdist, but that needs the scqdist.so module, in the QDist directory.
See Felsenstein 2004 Inferring Phylogenies, Pg 529.
The default metric is the very simple ‘sd’, symmetric difference. Using this metric, if the 2 trees share the same set of splits, they are deemed to be the same topology; branch lengths are not compared. This method returns the number of splits that are in self that are not in tree2 plus the number of splits that are in tree2 that are not in self. So it would return 0 for trees that are the same.
The ‘wrf’ and ‘bld’ metrics take branch lengths into account. Bifurcating roots complicate things, so they are not allowed for weighted distance calculations.
In the unweighted case (ie metric=’sd’), whether the trees compared have bifurcating roots or not is ignored. So the trees (A,B,(C,D)) and ((A,B),(C,D)) will be deemed to have the same topology, since they have the same splits.
The measurement ‘diffs’, which returns a tuple of 2 numbers – both are set differences. The first is the number of splits in self that are not in tree2, and the second is the number of splits in tree2 that are not in self. (Consider it as the the symmetric difference split into its 2 parts.)
If you calculate a distance and then make a topology change, a subsequent sd topologyDistance calculation will be wrong, as it uses previous splits. So then you need to ‘resetSplitKeySet’.
The ‘scqdist’ metric also gives quartet distances. It was written by Anders Kabell Kristensen for his Masters degree at Aarhus University, 2010. http://www.cs.au.dk/~dalko/thesis/ It has two versions – a pure Python version (that needs scipy) that I do not include here, and a fast C++ version, that I wrapped in python. Its speedy – the ‘sc’ in ‘scqdist’ is for ‘sub-cubic’, ie better than O(n^3).
tv()
Tree Viewer. Show the tree in a gui window.
Needs Tkinter.
If you have nexus taxsets defined, you can show them.
tvTopologyCompare(treeB)
Graphically show topology differences.
The taxNames need to be set, and need to be the same for both trees.
(If the red lines don’t show up right away, try adjusting the size of the windows slightly.)
verifyIdentityWith(otherTree, doSplitKeys)
For MCMC debugging. Verifies that two trees are identical.
write()
This writes out the Newick tree description to sys.stdout.
writeNewick(fName=None, withTranslation=0, translationHash=None, doMcmcCommandComments=0, toString=False, append=False, spaceAfterComma=True)
Write the tree in Newick, aka Phylip, format.
This is done in a Nexus-oriented way. If taxNames have spaces or odd characters, they are single-quoted. There is no restriction on the length of the taxon names. A translationHash can be used.
fName may also be an open file object.
If ‘toString’ is turned on, then ‘fName’ should be None, and a Newick representation of the tree is returned as a string.
The method ‘writePhylip()’ is the same as this, with fewer arguments.
writeNexus(fName=None, append=0, writeTaxaBlockIfTaxNamesIsSet=1, message=None)
Write the tree out in Nexus format, in a trees block.
If fName is None, the default, it is written to sys.stdout.
#NEXUS is written unless we are appending– set append=1.
If you want to write with a translation, use a Trees object.
writePhylip(fName=None, withTranslation=0, translationHash=None, doMcmcCommandComments=0)
Write the tree in Phylip or Newick format.
(This method is just a dupe of writeNewick(). Without the ‘toString’ or ‘append’ args.)
fName may also be an open file object. |
# Inverse Fourier Transform of the Cauchy distribution
Given the characteristic function of the cauchy distribution in the form
$$\hat{f}(q) = \exp(-\gamma|q|)$$
I am unsure how to derive the original probability distribution function
$$f(x) = \frac{\gamma}{\pi(\gamma^2+x^2)}$$
via the inverse Fourier transform, which I have tried using the following form.
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty \hat{f}(q)e^{iqx} \, dq$$
I suspect I am going wrong when transforming with respect to the absolute value $|q|$ in the characteristic function as I am unsure how to eliminate it. Any insight is very much appreciated.
Let the Cauchy distribution be $$\frac\gamma{\pi(\gamma^2+x^2)} \, dx$$ so that if $X$ is a random variable with that distribution then $$\Pr(X\in A) = \int_{-\infty}^\infty \frac \gamma{\pi(\gamma^2+x^2)} \, dx$$ for every Borel set $A.$
Then the characteristic function is $$q\mapsto \operatorname E\left( e^{iqX} \right) = \int_{-\infty}^\infty e^{iqx} \frac \gamma {\pi(\gamma^2+x^2)} \, dx.$$ Note that it says $e^{iqX},$ not $e^{-iqX}.$
An inversion theorem for this kind of transform will need a $\text{“}{-}\text{''}$ where you have a $\text{“}{+}\text{''}.$ |
# Object Detection¶
Note
We assume that by now you have already read the previous tutorials. If not, please check previous tutorials at http://opencv-java-tutorials.readthedocs.org/en/latest/index.html. You can also find the source code and resources at https://github.com/opencv-java/
## Goal¶
In this tutorial we are going to identify and track one or more tennis balls. It performs the detection of the tennis balls upon a webcam video stream by using the color range of the balls, erosion and dilation, and the findContours method.
## Morphological Image Processing¶
Is a collection of non-linear operations related to the morphology of features in an image. The morphological operations rely only on the relative ordering of pixel values and not on their numerical values. Some of the fundamental morphological operations are dilation and erosion. Dilation causes objects to dilate or grow in size adding pixels to the boundaries of objects in an image and therefore the holes within different regions become smaller. The dilation allows, for example, to join parts of an object that appear separated. Erosion causes objects to shrink by stripping away layers of pixels from the boundaries of objects in an image and therefore the holes within different regions become larger. The erosion can be used to remove noise or small errors from the image due to the scanning process. The opening is a compound operation that consist in an erosion followed by a dilation using the same structuring element for both operations. This operation removes small objects from the foreground of an image and can be used to find things into which a specific structuring element can fit. The opening can open up a gap between objects connected by a thin bridge of pixels. Any regions that have survived the erosion are restored to their original size by the dilation.
## What we will do in this tutorial¶
In this guide, we will:
• Insert 3 groups of sliders to control the quantity of HSV (Hue, Saturation and Value) of the image.
• Capture and process the image from the web cam removing noise in order to facilitate the object recognition.
• Finally using morphological operator such as erosion and dilation we can identify the objects using the contornous obtained after the image processing.
## Getting Started¶
Let’s create a new JavaFX project. In Scene Builder we set the window elements so that we have a Border Pane with:
• on RIGHT CENTER we can add a VBox. In this one we are going to need 6 sliders, the first couple will control hue, the next one saturation and finally brightness, with these sliders is possible to change the values of the HSV image.
<Label text="Hue Start" />
<Slider fx:id="hueStart" min="0" max="180" value="20" blockIncrement="1" />
<Label text="Hue Stop" />
<Slider fx:id="hueStop" min="0" max="180" value="50" blockIncrement="1" />
<Label text="Saturation Start" />
<Slider fx:id="saturationStart" min="0" max="255" value="60" blockIncrement="1" />
<Label text="Saturation Stop" />
<Slider fx:id="saturationStop" min="0" max="255" value="200" blockIncrement="1" />
<Label text="Value Start" />
<Slider fx:id="valueStart" min="0" max="255" value="50" blockIncrement="1" />
<Label text="Value Stop" />
<Slider fx:id="valueStop" min="0" max="255" value="255" blockIncrement="1" />
• in the CENTER. we are going to put three ImageViews, the first one shows normal image from the web cam stream, the second one will show mask image and the last one will show morph image. The HBox is used to normal image and VBox to put the other ones.
<HBox alignment="CENTER" spacing="5">
<padding>
<Insets right="10" left="10" />
</padding>
<ImageView fx:id="originalFrame" />
<VBox alignment="CENTER" spacing="5">
<ImageView fx:id="maskImage" />
<ImageView fx:id="morphImage" />
</VBox>
</HBox>
• on the BOTTOM we can add the usual button to start/stop the stream and the current values HSV selected with the sliders.
<Button fx:id="cameraButton" alignment="center" text="Start camera" onAction="#startCamera" />
<Separator />
<Label fx:id="hsvCurrentValues" />
The gui will look something like this one:
## Image processing¶
In order to use the morphological operators and obtain good results we need to process the image and remove the noise, change the image to HSV allows to get the contours easily.
• Remove noise
We can remove some noise of the image using the method blur of the Imgproc class and then apply a conversion to HSV in order to facilitate the process of object recognition.
Mat blurredImage = new Mat();
Mat hsvImage = new Mat();
Mat mask = new Mat();
Mat morphOutput = new Mat();
// remove some noise
Imgproc.blur(frame, blurredImage, new Size(7, 7));
// convert the frame to HSV
Imgproc.cvtColor(blurredImage, hsvImage, Imgproc.COLOR_BGR2HSV);
• Values of HSV image
With the sliders we can modify the values of the HSV Image, the image will be updtated in real time, that allows to increase or decrease the capactity to recognize object into the image. .
// get thresholding values from the UI
// remember: H ranges 0-180, S and V range 0-255
Scalar minValues = new Scalar(this.hueStart.getValue(), this.saturationStart.getValue(),
this.valueStart.getValue());
Scalar maxValues = new Scalar(this.hueStop.getValue(), this.saturationStop.getValue(),
this.valueStop.getValue());
// show the current selected HSV range
String valuesToPrint = "Hue range: " + minValues.val[0] + "-" + maxValues.val[0]
+ "\tSaturation range: " + minValues.val[1] + "-" + maxValues.val[1] + "\tValue range: "
+ minValues.val[2] + "-" + maxValues.val[2];
this.onFXThread(this.hsvValuesProp, valuesToPrint);
// threshold HSV image to select tennis balls
Core.inRange(hsvImage, minValues, maxValues, mask);
// show the partial output
this.onFXThread(maskProp, this.mat2Image(mask));
## Morphological Operators¶
First of all we need to define the two matrices of morphological operator dilation and erosion, then with the methods erode and dilate of the class Imgproc we process the image twice in each operation, the result is the matrix morphOutput that will be the partial output.
// morphological operators
// dilate with large element, erode with small ones
Mat dilateElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(24, 24));
Mat erodeElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(12, 12));
Imgproc.erode(mask, morphOutput, erodeElement);
Imgproc.erode(mask, morphOutput, erodeElement);
Imgproc.dilate(mask, morphOutput, dilateElement);
Imgproc.dilate(mask, morphOutput, dilateElement);
// show the partial output
this.onFXThread(this.morphProp, this.mat2Image(morphOutput));
## Object tracking¶
With the partial output obtained before we can use the method findContours of the class Imgpoc to get a matrix with the mapping of the objects recognized, then we draw the contours of these objects.
// init
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
// find contours
Imgproc.findContours(maskedImage, contours, hierarchy, Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);
// if any contour exist...
if (hierarchy.size().height > 0 && hierarchy.size().width > 0)
{
// for each contour, display it in blue
for (int idx = 0; idx >= 0; idx = (int) hierarchy.get(0, idx)[0])
{
Imgproc.drawContours(frame, contours, idx, new Scalar(250, 0, 0));
}
}
Finally we can get this results:
The source code of the entire tutorial is available on GitHub. |
Address 118 S 3rd St, Williamsburg, KY 40769 (606) 539-0802 http://www.tekswork.com
# root mean square forecast error Newcomb, Tennessee
When JavaScript is disabled, you can view only the content of the help topic, which follows this message.Time-Series Forecast Error MeasuresCrystal Ball calculates three different error measures for the fit of Retrieved from "https://en.wikipedia.org/w/index.php?title=Forecast_error&oldid=726781356" Categories: ErrorEstimation theorySupply chain analyticsHidden categories: Articles needing additional references from June 2016All articles needing additional references Navigation menu Personal tools Not logged inTalkContributionsCreate accountLog in Namespaces Article Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization. Retrieved 4 February 2015. ^ J.
Loading Questions ... This measure also tends to exaggerate large errors, which can help when comparing methods.The formula for calculating RMSE:where Yt is the actual value of a point for a given time period Other methods include tracking signal and forecast bias. In other cases, a forecast may consist of predicted values over a number of lead-times; in this case an assessment of forecast error may need to consider more general ways of
Choose the best answer: Feedback This is true, but not the best answer. RMSE becomes as simple as the standard deviation if your demand forecast is the same as a simple average. Forecast error can be a calendar forecast error or a cross-sectional forecast error, when we want to summarize the forecast error over a group of units. For forecast errors on training data y ( t ) {\displaystyle y(t)} denotes the observation and y ^ ( t | t − 1 ) {\displaystyle {\hat {y}}(t|t-1)} is the forecast
International Journal of Forecasting. 22 (4): 679–688. If we observe the average forecast error for a time-series of forecasts for the same product or phenomenon, then we call this a calendar forecast error or time-series forecast error. If the RMSE=MAE, then all the errors are of the same magnitude Both the MAE and RMSE can range from 0 to ∞. Koehler, Anne B.; Koehler (2006). "Another look at measures of forecast accuracy".
Here the forecast may be assessed using the difference or using a proportional error. With the popular adoption of MAPE as a classic measure of forecast performance, we can be rest assured that the safety stock strategy is synchronized with the demand planning performance. Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. Combining forecasts has also been shown to reduce forecast error.[2][3] Calculating forecast error The forecast error is the difference between the observed value and its forecast based on all previous observations.
The RMSE will always be larger or equal to the MAE; the greater difference between them, the greater the variance in the individual errors in the sample. The RMSD of predicted values y ^ t {\displaystyle {\hat {y}}_{t}} for times t of a regression's dependent variable y t {\displaystyle y_{t}} is computed for n different predictions as the Through the application of the Central Limit Theorem, we know that this is distribution-agnostic. Feedback This is true too, the RMSE-MAE difference isn't large enough to indicate the presence of very large errors.
What does this mean? These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. So here is the summary: 1. Applied Groundwater Modeling: Simulation of Flow and Advective Transport (2nd ed.).
They are negatively-oriented scores: Lower values are better. www.otexts.org. Since the forecast error is derived from the same scale of data, comparisons between the forecast errors of different series can only be made when the series are on the same Here is a numerical example that illustrates the benefit of using a true demand forecast error compared to using the standard deviation.
Since the errors are squared before they are averaged, the RMSE gives a relatively high weight to large errors. Your cache administrator is webmaster. Forgot your Username / Password? Root mean squared error (RMSE) The RMSE is a quadratic scoring rule which measures the average magnitude of the error.
Principles of Forecasting: A Handbook for Researchers and Practitioners (PDF). Wiki (Beta) » Root Mean Squared Error # Root Mean Squared Error (RMSE) The square root of the mean/average of the square of all of the error. uses one of these error measures to determine which time-series forecasting method is the best:RMSEMADMAPERMSERoot mean squared error is an absolute error measure that squares the deviations to keep the positive This value is commonly referred to as the normalized root-mean-square deviation or error (NRMSD or NRMSE), and often expressed as a percentage, where lower values indicate less residual variance.
This is allows us to simply assume normal distribution and use the standard normal tables for computations. Retrieved 4 February 2015. ^ "FAQ: What is the coefficient of variation?". They are negatively-oriented scores: Lower values are better. By convention, the error is defined using the value of the outcome minus the value of the forecast.
Since the errors are squared before they are averaged, the RMSE gives a relatively high weight to large errors. Home Resources Questions Jobs About Contact Consulting Training Industry Knowledge Base Diagnostic DPDesign Exception Management S&OP Solutions DemandPlanning S&OP RetailForecasting Supply Chain Analysis »ValueChainMetrics »Inventory Optimization Supply Chain Collaboration CPG/FMCG Food If RMSE>MAE, then there is variation in the errors. See the other choices for more feedback.
The system returned: (22) Invalid argument The remote host or network may be down. The MAE and the RMSE can be used together to diagnose the variation in the errors in a set of forecasts. The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the Our belief is this is done in error failing to understand the implications of using the standard deviation over the forecast error.
Correct measure is RMSE calculated as the square root of the average squared deviation between the Forecast and Actual. 2. You read that a set of temperature forecasts shows a MAE of 1.5 degrees and a RMSE of 2.5 degrees. See also Root mean square Average absolute deviation Mean signed deviation Mean squared deviation Squared deviations Errors and residuals in statistics References ^ Hyndman, Rob J. For example, when measuring the average difference between two time series x 1 , t {\displaystyle x_{1,t}} and x 2 , t {\displaystyle x_{2,t}} , the formula becomes RMSD = ∑
So here is a final question for you: If you use the standard deviation in setting safety stock, you may actually end up being right under one scenario. This can be used to set safety stocks as well but the statistical properties are not so easily understood when one is using the absolute error. The equation for the RMSE is given in both of the references. |
Radius of convergence and interval of convergnce
2. Originally Posted by gichfred
Radius of convergence and interval of convergnce
$\sum\limits_{n = 0}^\infty {\frac{3^n x^n}{n!}} .$
${a_n} = \frac{{{3^n}{x^n}}}
{{n!}} \, \Rightarrow \, {a_{n + 1}} = \frac{{{3^{n + 1}}{x^{n + 1}}}}
{{\left( {n + 1} \right)!}} = \frac{{3x{3^n}{x^n}}}
{{\left( {n + 1} \right)n!}} \, \Rightarrow \, \frac{{{a_{n + 1}}}}
{{{a_n}}} = \frac{{3x}}{{n + 1}}.$
$\mathop {\lim }\limits_{x \to \infty } \left| {\frac{{{a_{n + 1}}}}
{{{a_n}}}} \right| = 3\left| x \right|\mathop {\lim }\limits_{x \to \infty } \frac{1}{{n + 1}} = 0 < 1 \, \Rightarrow \, R = \infty \, \wedge \, x \in \left( { - \infty ;\infty } \right).$
Ie this series converges for any x. |
# Mole concept
1. Feb 8, 2016
### brycenrg
1. The problem statement, all variables and given/known data
When a problem says 76 moles of P4O10 contains how many moles of P.
I can't seem to see why 76 moles of this molecule would contain more moles in it.
2. Relevant equations
3. The attempt at a solution
I know the answer is 76 moles of P = 76 * 4
but how could a compound P4O10 that has 76 moles have 76 moles of P and 76 moles of O. in my head thats like saying 25 lb back of wine bottles is 25 lb of wine + 25 lb of bottles. Can anyone help me fix this brain dump?
2. Feb 8, 2016
### Bandersnatch
Mole is a number, like say 10 or a dozen or 512. If you have a dozen molecules of P4O10, then there are four dozens of P and 10 dozens of O in it.
Same with moles.
3. Feb 8, 2016
### vela
Staff Emeritus
There's a difference between the number of $P_4 O_{10}$ molecules and the number of $P$ atoms. |
# Why is quantum gravity hard?
October 2, 2020. People often say that quantum gravity is hard because quantum mechanics and gravity are incompatible. I’ll give a brief, non-technical explanation of the real problem: powerful microscopes make black holes. In an appendix, we’ll also see why there is no problem combining gravity and quantum mechanics as long as we stick to sufficiently weak microscopes.
#### Introduction
Gravity is a theory of heavy things; quantum mechanics is a theory of fuzzy things; quantum gravity applies to things which are both heavy and fuzzy. People often say that the two theories are “incompatible” because they use different mathematical frameworks. This makes it seem like a technical problem, which is misleading. The real problem is that the details of quantum gravity are hidden inside black holes! Let’s see why.
#### Microscopes make black holes
First, we’ll make a heuristic argument that sufficiently powerful microscopes create black holes. It is really just a syllogism: microscopes make energy (by virtue of Heisenberg’s uncertainty principle), energy makes black holes (using $E = mc^2$ and gravity), hence microscopes make black holes.
##### Microscopes make energy
Suppose we have a microscope which can resolve lengths $\Delta x$. Heisenberg’s uncertainty principle says that the smaller this resolution, the larger the uncertainty about the momentum of things we measure, with
$\Delta p \gtrsim \frac{\hbar}{\Delta x}$
for Planck’s constant $\hbar \approx 10^{-34} \text{ J/s}$. We can relate this to the energy of the particles using Einstein’s famous $E = mc^2$. In fact, we will write it in the much less well-known form
$E^2 = m^2c^4 = p^2c^2 + m_0^2 c^4,$
where $m_0$ is the mass of the particle at rest, $c = 3 \times 10^8 \text{ m/s}$ is the speed of light, and $m$ is the relativistic mass, which increases (without limit) as the particle speeds up. When the particle is moving very quickly, the momentum can be much larger than the rest mass energy, and $p \approx E/c$. If we measure very small distances, Heisenberg’s principle tells us we will be smacking around particles at very high momenta, so this is the form we should use. Thus, the uncertainty in the energy of particles our microscope is examining is
$\Delta E \sim \frac{\hbar c}{\Delta x}\;.$
##### Energy makes black holes
Let’s now recall Newton’s universal law of gravitation,
$F = \frac{Gm_1 m_2}{r^2},$
where $G = 6.7\times 10^{-11} \text{ N m}^2\text{ /kg}$ is Newton’s constant. We can use this to estimate the size of a black hole! A black hole is a region of space where gravity is so strong light is unable to escape. To see how light figures in Newton’s law, we need to give it a mass. In classical physics, light is massless, but but we know better: Einstein’s formula tells us that it has some relativistic mass related to its energy, $E = mc^2$. [If you’re curious, the mass of a particle of light, the photon, depends on its frequency $f$ via
$m = \frac{E}{c^2} = \frac{2\pi\hbar f}{c^2},$
using the formula for the energy of a photon, $E = 2\pi \hbar f$, also discovered by Einstein.] Let’s continue. For a black hole of mass $M$, and radius $r_s$, the force it exerts on a photon with mass-energy $m$ is
$F \sim \frac{GMm}{r_s^2}.$
Using the work formula, we can view force as energy divided by distance. Since the energy of the photon is $E = mc^2$, and the relevant distance is probably the black hole size $r_s$, we have
$\frac{mc^2}{r_s} \sim \frac{GMm}{r_s^2} \quad \Longrightarrow \quad r_s \sim \frac{GM}{c^2}.$
Although we’ve been rather sloppy, this guess is correct up to a factor of $2$! So, if we take a mass $M$ and squish into a ball of radius $\lesssim GM/c^2$, it will make a black hole.
##### Concluding the syllogism
Let’s return to our microscope. Zooming in makes energy fluctuations, and by $E = mc^2$ these fluctuations have mass. The associated black hole radius is
$r_s \sim \frac{Gm}{c^2} = \frac{G\Delta E}{c^4} = \frac{G\hbar}{\Delta x \cdot c^3}.$
It may seem sketchy to replace $E$ with $\Delta E$, but if the energy of particles has fluctuations of size $\Delta E$ around $E = 0$, some of them will have energy $E$. We can clean up this expression by packaging all these constants into a single object called the Planck length:
$\ell_P := \sqrt{\frac{G\hbar}{c^3}}.$
Then the associated black hole radius for our microscope is
$r_s \sim \frac{\ell_P^2}{\Delta x}.$
Now, what does this all mean? Very simply, if the resolution $\Delta x$ of our microscope is within the associated black hole radius, then our energy fluctuations will produce a tiny black hole! We won’t see anything at all. This happens when
$\Delta x \lesssim r_s \sim \frac{\ell_P^2}{\Delta x} \quad \Longrightarrow \quad \Delta x \lesssim \ell_P.$
So, prying below the Planck scale makes black holes. Clearly, microscopes this powerful should be kept away from theoretical physicists!
#### Local observables and the shape of space
The basic difficulty with quantum gravity is that zooming in makes black holes. This problem appears in many different guises. We will discuss two here: the notion that local observables make no sense in a theory of quantum gravity, and Einstein’s observation that physical statements should only depend on the shape of space, not on how we choose to label it.
##### No local observables
Suppose I want to make statements about what is happening at a point $X$ in spacetime, e.g. an electron has some probability of being observed there. So, I wait around with my electron detector to see if the electron appears at $P$, and in due course, it does. Or does it really? If my electron detector can tell what’s happening at the point $P$ itself, it’s acting as a microscope with infinite resolution! Clearly, this is not consistent with the argument above. At best, I can look for electrons in regions larger than the Planck length.
Instead of talking about black holes, we usually say that spacetime breaks down when we try to look too close. For this reason, it doesn’t make sense to talk about what is happening at a point in spacetime. The electron detector is an example of something called a local observable: it is defined at a point (local) and makes a quantum-mechanical measurement (observable). This conclusion is so important we put it in a quote box:
In quantum gravity, there are no local observables.
In fact, this conclusion is already suggested by Einstein’s classical theory of gravity, general relativity. Without going into details, we can briefly explain why, and some of the fancy words physicists use to dress up the idea.
##### The shape of things
In Newtonian gravity, spacetime is a sort of fixed arena where objects float around and exert forces on each other. In this picture, gravity is just another force. Einstein’s profound realization was that gravity isn’t a real force at all! Rather, gravity is the shape of space, and space is shaped by matter. In the beautiful maxim of John Wheeler,
Spacetime tells matter how to move; matter tells spacetime how to curve.
It doesn’t make much sense to talk about stuff happening at a point $P$ itself. Rather, physically meaningful statements will involve the shape of space and matter in the vicinity of $P$. The shape of space is captured by an object $\mathbf{G}$ called the Einstein tensor, while the “shape” of matter is captured by an energy-like quantity called the stress-energy tensor $\mathbf{T}$. The governing equation of general relativity is the Einstein field equation:
$\mathbf{G} = \frac{8\pi G}{c^4}\mathbf{T}.$
Spacetime curves on the left; matter moves on the right; the equality sign means they are giving each other instructions. This is not the only theory of gravity you can write down, and indeed, there are many ways to generalize Einstein’s equation. But they should obey Einstein’s insight that gravitational physics only depends on the shape of things.
Physicists have a nice hack for enforcing this shape-only dependence. Clearly, the label of the point $P$ will depend on my labelling system. But however I choose to label $P$ (and the surrounding points), the shape of space doesn’t change. A sphere is still a sphere, even if I label points using fruits or days of the week! Thus, physically meaningful statements do not depend on labelling. Physicists like fancy terminology, and “relabelling points” often goes by “diffeomorphism”, and “does not depend on” by “invariant”. So in fancy language, the constraint becomes another quote:
Physically meaningful statements are invariant under diffeomorphisms.
I can’t really ask about whether an electron appeared at $P$ because this question depends on how I label $P$. Like I said above, there is no quantum mechanics here. Once we add quantum mechanics, Nature itself enforces invariance under diffeomorphisms by shrouding ill-formed questions with black holes.
#### What is quantum gravity about?
The moral is that, in quantum gravity, spacetime fluctuates. If I zoom in enough, these fluctuations are so violent they form black holes, so there are no local observables. This is also suggested by the classical invariance of general relativity under relabelling points. But if spacetime itself is fluctuating, what is quantum gravity even about? We’ll finish by describing a few possibilities.
1. Black holes. Black holes are the guardians of the secrets of quantum gravity, appearing whenever we try to probe below the Planck scale. But they are also guides, providing generous (albeit indirect) clues such as the Bekenstein-Hawking entropy. Understanding how they store and release information is one of the keys to making progress in quantum gravity.
2. Non-local observables. Even though local observables do not make sense when spacetime fluctuates, there are non-local observables that do. The basic idea is that certain spacetimes have regions which are so large or far away they are protected from fluctuation, since it would take too much energy to wobble them. Sitting in this non-fluctuating part, we can attach an electron-detector lure to a fishing rod, and dangle it near the point $P$. The lure will fluctuate in position, but because I sit somewhere stable, the whole procedure is well-defined. [These fishing-rod measurements are called dressed observables in the literature. There are other sorts of non-local observables, but I won’t discuss them here.]
3. Emergent spacetime. I have somewhat understated the problem with microscopes. The formation of black holes is a picturesque way of saying that general relativity is inconsistent at high energies, and some new theory has to kick in at (or before) the Planck length. [See the appendix for more on this argument.] The most promising candidate is string theory, which makes the wackadoo prediction that spacetime is build out of tiny, vibrating strings. If they have a length $\ell_s \gtrsim \ell_P$, our microscopes will see them before we start making black holes. When spacetime materializes from some very different looking theory at high energies, we say it is emergent.
4. Quantum shapes. In emergent theories, the macroscopic universe only appears as we zoom out from something fundamentally different. Alternatively, we can directly try to quantize the shape of spacetime, i.e. treat it as a fuzzy variable like the position of an electron. This is technically challenging since there many ways for space to look, but in loop quantum gravity, the challenge is surmounted by chunking spacetime into discrete, graph-like structures called spin networks, whose edges are around a Planck-length long. This is a very different way the theory can fail to make sense below the Planck scale. Look any closer, and you see nothing at all!
#### Appendix: QED, graviton exchange and the Planck energy
In this appendix, I’ll explain why combining gravity and quantum mechanics is not a problem until we hit the Planck scale. This is slightly more advanced, but we will still be outrageously heuristic. We start by discussing the quantum theory of electromagnetism, aka quantum electrodynamics (QED). We can then generalize to gravity, seeing what works and what doesn’t.
##### Quantum electrodynamics
Suppose two electrons pass by each other. Classically, they are repelled because each generates a field the other responds to. Quantum-mechanically, you cannot have action at a distance, and fields are replaced by messenger particles, that like carrier pigeons, alert the electrons to one another’s presence. For QED, the messenger particle is a photon, so the electrons throw some number of photons back and forth.
Without worrying too much about the details, let’s suppose the probability of exchanging a single photon is $\alpha$. We call $\alpha$ the coupling constant. What’s the probability of exchanging any number of photons? It’s just the sum of probabilities for the different number of exchanged photons:
$\alpha + \alpha^2 + \alpha^3 + \cdots = \frac{\alpha}{1 - \alpha},$
where we used the geometric series, assuming $\alpha < 1$. This is a very loose order-of-magnitude estimate for the probability $P$, and other constants can appear out the front, so $p \approx C\alpha/(1-\alpha)$, and this will be a reasonable probability with $p \leq 1$ for an appropriate $C$. But if $\alpha \to 1$, $p$ becomes infinite. No constant out the front can save us! When the probabilities add up to something sensible, we say the theory is unitary. When $\alpha = 1$, unitarity breaks down.
So, to check if QED is a sensible, unitary theory, we need to find $\alpha$. Recall Couloumb’s law for the electrostatic repulsion between two electrons:
$F = \frac{k_ee^2}{r^2},$
where $k_e= 9\times 10^{9} \text{ N}\cdot \text{m}^2/\text{C}^2$ is the Coulomb constant. In the quantum world, electrons do not have exact positions, and $r^2$ is not relevant to the probabilities. Instead, the probability of exchange is governed by the numerator, $k_e e^2$. [Technically, we usually take electrons to have perfectly well-defined momenta, $\Delta p= 0$, so by Heisenberg’s principle, $\Delta x = \infty$. They could be anywhere!]
It’s clear that $k_e e^2$ has dimensions. A quick calculation shows that
$[k_e e^2] = [F r^2] = \left(\frac{ML^2}{T^2}\right) L,$
where the term in brackets is the dimension of energy. The coupling constant $\alpha$ is a probability, i.e. a dimensionless number, so we need to make $k_ee^2$ into something dimensionless, and we can use the fundamental constants involved in the problem to make that happen. Since we’re doing quantum mechanics $\hbar$ is involved, and since photons are involved we can use the speed of light $c$, with dimensions
$[\hbar] = \left(\frac{ML^2}{T^2}\right)T, \quad [c] = \frac{L}{T}.$
We can combine these to form the dimensionless combination $k_ee^2/\hbar c$, since
$\left[\frac{k_e e^2}{\hbar c}\right] = \left(\frac{ML^2}{T^2}\right) L \cdot \left(\frac{ML^2}{T^2}\right)^{-1} \frac{1}{T} \cdot \frac{T}{L} = 1.$
The resulting coupling constant $\alpha$ is called the fine structure constant, and has a numerical value
$\alpha = \frac{k_e e^2}{\hbar c} \approx \frac{1}{137}.$
Since this is smaller than $1$, QED is unitary. Of course, we have dramatically simplified QED to make this argument work, but when we include all the technical details, the gist is the same: if the coupling constant $\alpha$ is too big, then adding up the probabilities for different processes to occur gives a nonsensical answer.
##### Quantum gravity
Quantum gravity works the same way, but instead of the photon, the messenger particle for gravity is the graviton. This theory is a bit more complicated than quantum electrodynamics, but uses the same underlying formalism. Thus, we can combine quantum mechanics and gravity into a theory of graviton exchange. This is a perfectly reasonable thing to do. The problem is that unitarity breaks down at the Planck scale.
The argument is very similar to QED. Two massive particles that pass each other exchange gravitons to signal their gravitational presence. If the probability of exchanging a single graviton is $g$, then the probability of exchanging any number is
$P \propto g + g^2 + g^3 + \cdots = \frac{g}{1-g}.$
Once again, unitarity will break down if $g$ approaches $1$. For particles of mass $M_1$ and $M_2$, Newton’s law of gravitation is
$F = \frac{GM_1 M_2}{r^2}.$
Repeating the argument from QED, we throw away the $r^2$ and divide by $\hbar c$, where the inclusion of $c$ in the dimensional analysis is justified because gravitons also travel at the speed of light. We discover that the coupling constant for quantum gravity is
$g = \frac{GM_1M_2}{\hbar c}.$
This looks very similar to the fine structure constant $\alpha$. If the masses are small, like the charges $e$, we should get a sensible answer right?
The subtlety lies in how we interpret the masses $M_1$ and $M_2$. If they are the rest masses, then for all known elementary particles this is indeed less than one. But gravity doesn’t distinguish between rest mass and the relativistic mass! We already saw this in action when we “derived” the radius of a black hole. So, if we collide two energetic particles, each with relativistic mass $M = E/c^2$, the probability of exchanging a single graviton is approximately
$g \sim \frac{G E^2}{\hbar c^5}.$
If we crank up the energies until $g \approx 1$, unitarity will break down. More precisely, this happens when
$g \sim \frac{G E^2}{\hbar c^5} \sim 1 \quad \Longrightarrow \quad E \sim \sqrt{\frac{\hbar c^5}{G}} = E_P,$
where we have defined the Planck energy $E_P = \hbar c/\ell_P$. This is the energy created by a microscope probing around the Planck length! Numerically, $E_P/c^2 \approx 20 \, \mu\text{g}$, around the mass of a flea egg as many sources inform me. So, although quantum gravity makes perfect sense at low energies — it is just graviton exchange — at the Planck scale, the theory breaks down and becomes non-unitary. This is where the mystery starts! The graviton picture also explains exactly what is creating the black holes when our microscope becomes too powerful. It is the quanta of gravity themselves.
Written on October 2, 2020 |
26
Q:
# A thief is noticed by a policeman from a distance of 200 m. The thief starts running and the policeman chases him. The thief and the policeman run at the rate of 10 km and 11 km per hour respectively. What is the distance between them after 6 minutes?
A) 100 m B) 150 m C) 190 m D) 200 m
Explanation:
Relative speed of the thief and policeman = (11 – 10) km/hr = 1 km/hr
Distance covered in 6 minutes = ${\color{Blue}&space;\left&space;(&space;\frac{1}{60}\times&space;6&space;\right&space;)}$ km = ${\color{Blue}&space;\frac{1}{10}}$ km = 100 m
${\color{Blue}&space;\therefore&space;}$ Distance between the thief and policeman = (200 – 100) m = 100 m.
Q:
Joel travels the first 3 hours of his journey at 60 mph speed and the remaining 5 hours at 24 mph speed. What is the average speed of Joel's travel in kmph ?
A) 38 kmph B) 40 kmph C) 60 kmph D) 54 kmph
Explanation:
Average speed = Total distance / Total time.
Total distance traveled by Joel = Distance covered in the first 3 hours + Distance covered in the next 5 hours.
Distance covered in the first 3 hours = 3 x 60 = 180 miles
Distance covered in the next 5 hours = 5 x 24 = 120 miles
Therefore, total distance traveled = 180 + 120 = 300 miles.
Total time taken = 3 + 5 = 8 hours.
Average speed = 300/8 = 37.5 mph.
we know that 1 mile = 1.6 kms
=> 37.5 miles = 37.5 x 1.6 = 60 kms
Average speed in Kmph = 60 kmph.
2 70
Q:
Two persons start running simultaneously around a circular track of length 600 m from the same point at speeds of 25 kmph and 35 kmph. When will they meet for the first time any where on the track ?
A) 54 sec B) 36 sec C) 72 sec D) 11 sec
Explanation:
Time taken to meet the first time = length of track/relative speed
Given the length of the track is 600 m
The relative speed = 25 + 35 = 60 kmph = 60 x 5/18 m
Therefore Time = $\inline \fn_jvn \frac{600}{(25+35)\frac{5}{18}}$ = 600/60 x (18/5) = 36 sec.
2 55
Q:
A boy can swim in still water at 4.5 km/h, but takes twice as long to swim upstream than downstream. The speed of the stream is ?
A) 1.8 kmph B) 2 kmph C) 2.2 kmph D) 1.5 kmph
Explanation:
Speed of Boy is B = 4.5 kmph
Let the speed of the stream is S = x kmph
Then speed in Down Stream = 4.5 + x
speed in Up Stream = 4.5 - x
As the distance is same,
=> 4.5 + x = (4.5 - x)2
=> 4.5 + x = 9 -2x
3x = 4.5
x = 1.5 kmph
4 91
Q:
A man can row 6 kmph in still water. When the river is running at 1.2 kmph, it takes him 1 hour to row to a place and back. What is the total distance traveled by the man ?
A) 4.58 kms B) 6.35 kms C) 5.76 kms D) 5.24 kms
Explanation:
Speed in still water = 6 kmph
Stream speed = 1.2 kmph
Down stream = 7.2 kmph
Up Stream = 4.8 kmph
x/7.2 + x/4.8 = 1
x = 2.88
Total Distance = 2.88 x 2 = 5.76 kms
2 65
Q:
A person travels from K to L a speed of 50 km/hr and returns by increasing his speed by 50%. What is his average speed for both the trips ?
A) 55 kmph B) 58 kmph C) 60 kmph D) 66 kmph |
Matrix.QR.Givens
Synopsis
# Documentation
leastSquares :: (Ix i, Enum i, Ix j, Enum j, RealFloat a) => Matrix i j a -> Array i a -> Array j a Source #
Solve a sparse overconstrained linear problem, i.e. minimize ||Ax-b||. A must have dimensions m x n with m>=n and it must have full-rank. None of these conditions is checked.
Arguments
:: (Ix i, Enum i, Ix j, Enum j, RealFloat a) => Matrix i j a A -> ([Rotation i a], Upper i j a) QR(A)
The decomposition routine is pretty simple. It does not try to minimize fill-up by a clever ordering of rotations. However, for banded matrices it will work as expected.
solve :: (Ix i, Ix j, Fractional a) => ([Rotation i a], Upper i j a) -> Array i a -> Array j a Source #
det :: (Ix i, Enum i, Ix j, Enum j, RealFloat a) => Matrix i j a -> a Source #
Only sensible for square matrices, but the function does not check whether the matrix is square.
detAbs :: (Ix i, Enum i, Ix j, Enum j, RealFloat a) => Matrix i j a -> a Source #
Absolute value of the determinant. This is also sound for non-square matrices.
data Rotation i a Source #
Instances
(Show i, Show a) => Show (Rotation i a) Source # Instance detailsDefined in Matrix.QR.Givens MethodsshowsPrec :: Int -> Rotation i a -> ShowS #show :: Rotation i a -> String #showList :: [Rotation i a] -> ShowS #
rotateVector :: (Ix i, Num a) => Rotation i a -> Array i a -> Array i a Source #
data Upper i j a Source #
Instances
(Show i, Show j, Show a) => Show (Upper i j a) Source # Instance detailsDefined in Matrix.QR.Givens MethodsshowsPrec :: Int -> Upper i j a -> ShowS #show :: Upper i j a -> String #showList :: [Upper i j a] -> ShowS #
solveUpper :: (Ix i, Ix j, Fractional a) => Upper i j a -> Array i a -> Array j a Source #
Assumes that Upper matrix is at least as high as wide and that it has full rank.
detUpper :: (Ix i, Ix j, Fractional a) => Upper i j a -> a Source # |
# Need HELP! equation of motion with variable acceleration
1. Jan 29, 2007
### Behroz
1. The problem statement, all variables and given/known data
A particle is dropped from rest, at the surface, into a tank containing oil
The acceleration of the particle in the oil is a = g – kv
where g is the gravitational acceleration and –kv being denoted by
the resistance put on the particle by the oil.
Solve for x as a function of time!
3. The attempt at a solution
I'm attaching an image file containing my calculations.
As can be seen I get a differential equation of the second order
but as I proceed to solve this equation and try to determine the
constants the whole thing turns into ZERO?!!?! What am I doing wrong??
#### Attached Files:
• ###### en.jpg
File size:
34.3 KB
Views:
80
2. Jan 29, 2007
### P3X-018
You don't need to consider this as a 2. order diff. equation. You can view the equation as
$$\ddot{x} = g - k\dot{x}$$
or
$$\dot{v} = g - kv$$
The second equation is just a 1. order diff. equation with respect to the velocity v.
3. Jan 29, 2007
### Behroz
Yeah, you're right.. but one SHOULD be able solve it by puting it up as a second order right? I just wanna know what I'm doing wrong, it really bugs me.
4. Jan 29, 2007
### denverdoc
what have you got? Kind of hard to know whats wrong when we just have a generic list of eqns....
John S |
# Finding any path between nodes of implicit, finite, regular, undirected graph
Consider a 4-regular, finite, non-directed, integer-weighed graph $G$. Let each node carry an integer value indicating its distance from zero on a hypothetical axis $X$. Take a given node $N$ for which we know its distance from zero (either negative or positive) and the weights on the edges incident to it. Note, that for each node, two of the edge weights are positive and two are negative. All four are integers. Let $N_0$ denote the node with distance zero on the $X$-axis. In the graph there is at least one $N_0$.
To better illustrate it see an outline diagram (apologies to readers with impaired eyesight for not including high contrast image).
Central blue node in the example has some negative distance from zero (black axis). Now, this node (like each node in this graph) has four edges with weights: w1 and w2 are positive-valued (so that if you choose to move along w1 and w2 this will bring you closer to zero on the axis), while w3 and w4 are negative. In the picture the blue node is connected with the target node with distance zero, but this is of course just for illustration.
Although the graph is finite, it is very large ($>10^{500}$ nodes or so), so one cannot represent the entire graph at once in RAM.
Take any initial node $N$ with weights $w_1,w_2,w_3,w_4$ and distance $d$. If one traverse along either of the four edges towards neighbour node, then all remaining weights in this neighbour can be easily calculated based on the weighs of the initial node (they form some arithmetic sequences, formulae of which are given).
Problem: Find any path (not necessarily the shortest) from $N$ to $N_0$ for the graph described above. The algorithm should basically provide values of subsequent edges (with their weighs) constituting the relevant path to $N_0$. What is the expected computational time complexity?
Additional - strictly related to the problem: Does one need to map the entire graph into memory in order to launch any suitable search algorithm?
• How are the edges of your graph defined? Why not use BFS or DFS? – adrianN Sep 22 '16 at 10:20
• Ok, then why don't you use any standard graph search? Possibly one for external memory, if your graph doesn't fit into RAM? – adrianN Sep 22 '16 at 10:45
• In general you have to look at all edges at least once to find a path. If your graph has more nodes than there are atoms in the universe you're out of luck. – adrianN Sep 22 '16 at 10:57
• Okay, a graph this big you can not store at all. How do you represent it? What is the actual problem you are trying to solve, before modelling it as a graph problem? (Please tell me you are not trying to solve the Collatz conjecture or something similar.) – Raphael Sep 22 '16 at 12:42
• Of course not. I get the sense that you are not really reading what we post. Of course you can incrementally explore the graph using BFS or DFS. Hence our question: what about these algorithms? – Raphael Sep 22 '16 at 19:58
The distances of your nodes and the weights of the edges can be used as a heuristic for an algorithm like $A^*$. That might be reasonably efficient in practice, but in the worst case still looks at the whole graph. |
0 Products | 0,00 €
## LEICA Vario-Elmar-R 4,2/105-280mm ROM
No.:
3734824
Condition:
A-B
2.499,00 €
No VAT
VAT can not be stated separately due to differential taxation.
## LEITZ Extender-R 2x
No.:
3129274
Condition:
C
69,00 €
No VAT
VAT can not be stated separately due to differential taxation.
## LEITZ Extender-R 2x
No.:
3238755
Condition:
A-B
69,00 €
No VAT
VAT can not be stated separately due to differential taxation.
## LEICA Extender-R 2x
No.:
3472406
Condition:
B
79,00 €
No VAT
VAT can not be stated separately due to differential taxation.
## LEICA Extender-R 2x
No.:
3326045
Condition:
A-B
99,00 €
No VAT
VAT can not be stated separately due to differential taxation.
## LEICA Extender-R 2x
No.:
3326402
Condition:
C
120,00 €
No VAT
VAT can not be stated separately due to differential taxation.
## LEICA Extender-R 2x
No.:
3119271
Condition:
C
149,00 €
No VAT
VAT can not be stated separately due to differential taxation.
No.:
3874078
Condition:
A
95,00 €
incl. 19% VAT |
IMI Interdisciplinary Mathematics InstituteCollege of Arts and Sciences
• Dec. 5, 2017
• 4:15 p.m.
• LeConte 312
Abstract
We discuss some of the basic elements of Maximal functions and their connection to Hardy spaces, both in the classical and the multiparameter settings on $R^d$. We also present some resent results on Hardy spaces in the presence of a non-negative self-adjoint operator whose heat kernel has Gaussian localization.
© Interdisciplinary Mathematics Institute | The University of South Carolina Board of Trustees | Webmaster |
The Annals of Mathematical Statistics
On the Identifiability of Finite Mixtures
Abstract
H. Teicher [5] has initiated a valuable study of the identifiability of finite mixtures (these terms to be defined in the next section), revealing a sufficiency condition that a class of finite mixtures be identifiable and from this, establishing the identifiability of all finite mixtures of one-dimensional Gaussian distributions and all finite mixtures of gamma distributions. From other considerations, he has generalized [4] a result of Feller [1] that arbitrary (and hence finite) mixtures of Poisson distributions are identifiable, and has also shown binomial and uniform families do not generate identifiable mixtures. In this paper it is proven that a family $\mathscr{F}$ of cumulative distribution functions (cdf's) induces identifiable finite mixtures if and only if $\mathscr{F}$ is linearly independent in its span over the field of real numbers. Also we demonstrate that finite mixtures of $\mathscr{F}$ are identifiable if $\mathscr{F}$ is any of the following: the family of $n$ products of exponential distributions, the multivariate Gaussian family, the union of the last two families, the family of one-dimensional Cauchy distributions, and the non-degenerate members of the family of one-dimensional negative binomial distributions. Finally it is shown that the translation-parameter family generated by any one-dimensional cdf yields identifiable finite mixtures.
Article information
Source
Ann. Math. Statist., Volume 39, Number 1 (1968), 209-214.
Dates
First available in Project Euclid: 27 April 2007
Permanent link to this document
https://projecteuclid.org/euclid.aoms/1177698520
Digital Object Identifier
doi:10.1214/aoms/1177698520
Mathematical Reviews number (MathSciNet)
MR224204
Zentralblatt MATH identifier
0155.25703
JSTOR |
# Hygroscopic behaviour of DMSO - how bad is it?
For a fluid flow experiment I am using DMSO (dimethylsulfoxide), because of its low volatility, reasonably high surface tension, low viscosity and relative safety.
In the MSDS of DMSO I find that the liquid is hygroscopic, meaning it adsorbs moisture from the surrounding environment. Unfortunately, I cannot find anywhere how much water will be absorbed (mass%) or how quickly this will happen, is it saturated in minutes, hours, days?
Therefore, my question is twofold: (i) how much water will DMSO at maximum absorb and what is the rate at which this absorption occurs? (ii) with the absorbed water, how badly do the physical properties (viscosity, surface tension, density) of the (now) mixture change?
Being curious about the hygroscopicity I have tried the experiment using pure, dried DMSO in our lab where the temperature was $21^\circ \text{C}$ and the relative humidity about $60\%$.
Have a look at the two images below. This is a DMSO droplet originally with $V=50\; \mu\text{L}$. You can clearly see that the volume of the droplet has increased substantially in 20 minutes from the different height.
Using a crude approximation, assuming that both shapes are caps of a sphere, I calculated that the change in height from 100 px, to 112 px is equivalent to a volume increase of $15\%$, which is much too big for an accurate flow experiment.
I only used DMSO in other contexts and I have no idea how fast the water uptake is or how it changes the physical properties, but
• DMSO is miscible with water in any ratio
• it does not form an azeotrop with water
The typical protocol for drying is:
1. Keep over night over anhydrous $\ce{CaSO4}$ or powdered $\ce{BaO}$
2. Decant off
3. Distill over $\ce{CaH2}$ (about 10 g/liter) at reduced pressure (bp is around 75 °C at 16 hPa)
4. Store over 4 A molecular sieve in a dark and tightly closed bottle
• And before you do that you should be very certain, that you need it "that" dry. Did it once, hated it, turned theoretician. :D – Martin - マーチン Apr 2 '14 at 8:27
• @Martin :D I mostly did it for Swern oxidations or in fluorescence spectroscopy. It is a bit tedious and I don't know if it is necessary in Michiel's case, but it might be helpful do have somewhat defined conditions initially. – Klaus-Dieter Warzecha Apr 2 '14 at 8:52
• For these reactions it is absolutely necessary. For everyday reactions stirring it over night over $\ce{CaSO4, BaO, NaH, CaH2}$ is sufficient. btw, you are missing an $\ce{H}$. – Martin - マーチン Apr 2 '14 at 8:58
• It doesn't necessarily have to be very dry. My main concern is that the properties (in particular mass) of the liquid shouldn't change significantly over the course of about 1 hour of exposure to air at about 50% RH. – Michiel Apr 2 '14 at 9:21
• The liquid in my case, is a droplet of 50 $\mu L$ forming a roughly hemispherical cap. – Michiel Apr 2 '14 at 9:34 |
3 added 148 characters in body
Let $\mathcal{A}_{g,D}$ be the moduli space of abelian varieties of dimension $g$ and polarization $D$ of type $(d_1, \ldots, d_g)$.
Let $\mathcal{M}$ be the moduli space parametrizing pairs $(A, \mathcal{L})$, where $A \in \mathcal{A}_{g,D}$ and $\mathcal{L}$ is a non-trivial $2$-torsion line bundle on $A$, i.e. a non-zero element of $\textrm{Pic}^0(A)$ such that $\mathcal{L}^{\otimes 2}=\mathcal{O}_A$.
Then there is a covering
$\pi \colon \mathcal{M} \to longrightarrow \mathcal{A}_{g,D}$
of degree $2^{2g}-1$, given by $[A, \mathcal{L}] \to A$.
Question Is the monodromy group of $\pi$ transitive? Or, equivalently, is $\mathcal{M}$ connected?
The answer is yes when $g=1$, i.e. for elliptic curves. In fact in this case $\mathcal{M}$ is a particular case of a more general construction called the moduli space of spin curves, which was studied by several authors (Cornalba, Verra, Farkas, etc).
What about the case $g \geq 2$? Is there any reference? I'm particularly interested to the case where $g=2$ and $D=(1,2)$.
EDIT Let me explain better the case I'm interested in, hoping that this can be helpful. Let $(A, D)$ be an abelian surface with polarization of type $(1,2)$, which I assume to be not of product type. The linear system $|D|$ is a pencil, that is $h^0(D)=2$, its general element is irreducible and up to a translation we can take $\mathcal{O}_A(D)$ symmetric, i.e. $(-1)_A^* \mathcal{O}_A(D)= \mathcal{O}_A(D)$. Therefore the base locus of $|D|$ is given by the zero element $o$ of $A$ and by three $2$-division points $e_1$, $e_2$, $e_3$ such that $e_1+e_2=e_3$.
There are exactly three $2$-torsion line bundles $\mathcal{L}_1$, $\mathcal{L}_2$, $\mathcal{L}_3$ on $A$ such that there exists an element in the "translated pencil" $|D \otimes \mathcal{L}_i|$ having a double point at $o$ (which is easily proven to be a node). The set
$\{\mathcal{O}_A, \mathcal{L}_1, \mathcal{L}_2, \mathcal{L}_3\}$ |
0 like 0 dislike
1. Before you toss the coin, use theoretical probability to determine the probability of
the coin landing heads up and the probability of the coin landing heads down.
P(H) =
=
P(T) = |
# Teacher Notes
### Why use this resource?
Whilst recapping students’ knowledge of straight lines, and getting them to think about the links between the equations and geometry of their graphs, this resource also promotes the idea of specialising with simple cases first, before going on to try and generalise your findings. It would be beneficial for students to discuss their approaches, and their reasons for taking them, as there are likely to be varied ideas about how to start the problem.
This could also be a good opportunity to discuss how many solutions a system of equations will give you. Many students may assume there will be a finite number of solutions to this problem. You need to know two pieces of information to draw a straight line graph, and here you appear to be given three. However, understanding that there are $8$ ways of combining the information you are given, should help them to understand the structure of the task and why there are multiple solutions for any value of the gradient you choose.
### Possible approach
Students could start working individually or in pairs thinking about approaches to the Main problem.
### Key questions
• How could we start? What simplifications can we make?
• How many pieces of information do we need to define a line?
• With $3$ pieces of information how many ways can we combine them?
### Possible support
Encourage students to start with a very simple line and try to find a pair for it. They might use graphing software and adjust coefficients to try to achieve a solution.
### Possible extension
The Taking it further section starts to generalise more. An applet is provided that helps the students to see what the four possible lines could be for each example. However students could also create this applet for themselves, to show their understanding of the constraints given in the question. |
### The Lambert W-Function
In a recent post ( The Power Tower ) we described a function defined by iterated exponentiation:
$\displaystyle y(x) = {x^{x^{x^{.^{.^{.}}}}}}$
It would seem that when ${x>1}$ this must blow up. Surprisingly, it has finite values for a range of x>1.
Below, we show that the power tower function may be expressed in terms of a function called the Lambert W-Function.
The W-function has applications in a wide range of areas in pure and applied mathematics. Thus, in addition to being a source of innocent merriment, the power tower function is connected with many important practical problems.
Johann Heinrich Lambert
Johann Heinrich Lambert (1728–1777) was a Swiss mathematician, physicist and astronomer. Lambert was born about twenty years later than Euler. In one of his papers, Euler referred to his younger compatriot as “The ingenious engineer Lambert”.
Johann Heinrich Lambert (1728–1777).
Lambert is remembered as the first person to prove the irrationality of ${\pi}$. Euler had earlier proved that e is irrational. Lambert conjectured that e and ${\pi}$ were both transcendental numbers. But the proof of this was not found for about another century. The transcendence of ${e}$ was shown in 1873 by Charles Hermite and, in 1882, Ferdinand von Lindemann published a proof that ${\pi}$ is transcendental.
Lambert had very wide scientific interests. He introduced the hyperbolic functions into spherical geometry and proved some key results for hyperbolic triangles. He also devised several map projections that are still in use today.
Lambert conformal conic projection with standard parallels at 20 N and 50 N (image from Wikimedia Commons).
The Lambert W-Function
In studying the solutions of a family of algebraic equations, Lambert introduced a power series related to a function that has proved to be of wide value and importance. The Lambert W-function is defined as the inverse of the function ${z=w\exp(w)}$. Thus
$\displaystyle w = W(z) \qquad\Longleftrightarrow\qquad z = w\exp(w) \,.$
A plot of ${w=W(z)}$ is presented here.
Lambert W-function w=W(z), defined as the inverse of z=w exp(w).
We confine attention to real values of ${W(z)}$, which means that ${z\ge -1/e}$. The W-function is single-valued for ${z\ge 0}$ and double-valued for ${-1/e. The constraint ${W(z)>-1}$ defines a single-valued function on ${z\in[-1/e,+\infty)}$. This is the principal branch, denoted when appropriate as ${W_0(z)}$ (shown in blue above). The other branch, real on ${z\in[-1/e,0)}$, is denoted ${W_{-1}(z)}$ (shown in red).
Applications of the W-Function
The Lambert W-function occurs frequently in mathematics and physics. Indeed, it has been “re-discovered’ several times in various contexts. In pure mathematics, the W-function is valuable in solving transcendental and differential equations, in combinatorics (as the Tree function), for delay differential equations and for iterated epxonentials (which is the context in which we have introduced it).
In theoretical computer science, the W-function is used in the analysis of algorithms. Physical applications include water waves, combustion problems, population growth, eigenstates of the hydrogen molecule and, recently, quantum gravity.
The W-function also serves as a pedagogical aid. It is a useful example in introducing implicit functions. It is also a valuable test case for numerical solution methods. In the context of complex variable theory, it is a simple example of a function with both algebraic and logarithmic singularities.
Finally, W(z) has a range of interesting asymptotic behaviours. For further references on this, see the technical note linked at the end of this post.
The Power Tower Function and W
For the power tower function defined at the top of this post, we consider the iterative sequence of successive approximations:
$\displaystyle y_1 = x \, \qquad y_{n+1} = x^{y_n}.$
Through numerical experiments, we find that the sequence ${\{y_n\}}$ converges for ${e^{-e}. In fact, this result was first proved by Euler! When this sequence converges, we have an explicit expression for ${x}$ as a function of ${y}$:
$\displaystyle x = y^{1/y}$
Defining ${\xi=\log x}$, it follows that ${y=\exp(\xi y)}$. We can write this as
$\displaystyle (-\xi y)\exp(-\xi y) = (-\xi)$
We now define ${z = -\xi}$ and ${w = -\xi y}$ and have ${z=w\exp(w)}$. But, by the definition of the Lambert W-function, this means that ${w=W(z)}$.
Returning to variables ${x}$ and ${y}$, we conclude that
$\displaystyle y = \frac{W(-\log x)}{-\log x}$
which is the expression for the Power Tower function in terms of the Lambert W-function.
Further Information
• Technical Note with more detail:The Power Tower Function. Peter Lynch, UCD, 2013 (PDF).
• The canonical reference on the Lambert W-Function: Corless, R. M., Gonnet, G. H., Hare, D. E. G., Jeffrey, D. J. and Knuth, D. E. (1996). On the Lambert W function. Adv. Comp. Math. 5, 329–359 (Preprint (Postscript) )
• A lighthearted introduction to the Lambert W-function: Hayes, Brian, 2005: Why W? American Scientist, 93, 104-108 ( PDF ). |
# statsmodels.tsa.arima_process.deconvolve¶
statsmodels.tsa.arima_process.deconvolve(num, den, n=None)[source]
Deconvolves divisor out of signal, division of polynomials for n terms
calculates den^{-1} * num
Parameters
numarray_like
signal or lag polynomial
denomarray_like
coefficients of lag polynomial (linear filter)
nNone or int
number of terms of quotient
Returns
quotarray
quotient or filtered series
remarray
remainder
Notes
If num is a time series, then this applies the linear filter den^{-1}. If both num and den are both lag polynomials, then this calculates the quotient polynomial for n terms and also returns the remainder.
This is copied from scipy.signal.signaltools and added n as optional parameter. |
Asaf Karagila
I don't have much choice...
On this page you can find various tweet-size (or so) pieces of thought that I felt worth sharing. Since I don't have a Twitter account (and I don't intend to have a Twitter account any time soon either), I figured it must be a good enough place to post them.
(24) Can we make a point of teaching the technique of symmetric extensions as part of the basic forcing course, always? I'm sick of writing exposition to symmetric extensions in every single paper just so people will feel it's self-contained enough to read. Now I know how the authors of the first papers about forcing must have felt. (Link here)
(23) Random logician thought of the day: how do constructivists feel about the phrase "not unlike"? I know I dislike it (I mean, is it so hard to eliminate the double negation?). (Link here)
(22) Scientific research (mathematical or otherwise) is very much atomic, and the researcher is very much "electronic". You start at a ground state, and the more effort you put into it, the more excited you become and the further you get from the nucleus of previous knowledge. And of course, there is a huge vacuum of which we know very little in between. (Link here)
(21) Taking a symmetric extension is like having buyer's remorse with respect to your generic filter (in most cases, anyway). You buy a generic filter, and you think it's nice. You add it to your model, you use it for a little bit, then you immediate regret it and return it to the store. And you generally argue that you may have wanted a similar but slightly different generic filter instead. So you try a similar-but-slightly-different filter, but you decide it's not a good fit either. You end up using a bunch of these filters and then you give up and don't add a generic filter at all to your model... But you still have all those new sets that were left from using all those filters. And that ends up being a model of ZF. Okay, I went too far with this analogy. (Link here)
(20) Philosophical quadruple whammy: We are brains in a vat, and the entire reality fed to us is being manipulated by Descartes' demon, and that demon and the whole reality in which he exists (with our brains and the vats) is a computer simulation ran by Laplace's demon in another reality, which in itself is a thought experiment in my own personal brain. (Link here)
(19) Greenspun's Tenth Rule of programming states that "Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified bug-ridden slow implementation of half of Common Lisp." This extends to mathematics. Every elementary proof that avoids some abstract technical construction will contain an ad-hoc, informally-specified and slow implementation of half the technical concept it tried to avoid. For specifics see any proof about cardinal arithmetic that avoids ordinals in favor of Zorn's lemma. (Link here)
(18) New Year's resolution for 2016: To fulfill exactly those resolutions which I end up not fulfilling. Hey, wait a minute... (Link here)
(17) My phone is non-constructive. Whenever I plug it to the wall it tells me "Charging (AC)". Apparently the axiom of choice does have physical applications! (Link here)
(16) Irony is when someone who essentially tries to disprove the existence of infinite sets says that the way his ideas are being rejected is similar to how Cantor's ideas were rejected back at the day. (Link here)
(15) When your proof or theorem go down in flames, do not despair. From their ashes, like a phoenix, are born new theorems, new proofs and new methods. You only need the will and tenacity to dig through the cinders and find the unhatched eggs. (Link here)
(14) The motto for choiceless mathematics "If you can't decide which finger to choose, take the entire hand." By this I mean, of course, if you can't choose one - take everything! (Link here)
(13) Good indexing the the key for any sequential construction, and the bad indexing is the bane of every sequential construction. Did I mention how much I despite the process through which you get to the right indexing sometimes? I don't think I've complained about that enough by now. Indexing sucks. (Link here)
(12) Corollary from a conversation with Yair Hayut: Every theorem you prove is actually a theorem of Tarski, since it is equivalent to a theorem of Tarski. If you prove that something is in fact independent then it is not a theorem of Tarski, but the proof of its independence is in fact a theorem of Tarski... (Link here)
(11) I love "No Country For Old Men" by the Coen Brothers. It's a lovely movie, full of rich and wonderful characters. But as much as I admire the unstoppable angel of death which is Anton Chigurh, or the wonderful sheriff Ed Tom Bell, the best character in that movie is Ellis, without a doubt. And I dare anyone to dispute that without being wrong. (Link here)
(10) Being a mathematician means being wrong 99% of the time, and hopefully being only remembered for that other 1%. (Link here)
(9) The axioms of set theory only tells us what sort of basic properties sets should have. Much like some basic rules about what counts as a place fit for human living should be (running water, bed, kitchen, etc.). But then you can look at different structures and see different things. Some of them will look completely different than others; and some despite looking almost the same will be different from one another when you look at their content. Similarly models of set theory can be very similar, but still different, or entirely different in more way than one. (Link here)
(8) They tell me that every hydrogen atom in the universe is the same. Wouldn't it be great if we eventually learn to read the quantum field in which electrons live, and start to discern between different atoms of hydrogen? Wouldn't it be magnificent if each atom is unique, and everything has a unique footprint in the physical cosmos? I think it would be spectacular. But I don't know if any of us will live to see such wonder. (Link here)
(7) In a totally disconnected space no one can hear you scream. (Link here)
(6) I have grown to appreciate Ben Affleck, both as an actor and a director. But am I the only one afraid that Zack Snyder's direction of Batman in the upcoming "B versus S: DotJL" movie is going to be a blend of Rorschach, The Comedian, and Leonidas "The Scot"? I sure hope that I'm wrong, and there's nothing more that I'd love than that. Well, except maybe a lot of other things. In any case, I'm confident that Affleck will pull off his own Batman movie. And there's nothing more that I'd hate than to eat that last sentence (except maybe a lot of other other things). (Link here)
(5) Someone needs to convince Calvin Klein to name their next perfume "omega one", then the media will be full of $\omega_1^{CK}$ ads and banners. Wouldn't that be great? (Link here)
(4) Clarke's third law of set theory: Any sufficiently advanced argument with forcing, large cardinals and combinatorics is indistinguishable from black magic. (Link here)
(3) I never understood why people have such a hard time conceiving that real numbers and other mathematical objects can be represented as sets. Do they know that a computer converts their pdf files, the code of the reader used to open them, and the compiler used to write that reader into electric signals? Do they not have a problem with that? (Link here)
(2) People who say that "you don't really understand something until you can explain it your grandmother" have at least one living grandmother, or they didn't understand the proverb. (Link here)
(1) Sets are more fundamental than integers. We like to think that mathematics evolved from arithmetics and the need to count how many animals are in a group, or something. But first you need to identify a collection, and know that you want that specific collection to be counted. If anything, numbers evolved from the need to assign cardinality to sets. (Link here) |
# Question regarding Chebyshev function bound with product
So I stumbled upon this while trying to prove something else and cannot find a simple (read:elementary) proof for this (I'm old fashioned and still use $\log$ for natural log):
For large enough even $x$, $\log(8x+1)+\log(8x+3)+\cdots+\log(9x-1)-\log 1-\log 3-\cdots-\log(x-1)>\vartheta(9x)-\vartheta(8x)$.
For general $x$ as $\sum_{\substack{8x<k\leq9x\\k\text{ odd}}}\log k-\sum_{\substack{k\leq x\\k\text{ odd}}}\log k>\vartheta(9x)-\vartheta(8x)$.
I know it holds for at least $x\geq 11000$ by applying bounds to the Chebyshev function and using Stirling's approximation, but I would like to avoid using said Chebyshev bounds as they were proven with analytic results and I'm hoping for a more elementary approach.
• The trivial bound is $\displaystyle\vartheta(x+y)-\vartheta(x) = \sum_{p \in (x,x+y]} \log (p) \le \sum_{2n+1 \in (x,x+y]} \log (2n+1)$. Of course it is not true when $y$ is small that $\displaystyle\vartheta(x+y)-\vartheta(x) \le \sum_{2n+1 \in (x,x+y]} \log (2n+1)-\sum_{2n+1 \in [1,x]} \log (2n+1)$, but it will be true when $y/x$ is large enough. You can use $\sum_{n \le x} \log(n) \sim x \log x$. – reuns Aug 5 '17 at 22:23 |
### Home > MC1 > Chapter 6 > Lesson 6.2.6 > Problem6-110
6-110.
Mrs. Ferguson, your school librarian, asks you to conduct a survey of how many books students read during the year. You get the following results: $12, 24, 10, 36, 12, 21, 35, 10, 8, 12, 15, 20, 18, 25, 21$, and $9$.
1. Use the data to create a stem-and-leaf plot.
$0$ $8\ \ 9$ $1$ $0\ \ 0\ \ 2\ \ 2\ \ 2\ \ 5\ \ 8$ $2$ $0\ \ 1\ \ 1\ \ 4\ \ 5$ $3$ $5\ \ 6$
2. Calculate the mean, median, and mode for the data.
Refer to the example from the Math Notes box from Lesson 1.3.3 for help. |
# Chapter 3 - Derivatives - 3.10 Derivatives of Inverse Trigonometric Functions - 3.10 Execises: 11
${f^,}\,\left( x \right) = - \frac{{2{e^{ - 2x}}}}{{\sqrt {1 - {e^{ - 4x}}} }}$
#### Work Step by Step
$\begin{gathered} f\,\left( x \right) = {\sin ^{ - 1}}\,\left( {{e^{ - 2x}}} \right) \hfill \\ \hfill \\ use\,\,the\,\,formula\,\,\frac{d}{{dx}}\,\,\left[ {{{\sin }^{ - 1}}\,u} \right] = \frac{{{u^,}}}{{\sqrt {1 - {u^2}} }} \hfill \\ \hfill \\ then \hfill \\ \hfill \\ {f^,}\,\left( x \right) = \frac{{ - 2{e^{ - 2x}}}}{{\sqrt {1 - {e^{ - 4x}}} }} \hfill \\ \hfill \\ simplify \hfill \\ \hfill \\ {f^,}\,\left( x \right) = - \frac{{2{e^{ - 2x}}}}{{\sqrt {1 - {e^{ - 4x}}} }} \hfill \\ \end{gathered}$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. |
News
# Infineon Reports Decline in Current Results Longer-Term Growth Drivers Remain
February 06, 2020 by Paul Shepard
Infineon Technologies AG reported results for the first quarter of the 2020 fiscal year (period ended 31 December 2019), including Q1 FY 2020: Revenue of €1,916 million; Segment Result of €297 million; Segment Result Margin of 15.5 percent.
In addition, the company reported that the Cypress acquisition is expected to close towards the end of the current quarter or at the beginning of the following quarter.
"Our well-diversified business performed robustly at the beginning of the fiscal year. Under difficult conditions, revenue fell in line with expectations. Our cost reduction measures are beginning to take effect. Those measures and several non-recurring factors caused the Segment Result to come in slightly better than expected," said Dr. Reinhard Ploss, CEO of Infineon.
"Demand for the latest generation of our silicon microphones is growing dynamically. We are also seeing signs of improvement in individual areas such as the server business. Overall, however, we do not expect to see a broad based recovery of demand before the second half of the fiscal year. Our long-term growth drivers remain intact and we are making a crucial contribution to shaping the future of mobility and energy efficiency," concluded Dr. Ploss.
Outlook for FY 2020: Based on an assumed exchange rate of US$1.13 to the euro, revenue still expected to grow at 5 percent year-on-year (plus or minus 2 percentage points), with Segment Result Margin of about 16 percent at mid-point of revenue guidance. Investments of around 1.3 billion euros planned. Free cash flow in range of €500 to €700 million anticipated. Outlook for Q2 FY 2020: Based on an assumed exchange rate of US$1.13 to the euro, quarter-on-quarter revenue growth of 5 percent (plus or minus 2 percentage points); Segment Result Margin of about 14 percent predicted at mid-point of revenue guidance. |
Change the chapter
Question
What is the energy obtained when 10 g of mass is converted to energy with an efficiency of 70%?
1. $3.93\times 10^{27}\textrm{ MeV}$
2. $3.93\times 10^{30}\textrm{ MeV}$
3. $5.23\times 10^{27}\textrm{ MeV}$
4. $5.23\times 10^{30}\textrm{ MeV}$
Solution Video |
## Srikanth Srinivasan - A Robust Version of Hegedűs's Lemma, with Applications
theoretics:9143 - TheoretiCS, March 1, 2023, Volume 2 - https://doi.org/10.46298/theoretics.23.5
A Robust Version of Hegedűs's Lemma, with Applications
Authors: Srikanth Srinivasan
Hegedűs's lemma is the following combinatorial statement regarding polynomials over finite fields. Over a field $\mathbb{F}$ of characteristic $p > 0$ and for $q$ a power of $p$, the lemma says that any multilinear polynomial $P\in \mathbb{F}[x_1,\ldots,x_n]$ of degree less than $q$ that vanishes at all points in $\{0,1\}^n$ of some fixed Hamming weight $k\in [q,n-q]$ must also vanish at all points in $\{0,1\}^n$ of weight $k + q$. This lemma was used by Hegedűs (2009) to give a solution to \emph{Galvin's problem}, an extremal problem about set systems; by Alon, Kumar and Volk (2018) to improve the best-known multilinear circuit lower bounds; and by Hrubeš, Ramamoorthy, Rao and Yehudayoff (2019) to prove optimal lower bounds against depth-$2$ threshold circuits for computing some symmetric functions. In this paper, we formulate a robust version of Hegedűs's lemma. Informally, this version says that if a polynomial of degree $o(q)$ vanishes at most points of weight $k$, then it vanishes at many points of weight $k+q$. We prove this lemma and give three different applications.
Volume: Volume 2
Published on: March 1, 2023
Accepted on: December 20, 2022
Submitted on: February 28, 2022
Keywords: Computer Science - Computational Complexity |
# Leetcode Offer 58 - Rotate string leftly
Note:
• Normal approach is easy to come up with, but try to using O(1) extra space.
• Original length is length = s.length
• Along with concatenating the last char to s, we find that we are always using newS[oriLength - 1].
Question:
Reverse string from index k to the left.
Example:
Code: |
# Kinetic energy in rotating objects.
by Duely Cack
Tags: diameter, kinetic energy, math, rotation
P: 328 $$K_e = \frac{1}{2}I\omega^2$$ You have to use omega as radians per second, mass has to be in kilograms. |
# 3-Valued Semantics
Before we can move from three-valued logic to fuzzy logic, we need to take a look at semantics – both how conventional two-valued logic handle semantics, and how three-valued logics extend the basic semantic structure. This isn’t exactly one of the more exciting topics I’ve ever written about – but it is important, and going through it now will set the groundwork for the interesting stuff – the semantics of a true fuzzy logic.
What we’ve looked at so far has been propositional 3-valued logics. Propositional logics aren’t particularly interesting. You can’t do or say much with them. What we really care about is predicate logics. But all we need to do is take the three-valued logics we’ve seen, and allow statements to be predicate(object).
In a conventional first-order predicate logic, we define the semantics in terms of a model or interpretation of the logic. (Technically, a logic and an interpretation aren’t quite the same thing, but for our purposes here, we don’t need to get into the difference.)
An interpretation basically takes a domain consisting of a set of objects or values, and does two things:
1. For each atomic symbol in the logic, it assigns an object from the domain. That value is called the interpretation of the symbol.
2. For each predicate in the logic, it assigns a set, called the extension of the predicate. The extension contains the tuples for which the predicate is true.
For example, we could use logic to talk about Scientopia. The domain would be the set of bloggers, and the set of blogs. Then we could have predicates like “Writes”, which takes two parameters – A, and B – and which is true is A is the author of the blog B.
Then the extension of “Writes” would be a set of pairs: { (MarkCC, Good Math/Bad Math), (Scicurious, Neurotic Physiology), … }.
We can also define the counterextension, which is the set of pairs for whiche the predicate is not true. So the counterextension of “writes” would contain values like { (MarkCC, Neurotic Physiology), …}.
Given a domain of objects, and the extension of each predicate, we know the meaning of statements in the logic. We know what objects we’re reasoning about, and we know the truth or falsehood of every statement. Importantly, we don’t need to know the counterextension of a predicate: if we know the extension, then the counterextension is simple the complement of the extension.
In three-valued Lukasiewicz logic, that’s not true, for reasons that should be obvious: if $I(A)$ is the interpretation of the predicate $A$, then the complement of $I(A)$ is not the same thing as $I(lnot A)$. $L_3$ requires three sets for a predicate: the extension, the counterextension, and the fringe. The fringe of a predicate $P$ is the set of values $x$ for which $P(x)$ is $N$.
To be more precise, an interpretation $I$ for first order $L_3$ consists of:
1. A set of values, $D$, called the domain of the logic. This is the set of objects that the logic can be used to reason about.
2. For each predicate $P$ of arity $n$ (that is, taking $n$ arguments), three sets $ext(P)$, $cext(P)$, and $fringe(P)$, such that:
• the values of the members of all three sets are members of $D^n$.
• the sets are mutually exclusive – that is, there is no value that is in more than one of the sets.
• the sets are exhaustive: $ext(P) cup cext(P) cup fringe(P) = D^n$.
3. For each constant symbol $a$ in the logic, an assignment of $a$ to some member of $D$: $I(a) in D$
With the interpretation, we can look at statements in the logic, and determine their truth or falsehood. But when we go through a proof, we’ll often have statements that don’t operate on specific values – they use variables inside of them. In order to make a statement have a truth value, all of the variables in that statement have to be bound by a quantifier, or assigned to a specific value by a variable assignment. So given a statement, we can frequently only talk about its meaning in terms of variable assignments for it.
So, for example, consider a simple statement: P(x,y,z). In the interpretation I, P(x, y, z) is satisfied if $(I(x), I(y), I(z)) in ext(P)$; it’s dissatisfied if $(I(x), I(y), I(z)) in cext(P)$. Otherwise, $(I(x), I(y), I(z))$ must be in $fringe(P)$, and then the statement is undetermined.
The basic connectives – and, or, not, implies, etc., all have defining rules like the above – they’re obvious and easy to derive given the truth tables for the connectives, so I’m not going to go into detail. But it does get at least a little bit interesting when we get to quantified statements. But to talk about we need to first define a construction called a variant. Given a statement with variable assignment v, which maps all of the variables in the statement to values, an x-variant $v$ of $v$ is a variable assignment where for every variable $y$ except $x$, $v$. In other words, it’s an assignment where all of the variables except x have the same value as in $v$.
Now we can finally get to the interpretation of quantified statements. Given a statement $forall x P$, $P$ is satisfied by a variable assignment $v$ if $P$ is satisfied by every x-variant of $v$; it’s dissatisfied if $P$ is dissatisfied by at least one x-variant of $v$. Otherwise, it’s undetermined.
Similarly, an existentially quantified statement $exists x P$ is satisfied by $v$ if $P$ is satisfied by at least one x-variant of $v$; it’s dissatisfied if $P$ is dissatisfied by every x-variant of v. Otherwise, it’s undetermined.
Finally, now, we can get to the most important bit: what it means for a statement to be true or false in $L_3$. A statement $S$ is $T$ (true) if it is satisfied by every variable assignment on $I$; it’s $F$ (false) if it’s dissatisfied by every variable assignment on $I$, and it’s $N$ otherwise.
# Fuzzy Logic vs Probability
In the comments on my last post, a few people asked me to explain the difference between fuzzy logic and probability theory. It’s a very good question.
The two are very closely related. As we’ll see when we start looking at fuzzy logic, the basic connectives in fuzzy logic are defined in almost the same way as the corresponding operations in probability theory.
The key difference is meaning.
There are two major schools of thought in probability theory, and they each assign a very different meaning to probability. I’m going to vastly oversimplify, but the two schools are the frequentists and the Bayesians
First, there are the frequentists. To the frequentists, probability is defined by experiment. If you say that an event E has a probability of, say, 60%, what that means to the frequentists is that if you could repeat an experiment observing the occurrence or non-occurrence of E an infinite number of times, then 60% of the time, E would have occurred. That, in turn, is taken to mean that the event E has an intrinsic probability of 60%.
The other alternative are the Bayesians. To a Bayesian, the idea of an event having an intrinsic probability is ridiculous. You’re interested in a specific occurrence of the event – and it will either occur, or it will not. So there’s a flu going around; either I’ll catch it, or I won’t. Ultimately, there’s no probability about it: it’s either yes or no – I’ll catch it or I won’t. Bayesians say that probability is an assessment of our state of knowledge. To say that I have a 60% chance of catching the flu is just a way of saying that given the current state of our knowledge, I can say with 60% certainty that I will catch it.
In either case, we’re ultimately talking about events, not facts. And those events will either occur, or not occur. There is nothing fuzzy about it. We can talk about the probability of my catching the flu, and depending on whether we pick a frequentist or Bayesian interpretation, that means something different – but in either case, the ultimate truth is not fuzzy.
In fuzzy logic, we’re trying to capture the essential property of vagueness. If I say that a person whose height is 2.5 meters is tall, that’s a true statement. If I say that another person whose height is only 2 meters is tall, that’s still true – but it’s not as true as it was for the person 2.5 meters tall. I’m not saying that in a repeatable experiment, the first person would be tall more often than the second. And I’m not saying that given the current state of my knowledge, it’s more likely than the first person is tall than the second. I’m saying that both people possess the property tall – but in different degrees.
Fuzzy logic is using pretty much the same tools as probability theory. But it’s using them to trying to capture a very different idea. Fuzzy logic is all about degrees of truth – about fuzziness and partial or relative truths. Probability theory is interested in trying to make predictions about events from a state of partial knowledge. (In frequentist terms, it’s about saying that I know that if I repeated this 100 times, E would happen in 60; in Bayesian, it’s precisely a statement of partial knowledge: I’m 60% certain that E will happen.) But probability theory says nothing about how to reason about things that aren’t entirely true or false.
And, in the other direction: fuzzy logic isn’t particularly useful for talking about partial knowledge. If you allowed second-order logic, you could have fuzzy meta-predicates that described your certainty about crisp first-order predicates. But with first order logic (which is really where we want to focus our attention), fuzzy logic isn’t useful for the tasks where we use probability theory.
So probability theory doesn’t capture the essential property of meaning (partial truth) which is the goal of fuzzy logic – and fuzzy logic doesn’t capture the essential property of meaning (partial knowledge) which is the goal of probability theory. |
# The derivation of $\delta_j = \frac{\partial E_n}{ \partial a_j}$ errors for hidden units in back propagation for neural networks with the chain rule
I was trying to understand the derivation for back propagation for multi-layer neural networks from Bishop's Pattern Recognition and Machine Learning book. Specifically I was reading section 5.3.1 from page 242 to 244.
The equation that is specifically confusing me is equation 5.55
$$\delta_j \equiv \frac{\partial E_n}{ \partial a_j} = \sum_k \frac{\partial E_n}{ \partial a_k} \frac{\partial a_k}{ \partial a_j}$$
where Bishop goes on to say:
where the sum runs over all units $k$ to which $j$ sends connections.
furthermore, what confuses me is the use of the chain rule and how the connections the affect the partial derivative.
To make this discussion easier recall the definition of $E_n$ (the error of a the neural network for the nth training point, equation 5.46):
$$E_n = \frac{1}{2} \sum_k (y_{nk} - t_{nk} )^2 = \frac{1}{2} \|y_n - t_n \|^2$$
where $y_n$ is the vector of outputs of our neural and $t_n$ is the true target output we are trying to learn.
My confusion is specifically how he applied the multivariable chain rule. Usually the way I think the multivariable chain rule is as follows; given a function $f(x_1(t), ..., x_N(t) )$, if we want its derivative wrt to t then we get:
$$\frac{df}{dt} = \sum^N_{k=1} \frac{\partial f}{\partial x_k} \frac{d x_k}{d t}$$
However, I am having some difficulties understanding how that equations was applied in the context of deriving $\frac{\partial E_n}{ \partial a_j}$ for the hidden units of the neural network.
In particular, I thought $E_n$ is a function of all the $a_j$'s, at every layer, so why wouldn't all the $a_j$'s from each layer be part of the summation but only the ones from one layer before?
Also, the part $\frac{\partial a_k}{ \partial a_j}$ from equation 5.55 is never zero, right? because the $a_j$ vs $a_k$ are from different nodes in the network, right? Wouldn't it be easier/better to indicate this dependence with superscripts indicating the layers?
Is somebody knows how to explain how that equation come about, I would be super grateful!
For reference I will paste the relevant section of the chapter as reference:
I will include superscripts and bias term to make it more clear. It seems to me $\frac{\partial a_k}{a_j}$ is totally redundant as it is equal to 1 when $k=j$ and 0 $otherwise$. It would be better to use notation:
$\delta_j^l = \frac{\partial E}{a_j^l} = \sum_k \frac{\partial E}{z_k^l} \frac{\partial z_k^l}{a_j^{l-1}}$ (Eq.1)
where $z_k^l$ is activation of k-th unit at l-th layer, $a_j^l$ is pre-activation of k-th unit of l-th layer.
Here is a few equation before we process;
$a_k^l = \sum_k W_{kj}^{l}*z_j^{l} + b_k^{l}$ (Eq.2)
$z_k^{l+1} = h(a_k^l)$ (Eq.3) where $h(.)$ is activation function and next layer starts after activation function.
$\frac{\partial z_k^{l+1}}{\partial a_k^l} = h'(a_k^l)$ (Eq.4)
$\frac{\partial a_k^l}{\partial W_{kj}^{l}} = z_j^{l}$ (Eq.5)
$\frac{\partial a_k^l}{\partial b_k^{l}} = 1$ (Eq.6)
$E = \frac{1}{2} \sum_k(z_k^L−t_k)^2$, (Eq.7) where $L$ is the index of last layer;
$\delta_j^L = \frac{\partial E}{a_j^{L-1}} = \sum_k \frac{\partial E}{z_k^L} \frac{\partial z_k^L}{a_j^{L-1}} = (z_k^L−t) .* h'(a_k^l)$, (Eq.8) in here I'm assuming activation of each unit only depends on its own pre-activation. So, if softmax is used it changes a bit. As we have error term at the last layer, gradients with respect to weights and biases at the previous layer can be evaluated by:
$\frac{\partial E}{\partial W_{kj}^{L-1}} = \frac{\partial E}{\partial a_k^{L-1}} \frac{\partial a_k^{L-1}}{\partial W_{kj}^{L-1}} = \delta_k^L \frac{\partial a_k^{L-1}}{\partial W_{kj}^{L-1}} = \delta_j^L z_k^{L-1}$, from Eq.5
$\frac{\partial E}{\partial b_{k}^{L-1}} = \frac{\partial E}{\partial a_k^{L-1}} \frac{\partial a_k^{L-1}}{\partial b_{k}^{L-1}} = \delta_k^L$, from Eq.6
In order to evaluate gradients of previous layer, error at the current layer s evaluated by using error at the later layer.
$\delta_j^{L-1} = \frac{\partial E}{\partial a_{j}^{L-2}} = \delta_k^{L} \frac{\partial a_j^{L-1}}{\partial z_k^{L-1}} \frac{\partial z_j^{L-1}}{\partial a_k^{L-2}} = (\delta_k^{L}* W_{kj}^{L-1}).*h'(a_j^{L-2})$
To sum up, summation over all units on a layer is only used when activation of a units depends on pre-activation of other units in the same layer like softmax. So, if units are independent you can more easly apply chain rule as in:
$\frac{\partial E}{\partial a_{N_1}^{1}} = \frac{\partial E}{\partial a_{N_L}^{L}} \frac{\partial a_{N_L}^{L}}{\partial a_{N_{L-1}}^{L-1}} \frac{\partial a_{N_{L-1}}^{L-1}}{\partial a_{N_{L-2}}^{L-2}}.... \frac{\partial a_{N_3}^{3}}{\partial a_{N_2}^{2}} \frac{\partial a_{N_2}^{2}}{\partial a_{N_1}^{1}}$ where $N_l$ is number of units at layer $l$ |
# Application of inverse function theorem to get short time existence
I am reading a book on curve shortening flow. Optionally, please see this image for the page that is confusing me (I am not allowed to include it in this post since I'm new): http://i.stack.imgur.com/L54lm.png [Thanks to user Leonid from SE for the image. Page 17 of The Curve Shortening Problem by Kai Seng Chou and Xi-Ping Zhu]
The authors construct a map $\mathcal{F}$ from $\tilde{C}^{k+2, \alpha}(S^1 \times (0,t))$ to $\tilde{C}^{k, \alpha}(S^1 \times (0,t))$, find its Frechet derivative and show it's an isomorphism, so we can use the inverse function theorem. They say there exists a $t_0$, $\epsilon$ and $\delta$ such that for any $f$ with $\lVert f - \mathcal{F}(v) \rVert < \epsilon$ there exists a unique $u$ such that $\lVert u - v \rVert < \delta$ and $\mathcal{F}(u) = f$ for all $t \leq t_0$.
I am confused about the part they say that "there exists a $t_0$ ... such that $\mathcal{F}(u) = f$ for all $t \leq t_0$". How does this time dependence come into this from the inverse function theorem?
The inverse function theorem I know doesn't state anything about this time dependence. The proof is confusingly written (for me anyway). If they fix the space to be $\tilde{C}^{k, \alpha}(S^1 \times (0,t))$ then how can they only say that the solution exists within a neighbourhood of the $(0,t)$? I thought you don't get control of that, only the space of functions.
Can anyone explain this? Are they using some other theorem? Thanks.
-
Perhaps something can be gleaned from the analogous proof of short-time existence for scalar ODEs:
Consider the initial value problem $x' = f(x)$ with $x(0) = x_0$ and define $F(x)(t) = x(t) - x_0 - \int_0^t f(x(s))ds$ so that zeros of F correspond to solutions of the IVP. Regard the function F as acting on some space of functions whose elements obey $x(0) = x_0$. The Derivative of F is $(F'(x)y)(t) = y(t) - \int_0^t f'(x(s))y(s)ds$. To show that F' is an isomorphism, one wants to show that the norm of $y \mapsto \int_0^t f'(x(s))y(s)ds$ is less than one. If you are working on a space of functions from $[-T,T] \to R$, then a cheap estimate is given by T times the maximum value that the absolute value of $f'$ takes. This can be made smaller than one by choosing T sufficiently small. Of course f' has to have a maximum value in the first place. This is dealt with through a short song-and-dance in which one works in an open subset of the function space for which the functions x take a restricted set of values so that on these values f' does take a maximum absolute value.
I suspect that something similar is going on here.
More generally, given an operator A that can be regarded as acting on either a Banach space X or some other Banach space Y, the spectrum of A in general will depend upon the Banach space. That the note produced by a vibrating harp string depends on the length of the string furnishes an example of this phenomenon. (A is the second derivative, X is the set of functions from $[0,L_x]$ to R with Dirichlet boundary conditions and Y is the set of functions from $[0,L_y]$ to R with Dirichlet boundary conditions.)
In your situation again the different function spaces contain points which themselves are functions defined on shorter or longer time intervals.
-
Indeed, the argument for short time existence of a parabolic PDE is essentially the same as the proof for the short time existence of a system of first order ODE's. The only difference is that the curve is a map into a carefully chosen Banach space instead of $R^n$. – Deane Yang Jun 21 '12 at 16:44
Thanks for the reply. – user24394 Jun 23 '12 at 17:46 |
• 论文 •
增长依赖于不可测状态非线性系统量化输出反馈控制设计
1. 山东大学控制科学与工程学院, 济南, 250061
• 出版日期:2012-06-25 发布日期:2012-08-22
MAN Yongchao, LIU Yungang. QUANTIZED OUTPUT FEEDBACK CONTROL DESIGN FOR NONLINEAR SYSTEMS WITH UNMEASURED STATES DEPENDENT GROWTH[J]. Journal of Systems Science and Mathematical Sciences, 2012, 32(6): 705-718.
QUANTIZED OUTPUT FEEDBACK CONTROL DESIGN FOR NONLINEAR SYSTEMS WITH UNMEASURED STATES DEPENDENT GROWTH
MAN Yongchao, LIU Yungang
1. School of Control Science and Engineering, Shandong University, Ji’nan 250061
• Online:2012-06-25 Published:2012-08-22
This paper considers the quantized output feedback control design for a class of ncertain nonlinear systems. Different from the existing literature, the onlinear growth of the ystems under investigation depends on the unmeasured states, which makes the observer design inthe existing literature inapplicable, and renders the performance analysis for the closed-loop ystem more complex and difficult. First, a new high-gain observer is introduced to reconstruct he unmeasured states. Then, by applying the set-valued maps and recursive control design ethod in the existing literature, a quantized output feedback controller is designed. Finally, y the cyclic-small-gain theorem and dynamic quantization strategy, a sufficient condition is iven to guarantee that all states of the closed-loop system are bounded and that output can beultimately arbitrarily small.
MR(2010)主题分类:
()
[1] 胡慧,刘国荣,郭鹏,胡俊达. 一类非仿射非线性系统的神经网络自适应跟踪控制[J]. 系统科学与数学, 2013, 33(9): 1007-1016. [2] 满永超,刘允刚. 高阶不确定非线性系统切换自适应镇定[J]. 系统科学与数学, 2013, 33(6): 661-670. [3] 李宝全,方勇纯,张雪波,何万峰. 基于选择策略的移动机器人视觉伺服镇定方法[J]. 系统科学与数学, 2012, 32(6): 750-767. [4] 高芳征,尚艳玲,袁付顺. 更一般高阶非完整系统的指数调节[J]. 系统科学与数学, 2012, 32(2): 149-160. [5] 毛学志, 徐勇, 刘建平, 马会泉. 各向异性的群集行为分析[J]. 系统科学与数学, 2011, 31(8): 913-920. [6] 毕卫萍, 张俊锋. 一类非线性上三角系统的实用输出跟踪控制[J]. 系统科学与数学, 2011, 31(7): 775-785. |
# Equilibrium constant
Jump to: navigation, search
For a general chemical equilibrium
${\displaystyle \alpha A+\beta B...\rightleftharpoons \sigma S+\tau T...}$
the equilibrium constant can be defined by[1]
${\displaystyle K={\frac {{\{S\}}^{\sigma }{\{T\}}^{\tau }...}{{\{A\}}^{\alpha }{\{B\}}^{\beta }...}}}$
where {A} is the activity of the chemical species A, etc. (activity is a dimensionless quantity). It is conventional to put the activities of the products in the numerator and those of the reactants in the denominator.
For equilibria in solution, activity is the product of concentration and activity coefficient. Most chemists determine equilibrium constants in a solution with a high ionic strength. In high strength solutions, the quotient of activity coefficients changes very little. So, the equilibrium constant is defined as a concentration quotient:
${\displaystyle K_{c}={\frac {{[S]}^{\sigma }{[T]}^{\tau }...}{{[A]}^{\alpha }{[B]}^{\beta }...}}}$
However, the value of Kc will depend on the ionic strength. (The square brackets mean the concentration of A, B and so on.)
This is a simple idea. In an equilibrium, atoms can combine or break apart because the reaction can work in both directions. For the reaction to work, all of the parts must be present to combine. This is more likely to happen if the reactants have a higher concentration. So, the concentrations of all the necessary pieces are multiplied together to get the probability that they will be in the same place for the reaction. (If the reaction requires two molecules of a particular compound, then the concentration of that compound is squared.) Going the other way, all of the concentrations of those necessary pieces are multiplied together to get the probability that they will be in the same place to react in the opposite direction. The ratio between those two numbers represents how popular each side of the reaction will be when equilibrium is reached. An equilibrium constant of 1 means that both sides are equally popular. Chemists perform experiments to measure the equilibrium constant of various reactions.
## References
1. F.J,C. Rossotti and H. Rossotti, The Determination of Stability Constants, McGraw-Hill, 1961. |
# Dispersion (optics): Wikis
Note: Many of our articles have direct quotes from sources you can cite, within the Wikipedia article! This article doesn't yet, but we're working on it! See more info or our list of citable articles.
# Encyclopedia
In a prism, material dispersion (a wavelength-dependent refractive index) causes different colors to refract at different angles, splitting white light into a rainbow.
In optics, dispersion is the phenomenon in which the phase velocity of a wave depends on its frequency,[1] or alternatively when the group velocity depends on the frequency. Media having such a property are termed dispersive media. Dispersion is sometimes called chromatic dispersion to emphasize its wavelength-dependent nature, or group-velocity dispersion (GVD) to emphasize the role of the group velocity.
The most familiar example of dispersion is probably a rainbow, in which dispersion causes the spatial separation of a white light into components of different wavelengths (different colors). However, dispersion also has an effect in many other circumstances: for example, GVD causes pulses to spread in optical fibers, degrading signals over long distances; also, a cancellation between group-velocity dispersion and nonlinear effects leads to soliton waves. Dispersion is most often described for light waves, but it may occur for any kind of wave that interacts with a medium or passes through an inhomogeneous geometry (e.g. a waveguide), such as sound waves.
There are generally two sources of dispersion: material dispersion and waveguide dispersion. Material dispersion comes from a frequency-dependent response of a material to waves. For example, material dispersion leads to undesired chromatic aberration in a lens or the separation of colors in a prism. Waveguide dispersion occurs when the speed of a wave in a waveguide (such as an optical fiber) depends on its frequency for geometric reasons, independent of any frequency dependence of the materials from which it is constructed. More generally, "waveguide" dispersion can occur for waves propagating through any inhomogeneous structure (e.g. a photonic crystal), whether or not the waves are confined to some region. In general, both types of dispersion may be present, although they are not strictly additive. Their combination leads to signal degradation in optical fibers for telecommunications, because the varying delay in arrival time between different components of a signal "smears out" the signal in time.
## Material dispersion in optics
The variation of refractive index vs. wavelength for various glasses. The wavelengths of visible light are shaded in red.
Influences of selected glass component additions on the mean dispersion of a specific base glass (nF valid for λ = 486 nm (blue), nC valid for λ = 656 nm (red))[2]
Material dispersion can be a desirable or undesirable effect in optical applications. The dispersion of light by glass prisms is used to construct spectrometers and spectroradiometers. Holographic gratings are also used, as they allow more accurate discrimination of wavelengths. However, in lenses, dispersion causes chromatic aberration, an undesired effect that may degrade images in microscopes, telescopes and photographic objectives.
The phase velocity, v, of a wave in a given uniform medium is given by
$v = \frac{c}{n}$
where c is the speed of light in a vacuum and n is the refractive index of the medium.
In general, the refractive index is some function of the frequency f of the light, thus n = n(f), or alternately, with respect to the wave's wavelength n = n(λ). The wavelength dependence of a material's refractive index is usually quantified by an empirical formula, the Cauchy or Sellmeier equations.
Because of the Kramers–Kronig relations, the wavelength dependence of the real part of the refractive index is related to the material absorption, described by the imaginary part of the refractive index (also called the extinction coefficient). In particular, for non-magnetic materials (μ = μ0), the susceptibility χ that appears in the Kramers–Kronig relations is the electric susceptibility χe = n2 − 1.
The most commonly seen consequence of dispersion in optics is the separation of white light into a color spectrum by a prism. From Snell's law it can be seen that the angle of refraction of light in a prism depends on the refractive index of the prism material. Since that refractive index varies with wavelength, it follows that the angle that the light is refracted by will also vary with wavelength, causing an angular separation of the colors known as angular dispersion.
For visible light, most transparent materials (e.g. glasses) have:
$1 < n(\lambda_{\rm red}) < n(\lambda_{\rm yellow}) < n(\lambda_{\rm blue})\ ,$
or alternatively:
$\frac{{\rm d}n}{{\rm d}\lambda} < 0,$
that is, refractive index n decreases with increasing wavelength λ. In this case, the medium is said to have normal dispersion. Whereas, if the index increases with increasing wavelength the medium has anomalous dispersion.
At the interface of such a material with air or vacuum (index of ~1), Snell's law predicts that light incident at an angle θ to the normal will be refracted at an angle arcsin(sin(θ)/n). Thus, blue light, with a higher refractive index, will be bent more strongly than red light, resulting in the well-known rainbow pattern.
## Group and phase velocity
Another consequence of dispersion manifests itself as a temporal effect. The formula v = c / n calculates the phase velocity of a wave; this is the velocity at which the phase of any one frequency component of the wave will propagate. This is not the same as the group velocity of the wave, that is the rate at which changes in amplitude (known as the envelope of the wave) will propagate. For a homogeneous medium, the group velocity vg is related to the phase velocity by (here λ is the wavelength in vacuum, not in the medium):
$v_g = c \left( n - \lambda \frac{dn}{d\lambda} \right)^{-1}.$
The group velocity vg is often thought of as the velocity at which energy or information is conveyed along the wave. In most cases this is true, and the group velocity can be thought of as the signal velocity of the waveform. In some unusual circumstances, called cases of anomalous dispersion, the rate of change of the index of refraction with respect to the wavelength changes sign, in which case it is possible for the group velocity to exceed the speed of light (vg > c). Anomalous dispersion occurs, for instance, where the wavelength of the light is close to an absorption resonance of the medium. When the dispersion is anomalous, however, group velocity is no longer an indicator of signal velocity. Instead, a signal travels at the speed of the wavefront, which is c irrespective of the index of refraction.[3] Recently, it has become possible to create gases in which the group velocity is not only larger than the speed of light, but even negative. In these cases, a pulse can appear to exit a medium before it enters.[4] Even in these cases, however, a signal travels at, or less than, the speed of light, as demonstrated by Stenner, et al.[5]
The group velocity itself is usually a function of the wave's frequency. This results in group velocity dispersion (GVD), which causes a short pulse of light to spread in time as a result of different frequency components of the pulse travelling at different velocities. GVD is often quantified as the group delay dispersion parameter (again, this formula is for a uniform medium only):
$D = - \frac{\lambda}{c} \, \frac{d^2 n}{d \lambda^2}.$
If D is less than zero, the medium is said to have positive dispersion. If D is greater than zero, the medium has negative dispersion. If a light pulse is propagated through a normally dispersive medium, the result is the higher frequency components travel slower than the lower frequency components. The pulse therefore becomes positively chirped, or up-chirped, increasing in frequency with time. Conversely, if a pulse travels through an anomalously dispersive medium, high frequency components travel faster than the lower ones, and the pulse becomes negatively chirped, or down-chirped, decreasing in frequency with time.
The result of GVD, whether negative or positive, is ultimately temporal spreading of the pulse. This makes dispersion management extremely important in optical communications systems based on optical fiber, since if dispersion is too high, a group of pulses representing a bit-stream will spread in time and merge together, rendering the bit-stream unintelligible. This limits the length of fiber that a signal can be sent down without regeneration. One possible answer to this problem is to send signals down the optical fibre at a wavelength where the GVD is zero (e.g. around ~1.3-1.5 μm in silica fibres), so pulses at this wavelength suffer minimal spreading from dispersion—in practice, however, this approach causes more problems than it solves because zero GVD unacceptably amplifies other nonlinear effects (such as four wave mixing). Another possible option is to use soliton pulses in the regime of anomalous dispersion, a form of optical pulse which uses a nonlinear optical effect to self-maintain its shape—solitons have the practical problem, however, that they require a certain power level to be maintained in the pulse for the nonlinear effect to be of the correct strength. Instead, the solution that is currently used in practice is to perform dispersion compensation, typically by matching the fiber with another fiber of opposite-sign dispersion so that the dispersion effects cancel; such compensation is ultimately limited by nonlinear effects such as self-phase modulation, which interact with dispersion to make it very difficult to undo.
Dispersion control is also important in lasers that produce short pulses. The overall dispersion of the optical resonator is a major factor in determining the duration of the pulses emitted by the laser. A pair of prisms can be arranged to produce net negative dispersion, which can be used to balance the usually positive dispersion of the laser medium. Diffraction gratings can also be used to produce dispersive effects; these are often used in high-power laser amplifier systems. Recently, an alternative to prisms and gratings has been developed: chirped mirrors. These dielectric mirrors are coated so that different wavelengths have different penetration lengths, and therefore different group delays. The coating layers can be tailored to achieve a net negative dispersion.
## Dispersion in waveguides
Optical fibers, which are used in telecommunications, are among the most abundant types of waveguides. Dispersion in these fibers is one of the limiting factors that determine how much data can be transported on a single fiber.
The transverse modes for waves confined laterally within a waveguide generally have different speeds (and field patterns) depending upon their frequency (that is, on the relative size of the wave, the wavelength) compared to the size of the waveguide.
In general, for a waveguide mode with an angular frequency ω(β) at a propagation constant β (so that the electromagnetic fields in the propagation direction z oscillate proportional to eiz − ωt)), the group-velocity dispersion parameter D is defined as:[6]
$D = -\frac{2\pi c}{\lambda^2} \frac{d^2 \beta}{d\omega^2} = \frac{2\pi c}{v_g^2 \lambda^2} \frac{dv_g}{d\omega}$
where λ = 2πc / ω is the vacuum wavelength and vg = dω / dβ is the group velocity. This formula generalizes the one in the previous section for homogeneous media, and includes both waveguide dispersion and material dispersion. The reason for defining the dispersion in this way is that |D| is the (asymptotic) temporal pulse spreading Δt per unit bandwidth Δλ per unit distance travelled, commonly reported in ps / nm km for optical fibers.
A similar effect due to a somewhat different phenomenon is modal dispersion, caused by a waveguide having multiple modes at a given frequency, each with a different speed. A special case of this is polarization mode dispersion (PMD), which comes from a superposition of two modes that travel at different speeds due to random imperfections that break the symmetry of the waveguide.
## Higher-order dispersion over broad bandwidths
When a broad range of frequencies (a broad bandwidth) is present in a single wavepacket, such as in an ultrashort pulse or a chirped pulse or other forms of spread spectrum transmission, it may not be accurate to approximate the dispersion by a constant over the entire bandwidth, and more complex calculations are required to compute effects such as pulse spreading.
In particular, the dispersion parameter D defined above is obtained from only one derivative of the group velocity. Higher derivatives are known as higher-order dispersion.[7] These terms are simply a Taylor series expansion of the dispersion relation β(ω) of the medium or waveguide around some particular frequency. Their effects can be computed via numerical evaluation of Fourier transforms of the waveform, via integration of higher-order slowly varying envelope approximations, by a split-step method (which can use the exact dispersion relation rather than a Taylor series), or by direct simulation of the full Maxwell's equations rather than an approximate envelope equation.
## Dispersion in gemology
In the technical terminology of gemology, dispersion is the difference in the refractive index of a material at the B and G Fraunhofer wavelengths of 686.7 nm and 430.8 nm and is meant to express the degree to which a prism cut from the gemstone shows "fire", or color. Dispersion is a material property. Fire depends on the dispersion, the cut angles, the lighting environment, the refractive index, and the viewer.
## Dispersion in imaging
In photographic and microscopic lenses, dispersion causes chromatic aberration, distorting the image, and various techniques have been developed to counteract it such as the use of multielement lenses with glasses with different dispersion characteristics: the net effect is to recombine (at least approximately) all colors.
## Dispersion in pulsar timing
Pulsars are spinning neutron stars that emit pulses at very regular intervals ranging from milliseconds to seconds. Astronomers believe that the pulses are emitted simultaneously over a wide range of frequencies. However, as observed on Earth, the components of each pulse emitted at higher radio frequencies arrive before those emitted at lower frequencies. This dispersion occurs because of the ionised component of the interstellar medium, which makes the group velocity frequency dependent. The extra delay added at a frequency ν is
$t = k_\mathrm{DM} \times \left(\frac{\mathrm{DM}}{\nu^2}\right)$
where the dispersion constant kDM is given by
$k_\mathrm{DM} = \frac{e^2}{2 \pi m_\mathrm{e}c} \simeq 4.149 \mathrm{GHz}^2\mathrm{pc}^{-1}\mathrm{cm}^3\mathrm{ms}$,
and the dispersion measure DM is the free electron column density (total electron content) ne integrated along the path traveled by the photon from the pulsar to the Earth, and is given by
$\mathrm{DM} = \int_0^d{n_e\;dl}$
with units of parsecs per cubic centimetre (1pc/cc = 30.857×1021 m−2).[8]
Typically for astronometric observations, this delay cannot be measured directly, since the emission time is unknown. What can be measured is the difference in arrival times at two different frequencies. The delay ΔT between a high frequency νhi and a low frequency νlo component of a pulse will be
$\Delta t = k_\mathrm{DM} \times \mathrm{DM} \times \left( \frac{1}{\nu_{\mathrm{lo}}^2} - \frac{1}{\nu_{\mathrm{hi}}^2} \right)$
Re-writing the above equation in terms of DM allows one to determine the DM by measuring pulse arrival times at multiple frequencies. This in turn can be used to study the interstellar medium, as well as allow for observations of pulsars at different frequencies to be combined.
## References
1. ^ Born, Max; Wolf, Emil (October 1999). Principle of Optics. Cambridge: Cambridge University Press. pp. 14–24. ISBN 0521642221.
2. ^ Calculation of the Mean Dispersion of Glasses
3. ^ Brillouin, Léon. Wave Propagation and Group Velocity. (Academic Press: San Diego, 1960). See esp. Ch. 2 by A. Sommerfeld.
4. ^ Wang, L.J., Kuzmich, A., and Dogariu, A. (2000). "Gain-assisted superluminal light propagation". Nature 406: 277.
5. ^ Stenner, M. D., Gauthier, D. J., and Neifeld, M. A. (2003). "The speed of information in a 'fast-light' optical medium". Nature 425: 695.
6. ^ Rajiv Ramaswami and Kumar N. Sivarajan, Optical Networks: A Practical Perspective (Academic Press: London 1998).
7. ^ Chromatic Dispersion, Encyclopedia of Laser Physics and Technology (Wiley, 2008).
8. ^ Lorimer, D.R., and Kramer, M., Handbook of Pulsar Astronomy, vol. 4 of Cambridge Observing Handbooks for Research Astronomers, (Cambridge University Press, Cambridge, U.K.; New York, U.S.A, 2005), 1st edition. |
# Java program to calculate mean of given numbers
Java Programming Java8Object Oriented Programming
Mean is an average value of given set of numbers. It is calculated similarly to that of the average value. Adding all given number together and then dividing them by the total number of values produces mean.
For Example
Mean of 3, 5, 2, 7, 3 is (3 + 5 + 2 + 7 + 3) / 5 = 4
## Algorithm
• Take an integer set A of n values.
• Add all values of A together.
• Divide result of Step 2 by n.
• The result is mean of A's values.
## Program
public class CaculatingMean {
public static void main(String args[]){
float mean;
int sum, i;
int n = 5;
int a[] = {2,6,7,4,9};
sum = 0;
for(i = 0; i < n; i++) {
sum+=a[i];
}
System.out.println("Mean ::"+ sum/(float)n);
}
}
## Output
Mean::5.6
Published on 26-Apr-2018 10:56:52
Advertisements |
In the last year I’ve attended talks by Marshall Clow and Chandler Carruth on C++ tooling and caught the fuzzing bug from them. This post is an attempt to show how to use this fun and productive technique to find problems in your own code.
# Fuzzing
The basic idea behind fuzzing is to try massive numbers of random inputs to code in order to trigger a vulnerability. You create a testbench for the code of interest, pair it with a fuzzing engine that generates random data, and launch it on some server somewhere. Hours, days, or weeks later - if your testbench is solid - it comes back with a set of inputs that cause the code to crash. This process may be accelerated by:
• Using a sanitizer: Compiler-supported sanitizers instrument binaries with extra code to check for illegal conditions - such as out-of-bounds memory accesses - that may not cause an immediate crash. This makes the code under test more likely to fail, and thus reduces the fuzzer running time.
• Using coverage-driven fuzzing: fuzzers can monitor program states reached under different inputs, and guide the inputs in a way that tends to produce new (and potentially erroneous) ones.
This post will demonstrate the use of coverage-driven fuzzers with sanitizing, applied to an open source JSON parsing library.
# The json_spirit Library
For my test case I selected an open-source library I’m familiar with: json_spirit. It provides a very simple interface to parsing and generating JSON:
The advantages of using this library as a test case are:
1. The input, a single string, matches perfectly the random data source supplied by fuzzers, and
2. Almost all of the functionality of the library can be accessed by exercising the input (parsing) and output (rendering) operations.
# Applying the Fuzzer Data
Fuzzers treat your code as a black box they are trying to exercise. They supply random input strings, and observe what code paths are executed. It is up to the user to construct a meaningful set of code inputs from random data with a test driver. In my chosen example, this is as simple as initializing a string using the supplied data. For a templating engine, you may choose to regard a portion of the input as the template and another portion as the model data. More complex examples can require even more interpretation.
In addition to constructing input data from the random string, you may also need to filter out strings that represent inputs you want to exclude from consideration. In the case of json_spirit, non-ASCII characters outside of double quotes are not handled, and the library does not (yet) handle invalid UTF-8 within double quotes. I therefore filter out such cases with some code of my own and a little help from Boost.Locale.
# Fuzzing Engines
The two fuzzers I tried out were libFuzzer, from the LLVM project, and the standalone tool American Fuzzy Lop.
libFuzzer can be checked out from LLVM’s Subversion repository and built using their directions. You supply a test driver as a function called LLVMFuzzerTestOneInput with C linkage. The result is a standalone program that exercises the code inside that function. It uses some Clang compiler-supplied instrumentation, via the -fsanitize-coverage option, to monitor which paths are exercised, so gcc is not an option.
AFL is a standalone tool that uses binary rewriting to instrument the code being tested. It supplies wrapper compilers that call either Clang or gcc as necessary. The test driver is written as a main program that takes the random string from standard input, which means each run is a separate process. However, if you use Clang, there is a special “fast” mode that instruments your code as a compiler pass, rather than a final object code rewrite. This means the instrumentation itself can be optimized, producing faster binaries.
A infrequently used, but potentially powerful, type of fuzzing engine is based on symbolic execution. In the last few years there has been significant advances in SMT solvers, upon which this technology relies. Many symbolic execution engines are proprietary tools but I’ve heard positive things about Klee and hope to try it out someday.
# The Build System
I created the build flow in CMake in my own fork of json_spirit. From the CMake command line users can specify which sanitizer to use (address or memory) with the SANITIZER option. We choose the test driver based on whether the selected compiler (from the standard CMAKE_CXX_COMPILER option) appears to be one of the AFL wrappers. If not, LLVM_ROOT points us to the location of the libFuzzer code and we use the function-based driver.
Building with the memory sanitizer presents some unique challenges. This sanitizer tries to find uses of uninitialized memory, and accordingly must track the state of values throughout their lifetime. It intercepts calls to the C library for this purpose. Every other library used must be compiled with -fsanitize=memory to ensure no initialization is missed. This includes the C++ standard library. Even libFuzzer (if used) must be compiled this way. In the case of json_spirit, the libraries Boost.Locale and its dependency ICU need to be built separately with memory sanitizing enabled. Users supply paths to these libraries with another pair of command-line options.
Users of gcc have very limited options. libFuzzer requires a Clang-only compile switch, and gcc doesn’t have a memory sanitizer at this time, so the only supported choice is AFL with address sanitizing.
# Results and Recommendations
Fuzzing json_spirit has so far found only a single bug, in Boost.Spirit, where an inappropriate check for ASCII characters produces an assertion failure. It may be that more running time is required to access more paths in the code. I also suspect that using C++ strings and other higher-level abstractions (streams, variants etc.) tends to reduce the sort of bugs found in C-style code where pointer arithmetic, fixed-size buffers, memcpy etc. are common.
Going forward my default fuzzing approach will probably be AFL in “fast Clang” mode with address sanitizing. AFL is more mature and has more sophisticated mutation algorithms, and though its one-process-per-test approach is slower, the special Clang support compensates. Address sanitizing seems much faster than memory sanitizing, and you can always re-run all the “interesting” (unique path) test cases afterwards with msan turned on instead.
# Quickstart
If you’d like to run fuzzing on your own code using this infrastructure, I suggest:
1. Copy fuzzing/CMakeLists.txt and replace json_spirit references with your code, built as a library.
2. Rewrite fuzzing/fuzz_onecase.cpp to be your own test driver.
I hope this proves helpful to someone. |
# Connectivity of Local Fusion Graphs for Finite Simple Groups
Ballantyne, John and Rowley, Peter (2012) Connectivity of Local Fusion Graphs for Finite Simple Groups. [MIMS Preprint]
## Abstract
The main result proved here is that for a finite simple group $G$ and a $G$-conjugacy class of involutions $X$ the local fusion graph $\mathcal{F}(G,X)$ is a connected graph.
Item Type: MIMS Preprint finite simple group; local fusion graph; connected; involution MSC 2010, the AMS's Mathematics Subject Classification > 20 Group theory and generalizations Dr John Ballantyne 09 Nov 2012 08 Nov 2017 18:18 http://eprints.maths.manchester.ac.uk/id/eprint/1911 |
## Stream: new members
### Topic: Showing X is a subset of itself
#### Anna Hollands (Jun 29 2020 at 23:10):
Hi! I'm trying to prove a lemma that requires me to find a subset of a set X st.... X works, but use X doesn't. How do I get it to recognise X as a valid subset of X?
#### Patrick Massot (Jun 29 2020 at 23:11):
(univ : set X) is what you are looking for.
#### Patrick Massot (Jun 29 2020 at 23:12):
X has type Type u for some universe u. It doesn't have type set X.
Last updated: May 08 2021 at 04:14 UTC |
Mark the correct alternative in the following question:
Question:
Mark the correct alternative in the following question:
Let R be a relation on the set N of natural numbers defined by nRm iff n divides m. Then, R is
(a) Reflexive and symmetric
(b) Transitive and symmetric
(c) Equivalence
(d) Reflexive, transitive but not symmetric
[NCERT EXEMPLAR]
Solution:
We have,
$R=\{(m, n): n$ divides $m ; m, n \in \mathbf{N}\}$
As, $m$ divides $m$
$\Rightarrow(m, m) \in R \forall m \in \mathbf{N}$
So, $R$ is reflexive
Since, $(2,1) \in R$ i. e. 1 divides 2
but 2 cannot divide 1 i. e. $(2,1) \notin R$
So, $R$ is not symmetric
Let $(m, n) \in R$ and $(n, p) \in R$. Then,
$n$ divides $m$ and $p$ divides $n$
$\Rightarrow p$ divides $m$
$\Rightarrow(m, p) \in R$
So, $R$ is transitive
Hence, the correct alternative is option (d). |
• question_answer The enthalpies of formation of ${{C}_{2}}{{H}_{2(g)}}$ and ${{C}_{6}}{{H}_{6(g)}}$at 298 K are 230 and $85\text{ }kJ\text{ }mo{{l}^{-1}}$respectively. The enthalpy change $\Delta H$ for the polymerisation of acetylene at 298 K is A) $+205\text{ }kJ\text{ }mo{{l}^{-1}}$ B) $-205\text{ }kJ\text{ }mo{{l}^{-1}}$C) $+605\text{ }kJ\text{ }mo{{l}^{-1}}$ D) $-605\text{ }kJ\text{ }mo{{l}^{-1}}$
$3{{C}_{2}}{{H}_{2}}\xrightarrow{Polymerisation}{{C}_{6}}{{H}_{6}}$ $\Delta H$ = enthalpy of product - enthalpy of reactant $=85-3(230)$ $=85-690$ $=-605\,kJ\,mo{{l}^{-1}}$ |
# Why does sampling from the posterior predictive distribution $p(x_{new} \mid x_1, \ldots x_n)$ work without having to average out the integral?
In a Bayesian model, the posterior predictive distribution is usually written as:
$$p(x_{new} \mid x_1, \ldots x_n) = \int_{-\infty}^{\infty} p(x_{new}\mid \mu) \ p(\mu \mid x_1, \ldots x_n)d\mu$$
for a mean parameter $\mu$. Then, inside most books, such as this link:
Sampling MCMC
It is claimed that it is often easier to sample from $p(x_{new} \mid x_1, \ldots x_n)$ using Monte Carlo methods. Commonly, the algorithm is to:
for $j=1 \ldots J$:
1) Sample $\mu^{\ j}$ from $p(\mu \mid x_1, \ldots x_n)$ then
2) Sample $x^{\ * j}$ from $p(x_{new} \mid \mu^{\ j})$.
Then, $x^{\ * 1}, \ldots, x^{\ * J}$ will be an iid sample from $p(x_{new} \mid x_1, \ldots x_n)$.
What confuses me is the validity of this technique. My understanding is that Monte Carlo approaches will approximate the integral, so in this case, why do the $x^{\ * j}$'s each constitute a sample from $p(x_{new} \mid x_1, \ldots x_n)$?
Why isn't is the case that the average of all those samples instead will be distributed as $p(x_{new} \mid x_1, \ldots x_n)$? I am under the assumption that I am creating a finite partition to approximate the integral above. Am I missing something? Thanks!
What you are actually doing with the two-step process you've outlined is sampling from the joint distribution $p(x_{new}, \mu \thinspace | \thinspace x_1, \dots, x_n)$, then ignoring the sampled values of $\mu$. It's not altogether intuitive, but, by ignoring the sampled values of $\mu$, you are integrating over it.
A simple example may make this clear. Consider sampling from $p_X(x \thinspace | \thinspace y) = 1/y \thinspace \text{I}(0,y)$, uniform over $(0,y)$, and $p_Y(y) = 1$, uniform over $(0,1)$. You should be able to see, intuitively, what $\int_0^1p_X(x \thinspace | \thinspace y)p_Y(y)dy$ will look like. We construct some simple, horribly inefficient, R code (written this way for expository purposes) to generate the samples:
samples <- data.frame(y=rep(0,10000), x=rep(0,10000))
for (i in 1:nrow(samples)) {
samples$y[i] <- runif(1) samples$x[i] <- runif(1, 0, samples$y[i]) } hist(samples$x)
samples is clearly a random sample from the joint distribution of $x$ and $y$. We ignore the $y$ values and construct a histogram of only the $x$ values, which looks like:
which hopefully matches your intuition.
If you think carefully about it, you will see that the samples of $x$ do not depend upon any particular value of $y$. Instead, they depend (collectively) on a sample of values of $y$. This is why ignoring the $y$ values is equivalent to integrating out $y$, at least from a random number generation perspective.
On the other hand, consider what happens if you average. You'll get just one number from your Monte Carlo run, namely, the average of the $x_{new}$ samples. This isn't what you want (in your case)!
• Thanks for your post, do you know if there's a mathematically rigorous way to think about it? Dec 6, 2015 at 17:40
I think you definetly have to mix over the sampled values eventually. There are also lecture notes by Peter Hoff on "Introduction to Bayesian Statistics for the Social Sciences" saying so. Otherwise you wouldn't have taken into account the masses recieved from the posterior. So, you build the empirical distribution of your samled values x^{*j} and then sample again from this distribution.
As an example: If your posterior was discrete (only point masses on atoms) then some of your parameter samples are going to take on the same values. If you finally mix over them, you take into account "how often" such parameter emerged from the posterior - put differently, how likely it is. Then averaging according to this appearances gives the posterior predictive which should appro. be the same as doing above procedure with the eventual mixing, at least if the sample size(s) is(are) large.
I think that the existing answers, which are very good, might be enhanced by an example with discrete random variables. We have $$p(x_{new} \mid x_1, \ldots x_n) = \int_{-\infty}^{\infty} p(x_{new},\mu \mid x_1, \ldots x_n)d\mu = \int_{-\infty}^{\infty} p(x_{new}\mid \mu) \ p(\mu \mid x_1, \ldots x_n)d\mu$$
To simplify, consider a $$\mu$$ that is binary: $$p(\mu = 1 \mid x_1 \dots x_n) = p$$ and $$p(\mu = 0 \mid x_1 \dots x_n) = 1-p$$. Suppose further that $$x_{new}$$ is binary with $$p(X=1)=\mu-1$$ and $$p(X=0)=\mu$$. I won't use these probabilities going forward, but you can see that $$x_{new}$$ depends on $$\mu$$.
Say we then draw 14 samples using $$\mu \sim p(\mu \mid x_1,\dots, x_n)$$ and $$x_{new} \sim p(x_{new} \mid \mu )$$. We get the following. As mentioned by @jbowman, we are actually sampling from $$p(x_{new}, \mu \mid x_1 \dots x_n)$$.
mu x_new
1. 1 0
2. 1 1
3. 0 0
4. 1 1
5. 0 0
6. 0 0
7. 0 0
8. 1 1
9. 1 1
10. 0 1
11. 1 0
12. 1 1
13. 0 1
14. 1 1
We can illustrate the fact that we are sampling from the joint $$p(x_{new}, \mu \mid x_1,\dots, x_n)$$ more explicitly by first constructing a table of counts.
x_new
0 1
-----------
0 6 1
mu
1 2 5
Dividing each entry by the total (6 + 1 + 2 +5 = 14) gives
x_new
0 1
-----------
0 0.43 0.07
mu
1 0.14 0.36
Which is the empirical joint distribution. Eg, our estimate of $$p(x_{new}=0, \mu=0)=.43$$. Hence our sampling procedure has given us the joint.
Finally, we will see why it is actually necessary to "evaluate" the integral (although not to average out the integral). This is implicit in @jbowman's answer when they said
It's not altogether intuitive, but, by ignoring the sampled values of 𝜇, you are integrating over it.
To obtain $$p(x_{new} \mid x_1 \dots x_n)$$, we simply sum over rows.
x_new
0 1
-----------
.57 .43
This is what's implied by "ignoring the sampled values of $$\mu$$" and this is the marginalization step. Another way this is commonly done is by constructing a histogram (by summing over rows, we have kind of constructed a histogram here).
So, the sampling procedure does not give us the marginal - in other words, it doesn't "work" according to your definition in the question. Rather, it gives us the joint, and we commonly (by ignoring $$\mu$$, by constructing a histogram, or by getting quantiles) marginalize over $$\mu$$. |
• July 10, 2020, 09:52:42 PM
• Welcome, Guest
Pages: 1 [2] Go Down
### AuthorTopic: FMB Plane drops from sky (Read 2122 times)
0 Members and 1 Guest are viewing this topic.
#### western0221
• Modder
• member
• Offline
• Posts: 6762
• Live in Japan
##### Re: FMB Plane drops from sky
« Reply #12 on: March 15, 2016, 07:16:50 AM »
Do you want to set a new object on the map ?
That is not mouse's left button click.
In default key settings, that "Insert" function is assigned on keyboard's Ctrl key + mouse left click , or keyborad's Insert key / 10key pad's "0" key.
In conf.ini
Code: [Select]
[HotKey builder]......Ctrl MouseLeft=insert+Insert=insert+NumPad-0=insert+
Logged
#### tomoose
• Modder
• member
• Offline
• Posts: 1237
• Iiiiiiiit's ME! Hurrah!!
##### Re: FMB Plane drops from sky
« Reply #13 on: March 15, 2016, 07:23:31 AM »
Torch;
That being said, the usual cause of "exploding on spawning" or "instant crashing" is either an aircraft mod which is incompatible or a loadout which doesn't exist.
Try the following;
Same mission and setup but make sure your aircraft is stock (i.e. in HSFX it should not have an asterisk beside the name). If it's not clear which F4U is stock then simply select any aircraft that you know for a fact is stock. Do not include any bombs/rockets simply set loadout as default. Do not include any other aircraft (i.e. just one aircraft). Do not use any taxi to takeoff etc, simply place your takeoff icon at the end of the runway. NOTE: the TAKEOFF icon indicates where the wheels leave the ground.
If that works then go back into FMB and try your original aircraft and see what happens. It's essentially a process of elimination to nail down what the cause is. Any problem like this I always recommend to go back and use stock objects/aircraft etc as that will usually indicate that the problem is mod-related. Then it's a question of trying to determine what mod or part of a mod is the guilty party.
One thing to note with HSFX and the "group takeoff" feature (if you intend to use it). I found this out while creating Battle of Britain scenarios.
If you use stock takeoffs (i.e. the planes appear on the runway nicely lined up behind each other), the TAKEOFF icon indicates where the wheels leave the ground.
If you use the "group" takeoff option (i.e. planes are lined up two-abreast, or four-abreast etc) then the TAKEOFF icon indicates where the takeoff run will start (i.e. the opposite of the stock takeoff). In other words the planes will start rolling from the TAKEOFF icon instead of that being where they actually leave the ground. Make sense??
Logged
#### Torch
• member
• Offline
• Posts: 35
##### Re: FMB Plane drops from sky
« Reply #14 on: March 15, 2016, 08:06:27 AM »
Western0221
Thanks for the tip.
Tomoose,
Makes perfect sense. I only used the stock Aircraft. However your additional information will surely make the difference.
Thanks!
Logged
Pages: 1 [2] Go Up
Page created in 0.013 seconds with 26 queries. |
5
# 7 Whak d 2 "Smflsqvz...
## Question
###### 7 Whak d 2 "Smflsqvz
7 Whak d 2 "Smflsqvz
## Answers
#### Similar Solved Questions
5 answers
##### Question HelpTho table bolow contains the amount thal & sample ol nine customers spent far Ilnch (S) ata fast-'ood restaurant Complete Pans bolox: 5.37 6.13 744 7,54 8 47 9.51 Consinuc 99% confconce Inlerva @sbmna t (or the populalion nean amount spent Ix lunch = fast-"ood restaurant; Jssuming normal distntuvon:( conidence Interva eaunale ftom 5.10 8.92 _ (Round = two decimal places needed ) Interprot the intarval constuco t Choose cone chaanstnn neioy9934 tho gamplo dala butuun
Question Help Tho table bolow contains the amount thal & sample ol nine customers spent far Ilnch (S) ata fast-'ood restaurant Complete Pans bolox: 5.37 6.13 744 7,54 8 47 9.51 Consinuc 99% confconce Inlerva @sbmna t (or the populalion nean amount spent Ix lunch = fast-"ood restaurant...
5 answers
2 answers
##### Scores on a recent national statistics exam were normally distributed with a mean of 82 and a standard deviation of 9. (You may need to use the appropriate appendix table or technology to answer this question.)(a)What is the probability that a randomly selected exam will have a score of at least 71? (Round your answer to four decimal places.) (b)What percentage of exams will have scores between 89 and 92? (Round your answer to two decimal places.) %(c)If the top 2.5% of test scores receive meri
Scores on a recent national statistics exam were normally distributed with a mean of 82 and a standard deviation of 9. (You may need to use the appropriate appendix table or technology to answer this question.)(a)What is the probability that a randomly selected exam will have a score of at least 71?...
4 answers
##### 31. Proof Let A be a diagonalizable n X n matrix and let P be an invertible n X n matrix such that B = P-IAP is the diagonal form of A. Prove that Ak = PBkP-1 where kis a positive integer: Finding a Power of a Matrix In Exercises 33-36, use the result of Exercise 31 to find the power of A shown: 2 0 ~2 35. A = 0 2 ~2 As 3 0 ~3
31. Proof Let A be a diagonalizable n X n matrix and let P be an invertible n X n matrix such that B = P-IAP is the diagonal form of A. Prove that Ak = PBkP-1 where kis a positive integer: Finding a Power of a Matrix In Exercises 33-36, use the result of Exercise 31 to find the power of A shown: 2 0...
5 answers
##### Moving to another quastion will savo this responsaQuestion 2For givon chemical raction; AHO 56.0 kJlmol nnd 450 -183 J(morK: What is the value of tha equllibrium constant, K, for this reaction at & temporature 0f 65, 0C? 0.404+0.7243.0BE- 36Mouing t0 another questionYave Ihls response
Moving to another quastion will savo this responsa Question 2 For givon chemical raction; AHO 56.0 kJlmol nnd 450 -183 J(morK: What is the value of tha equllibrium constant, K, for this reaction at & temporature 0f 65, 0C? 0.404+ 0.724 3.0BE- 36 Mouing t0 another question Yave Ihls response...
5 answers
##### Suppose that Bis a 5 x 5 matrix that is not diagonalizable and has an eigenspace of dimensionWhich of the following could be the characteristic polynomial of B?14?( - %)? 242( _ 1)2(+10) 3. A(A - 1)(A - 2)(A + 1)( - 4) A(A - 1)2( - 2)( - 4 5.A(A _ 5)4Select each valid possibility:OptionOption 2Option 3Option 4Option 5
Suppose that Bis a 5 x 5 matrix that is not diagonalizable and has an eigenspace of dimension Which of the following could be the characteristic polynomial of B? 14?( - %)? 242( _ 1)2(+10) 3. A(A - 1)(A - 2)(A + 1)( - 4) A(A - 1)2( - 2)( - 4 5.A(A _ 5)4 Select each valid possibility: Option Option 2...
5 answers
##### 33. (7 pointsB Below is - partial pedigree of hemophilia (X-linked recessive) F the Brteh E Family descended from Quecn Victoria, who belicved be the originav ecinite Bn this Rosjig pedigred; (a) (4 points) (no correchuntter point; correct; point; twa correcl, pointa; _II trte corrcct points; wrong answer; point) Circle three persons uto arc ccnain be carners (in addition - Qucen Victoria)(b) ( _ Pefnft What the probability that the person indicated bycanier?Albert}Vtom (141y eo34. (5 points)
33. (7 pointsB Below is - partial pedigree of hemophilia (X-linked recessive) F the Brteh E Family descended from Quecn Victoria, who belicved be the originav ecinite Bn this Rosjig pedigred; (a) (4 points) (no correchuntter point; correct; point; twa correcl, pointa; _II trte corrcct points; wrong ...
-- 0.021989-- |
## Friday, October 23, 2009
A jet engine is a reaction engine that discharges a fast moving jet of fluid to generate thrust in accordance with Newton's laws of motion. This broad definition of jet engines includes turbojets,
turbofans, rockets, ramjets, pulse jets and pump-jets. In general, most jet engines are internal combustion engines[1] but non-combusting forms also exist.
Simulation of a low bypass turbofan's airflow
In some common parlance, the term 'jet engine' is loosely referred to as an internal combustion duct engine, which typically consists of an engine with a rotary (rotating) air compressor powered by a turbine ("Brayton cycle"), with the leftover power providing thrust via a propelling nozzle. These types of jet engines are primarily used by jet aircraft for long distance travel. Early jet aircraft used turbojet engines which were relatively inefficient for subsonic flight. Modern subsonic jet aircraft usually use high-bypass turbofan engines which give high speeds, as well as (over long distances) better fuel efficiency than many other forms of transport.[citation needed]
About 7.2% of the oil used in 2004 was consumed by jet engines.[2] In 2007, the cost of jet fuel, while highly variable from one airline to another, averaged 26.5% of total operating costs, making it the single largest operating expense for most airlines.[3
## History
Jet engines can be dated back to the invention of the aeolipile before the first century AD. This device used steam power directed through two nozzles so as to cause a sphere to spin rapidly on its axis. So far as is known, it was not used for supplying mechanical power, and the potential practical applications of this invention were not recognized. It was simply considered a curiosity.
Jet propulsion only literally and figuratively took off with the invention of the rocket by the Chinese in the 13th century. Rocket exhaust was initially used in a modest way for fireworks but gradually progressed to propel formidable weaponry; and there the technology stalled for hundreds of years.
Archytas, the founder of mathematical mechanics, as described in the writings of Aulus Gellius five centuries after him, was reputed to have designed and built the first artificial, self-propelled flying device. This device was a bird-shaped model propelled by a jet of what was probably steam, said to have actually flown some 200 meters.
In Ottoman, Turkey in 1633 Lagari Hasan Çelebi took off with what was described to be a cone-shaped rocket and then glided with wings into a successful landing, winning a position in the Ottoman army. However, this was essentially a stunt. The problem was that rockets are simply too inefficient at low speeds to be useful for general aviation.
The Coandă-1910.
In 1910 Henri Coandă designed, built and piloted the first 'thermojet'-powered aircraft, known as the Coandă-1910, which he demonstrated publicly at the second International Aeronautic Salon in Paris. The powerplant used a 4-cylinder piston engine to power a compressor, which fed two burners for thrust, instead of using a propeller. At the airport of Issy-les-Moulineaux near Paris, Coandă lost control of the jet plane, which went off of the runway and caught fire. Fortunately, he escaped with minor injuries to his face and hands. Around that time, Coandă abandoned his experiments due to a lack of interest from the public, scientific and engineering institutions. It would be nearly 30 years until the next thermojet-powered aircraft, the Caproni Campini N.1 (sometimes referred to as C.C.2).
In 1913 René Lorin came up with a form of jet engine, the subsonic pulsejet, which would have been somewhat more efficient, but he had no way to achieve high enough speeds for it to operate, and the concept remained theoretical for quite some time.
However, engineers were beginning to realize that the piston engine was self-limiting in terms of the maximum performance which could be attained; the limit was essentially one of propeller efficiency.[4] This seemed to peak as blade tips approached the speed of sound. If engine, and thus aircraft, performance were ever to increase beyond such a barrier, a way would have to be found to radically improve the design of the piston engine, or a wholly new type of powerplant would have to be developed. This was the motivation behind the development of the gas turbine engine, commonly called a "jet" engine, which would become almost as revolutionary to aviation as the Wright brothers' first flight.
The earliest attempts at jet engines were hybrid designs in which an external power source first compressed air, which was then mixed with fuel and burned for jet thrust. In one such system, called a thermojet by Secondo Campini but more commonly, motorjet, the air was compressed by a fan driven by a conventional piston engine. Examples of this type of design were Henri Coandă's Coandă-1910 aircraft, and the much later Campini Caproni CC.2, and the Japanese Tsu-11 engine intended to power Ohka kamikaze planes towards the end of World War II. None were entirely successful and the CC.2 ended up being slower than the same design with a traditional engine and propeller combination.
Albert Fonó's ramjet-cannonball from 1915
Albert Fonó's German patent for jet Engines (January 1928- granted 1932). The third illustration is a turbojet
The key to a practical jet engine was the gas turbine, used to extract energy from the engine itself to drive the compressor. The gas turbine was not an idea developed in the 1930s: the patent for a stationary turbine was granted to John Barber in England in 1791. The first gas turbine to successfully run self-sustaining was built in 1903 by Norwegian engineer Ægidius Elling. Limitations in design and practical engineering and metallurgy prevented such engines reaching manufacture. The main problems were safety, reliability, weight and, especially, sustained operation.
In Hungary, Albert Fonó in 1915 devised a solution for increasing the range of artillery, comprising a gun-launched projectile which was to be united with a ramjet propulsion unit. This was to make it possible to obtain a long range with low initial muzzle velocities, allowing heavy shells to be fired from relatively lightweight guns. Fonó submitted his invention to the Austro-Hungarian Army but the proposal was rejected. In 1928 he applied for a German patent on aircraft powered by supersonic ramjets, and this was awarded in 1932.[5][6][7]
The first patent for using a gas turbine to power an aircraft was filed in 1921 by Frenchman Maxime Guillaume.[8] His engine was an axial-flow turbojet.
In 1923, Edgar Buckingham of the US National Bureau of Standard published a report[9] expressing scepticism that jet engines would be economically competitive with prop driven aircraft at the low altitudes and airspeeds of the period: "there does not appear to be, at present, any prospect whatever that jet propulsion of the sort here considered will ever be of practical value, even for military purposes."
Instead, by the 1930s, the piston engine in its many different forms (rotary and static radial, aircooled and liquid-cooled inline) was the only type of powerplant available to aircraft designers. This was acceptable as long as only low performance aircraft were required, and indeed all that were available.
The Whittle W.2/700 engine flew in the Gloster E.28/39, the first British aircraft to fly with a turbojet engine, and the Gloster Meteor
In 1928, RAF College Cranwell cadet [10]Frank Whittle formally submitted his ideas for a turbo-jet to his superiors. In October 1929 he developed his ideas further.[11] . On 16 January 1930 in England, Whittle submitted his first patent (granted in 1932).[12] The patent showed a two-stage axial compressor feeding a single-sided centrifugal compressor. Practical axial compressors were made possible by ideas from A.A.Griffith in a seminal paper in 1926 ("An Aerodynamic Theory of Turbine Design"). Whittle would later concentrate on the simpler centrifugal compressor only, for a variety of practical reasons. Whittle had his first engine running in April 1937. It was liquid-fuelled, and included a self-contained fuel pump. Whittle's team experienced near-panic when the engine would not stop, accelerating even after the fuel was switched off. It turned out that fuel had leaked into the engine and accumulated in pools. So the engine would not stop until all the leaked fuel had burned off. Whittle was unable to interest the government in his invention, and development continued at a slow pace.
Heinkel He 178, the world's first aircraft to fly purely on turbojet power
Jendrassik Cs-1, The first Turboprop engine. built in Hungarian Ganz works in 1938
In 1935 Hans von Ohain started work on a similar design in Germany, apparently unaware of Whittle's work.[13] His first engine was strictly experimental and could only run under external power, but he was able to demonstrate the basic concept. Ohain was then introduced to Ernst Heinkel, one of the larger aircraft industrialists of the day, who immediately saw the promise of the design. Heinkel had recently purchased the Hirth engine company, and Ohain and his master machinist Max Hahn were set up there as a new division of the Hirth company. They had their first HeS 1 centrifugal engine running by September 1937. Unlike Whittle's design, Ohain used hydrogen as fuel, supplied under external pressure. Their subsequent designs culminated in the gasoline-fuelled HeS 3 of 1,100 lbf (5 kN), which was fitted to Heinkel's simple and compact He 178 airframe and flown by Erich Warsitz in the early morning of August 27, 1939, from Rostock-Marienehe aerodrome, an impressively short time for development. The He 178 was the world's first jet plane.[14]
The world's first turboprop was the Jendrassik Cs-1 designed by the Hungarian mechanical engineer György Jendrassik. It was produced and tested in the Ganz factory in Budapest between 1938 and 1942. It was planned to fit to the Varga RMI-1 X/H twin-engined reconnaissance bomber designed by László Varga in 1940, but the program was cancelled. Jendrassik had also designed a small-scale 75 kW turboprop in 1937.
Whittle's engine was starting to look useful, and his Power Jets Ltd. started receiving Air Ministry money. In 1941 a flyable version of the engine called the W.1, capable of 1000 lbf (4 kN) of thrust, was fitted to the Gloster E28/39 airframe specially built for it, and first flew on May 15, 1941 at RAF Cranwell.
A picture of an early centrifugal engine (DH Goblin II) sectioned to show its internal components
A Scottish aircraft engine designer, Frank Halford, working from Whittle's ideas developed a "straight through" version of the centrifugal jet; his design became the de Havilland Goblin.
One problem with both of these early designs, which are called centrifugal-flow engines, was that the compressor worked by "throwing" (accelerating) air outward from the central intake to the outer periphery of the engine, where the air was then compressed by a divergent duct setup, converting its velocity into pressure. An advantage of this design was that it was already well understood, having been implemented in centrifugal superchargers, then in widespread use on piston engines. However, given the early technological limitations on the shaft speed of the engine, the compressor needed to have a very large diameter to produce the power required. This meant that the engines had a large frontal area, which made it less useful as an aircraft powerplant due to drag. A further disadvantage was that the air flow had to be "bent" to flow rearwards through the combustion section and to the turbine and tailpipe, adding complexity and lowering efficiency. Nevertheless, these types of engines had the major advantages of light weight, simplicity and reliability, and development rapidly progressed to practical airworthy designs.
A cutaway of the Junkers Jumo 004 engine.
Austrian Anselm Franz of Junkers' engine division (Junkers Motoren or Jumo) addressed these problems with the introduction of the axial-flow compressor. Essentially, this is a turbine in reverse. Air coming in the front of the engine is blown towards the rear of the engine by a fan stage (convergent ducts), where it is crushed against a set of non-rotating blades called stators (divergent ducts). The process is nowhere near as powerful as the centrifugal compressor, so a number of these pairs of fans and stators are placed in series to get the needed compression. Even with all the added complexity, the resulting engine is much smaller in diameter and thus, more aerodynamic. Jumo was assigned the next engine number in the RLM numbering sequence, 4, and the result was the Jumo 004 engine. After many lesser technical difficulties were solved, mass production of this engine started in 1944 as a powerplant for the world's first jet-fighter aircraft, the Messerschmitt Me 262 (and later the world's first jet-bomber aircraft, the Arado Ar 234). A variety of reasons conspired to delay the engine's availability, this delay caused the fighter to arrive too late to decisively impact Germany's position in World War II. Nonetheless, it will be remembered as the first use of jet engines in service.
In the UK, their first axial-flow engine, the Metrovick F.2, ran in 1941 and was first flown in 1943. Although more powerful than the centrifugal designs at the time, the Ministry considered its complexity and unreliability a drawback in wartime. The work at Metrovick led to the Armstrong Siddeley Sapphire engine which would be built in the US as the J65.
Following the end of the war the German jet aircraft and jet engines were extensively studied by the victorious allies and contributed to work on early Soviet and US jet fighters. The legacy of the axial-flow engine is seen in the fact that practically all jet engines on fixed wing aircraft have had some inspiration from this design.
Centrifugal-flow engines have improved since their introduction. With improvements in bearing technology the shaft speed of the engine was increased, greatly reducing the diameter of the centrifugal compressor. The short engine length remains an advantage of this design, particularly for use in helicopters where overall size is more important than frontal area. Also as their engine components are more robust they are less liable to foreign object damage than axial-flow compressor engines.
Although German designs were more advanced aerodynamically, the combination of simplicity and the lack of requisite rare metals for the necessary advanced metallurgy (such as tungsten, chromium and titanium) for high-stress components such as turbine blades and bearings, etc) meant that the later produced German engines had a short service life and had to be changed after 10–25 hours. British engines were also widely manufactured under license in the US (see Tizard Mission), and were sold to Soviet Russia who reverse engineered them with the Nene going on to power the famous MiG-15. American and Soviet designs, independent axial-flow types for the most part, would strive to attain superior performance until the 1960s, although the General Electric J47 provided excellent service in the F-86 Sabre in the 1950s.
By the 1950s the jet engine was almost universal in combat aircraft, with the exception of cargo, liaison and other specialty types. By this point some of the British designs were already cleared for civilian use, and had appeared on early models like the de Havilland Comet and Avro Canada Jetliner. By the 1960s all large civilian aircraft were also jet powered, leaving the piston engine in such low-cost niche roles such as cargo flights.
Relentless improvements in the turboprop pushed the piston engine (an internal combustion engine) out of the mainstream entirely, leaving it serving only the smallest general aviation designs and some use in drone aircraft. The ascension of the jet engine to almost universal use in aircraft took well under twenty years.
However, the story was not quite at an end, for the efficiency of turbojet engines was still rather worse than piston engines, but by the 1970s with the advent of high bypass jet engines, an innovation not foreseen by the early commentators like Edgar Buckingham, at high speeds and high altitudes that seemed absurd to them, only then did the fuel efficiency finally exceed that of the best piston and propeller engines,[15] and the dream of fast, safe, economical travel around the world finally arrived, and their dour, if well founded for the time, predictions that jet engines would never amount to much, were killed forever.
## Types
There are a large number of different types of jet engines, all of which achieve forward thrust from the principle of jet propulsion.
Water jet For propelling water rockets and jetboats; squirts water out the back through a nozzle In boats, can run in shallow water, high acceleration, no risk of engine overload (unlike propellers), less noise and vibration, highly maneuverable at all boat speeds, high speed efficiency, less vulnerable to damage from debris, very reliable, more load flexibility, less harmful to wildlife Can be less efficient than a propeller at low speed, more expensive, higher weight in boat due to entrained water, will not perform well if boat is heavier than the jet is sized for
Motorjet Works like a turbojet but instead of a turbine driving the compressor a piston engine drives it. Higher exhaust velocity than a propeller, offering better thrust at high speed Heavy, inefficient and underpowered. Examples include: Coandă-1910 and Caproni Campini N.1.
Turbojet A tube with a compressor and turbine sharing a common shaft with a burner in between and a propelling nozzle for the exhaust.[16] Uses a high exhaust gas velocity to produce thrust. Has a much higher core flow than bypass type engines Simplicity of design, efficient at supersonic speeds (~M2) A basic design, misses many improvements in efficiency and power for subsonic flight, relatively noisy.
Low-bypass Turbofan One- or two-stage fan added in front bypasses a proportion of the air through a bypass duct straight to the nozzle/afterburner, avoiding the combustion chamber, with the rest being heated in the combustion chamber and passing through the turbine.[17] Compared with its turbojet ancestor, this allows for more efficient operation with somewhat less noise. This is the engine of high-speed military aircraft, some smaller private jets, and older civilian airliners such as the Boeing 707, the McDonnell Douglas DC-8, and their derivatives. As with the turbojet, the design is aerodynamic, with only a modest increase in diameter over the turbojet required to accommodate the bypass fan and chamber. It is capable of supersonic speeds with minimal thrust drop-off at high speeds and altitudes yet still more efficient than the turbojet at subsonic operation. Noisier and less efficient than high-bypass turbofan, with less static (Mach 0) thrust. Added complexity to accommodate dual shaft designs. More inefficient than a turbojet around M2 due to higher cross-sectional area.
High-bypass Turbofan First stage compressor drastically enlarged to provide bypass airflow around engine core, and it provides significant amounts of thrust. Compared to the low-bypass turbofan and no-bypass turbojet, the high-bypass turbofan works on the principle of moving a great deal of air somewhat faster, rather than a small amount extremely fast.[17] Most common form of jet engine in civilian use today- used in airliners like the Boeing 747, most 737s, and all Airbus aircraft. Quieter around 10 to 20 percent more than the turbojet engine due to greater mass flow and lower total exhaust speed and more efficient for a useful range of subsonic airspeeds for same reason, cooler exhaust temperature. Less noisy and exhibit much better efficiency than low bypass turbofans. Greater complexity (additional ducting, usually multiple shafts) and the need to contain heavy blades. Fan diameter can be extremely large, especially in high bypass turbofans such as the GE90. More subject to FOD and ice damage. Top speed is limited due to the potential for shockwaves to damage engine. Thrust lapse at higher speeds, which necessitates huge diameters and introduces additional drag.
Rocket Carries all propellants and oxidants on-board, emits jet for propulsion[18] Very few moving parts, Mach 0 to Mach 25+, efficient at very high speed (> Mach 5.0 or so), thrust/weight ratio over 100, no complex air inlet, high compression ratio, very high speed (hypersonic) exhaust, good cost/thrust ratio, fairly easy to test, works in a vacuum-indeed works best exoatmospheric which is kinder on vehicle structure at high speed, fairly small surface area to keep cool, and no turbine in hot exhaust stream. Needs lots of propellant- very low specific impulse — typically 100–450 seconds. Extreme thermal stresses of combustion chamber can make reuse harder. Typically requires carrying oxidiser on-board which increases risks. Extraordinarily noisy.
Ramjet Intake air is compressed entirely by speed of oncoming air and duct shape (divergent), and then it goes through a burner section where it is heated and then passes through a propelling nozzle[19] Very few moving parts, Mach 0.8 to Mach 5+, efficient at high speed (> Mach 2.0 or so), lightest of all air-breathing jets (thrust/weight ratio up to 30 at optimum speed), cooling much easier than turbojets as no turbine blades to cool. Must have a high initial speed to function, inefficient at slow speeds due to poor compression ratio, difficult to arrange shaft power for accessories, usually limited to a small range of speeds, intake flow must be slowed to subsonic speeds, noisy, fairly difficult to test, finicky to keep lit.
Turboprop (Turboshaft similar) Strictly not a jet at all — a gas turbine engine is used as a powerplant to drive a propeller shaft (or rotor in the case of a helicopter) High efficiency at lower subsonic airspeeds (300 knots plus), high shaft power to weight Limited top speed (aeroplanes), somewhat noisy, complex transmission
Propfan/Unducted Fan Turbojet engine that also drives one or more propellers. Similar to a turbofan without the fan cowling. Higher fuel efficiency, potentially less noisy than turbofans, could lead to higher-speed commercial aircraft, popular in the 1980s during fuel shortages Development of propfan engines has been very limited, typically more noisy than turbofans, complexity
Pulsejet Air is compressed and combusted intermittently instead of continuously. Some designs use valves. Very simple design, commonly used on model aircraft Noisy, inefficient (low compression ratio), works poorly on a large scale, valves on valved designs wear out quickly
Pulse detonation engine Similar to a pulsejet, but combustion occurs as a detonation instead of a deflagration, may or may not need valves Maximum theoretical engine efficiency Extremely noisy, parts subject to extreme mechanical fatigue, hard to start detonation, not practical for current use
Air-augmented rocket Essentially a ramjet where intake air is compressed and burnt with the exhaust from a rocket Mach 0 to Mach 4.5+ (can also run exoatmospheric), good efficiency at Mach 2 to 4 Similar efficiency to rockets at low speed or exoatmospheric, inlet difficulties, a relatively undeveloped and unexplored type, cooling difficulties, very noisy, thrust/weight ratio is similar to ramjets.
Scramjet Similar to a ramjet without a diffuser; airflow through the entire engine remains supersonic Few mechanical parts, can operate at very high Mach numbers (Mach 8 to 15) with good efficiencies[20] Still in development stages, must have a very high initial speed to function (Mach >6), cooling difficulties, very poor thrust/weight ratio (~2), extreme aerodynamic complexity, airframe difficulties, testing difficulties/expense
Turborocket A turbojet where an additional oxidizer such as oxygen is added to the airstream to increase maximum altitude Very close to existing designs, operates in very high altitude, wide range of altitude and airspeed Airspeed limited to same range as turbojet engine, carrying oxidizer like LOX can be dangerous. Much heavier than simple rockets.
Precooled jets / LACE Intake air is chilled to very low temperatures at inlet in a heat exchanger before passing through a ramjet and/or turbojet and/or rocket engine. Easily tested on ground. Very high thrust/weight ratios are possible (~14) together with good fuel efficiency over a wide range of airspeeds, mach 0-5.5+; this combination of efficiencies may permit launching to orbit, single stage, or very rapid, very long distance intercontinental travel. Exists only at the lab prototyping stage. Examples include RB545, Reaction Engines SABRE, ATREX. Requires liquid hydrogen fuel which has very low density and heavily insulated tankage.
## Uses
Jet engines are usually used as aircraft engines for jet aircraft. They are also used for cruise missilesgines they are used for fireworks, model rocketry, spaceflight, and military missiles.
Jet engines have also been used to propel high speed cars, particularly drag racers, with the all-time record held by a rocket car. A turbofan powered car ThrustSSC currently holds the land speed record.
Jet engine designs are frequently modified to turn them into gas turbine engines which are used in a wide variety of industrial applications. These include electrical power generation, powering water, natural gas, or oil pumps, and providing propulsion for ships and locomotives. Industrial gas turbine can create up to 50,000 shaft horsepower. Many of these engines are derived from older military turbojets such as the Pratt & Whitney J57 and J75 models. There is also a derivative of the P&W JT8D low-bypass turbofan that creates up to 35,000 HP.
and unmanned aerial vehicles.
In the form of rocket en
## Major components
The major components of a jet engine are similar across the major different types of engines, although not all engine types have all components. The major parts include:
• Cold Section:
• Air intake (Inlet) — For subsonic aircraft, the air intake to a jet engine consists essentially of an opening which is designed to minimise drag. The air reaching the compressor of a normal jet engine must be travelling below the speed of sound, even for supersonic aircraft, to allow smooth flow through compressor and turbine blades. At supersonic flight speeds, shockwaves form in the intake system, these help compress the air, but also there is some inevitable reduction in the recovered pressure at inlet to the compressor. Some supersonic intakes use devices, such as a cone or a ramp, to increase pressure recovery.
• Compressor or Fan — The compressor is made up of stages. Each stage consists of vanes which rotate, and stators which remain stationary. As air is drawn deeper through the compressor, its heat and pressure increases. Energy is derived from the turbine (see below), passed along the shaft.
• Bypass ducts — Much of the thrust of essentially all modern jet engines comes from air from the front compressor that bypasses the combustion chamber and gas turbine section that leads directly to the nozzle or afterburner (where fitted).
• Common:
• Shaft — The shaft connects the turbine to the compressor, and runs most of the length of the engine. There may be as many as three concentric shafts, rotating at independent speeds, with as many sets of turbines and compressors. Other services, like a bleed of cool air, may also run down the shaft.
• Diffuser section: - This section is a divergent duct that utilizes Bernoulli's principle to decrease the velocity of the compressed air to allow for easier ignition. And, at the same time, continuing to increase the air pressure before it enters the combustion chamber.
• Hot section:
• Combustor or Can or Flameholders or Combustion Chamber — This is a chamber where fuel is continuously burned in the compressed air.
A blade with internal cooling as applied in the high-pressure turbine
• Turbine — The turbine is a series of bladed discs that act like a windmill, gaining energy from the hot gases leaving the combustor. Some of this energy is used to drive the compressor, and in some turbine engines (ie turboprop, turboshaft or turbofan engines), energy is extracted by additional turbine discs and used to drive devices such as propellers, bypass fans or helicopter rotors. One type, a free turbine, is configured such that the turbine disc driving the compressor rotates independently of the discs that power the external components. Relatively cool air, bled from the compressor, may be used to cool the turbine blades and vanes, to prevent them from melting.
• Afterburner or reheat (chiefly UK) — (mainly military) Produces extra thrust by burning extra fuel, usually inefficiently, to significantly raise Nozzle Entry Temperature at the exhaust. Owing to a larger volume flow (i.e. lower density) at exit from the afterburner, an increased nozzle flow area is required, to maintain satisfactory engine matching, when the afterburner is alight.
• Exhaust or Nozzle — Hot gases leaving the engine exhaust to atmospheric pressure via a nozzle, the objective being to produce a high velocity jet. In most cases, the nozzle is convergent and of fixed flow area.
• Supersonic nozzle — If the Nozzle Pressure Ratio (Nozzle Entry Pressure/Ambient Pressure) is very high, to maximize thrust it may be worthwhile, despite the additional weight, to fit a convergent-divergent (de Laval) nozzle. As the name suggests, initially this type of nozzle is convergent, but beyond the throat (smallest flow area), the flow area starts to increase to form the divergent portion. The expansion to atmospheric pressure and supersonic gas velocity continues downstream of the throat, whereas in a convergent nozzle the expansion beyond sonic velocity occurs externally, in the exhaust plume. The former process is more efficient than the latter.
The various components named above have constraints on how they are put together to generate the most efficiency or performance. The performance and efficiency of an engine can never be taken in isolation; for example fuel/distance efficiency of a supersonic jet engine maximises at about mach 2, whereas the drag for the vehicle carrying it is increasing as a square law and has much extra drag in the transonic region. The highest fuel efficiency for the overall vehicle is thus typically at Mach ~0.85.
For the engine optimisation for its intended use, important here is air intake design, overall size, number of compressor stages (sets of blades), fuel type, number of exhaust stages, metallurgy of components, amount of bypass air used, where the bypass air is introduced, and many other factors. For instance, let us consider design of the air intake.
## Common types
There are two types of jet engine that are seen commonly today, the turbofan which is used on almost all commercial airliners, and rocket engines which are used for spaceflight and other terrestrial uses such as ejector seats, flares, fireworks etc.
### Turbofan engines
an animated turbofan
Most modern jet engines are actually turbofans, where the low pressure compressor acts as a fan, supplying supercharged air not only to the engine core, but to a bypass duct. The bypass airflow either passes to a separate 'cold nozzle' or mixes with low pressure turbine exhaust gases, before expanding through a 'mixed flow nozzle'.
Turbofans are used for airliners because they give an exhaust speed that is better matched for subsonic airliners. At airliners' flight speed, conventional turbojet engines generate an exhaust that ends up travelling very fast backwards, and this wastes energy. By emitting the exhaust so that it ends up travelling more slowly, better fuel consumption is achieved as well as higher thrust at low speeds. In addition, the lower exhaust speed gives much lower noise.
In the 1960 s there was little difference between civil and military jet engines, apart from the use of afterburning in some (supersonic) applications. Civil turbofans today have a low exhaust speed (low specific thrust -net thrust divided by airflow) to keep jet noise to a minimum and to improve fuel efficiency. Consequently the bypass ratio (bypass flow divided by core flow) is relatively high (ratios from 4:1 up to 8:1 are common). Only a single fan stage is required, because a low specific thrust implies a low fan pressure ratio.
Today's military turbofans, however, have a relatively high specific thrust, to maximize the thrust for a given frontal area, jet noise being of less concern in military uses relative to civil uses. Multistage fans are normally needed to reach the relatively high fan pressure ratio needed for high specific thrust. Although high turbine inlet temperatures are often employed, the bypass ratio tends to be low, usually significantly less than 2.0.
### Rocket engines
A common form of jet engine is the rocket engine.
Rocket engines are used for high altitude flights because they give very high thrust and their lack of reliance on atmospheric oxygen allows them to operate at arbitrary altitudes.
This is used for launching satellites, space exploration and manned access, and permitted landing on the moon in 1969.
However, the high exhaust speed and the heavier, oxidiser-rich propellant results in more propellant use than turbojets, and their use is largely restricted to very high altitudes, very high speeds, or where very high accelerations are needed as rocket engines themselves have a very high thrust-to-weight ratio.
An approximate equation for the net thrust of a rocket engine is:
$F = \dot m g_0 I_{sp-vac} - A_e P \;$
Where F is the thrust, Isp(vac) is the specific impulse, g0 is a standard gravity, $\dot m$ is the propellant flow in kg/s, Ae is the area of the exhaust bell at the exit, and P is the atmospheric pressure.
## General physical principles
All jet engines are reaction engines that generate thrust by emitting a jet of fluid rearwards at relatively high speed. The forces on the inside of the engine needed to create this jet give a strong thrust on the engine which pushes the craft forwards.
Jet engines make their jet from propellant from tankage that is attached to the engine (as in a 'rocket') as well as in duct engines (those commonly used on aircraft) by ingesting an external fluid (very typically air) and expelling it at higher speed.
### Thrust
The motion impulse of the engine is equal to the fluid mass multiplied by the speed at which the engine emits this mass:
I = mc
where m is the fluid mass per second and c is the exhaust speed. In other words, a vehicle gets the same thrust if it outputs a lot of exhaust very slowly, or a little exhaust very quickly. (In practice parts of the exhaust may be faster than others, but it is the average momentum that matters, and thus the important quantity is called the effective exhaust speed - c here.)
However, when a vehicle moves with certain velocity v, the fluid moves towards it, creating an opposing ram drag at the intake:
mv
Most types of jet engine have an intake, which provides the bulk of the fluid exiting the exhaust. Conventional rocket motors, however, do not have an intake, the oxidizer and fuel both being carried within the vehicle. Therefore, rocket motors do not have ram drag; the gross thrust of the nozzle is the net thrust of the engine. Consequently, the thrust characteristics of a rocket motor are different from that of an air breathing jet engine, and thrust is independent of speed.
The jet engine with an intake duct is only useful if the velocity of the gas from the engine, c, is greater than the vehicle velocity, v, as the net engine thrust is the same as if the gas were emitted with the velocity cv. So the thrust is actually equal to
S = m(cv)
This equation shows that as v approaches c, a greater mass of fluid must go through the engine to continue to accelerate at the same rate, but all engines have a designed limit on this. Additionally, the equation implies that the vehicle can't accelerate past its exhaust velocity as it would have negative thrust.
### Energy efficiency
Dependence of the energy efficiency (η) upon the vehicle speed/exhaust speed ratio (v/c) for air-breathing jet and rocket engines
Energy efficiency (η) of jet engines installed in vehicles has two main components, cycle efficiency (ηc)- how efficiently the engine can accelerate the jet, and propulsive efficiency (ηp)-how much of the energy of the jet ends up in the vehicle body rather than being carried away as kinetic energy of the jet.
Even though overall energy efficiency η is simply:
η = ηpηc
For all jet engines the propulsive efficiency is highest when the engine emits an exhaust jet at a speed that is the same as, or nearly the same as, the vehicle velocity as this gives the smallest residual kinetic energy.(Note:[21]) The exact formula for air-breathing engines moving at speed v with an exhaust velocity c is given in the literature as:[22] is
$\eta_p = \frac{2}{1 + \frac{c}{v}}$
And for a rocket:
$\eta_p= \frac {2 \frac {v} {c}} {1 + ( \frac {v} {c} )^2 }$[23]
In addition to propulsive efficiency, another factor is cycle efficiency; essentially a jet engine is typically a form of heat engine. Heat engine efficiency is determined by the ratio of temperatures that are reached in the engine to that they are exhausted at from the nozzle, which in turn is limited by the overall pressure ratio that can be achieved. Cycle efficiency is highest in rocket engines (~60+%), as they can achieve extremely high combustion temperatures and can have very large, energy efficient nozzles. Cycle efficiency in turbojet and similar is nearer to 30%, the practical combustion temperatures and nozzle efficiencies are much lower.
Specific impulse as a function of speed for different jet types with kerosene fuel (hydrogen Isp would be about twice as high). Although efficiency plummets with speed, greater distances are covered, it turns out that efficiency per unit distance (per km or mile) is roughly independent of speed for jet engines as a group; however airframes become inefficient at supersonic speeds
### Fuel/propellant consumption
A closely related (but different) concept to energy efficiency is the rate of consumption of propellant mass. Propellant consumption in jet engines is measured by Specific Fuel Consumption, Specific impulse or Effective exhaust velocity. They all measure the same thing, specific impulse and effective exhaust velocity are strictly proportional, whereas specific fuel consumption is inversely proportional to the others.
For airbreathing engines such as turbojets energy efficiency and propellant (fuel) efficiency are much the same thing, since the propellant is a fuel and the source of energy. In rocketry, the propellant is also the exhaust, and this means that a high energy propellant gives better propellant efficiency but can in some cases actually can give lower energy efficiency.
Engine type scenario SFC in lb/(lbf·h) SFC in g/(kN·s) Isp in s Effective exhaust velocity (m/s)
NK-33 rocket engine vacuum 10.9 309 330 3,240
SSME rocket engine Space Shuttle vacuum 7.95 225 453 4,423
Ramjet M1 4.5 127 800 7,877
J-58 turbojet SR-71 at M3.2 (wet) 1.9 53.8 1,900 18,587
Rolls-Royce/Snecma Olympus 593 Concorde M2 cruise (dry) 1.195[24] 33.8 3,012 29,553
CF6-80C2B1F turbofan Boeing 747-400 cruise 0.605[24] 17.1 5,950 58,400
General Electric CF6 turbofan sea level 0.307 8.696 11,700 115,000
[24]
It can be seen that the subsonic turbofans such as General Electric's CF6 uses a lot less fuel to generate thrust for a second than Concorde's turbojet, the 593. However, since energy is force times distance and the distance per second is greater for Concorde, the actual power generated by the engine for the same amount of fuel is higher for Concorde at Mach 2 cruise than the CF6- Concorde's engines are more efficient for thrust per mile, indeed, the most efficient ever.[25]
### Thrust-to-weight ratio
The thrust to weight ratio of jet engines of similar principles varies somewhat with scale, but mostly is a function of engine construction technology. Clearly for a given engine, the lighter the engine, the better the thrust to weight is, the less fuel is used to compensate for drag due to the lift needed to carry the engine weight, or to accelerate the mass of the engine.
As can be seen in the following table, rocket engines generally achieve very much higher thrust to weight ratios than duct engines such as turbojet and turbofan engines. This is primarily because rockets almost universally use dense liquid or solid reaction mass which gives a much smaller volume and hence the pressurisation system that supplies the nozzle is much smaller and lighter for the same performance. Duct engines have to deal with air which is 2-3 orders of magnitude less dense and this gives pressures over much larger areas, and which in turn results in more engineering materials being needed to hold the engine together and for the air compressor.
Jet or Rocket engine Mass, kg Jet or rocket thrust, kN Thrust-to-weight ratio
RD-0410 nuclear rocket engine[26][27] 2000 35.2 1.8
J-58 (SR-71 Blackbird jet engine)[28]
5.2
Concorde's Rolls-Royce/Snecma Olympus 593
turbojet with reheat[29][30]
3175 169.2 5.4
RD-0750 rocket engine, three-propellant mode[31] 4621 1413 31.2
RD-0146 rocket engine[26] 260 98 38.5
Space shuttle's SSME rocket engine[32]
73.12
RD-180 rocket engine[33] 5393 4152 78.6
F-1 (Saturn V first stage)[34] 8391 7740.5 94.1
NK-33 rocket engine sea-level[35] 1222 1510 126.1
NK-33 rocket engine vaccum[35] 1222 1638 136.8
Rocket thrusts are vaccuum thrusts unless otherwise noted
### Comparison of types
and Comparative suitability for (left to right) turboshaft, low bypassturbojet to fly at 10 km altitude in various speeds. Horizontal axis - speed, m/s. Vertical axis displays engine efficiency.
Turboprops obtain little thrust from jet effect, but are useful for comparison. They are gas turbine engines that have a rotating fan that takes and accelerates the large mass of air but by a relatively small change in speed. This low speed limits the speed of any propeller driven airplane. When the plane speed exceeds this limit, propellers no longer provide any thrust (c-v <>
Turbojets accelerate a much smaller mass of the air and burned fuel, but they emit it at the much higher speeds possible with a de Laval nozzle. This is why they are suitable for supersonic and higher speeds.
Low bypass turbofans have the mixed exhaust of the two air flows, running at different speeds (c1 and c2). The thrust of such engine is
S = m1 (c1 - v) + m2 (c2 - v)
where m1 and m2 are the air masses, being blown from the both exhausts. Such engines are effective at lower speeds, than the pure jets, but at higher speeds than the turboshafts and propellers in general. For instance, at the 10 km altitude, turboshafts are most effective at about Mach 0.4 (0.4 times the speed of sound), low bypass turbofans become more effective at about Mach 0.75 and turbojets become more effective than mixed exhaust engines when the speed approaches Mach 2-3.
Rocket engines have extremely high exhaust velocity and thus are best suited for high speeds (hypersonic) and great altitudes. At any given throttle, the thrust and efficiency of a rocket motor improves slightly with increasing altitude (because the back-pressure falls thus increasing net thrust at the nozzle exit plane), whereas with a turbojet (or turbofan) the falling density of the air entering the intake (and the hot gases leaving the nozzle) causes the net thrust to decrease with increasing altitude. Rocket engines are more efficient than even scramjets above roughly Mach 15.[36]
### Altitude and speed
With the exception of scramjets, jet engines, deprived of their inlet systems can only accept air at around half the speed of sound. The inlet system's job for transonic and supersonic aircraft is to slow the air and perform some of the compression.
The limit on maximum altitude for engines is set by flammability- at very high altitudes the air becomes too thin to burn, or after compression, too hot. For turbojet engines altitudes of about 40 km appear to be possible, whereas for ramjet engines 55 km may be achievable. Scramjets may theoretically manage 75 km.[37] Rocket engines of course have no upper limit.
Flying faster compresses the air in at the front of the engine, but ultimately the engine cannot go any faster without melting. The upper limit is usually thought to be about Mach 5-8, except for scramjets which may be able to achieve about Mach 15 or more, as they avoid slowing the air.
### Noise
Noise is due to shockwaves that form when the exhaust jet interacts with the external air. The intensity of the noise is proportional to the thrust as well as proportional to the fourth power of the jet velocity.Generally then, the lower speed exhaust jets emitted from engines such as high bypass turbofans are the quietest, whereas the fastest jets are the loudest.
Although some variation in jet speed can often be arranged from a jet engine (such as by throttling back and adjusting the nozzle) it is difficult to vary the jet speed from an engine over a very wide range. Therefore since engines for supersonic vehicles such as Concorde, military jets and rockets inherently need to have supersonic exhaust at top speed, so these vehicles are especially noisy even at low speeds.
### J-58 combined ramjet/turbojet
The SR-71 Blackbird's Pratt & Whitney J58 engines were rather unusual. They could convert in flight from being largely a turbojet to being largely a compressor-assisted ramjet. At high speeds (above Mach 2.4), the engine used variable geometry vanes to direct excess air through 6 bypass pipes from downstream of the fourth compressor stage into the afterburner.[38] 80% of the SR-71's thrust at high speed was generated in this way, giving much higher thrust, improving specific impulse by 10-15%, and permitting continuous operation at Mach 3.2. The name coined for this setup is turbo-ramjet.
### Hydrogen fuelled air-breathing jet engines
Jet engines can be run on almost any fuel. Hydrogen is a highly desirable fuel, as, although the energy per mole is not unusually high, the molecule is very much lighter than other molecules. The energy per kg of hydrogen is twice that of more common fuels and this gives twice the specific impulse. In addition, jet engines running on hydrogen are quite easy to build—the first ever turbojet was run on hydrogen. Also, although not duct engines, hydrogen-fueled rocket engines have seen extensive use.
However, in almost every other way, hydrogen is problematic. The downside of hydrogen is its density; in gaseous form the tanks are impractical for flight, but even in the form of liquid hydrogen it has a density one fourteenth that of water. It is also deeply cryogenic and requires very significant insulation that precludes it being stored in wings. The overall vehicle would end up being very large, and difficult for most airports to accommodate. Finally, pure hydrogen is not found in nature, and must be manufactured either via steam reforming or expensive electrolysis. Nevertheless, research is ongoing and hydrogen-fueled aircraft designs do exist that may be feasible.
### Precooled jet engines
An idea originated by Robert P. Carmichael in 1955[39] is that hydrogen-fueled engines could theoretically have much higher performance than hydrocarbon-fueled engines if a heat exchanger were used to cool the incoming air. The low temperature allows lighter materials to be used, a higher mass-flow through the engines, and permits combustors to inject more fuel without overheating the engine.
This idea leads to plausible designs like Reaction Engines SABRE, that might permit single-stage-to-orbit launch vehicles,[40] and ATREX, which could permit jet engines to be used up to hypersonic speeds and high altitudes for boosters for launch vehicles. The idea is also being researched by the EU for a concept to achieve non-stop antipodal supersonic passenger travel at Mach 5 (Reaction Engines A2).
### Nuclear-powered ramjet
Project Pluto was a nuclear-powered ramjet, intended for use in a cruise missile. Rather than combusting fuel as in regular jet engines, air was heated using a high-temperature, unshielded nuclear reactor. This dramatically increased the engine burn time, and the ramjet was predicted to be able to cover any required distance at supersonic speeds (Mach 3 at tree-top height).
However, there was no obvious way to stop it once it had taken off, which would be a great disadvantage in any non-disposable application. Also, because the reactor was unshielded, it was dangerous to be in or around the flight path of the vehicle (although the exhaust itself wasn't radioactive). These disadvantages limit the application to warhead delivery system for all-out nuclear war, which it was being designed for.
### Scramjets
Scramjets are an evolution of ramjets that are able to operate at much higher speeds than any other kind of airbreathing engine. They share a similar structure with ramjets, being a specially-shaped tube that compresses air with no moving parts through ram-air compression. Scramjets, however, operate with supersonic airflow through the entire engine. Thus, scramjets do not have the diffuser required by ramjets to slow the incoming airflow to subsonic speeds.
Scramjets start working at speeds of at least Mach 4, and have a maximum useful speed of approximately Mach 17.[41] Due to aerodynamic heating at these high speeds, cooling poses a challenge to engineers.
## Environmental considerations
Jet engines are usually run on fossil fuel propellant, and are thus a source of carbon dioxide in the atmosphere. Jet engines can use biofuels or hydrogen, although the production of the latter is usually made from fossil fuels.
Some scientists believe that jet engines are also a source of global dimming due to the water vapour in the exhaust causing cloud formations.
Nitrogen compounds are also formed from the combustion process from atmospheric nitrogen. At low altitudes this is not thought to be especially harmful, but for supersonic aircraft that fly in the stratosphere some destruction of ozone may occur.
Sulphates are also emitted if the fuel contains sulphur.
## Safety and reliability
Jet engines are usually very reliable and have a very good safety record. However, failures do sometimes occur.
The most likely failure is compressor blade failure, and modern jet engines are designed with structures that can catch these blades and keep them contained within the engine casing. Verification of a jet engine design involves testing that this system works correctly.
### Bird strike
Bird strike is an aviation term for a collision between a bird and an aircraft. It is a common threat to aircraft safety and has caused a number of fatal accidents. In 1988 an Ethiopian Airlines Boeing 737 sucked pigeons into both engines during take-off and then crashed in an attempt to return to the Bahir Dar airport; of the 104 people aboard, 35 died and 21 were injured. In another incident in 1995, a Dassault Falcon 20 crashed at a Paris airport during an emergency landing attempt after sucking lapwings into an engine, which caused an engine failure and a fire in the airplane fuselage; all 10 people on board were killed. In 2009, on US Airways Flight 1549, a Airbus A320 aircraft sucked in one bird in each engine. The plane landed in the Hudson River after taking off from LaGuardia International Airport in New York City. There were no fatalities.[42]
Modern jet engines have the capability of surviving an ingestion of a bird. Small fast planes, such as military jet fighters, are at higher risk than big heavy multi-engine ones. This is due to the fact that the fan of a high-bypass turbofan engine, typical on transport aircraft, acts as a centrifugal separator to force ingested materials (birds, ice, etc.) to the outside of the fan's disc. As a result, such materials go through the relatively unobstructed bypass duct, rather than through the core of the engine, which contains the smaller and more delicate compressor blades. Military aircraft designed for high-speed flight typically have pure turbojet, or low-bypass turbofan engines, increasing the risk that ingested materials will get into the core of the engine to cause damage.
The highest risk of the bird strike is during the takeoff and landing, in low altitudes, which is in the vicinity of the airports.
### Uncontained failures
One class of failures that has caused accidents in particular is uncontained failures, where rotary parts of the engine break off and exit through the case. These can cut fuel or control lines, and can penetrate the cabin. Although fuel and control lines are usually duplicated for reliability, the crash of United Airlines Flight 232 was caused when hydraulic fluid lines for all three independent hydraulic systems were simultaneously severed by shrapnel from an uncontained engine failure. Prior to the United 232 crash, the probability of a simultaneous failure of all three hydraulic systems was considered as high as a billion-to-one. However, the statistical models used to come up with this figure did not account for the fact that the number-two engine was mounted at the tail close to all the hydraulic lines, nor the possibility that an engine failure would release many fragments in many directions. Since then, more modern aircraft engine designs have focused on keeping shrapnel from penetrating the cowling or ductwork, and have increasingly utilized high-strength composite materials to achieve the required penetration resistance while keeping the weight low. |
## Constructing Regular Polygons -- Primitive Roots of Unity
I have been working for quite a while on finding closed, simple expressions for primitive roots of unity, that is finding solutions of cyclotomic polynomials by radical expressions in the case of constructible regular polygons.
The long-term goal is to find solutions for the 5th, 17th, 257th, and 65537th root of unity. This is an onging project.
### Primitive 5th Root of Unity
$\sqrt[5]{1} = -\frac{1}{4} \quad +$ $\frac{1}{4} \cdot \sqrt{5} \quad +$ $\frac{1}{2} \cdot \sqrt{-\frac{5}{2} -\frac{1}{2} \cdot \sqrt{5}}$
### Primitive 17th Root of Unity
$\sqrt[17]{1} = -\frac{1}{16} \quad +$ $\frac{1}{16} \cdot \sqrt{17} \quad +$ $\frac{1}{8} \cdot \sqrt{\frac{17}{2} -\frac{1}{2} \cdot \sqrt{17}} \quad +$ $\frac{1}{4} \cdot \sqrt{\frac{17}{4} + \frac{3}{4} \cdot \sqrt{17} -\frac{1}{2} \cdot \sqrt{\frac{85}{2} + \frac{19}{2} \cdot \sqrt{17}}} \quad +$ $\frac{1}{4} \cdot \sqrt{ -\frac{17}{2} + \frac{1}{2} \cdot \sqrt{17} - \sqrt{\frac{17}{2} -\frac{1}{2} \cdot \sqrt{17}} + \sqrt{17 + 3 \cdot \sqrt{17} + \sqrt{170 + 38 \cdot \sqrt{17}}} }$
Go back |
# ACL 2012
What’s interesting?
A Deep Learning Tutorial by Richard Socher, Yoshua Bengio and Chris Manning.
– Selective Sharing for Multilingual Dependency Parsing. I always like the work from MIT people. This is an interesting paper on multilingual learning. The model does not assume the existing of parallel corpus, which makes it more practical. Moreover, it can transfer linguistic structures between unrelated languages.
– Unsupervised Morphology Rivals Supervised Morphology for Arabic MT. Very close to my thesis. In fact, I just need to read Mark Johnson’s 2009 paper Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars.
# One does not simply study Computational Complexity
Last night, I ran into an interesting question on Complexity while surfing the Internet. Here it is:
Let A be the language containing only the single string w where: w = 0 if God does exist and w = 1 otherwise. Is A decidable? Why or why not? (Note: the answer does not depend on your religious convictions.)
So what’s the proof. I come up with my proof that A is undecidable.
Consider a Turing Machine M used to decide A. If A is decidable, the Turing Machine M will stop and output God, the supreme being, in the output tape.
Forget about religious viewpoint, from logical point of view, let’s assume that God is the supreme being, who holds the most super power and can create anything. So, where does God come from? There must be another supreme being, who created God. The Turing Machine M, therefore continues working to output The One, who created God. But then, who created The One? Turing Machine M definitely enters infinite loop seeking for the original supreme being. Thus, A is undecidable.
Hey, does the proof sound familiar? Of course, it’s an analogue version of Turtles all the way down. In case you’ve never heard of this philosophical story, here is one of those version, found in the first page of my book Plato and a Platypus Walk into a Bar, Philogagging chapter.
Dimitri: If Atlas holds up the world, what holds up Atlas?
Tasso: Atlas stands on the back of a turtle.
Dimitri: But what does the turtle stand on?
Tasso: Another turtle.
Dimitri: And what does that turtle stand one?
Tasso: My dear Dimitri, it’s turtles all the way down!
# Recursion Theorem
(Kleene – Recursion Theorem) For any computable function
$f$ there exists an $e$ with $\phi_e = \phi_{f(e)}$.
Proof:
We define a partial recursive function $\theta$ by $\theta(u,x) = \phi_{\phi_u (u)} (x)$ . By s-m-n theorem or parameter theorem, we can find a recursive function $d$ such that
$\forall u: \phi_{d(u)}(x)=\theta(u,x)$ for $\forall x$.
Let $\phi_v = f \circ d$, choose $e=d(v)$ then we have
$\phi_e(x) = \phi_{d(v)}(x) = \theta(v,x) = \phi_{\phi_v (v)} (x) = \phi_{f \circ d(v)}(x) = \phi_{f(e)}(x)$
Using recursion theorem, we have a beautiful proof for Rice’s Theorem.
Let $C$ be a class of partial recursive functions. Set $\left\{e | \phi_e \in C \right\}$ is recursive, if and only if $C = \emptyset$ or $C$ contains all partial recursive functions.
Proof:
$C = \emptyset$ or $C = \mathbb{N}$ is trivial. We proof the case $C \ne \emptyset$ and $C \ne \mathbb{N}$.
Denote $S = \left\{e | \phi_e \in C \right\}$.
Exist $e_0 \in S$ and $e_1 \in \neg S$. If $S = \left\{e | \phi_e \in C \right\}$ is recursive, the following function is also recursive:
$f(x) = \left\{ {\begin{array}{*{20}c} {e_0 : x \in \neg S} \\ {e_1 : x \in S} \\ \end{array}} \right.$
By recursion theorem, $\exists e': \phi_e' =\phi_{f(e')}$. We consider 2 cases:
1. $e' \in S$, by index property we have $f(e') \in S$, so $e_1 \in S$ by the definition of $f$. We get contradiction.
2. $e' \in \neg S$, by index property $f(e') \in \neg S$, so $e_0 \in \neg S$ by the definition of $f$. Again we arrive to contradiction.
Functionception
Why’s recursion theorem interesting? A typical application of recursion theorem is to find a function that can print itself. Let’s say print(*program*) is that function. The output of print(*program*) is not *program* but the function that excuses print(*program*). In other words, we could ask if there is a function which can be self-conscious. I like to call it function-ception, which can be related to inception: does one know that he is dreaming in his dream?
Kleene recursion theorem offers an answer to our question: there is a function-ception, or a program that can print itself. We’ll construct such that function.
Let $\phi_e = \pi_1^2$, $\pi_i^n$ is the projection function $\pi_i^n(x_1,...,x_n) = x_i$.
By s-m-n theorem, there is total recursive function $s(e,x)$ such that $\phi_{s(e,x)}(y)=\phi_e(x,y)$. Let $g(x) = s(e,x)$. Apply Kleene recursion theorem, there is a number $n$ such that $\phi_{g(n)} = \phi_n$.
For each $x$: $\phi_n(x) = \phi_{g(n)}(x) = \phi_{s(e,n)}(x) = \phi_e(n,x) = \pi_1^2 = n$
My work here is done!
How many fixed points does a recursive function $f$ have?
Intuitively, we can see that a (total) recursive function $f$ has infinite fixed points. Recall the proof for Kleene recursion theorem, when we choose $v: \phi_v = f \circ d$. There is an infinite number of $v$ having that property. Analogically , there is an infinite number of ways to implement (write code) a function (by adding comments, using different variable names, …)
Now let’s work on some exercises using recursion theorem.
Lemma: There is a number $n \in \mathbb{N}$, such that $W_n = \left\{ n \right\}$
Proof:
We need to find a function $\phi_n$ that is only defined for input $n$ and undefined on the rest of natural numbers $\mathbb{N}-{n}$
I’m doing this in reversing way (how my thought leads me to the solution). By recursion theorem, $\exists n : \phi_n(x) = \phi_{f(n)}(x) = \phi_{s(e,n)}(x) = \phi_e(n,x)$.
Let $g(x,y)$ is the function that it is only defined for $(x,y)$ such that $x=y$. Such an $g(x,y)$ can be constructed as following:
$g(x,y) = \left\lfloor {\frac{1}{{\neg sign\left( {\left| {x - y} \right|} \right)}}} \right\rfloor$
My work here is done. There is an index $e$ such that $\phi_e(x,y) = g(x,y)$, by s-m-n theorem there is a total recursive function $s: \phi_{s(e,x)}(y) = \phi_e(x,y)$. Let $f(x) = s(e,x)$, and the last step is trivial.
Lemma: There is a number $n \in \mathbb{N}$, such that $\phi_n = \lambda x[n]$
Proof:
With the same line of proof for the previous lemma, we only need to construct a partial recursive function $g(x,y) = x \forall y$. One can easily recognize $g(x,y)$ is a projection function $\pi_1^2$
# Hello world!
I started Clojure past few days. Thanks to Jo and Milos, my nerdy classmates who are big fans of functional programing. Now I’m getting into it.
I heard of functional programming years ago, I played with Haskell but not much. Probably, the main reason I started liking Clojure because of Computational Complexity course I’ve taken at Charles University.
To be honest, I hated this course at the beginning. Machine Turing is not that bad, NP-complete and reduction techniques are pretty much fun, but recursive function and λ-calculus are so fucked up. I did hate all the lambda notations and hazy proofs with old math style.
I still hate λ-calculus if I don’t have to study for the exam. After several group studies with Jo, and he kept telling me how much he likes Scala and functional programing I thought “Hey, that’s pretty much similar to what we’re studying, you know, recursive functions and other stuff”
Yes, I like Clojure because I like λ-calculus, recursive function, partial recursive function, Kleene theorem and all other stuff. I decided to learn Clojure seriously, so I come up with an idea that I’m gonna blog about it. Of course, not only Clojure, FP but all other stuffs I like such as Computational Complexity, Machine Learning, Computational Linguistics, Lomography and so on.
Why is the blog named MyLomoWalk? I love Lomography, and Lomography is the art of coincidence. I’m a big fan of uncertainty, which often leads to coincidence. MyLomoWalk is my walk in the uncertain universe (or multiverse?) |
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 26 Jul 2016, 07:44
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# If b#1, what is the value of a in terms of b if ab/(a-b)=1 ?
Author Message
TAGS:
### Hide Tags
Intern
Joined: 22 Mar 2012
Posts: 4
Followers: 0
Kudos [?]: 6 [0], given: 0
If b#1, what is the value of a in terms of b if ab/(a-b)=1 ? [#permalink]
### Show Tags
23 Mar 2012, 08:37
00:00
Difficulty:
15% (low)
Question Stats:
86% (01:54) correct 14% (01:10) wrong based on 56 sessions
### HideShow timer Statistics
If $$b\neq{1}$$, what is the value of a in terms of b if ab/(a-b)=1?
A. $$\frac{b}{1-b}$$
B. $$\frac{b-1}{b}$$
C. $$\frac{b}{b+1}$$
D. $$\frac{b+1}{b}$$
E. $$\frac{1-b}{b}$$
The answer is b/1-b but I'm not sure how they got it.
[Reveal] Spoiler: OA
Last edited by Bunuel on 23 Mar 2012, 08:58, edited 2 times in total.
Edited the question and added the OA
Math Expert
Joined: 02 Sep 2009
Posts: 34060
Followers: 6086
Kudos [?]: 76520 [0], given: 9975
### Show Tags
23 Mar 2012, 08:55
Expert's post
Jakevmi80 wrote:
If b is not = 1 and ab/a-b =1, what is the value of a in terms of b?
The answer is b/1-b but I'm not sure how they got it.
If $$b\neq{1}$$, what is the value of $$a$$ in terms of $$b$$ if $$\frac{ab}{a-b}=1$$?
A. $$\frac{b}{1-b}$$
B. $$\frac{b-1}{b}$$
C. $$\frac{b}{b+1}$$
D. $$\frac{b+1}{b}$$
E. $$\frac{1-b}{b}$$
$$\frac{ab}{a-b}=1$$ --> cross-multiply: $$ab=a-b$$ --> rearrange: $$b=a-ab$$ --> factor out $$a$$: $$b=a(1-b)$$ --> $$a=\frac{b}{1-b}$$.
_________________
Re: isolating a variable [#permalink] 23 Mar 2012, 08:55
Similar topics Replies Last post
Similar
Topics:
A and B are integers. The expression (A+1)(B+1) is even. What can be 4 05 Jul 2016, 12:21
1 If b ≠ 1 and if ab/(a - b) = 1, what is the value of a in terms of b? 2 13 Dec 2015, 06:10
1 If a/b = 1/2, then the numerical value of which of the following 5 09 Nov 2015, 23:04
2 A&&B = b^(1/2) -a. What is the value of p in 16&&p = 9 ? 7 15 Apr 2015, 04:12
14 If a and b are different values and a – b = √a - √b, then in terms of 8 05 Nov 2014, 08:42
Display posts from previous: Sort by |
Question
# A person wants to invest Rs. 100000 in a fixed deposit scheme for 2 years. His financial advisor explained to him two types of schemes first is yielding 10% p.a. compounded annually, second is yielding 10% p.a. compounded semi-annually. Which scheme is better and why? Why investment is important for future life?
Hint: This question is the simple example of compound interest calculations. We need to compute the compound interest in two different time frames. And hence we can decide about the better investment plan based on their returns.
We know that total amount in the case of compound interest can be calculated by the formula as follows:
Amount = $P{(1 + \dfrac{r}{{100 \times n}})^{n \times t}}$
Where P is Principal Amount
n= No. of instalments in one year
t= Total number of years.
r = rate of interest.
For case-1:
r= 10
n=2
t=2
So, substituting these values in above formula, we get,
$Amount = 100000 \times {(1 + \dfrac{{10}}{{100 \times 1}})^2} \\ \Rightarrow Amount = 100000 \times \dfrac{{11}}{{10}} \times \dfrac{{11}}{{10}} \\ \Rightarrow Amount = 121000 \\$
For case-2:
r= 10
n=1
t=2
So, substituting these values in above formula, we get,
$Amount = 100000 \times {(1 + \dfrac{{10}}{{100 \times 2}})^{2 \times 2}} \\ \Rightarrow Amount = 100000 \times {(\dfrac{{21}}{{20}})^4} \\ \Rightarrow Amount = 100000 \times \dfrac{{21}}{{20}} \times \dfrac{{21}}{{20}} \times \dfrac{{21}}{{20}} \times \dfrac{{21}}{{20}} \\ \Rightarrow Amount = 121550.625 \\$
It is clear that the amount in case-2 is more than that of in case-1.
Therefore the second schedule is a better scheme.
Investing is to ensure our present as well as future through long-term financial security. The money which one can generate from the investments will definitely provide financial security and income. There are many ways for investments such as stocks, bonds, FDs, and ETFs etc.
Note: This problem is explaining the FD investment in two different time period plans. One is annually and the other one is semi-annually. Similarly, we may have other time durations like monthly, bimonthly, or even quarterly. |
# Lesson 13: Inference for Two Means: Independent Samples
These optional videos discuss the contents of this lesson.
## 1 Lesson Outcomes
By the end of this lesson, you should be able to:
• Confidence Intervals for difference of two means of independent samples:
• Calculate and interpret a confidence interval given a confidence level and a given parameter.
• Identify a point estimate and margin of error for the confidence interval.
• Show the appropriate connections between the numerical and graphical summaries that support the confidence interval.
• Check the requirements for the confidence interval.
• Hypothesis Testing for difference of two means of independent samples:
• State the null and alternative hypothesis.
• Calculate the test-statistic, degrees of freedom and p-value of the hypothesis test.
• Assess the statistical significance by comparing the p-value to the α-level.
• Check the requirements for the hypothesis test.
• Show the appropriate connections between the numerical and graphical summaries that support the hypothesis test.
• Draw a correct conclusion for the hypothesis test.
## 2 Independent Samples Versus Paired Data
In the previous reading (Inference for Two Means: Paired Data), we studied confidence intervals and hypothesis tests for the difference of two means, where the data are paired. One example of paired data is pre- and post-test scores, such as Mahon's weight loss study. Another example is paired comparisons, like the nosocomial infection study. How can you tell if data are paired? The key characteristic of dependent samples (or matched pairs) is that knowing which subjects will be in Group 1 determines which subjects will be in Group 2. The data for each subject in Group 1 is paired with the data for a corresponding subject in Group 2. In the case of the weight loss study, the same subject provided weight data for both groups: once in the pre-test (group 1) and once in the post-test (group 2).
In contrast to dependent samples, two samples are independent if knowing which subjects are in Group 1 tells you nothing about which subjects will be in Group 2. With independent samples, there is no pairing between the groups. Suppose you want to compare the incomes of men and women in the general population. A random sample of men would be collected, and each would be asked to report their income. Similarly, a random sample of women would be drawn, and they would also be asked to report their income. Notice that the groups are independent. Knowing the names of the men who are selected tells you nothing about which women would be selected. This is an example of independent samples.
We can compare the mean income of men to the mean income of women using the procedures of this section. We will conduct hypothesis tests and compute confidence intervals for the difference in the true population means of two groups ($\mu_1 - \mu_2$).
Some students make the association that samples are independent if they do not affect each other. This is a false notion. Instead, remember that samples are independent if knowing who was selected for Group A tells you nothing about who will be selected for group B.
## 3 Hypothesis Tests
### 3.1 Reading Practices of Children with Developmental or Behavioral Problems
Is there a difference in the amount of reading done by children with problematic behavior compared to other children?
Summarize the relevant background information
Researchers led by Arlene Butz published a study on the reading practices of children . They wanted to know if there was a difference in the reading practices of children with developmental or behavioral problems (the DEV group or Group 1) compared to children in the general population who do not have developmental problems (the GEN group or Group 2.) One of the factors they considered was the number of nights each week that the children participated in reading in the home. Data representative of their results are given in the file ReadingPractices.
State the null and alternative hypotheses and the level of significance
The null hypothesis is that there is no difference in the mean number of nights each week in which the two groups of children participate in reading in the home. The alternative hypothesis is that there is a difference in the mean number of nights that the children in the two groups participate in reading in the home. These hypotheses are expressed mathematically as: \begin{align} H_0: &~~ \mu_1 = \mu_2 \\ H_a: &~~ \mu_1 \ne \mu_2 \end{align}
We will use the $\alpha = 0.05$ level of significance.
Describe the data collection procedures
A group of children were enrolled in the study. Children who were identified to have developmental or behavioral problems were labeled as Group 1 (the DEV group). Children who did not display developmental or behavioral problems were labeled as Group 2 (the GEN group). A survey was administered to a parent of each of the children. One of the questions on the survey asked the number of nights that either their child read or that they read to their child during the week. This data is found in the file ReadingPractices.
1. For which group do you think the mean number of nights of reading will be higher?
Many students indicate they expect the group without behavioral problems will have a higher mean number of nights that they read in the home.
2. Do the data published by Arlene Butz and her colleagues represent paired data or independent samples? How can you tell?
The data represent independent samples. Knowing which children are in Group 1 tells you nothing about which children will be in Group 2.
Give the relevant summary statistics
We will use $\bar x_1$ to denote the mean of Group 1. Similarly, we use $s_1$ and $n_1$ for the standard deviation and sample size of Group 1. For Group 2, we indicate the mean, standard deviation and sample size with the symbols: $\bar x_2$, $s_2$, and $n_2$, respectively.
3. Find the mean, standard deviation and sample size for the two groups, separately. In other words, find $\bar x_1$, $s_1$, $n_1$, $\bar x_2$, $s_2$, and $n_2$.
Summary Statistics:
DEV Group GEN Group
Mean: $\bar x_1 = 4.1$ $\bar x_2 = 3.7$
Standard Deviation: $s_1 = 2.4$ $s_2 = 2.5$
Sample Size: $n_1 = 204$ $n_2 = 117$
Were you surprised that the sample mean $\bar x$ was higher for the group with behavioral problems? Why might this be the case?
4.Based on the summary statistics (means, standard deviations, and sample sizes) for the two groups, does the true mean number of nights each week that the children engage in reading seem to differ significantly between the DEV and GEN groups?
Answers will vary. We will conduct a hypothesis test to formally determine the answer to this question.
Make an appropriate graph to illustrate the data
There are two populations, and it is important to illustrate both of them separately. It is not sufficient to combine the groups and to produce a single graph. This would obscure the differences in the groups.
Excel Instructions
To create side-by-side histograms in Excel, do the following:
Use the file QuantitativeDescriptiveStatistics.xls and create separate histograms for the two groups. This requires pasting the data for one of the groups, deleting the data, and pasting the data for the other group.
Using 7 bins, the data are represented in the following histograms:
Tip for Excel users:
If you copy and paste the graphs into Microsoft Word, you will want to use the "Paste Special" option. If you simply paste the histogram from Excel, it is treated as a "Microsoft Office Graphic Object." This means that it will update if your Excel file is changed. To copy and paste side-by-side histograms into Microsoft word, do the following:
• Create the graph for the first group in Excel
• Select the graph by clicking on it
• Copy the graph
• In Microsoft Word, click on the lower part of the "Paste" icon under the "Home" ribbon:
• Then, choose "Paste Special"
• A list of options will appear; from this list choose one of the "Picture" options, such as "Picture (PNG)"
• This will paste a static image of the graph in the Word document, which you can move around or shrink as desired
• You should create a title for the graph and place it above the graph in your document
• Repeat this process for the second data set
Verify the requirements have been met
There are two requirements that need to be checked when conducting a hypothesis test for two means with independent samples:
• A simple random sample was drawn from each of the populations
• $\bar x$ is normally distributed for each group
Remember, the second requirement will be satisfied if the original populations are normally distributed or if the sample sizes are large.
Give the test statistic and its value
The test statistic for a hypothesis test comparing two means with independent samples is a $t$. We will use software tools to conduct the hypothesis test for two means with independent samples:
#### 3.1.1 Hypothesis Test
Excel Instructions
The following instructions will help you conduct a hypothesis test for two means with independent samples in Excel.
Use the file QuantitativeInferentialProcedures.xls to do the following:
• Click on the tab labeled "Two-sample t-test"
• Paste the data from the first group in the appropriate part of Column A
• Paste the data for the second group in the designated part of Column B
• Click on the drop-down menu in cell F11 and choose the appropriate alternative hypothesis
For the Reading Practices example, we choose "Not equal to"
• The test statistic, $t$, is given in cell E23
• The (approximate) degrees of freedom are presented in cell F23
• The $P$-value is reported in cell G23
Now, we will apply these steps to the data from the study on the reading practices of children with developmental and behavioral problems.
If you assign the DEV group to be Group 1, and the GEN group as Group 2, then the test statistic will be:
$$t = 1.455$$
If the group labels are switched, then the $t$ statistic will have the opposite sign.
State the degrees of freedom
The degrees of freedom are given as: $$df = 228.427$$
You will notice that this is not a whole number. This is called the Satterthwaite approximation for the degrees of freedom. Do not worry that this is not an whole number. Just record the value as it is given to you in the software.
Mark the test statistic and $P$-value on a graph of the sampling distribution
The image below represents the area under the curve. The phrase "for illustrative purposes only" reminds us that this image, which was taken from the applet, shows a normal distribution, not a $t$-distribution.
Find the $P$-value and compare it to the level of significance
From the output, we find that the $P$-value is 0.147.
$$P\text{-value}=0.147 > 0.05 = \alpha$$
Since the $P$-value is greater than the level of significance, we fail to reject the null hypothesis.
Present your conclusion in an English sentence, relating the result to the context of the problem
There is insufficient evidence to suggest that there is a difference in the mean number of nights children with developmental / behavioral disabilities read compared to children in the general population.
### 3.2 World Cup Heart Attacks
Do intense sporting events increase the probability of a person having a heart attack? We will consider this question in the next example.
Summarize the relevant background information
The FIFA Football (Soccer) World Cup is held every four years and is one of the biggest sporting events in the world. In 2006, Germany hosted the World Cup. A study was conducted by Dr. Wilbert-Lampen, et. al. to determine if the stress of viewing a soccer match would increase the risk of a heart attack or another cardiovascular event.
We will use the data on cardiovascular problems during the World Cup to test the hypothesis that the mean number of cardiovascular events is greater during the World Cup than during the control period.
State the null and alternative hypotheses and the level of significance
Let Group 1 be days in the Control Period and let Group 2 represent days during the 2006 World Cup. We are testing whether the mean number of cardiovascular events is greater during the World Cup than during the control period. So, the alternative hypothesis will be one-sided.
\begin{align} H_0: & ~~ \mu_1=\mu_2 \\ H_a: & ~~ \mu_1 < \mu_2 \end{align}
We will use the 0.01 level of significance.
Describe the data collection procedures
The 2006 World Cup was held from June 9, 2006 to July 9, 2006. The number of patients suffering cardiovascular events (e.g. heart attacks) was obtained from medical records of patients in the Greater Munich (Germany) area during this time period. To provide a control group, counts of patients suffering cardiovascular events was recorded from May 1 to June 8 and July 10 to July 30, 2006, as well as May 1 to July 30 in 2003 and 2005. The year 2004 was avoided, due to the European Soccer Championships held in Portugal. These data were extracted from Figure 1 in the article by Wilbert-Lampen, and are given in the file WorldCupHeartAttacks.
5. Give the relevant summary statistics
We need to find the the following: $\bar x_1$, $s_1$, $n_1$, $\bar x_2$, $s_2$, and $n_2$.
Summary Statistics:
Time Period Mean Std. Deviation Sample Size
Control $\bar x_1 = 14$ $s_1 = 4.2$ $n_1 = 182$
World Cup $\bar x_2 = 19$ $s_2 = 9.8$ $n_2 = 91$
6. Make an appropriate graph to illustrate the data
Side-by-side histograms are a great way to summarize the data:
7. Verify the requirements have been met
Even though a simple random sample of days was not taken, we can assume that the number of heart attacks on one day do not affect the number of heart attacks on other days.
The sample size is large for both groups, so we conclude that the sample means are normally distributed.
8. Give the test statistic and its value
The test statistic is a $t$ and its value is
$$t = -4.617$$
9. State the degrees of freedom
The degrees of freedom are:
$$df = 106.429$$
10. Mark the test statistic and $P$-value on a graph of the sampling distribution
For Illustrative Purposes Only
11. Find the $P$-value and compare it to the level of significance
$P\textrm{-value}=5.4636 \times 10^{-06}$
Since $P\textrm{-value}=5.4636 \times 10^{-06} < 0.01 = \alpha$, we reject the null hypothesis.
13. Present your conclusion in an English sentence, relating the result to the context of the problem
There is sufficient evidence to suggest that the mean number of heart attacks per day is greater during intense sporting events, such as the World Cup.
### 3.3 Theory of Statistics
In this course, we do not go very deep into statistical theory. For those students who are interested, there is a lot of theory undergirding statistical practice.
An important theoretical issue relates to this hypothesis test. If the variances of the two groups are equal, then traditional statistical theory suggests that you combine or pool the information about the variance in the two groups. If the variances are not equal, you do not combine the information about the spread. These two techniques usually lead to slightly different values for the $t$-statistic, degrees of freedom, and $P$-value.
If the variances observed in the sample data are very different from each other, you assume unequal variances and do not pool the data. However, if the variances are very similar to each other, the results of the two procedures will be nearly identical. In this case, it does not really matter which you choose.
So, if the variances differ significantly, we should not assume equal variances. If the variances do not differ significantly, it doesn't really matter if you assume equal variances or not. So, for this course, we will never assume the variances are equal. Stated, differently, we always assume unequal variances in this course. This provides a consistent framework for your learning.
## 4 Confidence Intervals
### 4.1 Reading Practices of Children with Developmental or Behavioral Problems
Summarize the relevant background information
Researchers led by Arlene Butz published a study on the reading practices of children [55]. They wanted to know if there was a difference in the reading practices of children with developmental or behavioral problems (the DEV group) compared to children in the general population who do not have developmental problems (the GEN group.) One of the factors they considered was the number of nights each week that the children participated in reading in the home.
We can use a 95% confidence interval to compare the difference between the true mean number of nights children in the DEV group participated in reading compared to children in the GEN group. We are trying to find an estimate for the difference in the true means of the two groups. Using math symbols, we want to estimate the value of $\mu_1 - \mu_2$. The confidence interval gives a range of plausible values for the unknown parameter $\mu_1 - \mu_2$.
Notice that if $\mu_1 - \mu_2 = 0$, if we add $\mu_2$ to both sides of the equation, we get: $\mu_1 = \mu_2$. Extending that idea, if zero is in the confidence interval, then it is plausible that $\mu_1 = \mu_2$. If zero is in the confidence interval, we conclude that there is no significant difference between the mean number of nights the children in the two groups read at home.
Software can be used to compute the confidence interval for a difference of two means with independent samples.
Describe the data collection procedures
In the study by Arlene Butz, et.al., on the reading practices of children, the researchers wanted to determine if there was a difference in the mean number of nights children read in the two groups (DEV and GEN). A survey was given to the child's caretaker (usually a parent). This survey included questions about the child's development and behavior. The survey also asked the number of nights each week that the child participated in reading in the home. Data representative of their results are given in the file ReadingPractices.
Give the relevant summary statistics
Using
Excel
, we compute the following:
Summary Statistics
DEV Group
Group 1
GEN Group
Group 2
Mean: $\bar x_1 = 4.1$ $\bar x_2 = 3.7$
Standard Deviation: $s_1 = 2.4$ $s_2 = 2.5$
Sample Size: $n_1 = 204$ $n_2 = 117$
Make an appropriate graph to illustrate the data
Verify the requirements have been met
The sample size is large, so we can use these methods without much concern about normality.
Find the confidence interval
In this example, we compute the 95% confidence interval for the difference in the mean number of nights the children in the DEV and GEN groups are reading. To obtain a confidence interval for the difference in two means with independent samples, follow these steps:
Excel Instructions
To compute a 95% confidence interval for two independent samples in Excel:
• Open the file QuantitativeInferentialProcedures.xls
• Click on the tab labeled, "Two-sample t-test"
• Paste the data for the first group in the appropriate place of column A
• Paste the data for group 2 in column B
• Enter the desired confidence level in cell F10
• The confidence interval will be given in cells H23 and I23
Present your observations in an English sentence, relating the result to the context of the problem
We are 95% confident that the true difference in the means is between $-0.149$ and $0.987$ days. Note that this confidence interval contains zero, so it is plausible that there is no difference in the mean number of nights the children in the two groups participate in reading.
When defining the confidence interval in
Excel
, if the two groups are reversed, the means are subtracted in the opposite order. This results in a confidence interval with the opposite sign. If we assign the GEN group as the first group and the DEV group as the second, we would get a confidence interval of $(-0.987, 0.149)$.
### 4.2 Chronic Obstructive Pulmonary Disease (COPD)
Summarize the relevant background information
The National Heart Lung and Blood Institute gives the following explanation of COPD :
COPD, or chronic obstructive pulmonary (PULL-mun-ary) disease, is a progressive disease that makes it hard to breathe. "Progressive" means the disease gets worse over time.
COPD can cause coughing that produces large amounts of mucus (a slimy substance), wheezing, shortness of breath, chest tightness, and other symptoms.
Cigarette smoking is the leading cause of COPD. Most people who have COPD smoke or used to smoke. Long-term exposure to other lung irritants, such as air pollution, chemical fumes, or dust, also may contribute to COPD.
Describe the data collection procedures
A study was conducted in the United Kingdom to determine if there is a difference in the effectiveness of community-based rehabilitation program compared to hospital-based rehabilitation . Patients suffering from COPD were randomly assigned to either the community or hospital group. Twice a week for six weeks, they participated in two-hour educational and exercise sessions. Patients were also encouraged to exercise between sessions.
The effectiveness of the program was measured based on the total distance patients could walk at one time at a particular pace. This is called the endurance shuttle walking test (ESWT). This was measured at the beginning of the study and again at the end of the six week rehabilitation period. Data representing the improvement of the patients in each group is given in the file COPD-Rehab(stacked). The data represent the increased distance (in meters) that each patient could walk. Negative values indicate that the patient was not able to walk as far at the end of the rehabilitation treatment as at the beginning.
Because hospital-based rehabilitation tends to be more expensive, the researchers wanted to assess if there is a significant difference in the patients' improvement under the two programs. If not, then it makes sense to refer patients to the less expensive treatment option. The purpose of this study was to determine if pulmonary rehabilitation in a community setting is as effective as rehabilitation in a hospital setting.
15. Give the relevant summary statistics.
Location
Community
Group 1
Hospital
Group 2
$\bar x = 216.1$ $\bar x_2 = 283.4$
$s = 339.9$ $s = 359.9$
$n = 76$ $n = 85$
16. Make an appropriate graph to illustrate the data.
17. What do you observe in the graphs you made in the previous question? Does there appear to be a difference in the mean responses of the two groups?
The two histograms look similar, although based on the graphs it looks like the mean of the Hospital Group may be slightly higher than the community group. This is just a visual observation. We need to conduct a hypothesis test or create a confidence interval to verify it.
18. Verify the requirements have been met
The sample sizes are large, so the distribution of sample means from both groups are normal. Therefore, we can conclude the requirements are met.
19. Find the confidence interval
Remember, we will never assume equal variances in this course.
Two different confidence intervals are possible, depending on how you defined the groups. The order in which we subtract the means determines the sign of the results.
• If we subtract Community - Hospital, we get:
$(-176.2,~41.7)$
• If we subtract Hospital - Community, we get:
$(-41.7,~176.2)$
If the problem does not specify the order in which we should subtract, either of these is acceptable.
20. Present your observations in an English sentence, relating the result to the context of the problem
We are 95% confident that the true mean difference in the improvement for the two groups is between -176.2 and 41.7.
If we had subtracted in the order Hospital - Community, we would say, we are 95% confident that the true mean difference in the improvement for the two groups is between -41.7 and 176.2.
21. Does there appear to be a difference in the mean improvement observed between the two groups? What does this suggest?
Zero is contained in the confidence interval, so zero is a plausible value for the difference in the means. In other words, we cannot conclude that there is a difference in the mean improvement of the two groups.
These results suggest that health care administrators should encourage community rehabilitation options for COPD whenever possible. It is generally less expensive than the hospital experience and does not lead to significantly different patient improvement.
22. Create a 90% confidence interval for the difference in the mean responses of the two groups.
• If we subtract: $Community - Hospital$, we get:
$(-158.5,~24.0)$
• If we subtract: $Hospital - Community$, we get:
$(-24.0,~158.5)$
23. Interpret the confidence interval you computed in Question 22.
We are 90% confident that the true mean difference in the improvement for the two groups is between $-158.5$ and $24.0$. A similar statement holds for $24.0$ and $-158.5$.
24. Why is the 95% confidence interval wider than the 90% confidence interval?
In order to be more confident that the true mean is between two values, we have to make the confidence interval wider.
## 5 Summary
Remember...
• In contrast to dependent samples, two samples are independent if knowing which subjects are in group 1 tells you nothing about which subjects will be in group 2. With independent samples, there is no pairing between the groups.
• When conducting inference using independent samples we use $\bar x_1$, $s_1$, and $n_1$ for the mean, standard deviation, and sample size, respectively, of group 1. We use the symbols $\bar x_2$, $s_2$, and $n_2$ for group 2.
• When working with independent samples it is important to graphically illustrate each sample separately. Combining the groups to create a single graph is not appropriate.
• When conducting hypothesis tests using independent samples, the null hypothesis is always $\mu_1=\mu_2$, indicating that there is no difference between the two populations. The alternative hypothesis can be left-tailed ($<$), right-tailed($>$), or two-tailed($\ne$).
• Whenever zero is contained in the confidence interval of the difference of the true means we conclude that there is no significant difference between the two populations. |
# Initialisation of the genetic system
Jump to: navigation, search
FORGEM
## Introduction
The genetic control of individual functional traits is modelled by additive-linear relationship between the allelic effect (i.e. "allele dose”) and the phenotypic value of the trait (Liu, 1998). In the simulation the genetic part of each trait is determined by a given initial number of loci. The contribution of a locus to the phenotype is proportionally to the effects of its alleles, which do not change during the course of the simulation, and the frequencies of the alleles, which do change during the course of the simulation.
The contribution of each locus on the phenotypic value of the individual is independent from other loci. I.e. there is no epistasis between loci. However, loci can be linked with a user-defined recombination fraction.
Each locus has initially 2 alleles, which is kept constant during the simulation. This means that there is neither mutation, nor immigration of new alleles for a particular loci. However, gene flow of known alleles per locus between populations can be simulated.
To obtain the actual phenotypic value of a traits, a random/environmental component is added to the value characterised by the genetic system. A user-defined initial heritability determines the additive genetic variance as fraction of the total phenotypic variance. During the course of the simulation genetic variation can be lost, resulting in a reduction of the heritability of the trait.
Thus, the following aspects of the genetic model need to be quantified for the initialisation of the ForGEM model:
a. the initial frequencies of the alleles for each locus contributing to the phenotypic values of the trait
b. the initial genetic and non-genetic variances
c. the allelic effects or 'allele dose' of each allele
If measured of otherwise observed values for these aspects are not available, they are determined by statistical methods during the initialisation of the model. These methods are described below. Observed values can always be used to overrule the statistically derived values.
Different evolutionary forces such as selection, random genetic drift, migration and mutation, act upon these frequencies and modify them through time. These genetic processes are all be modeled in the simulation in detail.
## initial allele frequencies
FORGEM
For polymorphic loci in trees, the distribution of the frequency of alleles over all loci that determining a trait is typically U-shaped. This means that alleles are either very common (allele frequency approaching unity) or very rare (allele frequency approaching zero), but rarely have a frequency in the population of around 0.5 (e.g. Hamrick, 2004; Chakraborty et al., 1980). In absence of observations on the frequency distribution of alleles of adaptive traits to initiate the genetic model, an equilibrium allele frequency distribution of neutral traits is used Crow and Kimura, 1970).
This equilibrium distribution of allelic frequencies (x) can expressed as (Nei, 1987, p. 367):
$\Gamma \left(M+M'\right)}{\Gamma \left(M\right)\Gamma \left(M'\right)} \left((1-x)^{M-1} x^{M'-1$
where:
M = 4Ne v
M' = M / (k -1) Ne is the effective population size, v is the mutation rate per locus and per generation, k is the number of alleles per locus, Γ() is the gamma function.
M can also be estimated from the average heterozygosity (H). If a large number of loci are examined, then: M = H/(1-H)
Figure 1 presents an example of the shape of this equation for different values of H and k.
Figure 1. Example of the equilibrium distribution of allele frequencies in a population for neutral traits. H = 0.25 and k = 2 (typical values for isozyme data)
Note that in Figure 1 the number of loci is indetermined as Eqn. 1 represents the distribution of allele frequencies over a very large number of loci. To arrive at initial allele frequencies for an actual number of loci (e.g. 5), most conveniently the cumulative distribution of &phi(x) is calculated and the allele frequencies for the actual number of loci are determined at the quantile values of the cumulative distribution. i.e. every 20% quantile in case of 5 loci.
To obtain the cumulative distribution of φ(x), φ(x)dx is numerically integrated between 0 and 1 (extreme are excluded because φ(x)→∞ when x→0 or x→1 ):
${\displaystyle \int _{0+}^{1-}\phi \left(x\right)dx\cong \sum _{x=0.000001}^{0.999999}\phi \left(x\right)\Delta x\cong 1}$
then compute a cumulative distribution function, p, of φ(x)x' as:
${\displaystyle p=\phi _{cumul}=\sum _{0.000001}^{x}\phi \left(x\right)\Delta x}$
To compute the allele frequency for a given number of loci, the inverse of the integral of φ(x) is required, where φ(x) is the distribution of allele frequencies of all loci in a population. This inverse can be obtained by linear interpolating after evaluating φ(x) over a large number of x-values.
As an example, to choose the 5 initial allelic frequencies (nLoci = 5; k = 2 and k=4) equally spaced points in the first half of distribution of cumulative φ(x) are selected, other half is determined by the other allele. Examples of Eqn. 2 for nLoci = 5 and k = 2 or k = 4 are presented in Figure 2.
Figure 2. Cumulative distribution of φ(x) and relative allelic frequencies (x). Dots indicate interpolated values for a 5 locus genetic system (the allele frequencies > 0.5 are 1 - the allele frequencies < 0.5).
These 5 points are the 10th, 20th, 30th, 40th and 50th percentile of the cumulative distribution. In case H = 0.25 and k = 2, their relative frequencies are 0.006, 0.044, 0.141, 0.299, 0.499. In this way, 5 allelic frequencies are obtained that take into account the natural distribution of frequencies, with many of loci with low (or high) allele frequencies and few loci with allele frequencies around 0.5.
## initializing allelic effects
FORGEM
For the ForGEM model, it is necessary to assign allelic effects to each of the alleles that compose the genotype of the individual tree. Allelic effects are kept constant during the entire simulation. If information is lacking on the actual number of loci, the number of alleles and the allelic effects that determine quantitative phenotypic traits, a statistical approach is taken. This is done by designing for each trait a genotype distribution over the population such that the observed mean and variance of the phenotypic trait of the population are attained, under the constraint that the allele frequencies follow the U-shaped initial distribution. If information becomes available on the QTLs or candidate genes of the phenotypic traits considered, this statisticaly procedure can be replaced by actual data on the genetic make-up of these traits for a particular population.
The approach followed in ForGEM to obtain the observed mean phenotypic value is:
1. assign initially arbitrary allelic effecs of i = +1 and j = -1 to each of the alleles
2. calculate mean and variance under the constraint of the the U-shaped distribution of allele frequencies
3. scale allelic effects such that the distribution of phenotypic values over over all possible genotypes is normalised (mean equals zero, variance equals unity)
4. add the mean and multiply with the variance of the functional trait in question
The mean and variance of a genotype are:
${\displaystyle m=\sum _{l=1}^{Nloci}pi+qj}$
${\displaystyle var=\sum _{l=1}^{Nloci}p(i-m)^{2}+q(j-m)^{2}}$
This assignment of +1 and -1 values can be done for all alleles in a multi-locus 2 allele system.
The following steps are made to arrive at a mean of zero, and a variance of unity for the whole population.
First, make expectations zero by offset and sum of individual effect.
${\displaystyle \sum _{l=1}^{Nloci}(p(i-c)+q(j+c)=0}$
${\displaystyle =>\sum _{l=1}^{Nloci}pi-\sum _{l=1}^{Nloci}pc+\sum _{l=1}^{Nloci}qj+\sum _{l=1}^{Nloci}qc=0}$
${\displaystyle =>\sum _{l=1}^{Nloci}(pi+qj)-c\sum _{l=1}^{Nloci}(p-q)=0}$
${\displaystyle =>m-c\sum _{l=1}^{Nloci}(p-q)=0}$
${\displaystyle =>m=c\sum _{l=1}^{Nloci}(p-q)}$
${\displaystyle =>c=m/\sum _{l=1}^{Nloci}(p-q)}$
${\displaystyle var=\sum _{l=1}^{Nloci}p(i-m-c)^{2}+q(j-m+c)^{2}}$
${\displaystyle =>var=\sum _{l=1}^{Nloci}pi^{2}+qj^{2}}$
This leads to a large number of possible allelic values. Arbitrarily, the first combination of allelic effects that yield the lowest expectancy (m) is selelected. in the example above this is:
q p
a b c d e A B C D E
0.006 0.044 0.141 0.299 0.499 0.994 0.956 0.859 0.701 0.501 m var c
-1 -1 -1 -1 -1 1 1 1 1 1 3.0306 2.5001 -1.0028
${\displaystyle i-c}$:
a b c d e A B C D E
0.006 0.044 0.141 0.299 0.499 0.994 0.956 0.859 0.701 0.501 m var
0.002848 0.002848 0.002848 0.002848 0.002848 -0.002848 -0.002848 -0.002848 -0.002848 -0.002848 -0.008606 0.000041
${\displaystyle m=0=>}$ ${\displaystyle var=\sum _{l=1}^{Nloci}pi^{2}=4.87}$
${\displaystyle (i-c)/sqrt(var):}$
a b c d e A B C D E
0.006 0.044 0.141 0.299 0.499 0.994 0.956 0.859 0.701 0.501 m var
0.447213595 0.447213595 0.447213595 0.447213595 0.447213595 -0.447213595 -0.447213595 -0.447213595 -0.447213595 -0.447213595 0 1
The population values are then be obtained by adding the observed mean and multiplying by the observed standard deviation.
The allelic effect thus depends on the number of loci.
## initializing phenotypic values
For each diploid individual and each locus two random drawings are done using the above probabilities. Each drawing results in a particular allele. |
# Math Help - trig identity
1. ## trig identity
prove:
sinx + sin(2x) + sin(3x) = sin(2x)(1+2cosx)
thanks!
2. Originally Posted by phthiriasis
prove:
sinx + sin(2x) + sin(3x) = sin(2x)(1+2cosx)
thanks!
Here's one way to do it: the key step is realizing that $\sin3x = \sin(2x + x)$ and applying the sum and difference formula:
$\sin x + \sin 2x + \sin 3x$
$=\sin x + 2\sin x\cos x + \sin(2x + x)$
$=\sin x + 2\sin x\cos x + \sin 2x\cos x + \sin x\cos 2x$
$=\sin x + 2\sin x\cos x + 2\sin x\cos x\cos x + \sin x(2\cos^2x - 1)$
$=\sin x + 2\sin x\cos x + 2\sin x\cos^2 x + 2\sin x\cos^2x - \sin x$
$=2\sin x\cos x + 4\sin x\cos^2 x$
Now, factor. |
Journal topic
Nonlin. Processes Geophys., 25, 267–278, 2018
https://doi.org/10.5194/npg-25-267-2018
Nonlin. Processes Geophys., 25, 267–278, 2018
https://doi.org/10.5194/npg-25-267-2018
Research article 04 Apr 2018
Research article | 04 Apr 2018
# Connection between encounter volume and diffusivity in geophysical flows
Connection between encounter volume and diffusivity in geophysical flows
Irina I. Rypina1, Stefan G. Llewellyn Smith2, and Larry J. Pratt1 Irina I. Rypina et al.
• 1Physical Oceanography Department, Woods Hole Oceanographic Institution, 266 Woods Hole Rd., Woods Hole, MA 02543, USA
• 2Department of Mechanical and Aerospace Engineering, Jacobs School of Engineering and Scripps Institution of Oceanography, UCSD, 9500 Gilman Dr., La Jolla, CA 92093-0411, USA
Correspondence: Irina I. Rypina ([email protected])
Abstract
Trajectory encounter volume – the volume of fluid that passes close to a reference fluid parcel over some time interval – has been recently introduced as a measure of mixing potential of a flow. Diffusivity is the most commonly used characteristic of turbulent diffusion. We derive the analytical relationship between the encounter volume and diffusivity under the assumption of an isotropic random walk, i.e., diffusive motion, in one and two dimensions. We apply the derived formulas to produce maps of encounter volume and the corresponding diffusivity in the Gulf Stream region of the North Atlantic based on satellite altimetry, and discuss the mixing properties of Gulf Stream rings. Advantages offered by the derived formula for estimating diffusivity from oceanographic data are discussed, as well as applications to other disciplines.
1 Introduction
The frequency of close encounters between different objects or organisms can be a fundamental metric in social and mechanical systems. The chances that a person will meet a new friend or contract a new disease during the course of a day is influenced by the number of distinct individuals that he or she comes into close contact with. The chances that a predator will ingest a poisonous prey, or that a mushroom hunter will mistakenly pick up a poisonous variety, is influenced by the number of distinct species or variety of prey or mushrooms that are encountered. In fluid systems, the exchange of properties such as temperature, salinity or humidity between a given fluid element and its surroundings is influenced by the number of other distinct fluid elements that pass close by over a given time period. In all these cases it is best to think of close encounters as providing the potential, if not necessarily the act, of transmission of germs, toxins, heat, salinity, etc.
In cases of property exchange within continuous media such as air or water, it may be most meaningful to talk about a mass or volume passing within some radius of a reference fluid element as this element moves along its trajectory. Rypina and Pratt (2017) introduce a trajectory encounter volume, V, the volume of fluid that comes in contact with the reference fluid parcel over a finite time interval. The increase in V over time is one measure of the mixing potential of the element, “mixing” being the irreversible exchange of properties between different water parcels. Thus, fluid parcels that have large encounter volumes as they move through the flow field have large mixing potential, i.e., an opportunity to exchange properties with other fluid parcels, and vice versa.
In order to formally define the encounter volume V, Rypina and Pratt (2017) subdivide the entire fluid into infinitesimal fluid elements with volumes dVi, and define the encounter volume for each fluid element to be the total volume of fluid that passes within a radius R of it over a finite time interval ${t}_{\mathrm{0}}, i.e.,
$\begin{array}{}\text{(1)}& V\left({\mathbit{x}}_{\mathrm{0}};{t}_{\mathrm{0}},T,R\right)=\underset{\mathrm{d}{V}_{i}\to \mathrm{0}}{lim}{\mathrm{\Sigma }}_{i}\mathrm{d}{V}_{i}.\end{array}$
In practice, for dense uniform grids of trajectories, ${\mathbit{x}}_{\mathbit{k}}\left({\mathbit{x}}_{\mathrm{0}k};{t}_{\mathrm{0}},T\right),k=\mathrm{1},\mathrm{\dots },K$, where t0 is the starting time, T is the trajectory integration time, and x0k is the trajectory initial position satisfying $\mathbit{x}\left({\mathbit{x}}_{\mathrm{0}};{t}_{\mathrm{0}},T=\mathrm{0}\right)={\mathbit{x}}_{\mathrm{0}}$, both the limit and the subscript in the above definition Eq. (1) can be dropped. In this case, the encounter volume can be approximated by
$\begin{array}{}\text{(2)}& V\phantom{\rule{0.125em}{0ex}}\approx \phantom{\rule{0.125em}{0ex}}N\phantom{\rule{0.125em}{0ex}}\mathit{\delta }V,\end{array}$
where the encounter number,
$\begin{array}{ll}& N\left({\mathbit{x}}_{\mathrm{0}\mathrm{ref}};{t}_{\mathrm{0}},T,R\right)=\sum _{\begin{array}{l}k=\mathrm{1}\\ k\ne \mathrm{ref}\end{array}}^{K}\\ \text{(3)}& & \phantom{\rule{1em}{0ex}}I\left(min\left(\left|{\mathbit{x}}_{\mathbit{k}}\left({\mathbit{x}}_{\mathrm{0}k};{t}_{\mathrm{0}},T\right)-{\mathbit{x}}_{\mathrm{ref}}\left({\mathbit{x}}_{\mathrm{0}\mathrm{ref}};{t}_{\mathrm{0}},T\right)\right|\right)\le R\right),\end{array}$
is the number of trajectories that come within a radius R of the reference trajectory, ${\mathbit{x}}_{\mathrm{ref}}\left({\mathbit{x}}_{\mathrm{0}\mathrm{ref}};{t}_{\mathrm{0}},T\right)$, over a time ${t}_{\mathrm{0}}. Here the indicator function I is 1 if true and 0 if false, and K is the total number of particles. As in Rypina and Pratt (2017), we define encounter volume based on the number of encounters with different trajectories, not the total number of encounter events (see the schematic diagram of trajectory encounters in Fig. 1). Rypina and Pratt (2017) discuss how the encounter volume can be used to identify Lagrangian coherent structures (LCS) such as stable and unstable manifolds of hyperbolic trajectories and regions foliated by the KAM-like tori surrounding elliptic trajectories in realistic geophysical flows. A detailed comparison between the encounter volume method and some other Lagrangian methods of LCS identification, as well as the dependences on parameters, ${t}_{\mathrm{0}},T,R$, and on grid spacing (or on the number of trajectories, K), and the relative advantages of different techniques, was given in Rypina and Pratt (2017). The interested reader is referred to that earlier paper for details. The current paper is concerned only with the question of finding the connection between the encounter volume and diffusivity, rather than identifying LCS.
Figure 1Schematic diagram of trajectory encounters, showing trajectories of nine particles, with dots indicating positions of particles at three time instances, at the release time, t0, and at two later times, t0+T1 and t0+T2. The reference trajectory and the encounter sphere are shown in black, trajectories that do not encounter the reference trajectory are in grey, and trajectories that encounter the reference trajectory are in green if encounters occur at t0+T1, and in blue if encounters occur at t0+T2. Time slices are schematically shown by dashed rectangles, and the encounter number, N, is indicated at the top of each time slice.
Given the seemingly fundamental importance of close encounters, it is of interest to relate metrics such as V to other bulk measures of interactions within the system. For example, in some cases it may be more feasible to count encounters rather than to measure interactions or property exchanges directly, whereas in other cases the number of encounters might be most pertinent to the process in question but difficult to measure directly. In many applications, including ocean turbulence, the most commonly used metric of mixing is the eddy diffusivity, κ, a quantity that relates transport of fluid elements by turbulent eddies to diffusion (LaCasce, 2008; Vallis, 2006; Rypina et al., 2012; Kamenkovich et al., 2015). The underlying assumption is that the eddy field drives downgradient tracer transfer, similar to molecular diffusion but with a different (larger) diffusion coefficient. This diffusive parameterization of eddies has been implemented in many non-eddy-resolving oceanic numerical models. The diffusivity can be measured by a variety of means, including dye release (Ledwell et al., 2000; Sundermeyer and Ledwell, 2001; Rypina et al., 2016), surface drifter dispersion (Okubo, 1971; Davis, 1991; LaCasce, 2008; La Casce et al., 2014; Rypina et al., 2012, 2016), and property budgets (Munk, 1966). In numerical models κ is often assumed constant in both time and space, or related in some simplified manner to the large-scale flow properties (Visbeck et al., 1997).
Because the purpose of the diffusivity coefficient κ is to quantify the intensity of the eddy-induced tracer transfer, i.e., the intensity of mixing, it is tempting to relate it to the encounter volume, V, which quantifies the mixing potential of a flow and thus is closely related to tracer mixing. Such an analytical connection between the encounter volume and diffusivity could potentially also be useful for the parameterizations of eddy effects in numerical models. The main goal of this paper is to develop a relationship between V and κ in one and two dimensions. Specifically, we seek an analytical expression for the encounter volume, V, i.e., the volume of fluid that passed within radius R from a reference particle over time, as a function of κ. The relationship is not as straightforward as one might first imagine, but can nevertheless be written down straightforwardly in the long-time limit. This is opportune, since the concept of eddy diffusivity is most relevant in the long-time limit.
2 Connection between encounter volume and diffusivity
This problem was framed in mathematical terms in Rypina and Pratt (2017), who outlined some initial steps towards deriving the analytical connection between encounter volume and diffusivity but did not finish the derivation. In this section, we complete the derivation.
## 2.1 Main idea for the derivation
Let us start by considering the simplest diffusive random walk process in one or two dimensions, where particles take steps of fixed length Δx in random directions along the x-axis in 1-D or along both the x- and y-axes in 2-D, respectively, at fixed time intervals Δt.
The single particle dispersion, i.e., the ensemble-averaged square displacement from the particle's initial position, is ${D}_{\text{1-D}}=〈{\left(x-{x}_{\mathrm{0}}\right)}^{\mathrm{2}}〉$ and ${D}_{\text{2-D}}=〈{\left(x-{x}_{\mathrm{0}}\right)}^{\mathrm{2}}+{\left(y-{y}_{\mathrm{0}}\right)}^{\mathrm{2}}〉$ in 1-D or 2-D, respectively. For a diffusive process, the dispersion grows linearly with time, and the constant proportionality coefficient is related to diffusivity. Specifically, D1-D=2κ1-Dt with ${\mathit{\kappa }}_{\text{1-D}}=\mathrm{\Delta }{x}^{\mathrm{2}}/\left(\mathrm{2}\mathrm{\Delta }t\right)$, and D2-D=4K2-Dt with ${\mathit{\kappa }}_{\text{2-D}}=\mathrm{\Delta }{x}^{\mathrm{2}}/\phantom{\rule{0.125em}{0ex}}\left(\mathrm{4}\mathrm{\Delta }t\right)$.
It is convenient to consider the motion in a reference frame that is moving with the reference particle. In that reference frame, the reference particle will always stay at the origin, while other particles will still be involved in a random walk motion, but with a diffusivity twice that in the stationary frame, κmoving=2κstationary (Rypina and Pratt, 2017).
The problem of finding the encounter number is then reduced to counting the number of randomly walking particles (with diffusivity κmoving) that come within radius R of the origin in the moving frame. This is related to a classic problem in statistics – the problem of a random walker reaching an absorbing boundary, usually referred to as “a cliff” (because once a walker reaches the absorbing boundary, it falls off the cliff), over a time interval from 0 to t.
In the next section we will provide formal solutions; here we simply outline the steps to streamline the derivation. We start by deriving the appropriate diffusion equation for the probability density function, p(x,t), of random walkers in 1-D or 2-D:
$\begin{array}{}\text{(4)}& \frac{\partial p}{\partial t}=\mathit{\kappa }{\mathrm{\nabla }}^{\mathrm{2}}p.\end{array}$
We place a cliff, xc, at the perimeter of the encounter sphere, i.e., at a distance R from the origin, and impose an absorbing boundary condition at a cliff,
$\begin{array}{}\text{(5a)}& p\left({\mathbit{x}}_{\mathrm{c}},t\right)=\mathrm{0},\end{array}$
which removes (or “absorbs”) particles that have reached the cliff (see Fig. 2 for a schematic diagram). We then consider a random walker that is initially located at a point x0 outside the cliff at t=0, i.e.,
$\begin{array}{}\text{(5b)}& p\left(\mathbit{x},t=\mathrm{0}\right)=\mathit{\delta }\left(\mathbit{x}-{\mathbit{x}}_{\mathrm{0}}\right),\end{array}$
and we write an analytical solution for the probability density function satisfying Eqs. (4)–(5),
$\begin{array}{}\text{(6)}& G\left(\mathbit{x},t;{\mathbit{x}}_{\mathrm{0}},\phantom{\rule{0.125em}{0ex}}{\mathbit{x}}_{\mathrm{c}}\right),\end{array}$
that quantifies the probability of finding a random walker initially located at x0 at any location x outside of the cliff at a later time t>0. In mathematical terms, G is Green's function of the diffusion equation.
Figure 2Schematic diagram in 1-D (a) and 2-D (b). Hatched areas show semi-infinite domains outside of the cliff.
The survival probability, which quantifies the probability that a random walker initially located at x0 at t=0 has “survived” over time t without falling off the cliff, is
$\begin{array}{}\text{(7)}& S\left(t;{\mathbit{x}}_{\mathrm{0}},{\mathbit{x}}_{\mathrm{c}}\right)=\int G\left(\mathbit{x},t;{\mathbit{x}}_{\mathrm{0}},{\mathbit{x}}_{\mathrm{c}}\right)\mathrm{d}\mathbit{x}\end{array}$
where the integral is taken over all locations outside of the cliff. The encounter, or “non-survival”, probability can then be written as the conjugate quantity,
$\begin{array}{}\text{(8)}& {P}_{\mathrm{en}}\left(t;{\mathbit{x}}_{\mathrm{0}},{\mathbit{x}}_{\mathrm{c}}\right)=\mathrm{1}-S\left(t;{\mathbit{x}}_{\mathrm{0}},{\mathbit{x}}_{\mathrm{c}}\right),\end{array}$
which quantifies the probability that a random walker initially located at x0 at t=0 has reached, or fallen off, the cliff over time t. This allows one to write the encounter volume, i.e., the volume occupied by particles that were initially located outside of the cliff and that have reached the cliff by time t, as
$\begin{array}{}\text{(9)}& V\left(t;{\mathbit{x}}_{\mathrm{c}}\right)=\int {P}_{\mathrm{en}}\left(t;{\mathbit{x}}_{\mathrm{0}},{\mathbit{x}}_{\mathrm{c}}\right)\mathrm{d}{\mathbit{x}}_{\mathrm{0}}\end{array}$
where the integral is taken over all initial positions outside of the cliff.
## 2.2 1-D case
Consider a random walker initially located at the origin, who takes, with a probability of 1∕2, a fixed step Δx to the right or to the left along the x-axis after each time interval Δt. Then the probability of finding a walker at a location x=nΔx after (m+1) steps is
$\begin{array}{ll}& p\left(n\mathrm{\Delta }x,\left(m+\mathrm{1}\right)\mathrm{\Delta }t\right)=\mathrm{1}/\mathrm{2}\left[p\left(\left(n-\mathrm{1}\right)\mathrm{\Delta }x,m\mathrm{\Delta }t\right)\\ \text{(10)}& & \phantom{\rule{1em}{0ex}}+p\left(\left(n+\mathrm{1}\right)\mathrm{\Delta }x,m\mathrm{\Delta }t\right)\right].\end{array}$
Using a Taylor series expansion in Δx and Δt, we can write down the finite-difference approximation to the above expression as
$\begin{array}{ll}p\left(x,t\right)& +\mathrm{\Delta }t\frac{\partial p}{\partial t}=\frac{\mathrm{1}}{\mathrm{2}}\left[p\left(x,t\right)-\mathrm{\Delta }x\frac{\partial p}{\partial x}+\frac{\mathrm{\Delta }{x}^{\mathrm{2}}}{\mathrm{2}}\frac{{\partial }^{\mathrm{2}}p}{\partial {x}^{\mathrm{2}}}+p\left(x,t\right)\right\\ & +\mathrm{\Delta }x\frac{\partial p}{\partial x}+\frac{\mathrm{\Delta }{x}^{\mathrm{2}}}{\mathrm{2}}\frac{{\partial }^{\mathrm{2}}p}{\partial {x}^{\mathrm{2}}}+\mathrm{O}\left(\mathrm{\Delta }{x}^{\mathrm{4}}\right)]\\ \text{(11)}& & =p\left(x,t\right)+\frac{\mathrm{\Delta }{x}^{\mathrm{2}}}{\mathrm{2}}\frac{{\partial }^{\mathrm{2}}p}{\partial {x}^{\mathrm{2}}}+\mathrm{O}\left(\mathrm{\Delta }{x}^{\mathrm{4}}\right),\end{array}$
yielding a diffusion equation
$\begin{array}{}\text{(12)}& \frac{\partial p}{\partial t}=\mathit{\kappa }\frac{{\partial }^{\mathrm{2}}p}{\partial {x}^{\mathrm{2}}}\end{array}$
with diffusivity coefficient $\mathit{\kappa }=\frac{\mathrm{\Delta }{x}^{\mathrm{2}}}{\mathrm{2}\mathrm{\Delta }t}$.
Green's function for the 1-D diffusion equation without a cliff is a solution with initial condition $p\left(x,t=\mathrm{0};{x}_{\mathrm{0}}\right)=\mathit{\delta }\left(x-{x}_{\mathrm{0}}\right)$ in an unbounded domain. It takes the form
$\begin{array}{}\text{(13)}& {G}_{\mathrm{unbounded}}\left(x,t;{x}_{\mathrm{0}}\right)=\frac{\mathrm{1}}{\sqrt{\mathrm{4}\mathit{\pi }\mathit{\kappa }t}}{e}^{-\frac{\left(x-{x}_{\mathrm{0}}{\right)}^{\mathrm{2}}}{\mathrm{4}\mathit{\kappa }t}}.\end{array}$
Green's function with the cliff (see Fig. 2 for a schematic diagram), i.e., a solution to the initial-value problem with $p\left(x,t=\mathrm{0};{x}_{\mathrm{0}}\right)=\mathit{\delta }\left(x-{x}_{\mathrm{0}}\right)$ in a semi-infinite domain, $x\in \left[-\mathrm{\infty };{x}_{\mathrm{c}}\right]$, with an absorbing boundary condition at a cliff, $p\left(x={x}_{\mathrm{c}},t;{x}_{\mathrm{0}}\right)=\mathrm{0}$, can be constructed by the method of images from two unbounded Green's functions as
$\begin{array}{}\text{(14)}& G\left(x,t;{x}_{\mathrm{0}},{x}_{\mathrm{c}}\right)=\frac{\mathrm{1}}{\sqrt{\mathrm{4}\mathit{\pi }\mathit{\kappa }t}}\left({e}^{-\frac{{\left(x-{x}_{\mathrm{0}}\right)}^{\mathrm{2}}}{\mathrm{4}\mathit{\kappa }t}}-{e}^{-\frac{\left(x-\left(\mathrm{2}{x}_{\mathrm{c}}-{x}_{\mathrm{0}}\right){\right)}^{\mathrm{2}}}{\mathrm{4}\mathit{\kappa }t}}\right).\end{array}$
It follows from Eqs. (7) to (9) that the survival or non-encounter probability is
$\begin{array}{}\text{(15)}& S\left(t;{x}_{\mathrm{0}},{x}_{\mathrm{c}}\right)={\int }_{-\mathrm{\infty }}^{{x}_{\mathrm{c}}}G\left(x,t;{x}_{\mathrm{0}},{x}_{\mathrm{c}}\right)\mathrm{d}x=\mathrm{Erf}\left[\frac{{x}_{\mathrm{c}}-{x}_{\mathrm{0}}}{\mathrm{2}\sqrt{\mathit{\kappa }t}}\right],\end{array}$
the encounter probability is
$\begin{array}{}\text{(16)}& {P}_{\mathrm{en}}\left(t;{x}_{\mathrm{0}},{x}_{\mathrm{c}}\right)=\mathrm{1}-S\left(t\right)=\mathrm{1}-\mathrm{Erf}\left(\frac{{x}_{\mathrm{c}}-{x}_{\mathrm{0}}}{\mathrm{2}\sqrt{\mathit{\kappa }t}}\right),\end{array}$
and the encounter volume is
$\begin{array}{ll}V\left(t;{x}_{\mathrm{c}}\right)& ={\int }_{-\mathrm{\infty }}^{{x}_{\mathrm{c}}}{P}_{\mathrm{en}}\left(t;{x}_{\mathrm{0}},{x}_{\mathrm{c}}\right)\mathrm{d}{x}_{\mathrm{0}}\\ \text{(17)}& & ={\int }_{-\mathrm{\infty }}^{{x}_{\mathrm{c}}}\left(\mathrm{1}-\mathrm{Erf}\left[\frac{{x}_{\mathrm{c}}-{x}_{\mathrm{0}}}{\mathrm{2}\sqrt{\mathit{\kappa }t}}\right]\right)\mathrm{d}{x}_{\mathrm{0}}=\frac{\mathrm{2}}{\sqrt{\mathit{\pi }}}\sqrt{\mathit{\kappa }t}.\end{array}$
The above formula accounts for the randomly walking particles that have reached the cliff from the left over time t. By symmetry, if the cliff was located to the right of the origin, the same number of particles would be reaching the cliff from the right, so the total encounter volume is
$\begin{array}{}\text{(18)}& V\left(t;{x}_{\mathrm{c}}\right)=\frac{\mathrm{4}}{\sqrt{\mathit{\pi }}}\sqrt{\mathit{\kappa }t}.\end{array}$
Note that formula (18) gives the encounter volume, i.e., the volume of fluid coming within radius R from the origin, in a reference frame moving with the reference particle, so the corresponding diffusivity on the right-hand side of Eq. (18) is κmoving=2κstationary.
## 2.3 2-D case
Consider a random walker in 2-D, who is initially located at the origin and who takes, with a probability of 1∕4, a fixed step of length Δx to the right, left, up or down after each time interval Δt. Then the probability of finding a walker at a location $x=n\mathrm{\Delta }x,y=m\mathrm{\Delta }y$ at time $t=\left(l+\mathrm{1}\right)\mathrm{\Delta }t$ is
$\begin{array}{ll}& p\left(n\mathrm{\Delta }x,m\mathrm{\Delta }y,\phantom{\rule{0.125em}{0ex}}\left(l+\mathrm{1}\right)\mathrm{\Delta }t\right)=\frac{\mathrm{1}}{\mathrm{4}}\left[p\left(\left(n-\mathrm{1}\right)\mathrm{\Delta }x,m\mathrm{\Delta }y,l\mathrm{\Delta }t\right)\right\\ & \phantom{\rule{1em}{0ex}}+p\left(\left(n+\mathrm{1}\right)\mathrm{\Delta }x,m\mathrm{\Delta }y,l\mathrm{\Delta }t\right)\\ & \phantom{\rule{1em}{0ex}}+p\left(n\mathrm{\Delta }x,\left(m-\mathrm{1}\right)\mathrm{\Delta }y,l\mathrm{\Delta }t\right)\\ \text{(19)}& & \phantom{\rule{1em}{0ex}}+p\left(n\mathrm{\Delta }x,\left(m+\mathrm{1}\right)\mathrm{\Delta }y,l\mathrm{\Delta }t\right)].\end{array}$
Using a Taylor series expansion in Δx, Δy and Δt, the finite-difference approximation leads to a diffusion equation
$\begin{array}{}\text{(20)}& \frac{\partial p}{\partial t}=\mathit{\kappa }\left(\frac{{\partial }^{\mathrm{2}}p}{\partial {x}^{\mathrm{2}}}+\frac{{\partial }^{\mathrm{2}}p}{\partial {y}^{\mathrm{2}}}\right)\end{array}$
with diffusivity coefficient $\mathit{\kappa }=\frac{\mathrm{\Delta }{x}^{\mathrm{2}}}{\mathrm{4}\mathrm{\Delta }t}$.
To proceed, we need an analytical expression for Green's function of Eq. (20) with a cliff at a distance R from the origin, i.e., a solution to the initial-value problem with $p\left(\mathbit{x},t=\mathrm{0};{\mathbit{x}}_{\mathrm{0}}\right)=\mathit{\delta }\left(\mathbit{x}-{\mathbit{x}}_{\mathrm{0}}\right)$ for the above 2-D diffusion equation on a semi-infinite plane ($r\ge R,\phantom{\rule{0.125em}{0ex}}\mathrm{0}<\mathit{\theta }\le \mathrm{2}\mathit{\pi }\right),$ bounded internally by an absorbing boundary (a cliff) located at r=R, so that $p\left(r=R,\mathit{\theta },t;{\mathbit{x}}_{\mathrm{0}}\right)=\mathrm{0}$ (see Fig. 2 right for a schematic diagram). Here (r,θ) are polar coordinates.
Carlslaw and Joeger (1939) give the answer as
$\begin{array}{ll}G\left(r,\mathit{\theta },\phantom{\rule{0.125em}{0ex}}t;{r}_{\mathrm{0}},{\mathit{\theta }}_{\mathrm{0}},R\right)& =u+w={\sum }_{n=-\mathrm{\infty }}^{\mathrm{\infty }}\left({u}_{n}\left(r,t;{r}_{\mathrm{0}},R\right)\\ \text{(21)}& & +{w}_{n}\left(r,t;{r}_{\mathrm{0}},R\right)\right)\mathrm{cos}n\left(\mathit{\theta }-{\mathit{\theta }}_{\mathrm{0}}\right)\end{array}$
where ${r}_{\mathrm{0}}\left(\ge R\right),{\mathit{\theta }}_{\mathrm{0}}$ denote the source location, and
$\mathit{\left\{}{u}_{n},\phantom{\rule{0.125em}{0ex}}{w}_{n}\mathit{\right\}}={L}^{-\mathrm{1}}\left\{{\stackrel{\mathrm{‾}}{u}}_{n},{\stackrel{\mathrm{‾}}{w}}_{n}\right\}=\frac{\mathrm{1}}{\mathrm{2}\mathit{\pi }i\phantom{\rule{0.125em}{0ex}}}\underset{T\to \mathrm{\infty }}{lim}{\int }_{\mathit{\gamma }-iT}^{\mathit{\gamma }+iT}{e}^{\mathrm{st}}\left\{{\stackrel{\mathrm{‾}}{u}}_{n},{\stackrel{\mathrm{‾}}{w}}_{n}\right\}\mathrm{d}s$
are the inverse Laplace transforms of
$\begin{array}{ll}{\stackrel{\mathrm{‾}}{u}}_{n}& =\frac{\mathrm{1}}{\mathrm{2}\mathit{\pi }\mathit{\kappa }}\left\{\begin{array}{c}{I}_{n}\left(qr\right){K}_{n}\left(q{r}_{\mathrm{0}}\right),\phantom{\rule{0.125em}{0ex}}R{r}_{\mathrm{0}}\end{array}\right\\\ \text{(22)}& & \mathrm{and}\phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}{\stackrel{\mathrm{‾}}{w}}_{n}=-\frac{{I}_{n}\left(qR\right)}{{K}_{n}\left(qR\right)}{K}_{n}\left(q{r}_{\mathrm{0}}\right){K}_{n}\left(qr\right)\end{array}$
with $q=\sqrt{\frac{s}{\mathit{\kappa }}}$.
The survival probability (from Eq. 7) is
$\begin{array}{ll}S\left(t;{r}_{\mathrm{0}},R\right)& =\underset{{R}^{\mathrm{2}}}{\int }G\left(\mathbit{x},t;{\mathbit{x}}_{\mathrm{0}},R\right){d}^{\mathrm{2}}\mathbit{x}\\ & =\underset{\mathrm{0}}{\overset{\mathrm{2}\mathit{\pi }}{\int }}\underset{R}{\overset{\mathrm{\infty }}{\int }}\sum _{n=-\mathrm{\infty }}^{\mathrm{\infty }}\left({u}_{n}+{v}_{n}\right)\mathrm{cos}n\left(\mathit{\theta }-{\mathit{\theta }}_{\mathrm{0}}\right)\phantom{\rule{0.125em}{0ex}}r\phantom{\rule{0.125em}{0ex}}\mathrm{d}r\phantom{\rule{0.125em}{0ex}}\mathrm{d}\mathit{\theta }\\ \text{(23)}& & =\mathrm{2}\mathit{\pi }\underset{R}{\overset{\mathrm{\infty }}{\int }}\left({u}_{\mathrm{0}}+{v}_{\mathrm{0}}\right)r\mathrm{d}r.\end{array}$
Next, we take the Laplace transform of the survival probability and write it in terms of a Laplace variable s as
$\begin{array}{ll}\stackrel{\mathrm{‾}}{S}\left(s;{r}_{\mathrm{0}},R\right)& ={\int }_{\mathrm{0}}^{\mathrm{\infty }}{e}^{-\mathrm{st}}S\left(t;{r}_{\mathrm{0}},R\right)\mathrm{d}t=\mathrm{2}\mathit{\pi }{\int }_{R}^{\mathrm{\infty }}\left({\stackrel{\mathrm{‾}}{u}}_{\mathrm{0}}+{\stackrel{\mathrm{‾}}{w}}_{\mathrm{0}}\right)r\mathrm{d}r\\ & =\frac{\mathrm{1}}{\mathit{\kappa }}{\int }_{R}^{{r}_{\mathrm{0}}}{I}_{\mathrm{0}}\left(qr\right){K}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right)r\mathrm{d}r\\ & +\frac{\mathrm{1}}{\mathit{\kappa }}{\int }_{{r}_{\mathrm{0}}}^{\mathrm{\infty }}{I}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right){K}_{\mathrm{0}}\left(qr\right)r\mathrm{d}r\\ \text{(24)}& & -\frac{\mathrm{1}}{\mathit{\kappa }}{\int }_{R}^{\mathrm{\infty }}\frac{{I}_{\mathrm{0}}\left(qR\right)}{{K}_{\mathrm{0}}\left(qR\right)}{K}_{\mathrm{0}}\left(qr\right){K}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right)r\mathrm{d}r.\end{array}$
Using $\int r{I}_{\mathrm{0}}\left(r\right)\mathrm{d}r=r{I}_{\mathrm{1}}\left(r\right)$ and $\int r{K}_{\mathrm{0}}\left(r\right)\mathrm{d}r=-r{K}_{\mathrm{1}}\left(r\right)$, and ${lim}_{x\to \mathrm{\infty }}x{K}_{\mathrm{1}}\left(x\right)=\mathrm{0}$ we find
$\begin{array}{ll}\stackrel{\mathrm{‾}}{S}\left(s;{r}_{\mathrm{0}},R\right)& =\frac{\mathrm{1}}{\mathit{\kappa }}{K}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right)\left[\frac{r}{q}{I}_{\mathrm{1}}\left(qr\right)\right]{|}_{R}^{{r}^{\prime }}\\ & +\frac{\mathrm{1}}{\mathit{\kappa }}{I}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right)\left[-\frac{r}{q}{K}_{\mathrm{1}}\left(qr\right)\right]{|}_{R}^{\mathrm{\infty }}\\ & -\frac{\mathrm{1}}{\mathit{\kappa }}\frac{{I}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right)}{{K}_{\mathrm{0}}\left(qR\right)}{K}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right)\left[-\frac{r}{q}{K}_{\mathrm{1}}\left(qr\right)\right]{|}_{R}^{\mathrm{\infty }}\\ & =\frac{\mathrm{1}}{\mathit{\kappa }}\left\{\frac{{r}_{\mathrm{0}}}{q}\left({I}_{\mathrm{1}}\left(q{r}_{\mathrm{0}}\right){K}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right)+{I}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right){K}_{\mathrm{1}}\left(q{r}_{\mathrm{0}}\right)\right)\right\\\ & -\frac{a}{q}\frac{{K}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right)}{{K}_{\mathrm{0}}\left(qR\right)}\left({I}_{\mathrm{1}}\left(qR\right){K}_{\mathrm{0}}\left(qR\right)\right\\ \text{(25)}& & +{I}_{\mathrm{0}}\left(qR\right){K}_{\mathrm{1}}\left(qR\right))}.\end{array}$
But ${I}_{\mathrm{1}}\left(x\right){K}_{\mathrm{0}}\left(x\right)+{I}_{\mathrm{0}}\left(x\right){K}_{\mathrm{1}}\left(x\right)=\frac{\mathrm{1}}{x}$, so
$\begin{array}{ll}\stackrel{\mathrm{‾}}{S}\left(s;{r}_{\mathrm{0}},R\right)& =\frac{\mathrm{1}}{\mathit{\kappa }}\left(\frac{\mathrm{1}}{{q}^{\mathrm{2}}}-\frac{\mathrm{1}}{{q}^{\mathrm{2}}}\frac{{K}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right)}{{K}_{\mathrm{0}}\left(qR\right)}\right)\\ \text{(26)}& & =\frac{\mathrm{1}}{s}\left(\mathrm{1}-\frac{{K}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right)}{{K}_{\mathrm{0}}\left(qR\right)}\right).\end{array}$
From Eq. (8), the encounter probability ${P}_{\mathrm{en}}\left(t;{\mathbit{x}}_{\mathrm{0}},R\right)=\mathrm{1}-S\left(t;{\mathbit{x}}_{\mathrm{0}},R\right)$, and from Eq. (9) the encounter volume is
$\begin{array}{ll}V\left(t;R\right)& ={\int }_{{R}^{\mathrm{2}}}{P}_{\mathrm{en}}{d}^{\mathrm{2}}{\mathbit{x}}_{\mathrm{0}}={\int }_{\mathrm{0}}^{\mathrm{2}\mathit{\pi }}{\int }_{R}^{\mathrm{\infty }}{P}_{\mathrm{en}}\phantom{\rule{0.125em}{0ex}}{r}_{\mathrm{0}}\phantom{\rule{0.125em}{0ex}}\mathrm{d}{r}_{\mathrm{0}}\phantom{\rule{0.125em}{0ex}}\\ \text{(27)}& & =\mathrm{2}\mathit{\pi }{\int }_{R}^{\mathrm{\infty }}\left[\mathrm{1}-S\left(t;{r}_{\mathrm{0}},R\right)\right]{r}_{\mathrm{0}}\phantom{\rule{0.125em}{0ex}}\mathrm{d}{r}_{\mathrm{0}}.\end{array}$
We now take the Laplace transform of the encounter number to get
$\begin{array}{ll}\stackrel{\mathrm{‾}}{V}\left(s;R\right)& ={\int }_{\mathrm{0}}^{\mathrm{\infty }}{e}^{-\mathrm{st}}V\left(t;R\right)\mathrm{d}t=\mathrm{2}\mathit{\pi }{\int }_{R}^{\mathrm{\infty }}\left[\frac{\mathrm{1}}{s}-\stackrel{\mathrm{‾}}{S}\left(s;R\right)\right]{r}_{\mathrm{0}}\phantom{\rule{0.125em}{0ex}}\mathrm{d}{r}_{\mathrm{0}}\phantom{\rule{0.125em}{0ex}}\\ & =\mathrm{2}\mathit{\pi }{\int }_{R}^{\mathrm{\infty }}\frac{{K}_{\mathrm{0}}\left(q{r}_{\mathrm{0}}\right)}{{K}_{\mathrm{0}}\left(qR\right)}\frac{{r}_{\mathrm{0}}}{s}\mathrm{d}{r}_{\mathrm{0}}\\ & =\frac{\mathrm{2}\mathit{\pi }}{s{K}_{\mathrm{0}}\left(qR\right)}\left[-\frac{{r}_{\mathrm{0}}}{q}{K}_{\mathrm{1}}\left(q{r}_{\mathrm{0}}\right)\right]{|}_{R}^{\mathrm{\infty }}\\ \text{(28)}& & =\frac{\mathrm{2}\mathit{\pi }R}{sq}\frac{{K}_{\mathrm{1}}\left(qR\right)}{{K}_{\mathrm{0}}\left(qR\right)}=\frac{\mathrm{2}\mathit{\pi }R}{{s}^{\mathrm{3}/\mathrm{2}}\phantom{\rule{0.125em}{0ex}}{\mathit{\kappa }}^{-\frac{\mathrm{1}}{\mathrm{2}}}}\frac{{K}_{\mathrm{1}}\left(\sqrt{\frac{s}{\mathit{\kappa }}}R\right)}{{K}_{\mathrm{0}}\left(\sqrt{\frac{s}{\mathit{\kappa }}}R\right)}\end{array}$
where we used ${\int }_{\mathrm{0}}^{\mathrm{\infty }}{e}^{-\mathrm{st}}\mathrm{d}t=\frac{\mathrm{1}}{s}$, $\int {K}_{\mathrm{0}}\left(z\right)z\phantom{\rule{0.125em}{0ex}}\mathrm{d}z=-z{K}_{\mathrm{1}}\left(z\right)$, and ${lim}_{z\to \mathrm{\infty }}{K}_{\mathrm{1}}\left(z\right)=$0.
The explicit connection between the encounter volume and diffusivity is thus given by the inverse Laplace transform of the above expression (28),
$\begin{array}{}\text{(29)}& V\left(t;R\right)={L}^{-\mathrm{1}}\left\{\stackrel{\mathrm{‾}}{V}\left(s;R\right)\right\}.\end{array}$
Although numerically straightforward to evaluate, a non-integral analytic form does not exist for this inverse Laplace transform. To better understand the connection between V and κ and the growth of V with time, we next look at the asymptotic limits of small and large time. The small-t limit is transparent, while the long-t limit is more involved.
• a.
small-t asymptotics
In the small-t limit, the corresponding Laplace coordinate s is large, giving
$\begin{array}{}\text{(30)}& \stackrel{\mathrm{‾}}{V}\left(s;R\right)\phantom{\rule{0.125em}{0ex}}\sim \mathrm{2}\mathit{\pi }R{\mathit{\kappa }}^{\frac{\mathrm{1}}{\mathrm{2}}}\frac{\mathrm{1}}{{s}^{\mathrm{3}/\mathrm{2}}}\end{array}$
because ${lim}_{z\to \mathrm{\infty }}\frac{{K}_{\mathrm{1}}\left(z\right)}{{K}_{\mathrm{0}}\left(z\right)}=\mathrm{1}$. Noting that ${L}^{-\mathrm{1}}\left\{{s}^{-\frac{\mathrm{3}}{\mathrm{2}}}\right\}=\frac{\mathrm{2}\sqrt{t}}{\sqrt{\mathit{\pi }}}$, the inverse Laplace transform of the above gives the following simple expression connecting the encounter volume and diffusivity at short times:
$\begin{array}{}\text{(31)}& V\left(t;R\right)\stackrel{t\to \mathrm{0}}{\mathit{⟶}}\mathrm{4}R\sqrt{\mathit{\pi }}\phantom{\rule{0.125em}{0ex}}\sqrt{\mathit{\kappa }t}.\end{array}$
• b.
large-t asymptotics
In the large-t limit, the Laplace coordinate s is small and the asymptotic expansions for K0 and K1 take the form
$\begin{array}{}\text{(32)}& & {lim}_{z\to \mathrm{0}}{K}_{\mathrm{0}}\left(z\right)=-\mathit{\gamma }-\mathrm{ln}\left(\frac{z}{\mathrm{2}}\right)+\mathrm{O}\left({\left(\frac{z}{\mathrm{2}}\right)}^{\mathrm{2}}\mathrm{ln}\left(\frac{z}{\mathrm{2}}\right)\right),\text{(33)}& & {lim}_{z\to \mathrm{0}}{K}_{\mathrm{1}}\left(z\right)=\frac{\mathrm{1}}{z}+\frac{z}{\mathrm{2}}\left[\mathrm{ln}\left(\frac{z}{\mathrm{2}}\right)+\mathit{\gamma }-\frac{\mathrm{1}}{\mathrm{2}}\right]+\mathrm{O}\left({z}^{\mathrm{3}}\mathrm{ln}z\right),\end{array}$
where γ is the Euler–Mascheroni constant, giving
$\begin{array}{}\text{(34)}& \underset{s\to \mathrm{0}}{lim}\stackrel{\mathrm{‾}}{V}\left(s;R\right)=-\frac{\mathrm{4}\mathit{\pi }\mathit{\kappa }}{{s}^{\mathrm{2}}\mathrm{ln}\left(\mathit{\tau }s\right)}-\frac{\mathit{\pi }{R}^{\mathrm{2}}}{s}+\mathrm{O}\left(\frac{\mathrm{1}}{s\mathrm{ln}\left(\mathit{\tau }s\right)}\right),\end{array}$
where
$\begin{array}{}\text{(35)}& \mathit{\tau }=\frac{{R}^{\mathrm{2}}{e}^{\mathrm{2}\mathit{\gamma }}}{\mathrm{4}\mathit{\kappa }}.\end{array}$
We now need to take an inverse Laplace transform of $\stackrel{\mathrm{‾}}{V}$. The second term on the right-hand side gives ${L}^{-\mathrm{1}}\left\{\frac{\mathit{\pi }{R}^{\mathrm{2}}}{s}\right\}=\mathit{\pi }{R}^{\mathrm{2}}$. Llewelyn Smith (2000) discusses the literature for inverse Laplace transforms of the form (sαln s)−1 for small s. For our problem, the discussion in Olver (1974, Chap. 8, Sect. 11.4) is the most helpful approach. His result (11.13), discarding the exponential term which is not needed here, shows that the inverse Laplace transform of ${\left({s}^{\mathrm{2}}\mathrm{ln}s\right)}^{-\mathrm{1}}$ has the asymptotic expansion
$\begin{array}{}\text{(36)}& {L}^{-\mathrm{1}}\left\{\frac{\mathrm{1}}{{s}^{\mathrm{2}}\mathrm{ln}s\phantom{\rule{0.125em}{0ex}}}\right\}\stackrel{t\to \mathrm{\infty }}{\mathit{⟶}}-t\left(\frac{\mathrm{1}}{\mathrm{ln}t}+\frac{\mathrm{1}-\mathit{\gamma }}{\left(\mathrm{ln}t{\right)}^{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}}+\mathrm{O}\left(\left(\mathrm{ln}t{\right)}^{-\mathrm{3}}\right)\right).\end{array}$
Using ${L}^{-\mathrm{1}}\left\{F\left(\mathit{\tau }s\right)\right\}=\frac{\mathrm{1}}{\mathit{\tau }}f\left(t/\mathit{\tau }\right)$, we thus obtain the desired connection between the encounter volume and diffusivity at long times:
$\begin{array}{ll}& V\left(t;R\right)\stackrel{t\to \mathrm{\infty }}{\mathit{⟶}}\mathrm{4}\mathit{\pi }\mathit{\kappa }t\left(\frac{\mathrm{1}}{\mathrm{ln}\frac{t}{\mathit{\tau }}}+\frac{\mathrm{1}-\mathit{\gamma }}{\left(\mathrm{ln}\frac{t}{\mathit{\tau }}{\right)}^{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}}\right)-\mathit{\pi }{R}^{\mathrm{2}}\\ \text{(37)}& & \phantom{\rule{1em}{0ex}}+\mathrm{O}\left(\frac{t}{\left(\mathrm{ln}\frac{t}{\mathit{\tau }}{\right)}^{\mathrm{3}}}\right)+\mathrm{O}\left(\frac{\mathrm{1}}{\mathrm{ln}\frac{t}{\mathit{\tau }}}.\right).\end{array}$
Physically, the timescale τ (Eq. 35) defines the time at which the dispersion of random particles, D=4κτ, is comparable to the volume of the encounter sphere, i.e., R2e2γπR2 in 2-D. Thus for tτ, particles are coming to the encounter sphere “from far away.”
For practical applications, it is sufficient to only keep the leading-order term of the expansion, yielding a simpler connection between encounter volume and diffusivity,
$\begin{array}{}\text{(38)}& V\left(t;R\right)\stackrel{t\to \mathrm{\infty }}{\mathit{⟶}}\frac{\mathrm{4}\mathit{\pi }\mathit{\kappa }t}{\mathrm{ln}\frac{t}{\mathit{\tau }}}+\mathrm{O}\left(\frac{t}{\left(\mathrm{ln}\frac{t}{\mathit{\tau }}{\right)}^{\mathrm{2}}}\right).\end{array}$
Note again that the diffusivity on the right-hand side of Eqs. (28)–(29), (31) and (38) is κmoving, which is equal to 2κstationary.
## 2.4 Numerical tests of the derived formulas in 1-D and 2-D
Before applying our results to the realistic oceanic flow, we numerically tested the accuracy of the derived formulas in idealized settings by numerically simulating a random walk motion in 1-D and 2-D, as described in the beginning of Sects. 2.1 and 2.2, respectively. We then computed the encounter number and encounter volume using definitions (2)–(3), and compared the result with the derived exact formulas (18) and (28)–(29) and with the asymptotic formulas (31) and (38). Note that although formulas (28)–(29) are exact, the inverse Laplace transform still needs to be evaluated numerically and thus is subject to numerical accuracy, round-off errors, etc.; these numerical errors are, however, small, and we will refer to numerical solutions of (28)–(29) as “exact,” as opposed to the asymptotic solutions (31) and (38).
Figure 3Comparison between theoretical expression (red, green, blue) and numerical estimates (black) of the encounter volume for a random walk in 1-D (a) and 2-D (b). In both, κ=5 and Δt=05. In 2-D, τ≅20.
The comparison between numerical simulations and theory is shown in Fig. 3. Because the numerically simulated random walk deviates significantly from the diffusive regime over short (< O(100Δt)) timescales, the agreement between numerical simulation and theory is poor at those times in both 1-D and 2-D. Once the random walkers have executed > 100 time steps, however, the dispersion reaches the diffusive regime, and the agreement between the theory (red) and numerical simulation (black) rapidly improves for both the 1-D and 2-D cases, with the two curves approaching each other at long times. In 2-D, the long-time asymptotic formula (38) works well at long times, tτ, as expected. The 2-D short-time asymptotic formula (green) agrees well with the exact formula (red) at short times but not with the numerical simulations (black) for the same reason as discussed above, i.e., because the numerically simulated random walk has not yet reached the diffusive regime at those times.
3 Application to the altimetric velocities in the Gulf Stream region
Sea surface height measurements made from altimetric satellites provide nearly global estimates of geostrophic currents throughout the World Oceans. These velocity fields, previously distributed by AVISO, are now available from the Copernicus Marine and Environment Monitoring Service (CMEMS) website (http://marine.copernicus.eu/), both along satellite tracks and as a gridded mapped product in both near-real and delayed time. Here we use the delayed-time gridded maps of absolute geostrophic velocities with 1∕4 deg spatial resolution and a temporal step of 1 day, and focus our attention on the Gulf Stream extension region of the North Atlantic Ocean. There, the Gulf Stream separates from the coast and starts to meander, shedding cold- and warm-core Gulf Stream rings from its southern and northern flanks. These rings are among the strongest mesoscale eddies in the ocean. However, their coherence, interaction with each other and with other flow features, and their contribution to transport, stirring and mixing are still not completely understood (Bower et al., 1985; Cherian and Brink, 2016).
Figure 4Encounter volume (a, b), exact diffusivity (c, d), long-time diffusivity (e, f) and diffusive timescale (g, h) for the full flow (a, c, e, g) and for the eddy component of the flow (b, d, f, h). White shows land and the thick black curve shows the coastline. The encounter volume was computed on 11 January 2015 over 90 days with an encounter radius of 30 km.
Maps showing the encounter volume for fluid parcel trajectories in the region, and the corresponding diffusivity estimates (Fig. 4), could be useful both for understanding and interpreting the transport properties of the flow, as well as for benchmarking and parameterization of eddy effects in numerical models. In our numerical simulations, trajectories were released on a regular grid with $\mathrm{d}x=\mathrm{d}y\cong \phantom{\rule{0.125em}{0ex}}\mathrm{10}$ km on 11 January 2015 and were integrated forward in time for 90 days using a fifth-order variable-step Runge–Kutta integration scheme with bi-linear interpolation between grid points in space and time. The encounter radius was chosen to be R=30 km in both the zonal and meridional directions, i.e., about a third of a radius of a typical Gulf Stream ring. Similar parameter values were used in Rypina and Pratt (2017), although our new simulation was carried out using more recent 2015 velocities instead of 1997 as in that paper.
The encounter volume field, shown in the top left panel of Fig. 4, highlights the overall complexity of the flow and identifies a variety of features with different mixing potential, most notably several Gulf Stream rings with spatially small low-V (blue) cores and larger high-V (yellow) peripheries. Although the azimuthal velocities and vorticity-to-strain ratio are large within the rings, the coherent core regions with inhibited mixing potential are small, suggesting that the coherent transport by these rings might be smaller than anticipated from the Eulerian diagnostics such as the Okubo–Weiss or closed-streamline criteria (Chelton et al., 2011; Abernathey and Haller, 2018). On the other hand, the rings' peripheries, where the mixing potential is elevated compared to the surrounding fluid, cover a larger geographical area than the cores. Thus, while rings inhibit mixing within their small cores, the enhanced mixing on the periphery might be their dominant effect. This is consistent with the results from Rypina and Pratt (2017), but a more thorough analysis is needed to test this hypothesis. Notably, the encounter volume is also large along the northern and southern flanks of the Gulf Stream jet, with two separate yellow curves running parallel to each other and a valley in between (although the curves could not be traced continuously throughout the entire region). This enhanced mixing on both flanks of the Gulf Stream extension current is reminiscent of chaotic advection driven by the tangled stable and unstable manifolds at the sides of the jet (del-Castillo-Negrete and Morrison, 1993; Rogerson et al., 1999; Rypina et al., 2007; Rypina and Pratt, 2017), and is also consistent with the existence of critical layers (Kuo, 1949; Ngan and Sheppard, 1997).
We now apply the asymptotic formula (38) to convert the encounter volume to diffusivity. Because Eq. (38) is not invertible analytically, we converted V to κ numerically using a look-up table approach. More specifically, we used Eq. (38) to compute theoretically predicted V values at time T= 90 days for a wide range of κs spanning all possible oceanographic values from 0 up to 109 cm2 s−1, and we used the resulting look-up table to assign the corresponding κ values to V values in the third row of Fig. 4. Note that, instead of the long-time asymptotic formula (38) (as in the third row of Fig. 4), it is also possible to use the exact formulas (28)–(29) to convert V to κ via a look-up table approach. The resulting exact diffusivities, shown in the second row of Fig. 4, are similar to the long-time asymptotic values (third row). Because both exact and asymptotic formulas were derived under the assumption of a diffusive random walk, neither should work well in regions with a non-diffusive behavior. The asymptotic formula has the advantage of being simpler and it also provides for a numerical estimate of the “long-time-limit” timescale, τ, shown in the bottom row of Fig. 4.
As expected, the diffusivity maps in the second and third rows of Fig. 4, which resulted from converting V to κ using (28)–(29) or (38), respectively, have the same spatial variability as the V-map, with large κ at the peripheries of the Gulf Stream rings and at the flanks of the Gulf Stream and small κ at the cores of the rings, near the Gulf Stream centerline and far away from the Gulf Stream current, where the flow is generally slower. The diffusivity values range from O(105) to O(107) cm2 s−1. Using the 1971 Okubo's diffusivity diagram and scaling law, κOkubo[cm2 s−1]= 0.0103 L[cm]1.15, our diffusivity values correspond to spatial scales from 10 to 650 km, thus spanning the entire mesoscale range. This is not surprising considering the Lagrangian nature of our analysis, where trajectories inside the small (<50 km) low-diffusion eddy cores stay within the cores for the entire integration duration (90 days), whereas trajectories in the high-diffusivity regions near the ring peripheries and at the flanks of the Gulf Stream jet cover large distances, sometimes >650 km, over 90 days.
The performances of the exact and asymptotic diffusive formulas vary greatly throughout the domain, with better/poorer performances in high-/low-V areas. This is because in the low-V areas, the behavior of fluid parcels is non-diffusive, so the diffusive theoretical formulas work poorly. The breakdown of the long-time asymptotic formula is evident in the fourth row of Fig. 4, which shows the corresponding long-time scales, τ (from Eq. 35), throughout the domain. As suggested by our 2-D random walk simulations, the long-time asymptotic diffusive formula only works well when tτ, but in reality τ values are < 9 days (1∕10 of our integration time) only in the highest-V regions, and are much larger everywhere else, reaching values of ≅90 days within the cores of the Gulf Stream rings. More detailed comparison between theory, both exact and asymptotic, and numerical V(t) is shown in Fig. 5 for three reference trajectories that are initially located inside the core, on the periphery, and outside of a Gulf Stream ring (black, red, and blue, respectively) centered at approximately 36.8 N and 60 W. Clearly, the diffusive theory works poorly for the trajectory inside the eddy core (black curve). The agreement is better for the blue curves and even better for the red curves, corresponding to trajectories outside and on the periphery of the eddy, although deviations between the theory and numerics are still visible, raising questions about the general validity of the diffusive approximation in ocean flows on timescales of a few months.
Figure 5Comparison between numerically computed V (solid) and the exact (dotted) and long-time diffusive formulas (dashed) with the corresponding κ for the three reference trajectories located in the core, periphery and outside (black, red, blue) of a Gulf Stream ring.
The non-diffusive nature of the parcel motion over 90 days is because ocean eddies have finite lengthscales and timescales, so a variety of different transport regimes generally occur before separating parcels become uncorrelated and transport becomes diffusive, as in a random walk. At very short times the motion of fluid parcels is largely governed by the local velocity shear, so the resulting transport regime is ballistic, i.e., DT2 and VT (Rypina and Pratt, 2017). At longer times, when velocity shear can no longer be assumed constant in space and time, the regime may transition to a local Richardson regime (i.e., Dt3), where separation at a given scale is governed by the local features of a comparable scale (Richardson, 1926; Bennett, 1984; Beron-Vera and LaCasce, 2016), or to a non-local chaotic-advection spreading regime (i.e., D∝exp(λt)), where separation is governed by the large-scale flow features (Bennett, 1984; Rypina et al., 2010; Beron-Vera and LaCasce, 2016). The kinetic energy spectrum of a flow indicates whether a local or non-local regime will be relevant. The chaotic transport regime is generally expected to occur in mesoscale-dominated eddying flows, such as, for example, AVISO velocity fields, over timescales of a few eddy winding times. At times long enough for particles to sample many different flow features, such as Gulf Stream meanders or mesoscale eddies in the AVISO fields, the velocities of the neighboring particles become completely uncorrelated, and transport finally approaches the diffusive regime. With the mesoscale eddy turnover time being on the order of several weeks, it often takes longer than 90 days to reach the diffusive regime.
A number of diffusivity estimates other than Okubo's have been made for the Gulf Stream extension region (e.g., Zhurbas and Oh, 2004; LaCasce, 2008; Rypina et al., 2012; Abernathey and Marshall, 2013; Klocker and Abernathey, 2014; or Cole et al., 2015). These estimates are based on surface drifters (Zhurbas and Oh, 2004; LaCasce, 2008; Rypina et al., 2012), satellite-observed velocity fields (Abernathey and Marshall, 2013; Klocker and Abernathey, 2014; Rypina et al., 2012), and Argo float observations (Cole et al., 2015), and they use either the spread of drifters or the evolution of simulated or observed tracer fields to deduce diffusivity. The resulting diffusivities are spatially varying and span 2 orders of magnitude, from 2×104 m2 s−1 in the most energetic regions in the immediate vicinity of the Gulf Stream and its extension, to 103 m2 s−1 in less energetic areas, to 200 m2 s−1 in the coastal areas of the Slope Sea. Diffusivity estimates vary significantly depending on the initial tracer distribution used (Abernathey and Marshall, 2013) and depend on whether the suppression by the mean current has been taken into account (Klocker and Abernathey, 2014). The diffusivity tensor has also been shown to be anisotropic, with a large anisotropy ratio near the Gulf Stream (Rypina et al., 2012). Data resolution and coverage, as well as the choice of timescales and lengthscales also play a role in defining κ value (Cole at al., 2015). All of these issues complicate the reconciliation of different diffusivity estimates. Nevertheless, ignoring these complications for a moment, and avoiding the smallest diffusivities in those geographical areas of Fig. 4 where the diffusive approximation is invalid, our O(103 m2 s−1) encounter-volume-based diffusivity estimates tend to be in the middle of the range of available estimates for the western North Atlantic. Although not inconsistent with other estimates, the encounter volume method did not predict diffusivities to reach values of 104 m2 s−1 anywhere within the considered geographical domain.
Because the action of the real ocean velocity field on drifters or tracers is generally not exactly diffusive, all methods simply fit the diffusive approximation to the corresponding variable of interest, such as particle dispersion, tracer variance, or, in our case, encounter volume. The analytic form of the diffusive approximation is, however, different for different variables and different flow regimes. For example, for a diffusive random walk regime, dispersion grows linearly with time, whereas the growth of the encounter volume is nonlinear, as defined by Eq. (38). This generally leads to different diffusivity estimates resulting from different methods. In other words, the diffusivity value that fits best to the observed particle dispersion at 90 days does not necessarily provide the best fit to the observed encounter volume at 90 days, and vice versa.
To illustrate this more rigorously, we consider a linear strain flow,
$\begin{array}{ll}& u=\mathit{\alpha }\phantom{\rule{0.125em}{0ex}}x,\\ & v=-\mathit{\alpha }\phantom{\rule{0.125em}{0ex}}y,\end{array}$
with a constant strain coefficient α. The particle trajectories are given by $x={x}_{\mathrm{0}}{e}^{\mathit{\alpha }\mathrm{t}},\phantom{\rule{0.125em}{0ex}}y={y}_{\mathrm{0}}{e}^{-\mathit{\alpha }\mathrm{t}}$ where x0, y0 are particles' initial positions. The dispersion of a small cluster of particles that are initially uniformly distributed within a small square of side length 2dx is
$D=〈{\left(X-\stackrel{\mathrm{‾}}{X}\right)}^{\mathrm{2}}+{\left(Y-\stackrel{\mathrm{‾}}{Y}\right)}^{\mathrm{2}}〉,$
where $X=x-{x}_{\mathrm{0}}$ and $Y=y-{y}_{\mathrm{0}}$ are displacements of particles from their initial positions and the overbar denotes the ensemble mean. Since the linear strain velocity remains unchanged in a reference frame moving with a particle, without loss of generality we can restrict our attention to a cluster that is initially centered at the origin, so $\stackrel{\mathrm{‾}}{X}=\stackrel{\mathrm{‾}}{Y}=\mathrm{0}$. In the long-time limit, when ${e}^{\mathit{\alpha }\mathrm{t}}\gg \mathrm{1}\gg {e}^{-\mathit{\alpha }\mathrm{t}}$, the dispersion becomes
$D=\mathrm{1}/\mathrm{3}\mathrm{d}{x}^{\mathrm{2}}{e}^{\mathrm{2}\mathit{\alpha }t}.$
If one is using a diffusive fit,
$D=\mathrm{4}{\mathit{\kappa }}_{\mathrm{D}}t,$
to approximate diffusivity, then the resulting diffusivity is
${\mathit{\kappa }}_{\mathrm{D}}=\frac{\mathrm{d}{x}^{\mathrm{2}}{e}^{\mathrm{2}\mathit{\alpha }t}}{\mathrm{12}t}.$
On the other hand, the encounter volume for the linear strain flow is
$V=\phantom{\rule{0.125em}{0ex}}\mathrm{2}\mathit{\alpha }{R}^{\mathrm{2}}t,$
whereas the long-time diffusive fit is
$V=\frac{\mathrm{4}\mathit{\pi }{\mathit{\kappa }}_{\mathrm{V}}t}{\mathrm{ln}t/\mathit{\tau }},$
yielding
${\mathit{\kappa }}_{\mathrm{V}}=-\frac{\mathit{\alpha }{R}^{\mathrm{2}}\mathrm{ProductLog}\left(-\frac{\mathit{\pi }{e}^{\mathrm{2}\mathit{\gamma }}}{\mathrm{2}\mathit{\alpha }t}\right)}{\mathrm{2}\mathit{\pi }}$
where the function ProductLog(z), also known as the Lambert function, is a solution to z=wew. Because κD is exponential in time, while κV is not, κD always becomes larger than κV at large t.
Of course, real oceanic flows are more complex than the simple linear strain example. However, for flows that are in a state of chaotic advection, exponential separation between neighboring particles will occur and the dispersion will grow exponentially in time, as in the linear strain example. Although we do not have a formula for the encounter volume for a chaotic advection regime, the linear strain example suggests that the encounter volume growth will likely be slower than exponential. Thus, for a chaotic advection regime, the dispersion-based diffusivity could be expected to be larger than the encounter-volume-based diffusivity. This can potentially explain the smaller encounter-volume-based diffusivity values in Fig. 4 compared to other available estimates from the literature. Numerical simulations (not shown) using an analytic Duffing oscillator flow, which features chaotic advection, indeed produced smaller encounter-volume-based diffusivity than dispersion-based diffusivity, in agreement with our arguments above. The AVISO velocities are dominated by the mesoscales rather than submesoscales, and the 90-day time interval is about a few mesoscale eddy winding times; thus, this flow satisfies all the pre-requisites for the chaotic advection to occur. Finally, the particle trajectories that we used to produce Fig. 4 can be grouped into small clusters (we are using the encounter radius R= 30 km as a cluster radius for consistency) to estimate their dispersion and infer diffusivity from its slope. Consistent with our arguments above, the resulting dispersion-based diffusivities in Fig. 6 are larger than the encounter-volume-based diffusivities in Fig. 4 and reach values of O(104 m2 s−1) in the energetic regions of the Gulf Stream and its extension, in agreement with the previous diffusivity estimates from the literature. In applications where the number of encounters is a more important quantity than the spread of particles, the encounter-volume-based diffusivity might be a more appropriate estimate to use.
Figure 6Dispersion-based diffusivity, κD.
In the left panels of Fig. 4 we used the full velocity field to advect trajectories, so both the mean and the eddies contributed to the resulting encounter volumes and the corresponding diffusivities. But what is the contribution of the eddy field alone to this process? To answer this question, we have performed an additional simulation in the spirit of Rypina et al. (2012), where we advected trajectories using the altimetric time-mean velocity field, and then subtracted the resulting encounter volume, Vmean, from the full encounter volume, V. The result characterizes the contribution of eddies, although strictly speaking ${V}_{\mathrm{eddy}}\ne V-{V}_{\mathrm{mean}}$ because of nonlinearity. Note also that because we are interested in the Lagrangian-averaged effects of eddies following fluid parcels, Veddy cannot be estimated by simply advecting particles by the local eddy field alone (see an extended discussion of this effect in Rypina et al., 2012). Not surprisingly, the eddy-induced encounter volumes (upper right panel of Fig. 4) are smaller than the full encounter numbers, with the largest decrease near the Gulf Stream current, where both the mean velocity and the mean shear are large. In other geographical areas, specifically at the peripheries of the Gulf Stream rings, the decrease in V is less significant, so the resulting map retains its overall qualitative spatial structure. The same is true for the diffusivities in the second and third rows of Fig. 4. The overall spatial structure of the eddy diffusivity is preserved and matches that in the left panels, but the values decrease, with the largest differences near the Gulf Stream, where some diffusivity values are now O(106) instead of O(107) cm2 s−1. In contrast, κ only decreases, on average, by a factor of 2 (instead of an order of magnitude) near the peripheries of the Gulf Stream rings. The long-time diffusive timescale τ generally increases, and the ratio tτ generally decreases throughout the domain, but the long-time asymptotic formula (38) still works well in high-V regions, specifically on the peripheries of the Gulf Stream rings where τ is still significantly less than t.
4 Discussion and summary
With many new diagnostics being developed for characterizing mixing in fluid flows, it is important to connect them to the well-established conventional techniques. This paper is concerned with understanding the connection between the encounter volume, which quantifies the mixing potential of the flow, and diffusivity, which quantifies the intensity of the down-gradient transfer of properties. Intuitively, both quantities characterize mixing, and it is natural to expect a relationship between them, at least in some limiting sense. Here, we derived this anticipated connection for a diffusive process, and we showed how this connection can be used to produce maps of spatially varying diffusivity and to gain new insights into the mixing properties of eddies and the particle spreading regime in realistic oceanic flows.
When applied to the altimetry-based velocities in the Gulf Stream region, the encounter volume and diffusivity maps show a number of interesting physical phenomena related to transport and mixing. Of particular interest are the transport properties of the Gulf Stream rings. The materially coherent Lagrangian cores of these rings, characterized by very small diffusivity, are smaller than expected from earlier Eulerian diagnostics (Chelton et al., 2011). The periphery regions with enhanced diffusivity are, on the other hand, large, raising a question about whether the rings, on average, act to preserve coherent blobs of water properties or to speed up the mixing. The encounter volume, through the derived connection to diffusivity, might provide a way to address this question and to quantify the two effects, clarifying the role of eddies in transport and mixing.
Our encounter-volume-based diffusivity estimates are within the range of other available estimates from the literature, but are not among the highest. We provided an intuitive explanation for why the encounter-volume-based diffusivities might be smaller than the dispersion-based diffusivities, and we supported our explanation with theoretical developments based on a linear strain flow, and with numerical simulations. We note that in problems where the encounters between particles are of interest, rather than the particle spreading, the encounter-volume-based diffusivities would be more appropriate to use than the conventional dispersion-based estimates.
Reliable data-based estimates of eddy diffusivity are needed for parameterizations in numerical models. The conventional estimation of diffusivity from Lagrangian trajectories by calculating particle dispersion requires large numbers of drifters or floats (LaCasce, 2008). It would be useful to have a technique that would work with fewer instruments. The derived connection between encounter volume and diffusivity might help in achieving this goal. Specifically, one could imagine that if an individual drifting buoy were equipped with an instrument that would measure its encounter volume – the volume of fluid that came in contact with the buoy over time t – then the resulting encounter volume could be converted to diffusivity using the derived connection. This would allow estimation of diffusivity using a single instrument.
In the field of social encounters, it is becoming possible to construct large data sets by tracking cell phones, smart transit cards (Sun et al., 2013), and bank notes (Brockmann et al., 2006). As was the case for the Gulf Stream trajectories, some of the behavior appears to be diffusive and some not so. Where diffusive/random walk behavior is relevant, it may be easier to accumulate data on close encounters rather than on other metrics using, for example, autonomous vehicles and instruments that are able, through local detection capability, to count foreign objects that come within a certain range.
Data availability
Data availability.
The velocity fields that we used in Sect. 3 are publicly available from the CMEMS website: http://marine.copernicus.eu/services-portfolio/access-to-products/?option=com_csw&view=details&product_id=SEALEVEL_GLO_PHY_L4_REP_OBSERVATIONS_008_047 (CMEMS, 2018).
Competing interests
Competing interests.
The authors declare that they have no conflict of interest.
Acknowledgements
Acknowledgements.
This work was supported by NSF grants OCE-1558806 and EAR-1520825, and NASA grant NNX14AH29G.
Edited by: Ana M. Mancho
Reviewed by: two anonymous referees
References
Abernathey, R. P. and Haller, G.: Transport by Lagrangian Vortices in the Eastern Pacific, J. Phys. Oceanogr., in press, https://doi.org/10.1175/JPO-D-17-0102.1, 2018.
Abernathey, R. P. and Marshall, J.: Global surface eddy diffusivities derived from satellite altimetry, J. Geophys. Res.-Oceans, 118, 901–916, https://doi.org/10.1002/jgrc.20066, 2013.
Bennett, A. F.: Relative dispersion: Local and nonlocal dynamics, J. Atmos. Sci., 41, 1881–1886, https://doi.org/10.1175/1520-0469(1984)041<1881:RDLAND>2.0.CO;2, 1984.
Beron-Vera, F. J. and LaCasce, J. H.: Statistics of simulated and observed pair separation in the Gulf of Mexico, J. Phys. Oceanogr., 46, 2183–2199, https://doi.org/10.1175/JPO-D-15-0127.1, 2016.
Bower, A. S., Rossby, H. T., and Lillibridge, J. L.: The Gulf Stream-Barrier or blender?, J. Phys. Oceanogr., 15, 24–32, 1985.
Brockmann, D., Hufnagel, L., and Geisel, T.: The scaling laws of human travel, Nature, 439, 462–465, 2006.
Carslaw, H. S. and Jaeger, J. C.: On Green's functions in the theory of heat conduction, B. Am. Math. Soc., 45, 407–413, 1939.
Cherian, D. A. and Brink, K. H.: Offshore Transport of Shelf Water by Deep-Ocean Eddies, J. Phys. Oceanogr., 46, 3599–3621, https://doi.org/10.1175/JPO-D-16-0085.1, 2016.
Chelton, D. B., Schlax, M. G., and Samelson, R. M.: Global observations of nonlinear mesoscale eddies, Prog. Oceanogr., 91, 167–216, 2011.
Cole, S. T., Wortham, C., Kunze, E., and Owens, W. B.: Eddy stirring and horizontal diffusivity from Argo float observations: Geographic and depth variability, Geophys. Res. Lett., 42, 3989–3997, https://doi.org/10.1002/2015GL063827, 2015.
Copernicus Marine and Environment Monitoring Service (CMEMS): Global ocean gridded L4 sea surface heights and derived variables reprocessed (1993–ongoing), available at: http://marine.copernicus.eu/services-portfolio/access-to-products/?option=com_csw&view=details&product_id=SEALEVEL_GLO_PHY_L4_REP_OBSERVATIONS_008_047 , last access: 23 March 2018.
del-Castillo-Negrete, D. and Morrison, P. J.: Chaotic transport of Rossby waves in shear flow, Phys. Fluids A, 5 , 948–965, 1993.
Davis, R. E.: Observing the general circulation with floats, Deep-Sea Res., 38, 531–571, 1991.
Kamenkovich, I., Rypina, I. I., and Berloff, P.: Properties and Origins of the Anisotropic Eddy-Induced Transport in the North Atlantic, J. Phys. Oceanogr., 45, 778–791, https://doi.org/10.1175/JPO-D-14-0164.1, 2015.
Klocker, A. and Abernathey, R.: Global Patterns of Mesoscale Eddy Properties and Diffusivities, J. Phys. Oceanogr., 44, 1030–1046, https://doi.org/10.1175/JPO-D-13-0159.1, 2014.
Kuo, H.: Dynamic instability of two-dimensional non-divergent flow in a barotropic atmosphere, J. Meteorol., 6, 105–122, 1949.
LaCasce, J. H.: Statistics from Lagrangian observations, Prog. Oceanogr., 77, 1–29, https://doi.org/10.1016/j.pocean.2008.02.002, 2008.
LaCasce, J. H., Ferrari, R., Marshall, J., Tulloch, R., Balwada, D., and Speer, K.: Float-derived isopycnal diffusivities in the DIMES experiment, J. Phys. Oceanogr., 44, 764–780, https://doi.org/10.1175/JPO-D-13-0175.1, 2014.
Ledwell, J. R., Montgomery, E. T., Polzin, K. L., St. Laurent, L. C., Schmitt, R. W., and Toole, J. M.: Evidence for enhanced mixing over rough topography in the abyssal ocean, Nature, 403, 179–182, https://doi.org/10.1038/35003164, 2000.
Llewellyn Smith, S. G.: The asymptotic behaviour of Ramanujan's integral and its application to two-dimensional diffusion-like equations, Eur. J. Appl. Math., 11, 13–28, 2000.
Munk, W. H.: Abyssal Recipes, Deep-Sea Res., 13, 707–730, 1966.
Ngan, K. and Shepherd, T. G.: Chaotic mixing and transport in Rossby wave critical layers, J. Fluid Mech., 334, 315–351, 1997.
Okubo, A.: Ocean diffusion diagram, Deep-Sea Res., 18, 789–802, 1971.
Olver, F. W. J.: Asymptotics and Special Functions, edited by: W. Rheinbolt, Academic Press, ISBN: 9781483267449, 588 pp., 1974.
Rogerson, A. M., Miller, P. D., Pratt, L. J., and Jones, C. K. R. T.: Lagrangian Motion and Fluid Exchange in a Barotropic Meandering Jet, J. Phys. Oceanogr., 29, 2635–2655, 1999.
Richardson, L. F.: Atmospheric diffusion on a distanceneighbour graph, P. R. Soc. London, A110, 709–737, https://doi.org/10.1098/rspa.1926.0043, 1926.
Rypina, I. I. and Pratt, L. J.: Trajectory encounter volume as a diagnostic of mixing potential in fluid flows, Nonlin. Processes Geophys., 24, 189–202, https://doi.org/10.5194/npg-24-189-2017, 2017.
Rypina, I. I., Brown, M. G., Beron-Vera, F. J., Kocak, H., Olascoaga, M. J., and Udovydchenkov, I. A.: On the Lagrangian dynamics of atmospheric zonal jets and the permeability of the stratospheric polar vortex, J. Atmos. Sci., 64, 3595–3610, 2007.
Rypina, I. I., Pratt, L. J., Pullen, J., Levin, J., and Gordon, A.: Chaotic advection in an archipelago, J. Phys. Oceanogr., 40, 1988–2006, https://doi.org/10.1175/2010JPO4336.1, 2010.
Rypina, I. R., Kamenkovich, I., Berloff, P., and Pratt, L. J.: Eddy-Induced Particle Dispersion in the Near-Surface North Atlantic, J. Phys. Ocean., 42, 2206–2228, https://doi.org/10.1175/JPO-D-11-0191.1, 2012.
Rypina, I. I., Kirincich, A., Lentz, S., and Sundermeyer, M.: Investigating the eddy diffusivity concept in the coastal ocean, J. Phys. Oceangr., 46, 2201–2218, https://doi.org/10.1175/JPO-D-16-0020.1, 2016.
Sun, L., Axhausen, K. W., Der-Horng, L., and Huang, X.: Understanding metropolitan patterns of daily encounters, P. Natl. Acad. Sci. USA, 110, 13774–13779, 2013.
Sundermeyer, M. and Ledwell, J.: Lateral dispersion over the continental shelf: Analysis of dye release experiments, J. Geophys. Res., 106, 9603–9621, https://doi.org/10.1029/2000JC900138, 2001.
Vallis, G. K.: Atmospheric and Oceanic Fluid Dynamics, Cambridge University Press, 745 pp., 2006.
Visbeck, M., Marshall, J., Haine, T., and Spall, M.: Specification of eddy transfer coefficients in coarse-resolution ocean circulation models, J. Phys. Oceanogr., 27, 381–402, 1997.
Zhurbas, V. and Oh, I.: Drifter-derived maps of lateral diffusivity in the Pacific and Atlantic Oceans in relation to surface circulation patterns, J. Geophys. Res., 109, C05015, https://doi.org/10.1029/2003JC002241, 2004. |
Following toggle tip provides clarification
# Differential Equations (Metric)
## Class Details
Tom Baker
Learn, or revise, solving first order differential equations by various methods, solving second order differential equations with constant coefficients, and classifying the critical points of systems.
## Units
### First Order
Solving differential equations of the form $$dy/dx=f(x,\,y)$$. Part of this topic is found on A-level courses, and the rest is Further Maths.
Lesson
Solving differential equations of the form $$dy/dx = f(x)\,g(y)$$, by separating the variables.
Solving differential equations that can be written in the form $$d/dx(f(x,\,y))=0$$.
Solving differential equations of the form $$a\,d^2y/dx^2+b\,dy/dx+c\,y=f(x)$$. This is a Further Maths topic; courses that require Further Maths may assume it.
Classifying the critical points of systems of the form $$dx/dt = f_1(x,\,y)$$, $$dy/dt = f_2(x,\,y)$$. This is part of the First Year content of some Science and Engineering courses at Imperial. |
# Factorial!
Number Theory Level pending
Let $$m$$ and $$n$$ be positive integers satisfying $n! + 76 = m^2.$
If all the solutions of $$(m,n)$$ are $$(m_1, n_1) , (m_2, n_2) , \ldots , (m_k , n_k )$$, submit your answer as $$\displaystyle \sum_{j=1}^k (m_j + n_j)$$.
Notation: $$!$$ is the factorial notation. For example, $$8! = 1\times2\times3\times\cdots\times8$$.
× |
# Audit Updating Macro
In my code, I'm trying to manipulate data for an audit between 3 sheets in a workbook.The first block of my code is to paste the data of items I need to find for the audit from the original sheet onto the 3rd sheet by setting each row equal to the data of the original row in the first sheet. The second block is used to re-paste the data of found objects in the Audit to only have values rather then formulas.Then the code will iterate through the Audit list to check for the same values and delete those values on the list in the 3rd sheet. The 2nd sheet will have the list of found audit items being pasted in at the same time. The end result is 3 sheets, 1st being just the main list where all the data is collected, the 2nd being a list of found audit items, and the 3rd being left over items that need to be found at a later date. The code works and has a few kinks in it where the screen would be buzzing because of all of the activate lines so I was wondering if there were better ways to manipulate data between different sheets in a workbook.
Sub Update_Audit()
Dim j As Integer
Dim i As Integer
Dim k As Integer
Dim Aud_Tot As Integer
i = 2
Aud_Tot = Application.InputBox("How big is your audit", , , , , , , 1)
k = 2
Worksheets(1).Activate
Do While Cells(k, 24) <> ""
Tab_Data = Range(Cells(k, 24), Cells(k, 44)).Value
Worksheets(3).Activate
Range(Cells(k, 1), Cells(k, 21)).Value = Tab_Data
Worksheets(1).Activate
k = k + 1
Loop
Do While Cells(i, 1).Value <> "" And Not IsError(Cells(i, 2).Value)
Dataset = Range(Cells(i, 1), Cells(i, 22)).Value
Range(Cells(i, 1), Cells(i, 22)).Value = Dataset
Worksheets(2).Activate
Range(Cells(i, 1), Cells(i, 22)).Value = Dataset
Worksheets(1).Activate
For j = 2 To Aud_Tot
If CStr(Cells(j, 24).Value) = CStr(Cells(i, 2).Value) Then
Worksheets(3).Activate
Range(Cells(j, 1), (Cells(j, 22))).Delete Shift:=xlShiftUp
Worksheets(1).Activate
Exit For
End If
Next j
i = i + 1
Loop
End Sub
• Welcome to Code Review! I've fixed the indentation of the code block so as to include the final End Sub statement inside of it - code blocks ought to be indented with 4 leading spaces. If this edited code doesn't look exactly as it does in your IDE, please feel free to edit further to make it so. I hope you get good reviews! – Mathieu Guindon Jun 27 '16 at 16:54
• Regarding screen would be buzzing, add application.screenupdating=false at the beginning. – findwindow Jun 27 '16 at 20:46
## Option Explicit
That should be at the top of every VBA module you ever create. Go to Tools -> Options -> Require Variable Declaration to have it inserted automatically. It's important because it forces you to declare every variable you use, and so automatically gets you to declare types and catches any typos that creep in. Those 2 alone will prevent all sorts of problems down the line.
## Very Low Hanging Performance Fruit
VBA has 3 of these:
Application.ScreenUpdating = False
Application.EnableEvents = False
Application.Calculation = xlCalculationManual
Doing the following will vastly increase the speed of your code:
Public Sub DoThing()
Application.ScreenUpdating = False
Application.EnableEvents = False
Application.Calculation = xlCalculationManual
...
Code
...
Application.ScreenUpdating = True
Application.EnableEvents = True
Application.Calculation = xlCalculationAutomatic
End Sub
In this case, since you're relying on certain formulas to throw errors, you should probably keep Application.Calculation on xlCalculationAutomatic.
## Use the Object Model
The great power of VBA comes from its' tight integration with the Office Object Model (from were Intellisense gains its' power).
Worksheet objects, Workbook objects, Range objects, Array objects, Err (error) objects etc.
Rather than constantly activating different worksheets, put them in objects and then refer to them instead:
Dim sourceDataSheet As Worksheet
Set sourceDataSheet = Worksheets(1)
Dim foundItemsSheet As Worksheet
Set foundItemsSheet = Worksheets(2)
Dim remainingItemsSheet As Worksheet
Set remainingItemsSheet = Worksheets(3)
...
Do While sourceDataSheet.Cells(k, 24) <> ""
Tab_Data = sourceDataSheet.Range(sourceDataSheet.Cells(k, 24), sourceDataSheet.Cells(k, 44)).Value
remainingItemsSheet.Range(remainingItemsSheet.Cells(k, 1), remainingItemsSheet.Cells(k, 21)).Value = Tab_Data
k = k + 1
Loop
This also lets you do really awesome things like hold object references using With statements:
Do While sourceDataSheet.Cells(k, 24) <> ""
With sourceDataSheet
Tab_Data = .Range(.Cells(k, 24), .Cells(k, 44)).Value
End With
With remainingItemsSheet
.Range(.Cells(k, 1), .Cells(k, 21)).Value = Tab_Data
End With
k = k + 1
Loop
And now you can forget about having to keep using Activate ever again.
It also lets you re-use references, so this:
Do While Cells(i, 1).Value <> "" And Not IsError(Cells(i, 2).Value)
Dataset = Range(Cells(i, 1), Cells(i, 22)).Value
Range(Cells(i, 1), Cells(i, 22)).Value = Dataset
Worksheets(2).Activate
Range(Cells(i, 1), Cells(i, 22)).Value = Dataset
Worksheets(1).Activate
For j = 2 To Aud_Tot
If CStr(Cells(j, 24).Value) = CStr(Cells(i, 2).Value) Then
Worksheets(3).Activate
Range(Cells(j, 1), (Cells(j, 22))).Delete Shift:=xlShiftUp
Worksheets(1).Activate
Exit For
End If
Next j
i = i + 1
Loop
becomes this:
Dim startCell As Range
Dim errCheckCell As Range
Const START_COLUMN As Long = 1
Const ERR_CHECK_COLUMN As Long = 2
Const END_COLUMN As Long = 22
Dim sourceDataRange As Range
Dim pasteDataRange As Range
Set startCell = sourceDataSheet.Cells(i, START_COLUMN)
Set errCheckCell = sourceDataSheet.Cells(i, ERR_CHECK_COLUMN)
Do While startCell.Value <> "" And Not IsError(errCheckCell.Value)
With sourceDataSheet
Set sourceDataRange = .Range(.Cells(i, START_COLUMN), .Cells(i, END_COLUMN))
End With
With foundItemsSheet
Set pasteDataRange = .Range(.Cells(i, START_COLUMN), .Cells(i, END_COLUMN))
End With
Dataset = sourceDataRange
sourceDataRange = Dataset
pasteDataRange = Dataset
For j = 2 To Aud_Tot
If CStr(sourceDataSheet.Cells(j, 24).Value) = CStr(errCheckCell.Value) Then
With remainingItemsSheet
.Range(.Cells(j, 1), (.Cells(j, 22))).Delete Shift:=xlShiftUp
End With
Exit For
End If
Next j
i = i + 1
Loop
which looks a little bigger (right now, we'll get to cleaning it up later) but is much, much clearer about what's going on and where, and lets you change just one reference if and when things get moved/changed in the future.
For instance, what happens if your order of worksheets gets changed? Now, you only have to change that once, right at the start, and the rest takes care of itself.
## Tips and Tricks
finalRow - Want to find the last used row in a column?
Dim finalRow As Long
With sheetObject
finalRow = .Cells(.Rows.Count, targetColumn).End(xlUp).Row
End With
And you can then use
For k = 2 To finalRow
...
Next k
instead of that unwieldy Do While cellReference(k).Value <> ""
constants - if you're going to hard-code values (e.g. Column 1, Column 22, Worksheets(1)) then actually hard-code them. Once. In one place. So you can change it in a single go rather than having to track down every occurence of the thing (and invariably missing some and causing errors).
The proper variable for a constant value is, unsurprisingly a Constant. Standard VBA Naming Conventions use SHOUTY_SNAKE_CASE for constants. Created like so:
Option Explicit
Public Const GLOBAL_CONSTANT As Boolean = True
Private Const MODULE_CONSTANT As Long = 42
Public Sub DoThing()
Const PROCEDURE_CONSTANT As Long = 1
...
End Sub
Codenames - Every Worksheet has a codename (name) property. If you got to the properties window in the Editor, select a worksheet and type, e.g. sheetCodename in the (name) property, then you can write a procedure like so:
Public Sub DoThingWithSheet()
sheetCodename.Cells(1, 1).Value = 1
without having to declare the sheet, or assume anything about its' name, or its' position in your workbook, or anything else. The variable is just there, constant and unchanging.
(assuming we've already given your sheets the following codenames: dataSheet, foundItemsSheet, remainingItemsSheet)
Public Sub UpdateAudit()
Const TAB_START_COLUMN As Long = 24
Const TAB_END_COLUMN As Long = 44
Const TAB_PASTE_START_COLUMN As Long = 1
Const TAB_PASTE_END_COLUMN As Long = 1
Const START_ROW As Long = 2 '/ +1 for headers
Dim numItemsToAudit As Long
numItemsToAudit = Application.InputBox("How big is your audit", Type:=1)
Dim finalRow As Long
With dataSheet
finalRow = .Cells(.Rows.Count, TAB_START_COLUMN).End(xlUp).Row
End With
'/ Copy All Raw Data to "remainingItemsToFind" sheet
Dim tabData As Variant
Dim iRow As Long
For iRow = START_ROW To finalRow
With dataSheet
tabData = .Range(.Cells(iRow, TAB_START_COLUMN), Cells(iRow, TAB_END_COLUMN))
End With
With remainingItemsSheet
.Range(.Cells(iRow, TAB_PASTE_START_COLUMN), .Cells(iRow, TAB_PASTE_END_COLUMN)) = tabData
End With
Next iRow
'/ For each row in "rawData" sheet, check for error.
'/ If not error, copy to "foundItems" sheet and delete from "remainingItems" sheet
Const FOUND_START_COLUMN As Long = 1
Const FOUND_END_COLUMN As Long = 22
Const FOUND_ERR_CHECK_COLUMN As Long = 2
Const REMAINING_ERR_CHECK_COLUMN As Long = 24
With dataSheet
finalRow = .Cells(.Rows.Count, FOUND_START_COLUMN).End(xlUp).Row
End With
Dim dataArray As Variant
Dim dataRange As Range
Dim pasteRange As Range
Dim foundErrCheckCell As Range
Dim remainingErrCheckCell As Range
Dim errCheckRow As Long
For iRow = START_ROW To finalRow
Set foundErrCheckCell = dataSheet.Cells(iRow, FOUND_ERR_CHECK_COLUMN)
If Not IsError(foundErrCheckCell) Then
'/ Get Source Data
With dataSheet
Set dataRange = .Range(.Cells(iRow, FOUND_START_COLUMN), .Cells(iRow, FOUND_END_COLUMN))
End With
dataArray = dataRange
With foundItemsSheet
Set pasteRange = .Range(.Cells(iRow, FOUND_START_COLUMN), .Cells(iRow, FOUND_END_COLUMN))
End With
'/ Copy Data
dataRange = dataArray
pasteRange = dataArray
'/ Find and Delete from "remainging items" sheet
For errCheckRow = 2 To numItemsToAudit
Set remainingErrCheckCell = dataSheet.Cells(errCheckRow, REMAINING_ERR_CHECK_COLUMN)
If remainingErrCheckCell.Text = foundErrCheckCell.Text Then
remainingItemsSheet.Rows(errCheckRow).Delete shift:=xlShiftUp
Exit For
End If
Next errCheckRow
Next iRow
End Sub
You'll notice the better code looks longer. This is entirely down to having more variable declarations and adding some whitespace for readability. When measuring code, the metric that matters is not how much you can cram into a small space, but how quickly you can read and understand the code and how to change it.
• Somebody looks like they're after some golden tag badge! Another awesome answer! – Mathieu Guindon Jun 28 '16 at 15:48 |
# 2. Given the 'H NMR spectrum below and a molecular formula of C-H100, provide a structure....
###### Question:
2. Given the 'H NMR spectrum below and a molecular formula of C-H100, provide a structure. Note you must assign all of the signals in the spectrum below to receive full credit. IR: 1710 cm: (5 pts.) 6H, 1H, S 2H, d 1H, m PPM
#### Similar Solved Questions
##### 1) Strength of the electric force The earth and the moon exert attractive gravitational forces on...
1) Strength of the electric force The earth and the moon exert attractive gravitational forces on one another according to Newton's law of gravitation. However, since the earth and the moon are electrically neutral (that is, the net charge of each planet is zero), they do not exert electrical fo...
##### Budgets are the quantitative expression of management's plans. True False
Budgets are the quantitative expression of management's plans. True False...
##### Name 1. This problem is designed to test your skill in using the Tables in the...
Name 1. This problem is designed to test your skill in using the Tables in the Appendix to find probabilities and cutoffs. (Drawing a picture is highly recommended.) (a) Suppose that the random variable B follows a Binomial distribution with no 9 and p = .70, find P(B = 6). (b) Suppose that the rand...
##### Limiting Reactant and Theoretical yield Chapter 7 NAME Partners 4. LIOH + KCI → LiCl +...
Limiting Reactant and Theoretical yield Chapter 7 NAME Partners 4. LIOH + KCI → LiCl + KOH DATE ORROW LIA WORTlong guwollollo d Kai LCI + KOH bold stopalomarmor Mam H o w to log 0.81 marw action with 400 grams of lithium hydroxide. What is her theoretical yield of HOH a) Zuri began this reactio...
##### In 1994, Congress passed the DNA Identification Act which authorized the FBI to do 2 things:...
In 1994, Congress passed the DNA Identification Act which authorized the FBI to do 2 things: (1) create and maintain a national DNA database, and (2) establish standards for forensic DNA testing. Because the human genome is full of DNA tandem repeats and because they vary in the number of contiguous...
##### A flanged wooden shape is used to support the loads shown on the beam. The dimensions...
A flanged wooden shape is used to support the loads shown on the beam. The dimensions of the shape are shown in the second figure. Assume LAB = 5 ft. LBC = 2 ft, LCD= 3 ft, LDE = 4 ft, Pc = 2060 lb, Pe= 1990 lb, WAB = 750 lb/ft, b1 = 10 in., b2 = 2 in., b3 = 7 in., dų = 2 in., d2 = 8 in., d3= 2...
##### 6. The equation of a line is given by the function y -mx +b. Determine the...
6. The equation of a line is given by the function y -mx +b. Determine the value of the X-intercept (where y 0) of the line defined by the equation y 3x+9. Using the equation for a straight line, what is the formula for the x intercept? 7. What effect will a 1.0°C error (all temperature readings...
##### 22. You perform an experiment to test whether female birds prefer males with longer tail feathers....
22. You perform an experiment to test whether female birds prefer males with longer tail feathers. You find that they do, and, in fact, they prefer even longer tail feathers than you ever see on males in the population. Assuming that this trait has had enough time to evolve to an optimum, what is th...
##### Organic Chemistry
1 a. What is the major organic product in the following reaction? What is its systematic IUPAC name?methyl 2-oxocyclohexanecarboxylate (http://www.ichemistry.cn/structure/41302-34-5.gif) + 1)NaBH4 2) H3O+b. Which of the sequences works best to accomplish the following conversion?3-methylbutanal (htt...
##### How do you graph y= 1/3 | x-3 | + 4?
How do you graph y= 1/3 | x-3 | + 4?...
##### Can you answer IV,V,VI, give reactions and mechanism and explain in detail how you got the...
Can you answer IV,V,VI, give reactions and mechanism and explain in detail how you got the answers. Thank you very much!!! IV. Suggest a chemical test/reaction you can use to distinguish pentan-1-ol from 2-methy the two alcohols. 2-ol. Give the chemical equations and briefly describe the different r...
##### Part I: Molly is an active 14-year-old girl. She has just started her freshman year of...
Part I: Molly is an active 14-year-old girl. She has just started her freshman year of high school and is participating in many school activities. This includes playing on the volleyball team. In order to make the junior varsity team, she has been training extra hard and drinking extra fluids. Her m...
##### Lactose (or rather, its derivative allolactose) can lift the Lac Repressor protein from the Operator site...
Lactose (or rather, its derivative allolactose) can lift the Lac Repressor protein from the Operator site in DNA, thus allowing for expression of the genes coded by the Lac Operon. However, lactose cannot cross the bacterial cell membrane without the help of the Permease protein channel, which is co...
##### How do you write sec(2x) in terms of sinx?
How do you write sec(2x) in terms of sinx?...
##### How do you solve a/(9a-2)=1/8?
How do you solve a/(9a-2)=1/8?... |
# For MSE equation does order of $y$ and $\hat{y}$ in the residual $(y-\hat{y})$ matter?
So the equation for MSE is $$\frac{1}{2N}\sum(y-\hat{y})^2$$. If you switch the order as in $$\frac{1}{2N}\sum(\hat{y} - y)^2$$ does that affect anything? The only thing I think it potentially effects is when you're doing gradient descent you have to change the sign in front of the learning rate multiplied by the derivative.
No because $$a^2 = [(-1)(-a)]^2 = [(-1)^2(-a)^2] = (-a)^2$$.
For the gradient, you'd have $$2(-a)\left[\frac{\partial}{\partial \theta}(-a) \right] = 2a \left[\frac{\partial}{\partial \theta} a \right]$$ due to the chain rule.
• Can you explain how the bears on the gradient of $(y-\hat{y})^2$? – Sycorax Jul 22 at 4:22
• this wouldn't be the case if $\hat{y}$ is a function and you're taking the derivative for the variable correct? Like $\hat{y}=ax+b$ and you're getting the derivative w.r.t to a. – user8714896 Jul 22 at 5:26 |
# Suppose that a Treasury Note with a coupon rate of 7.4% is purchased between coupon periods. The days between the settlement
## This Post Has 4 Comments
1. Expert says:
2. Expert says: |
# How to use Random Forest for categorical variables with missing value
I have a labelled dataset of 1M rows and 600 features. I am trying to build a supervised learning model for prediction. I am particularly working with Random forests in R.The data I have has following properties.
1. Most of the features are categorical in nature.
2. Each categorical variable has multiple levels ( some of them having 20 levels)
3. Some of the features have data missing
Can random forests work without imputation of these missing values. If no then what is the best way to impute these missing categorical values. Any literature or R functionality which addresses this issue will be really helpful
Off the top of my head, I would say that this shouldn't be an issue. The rf package in R implements random forests using CARTs. One of the nicest thing about trees is how they are "natively" capable of dealing with categorical and missing variables. Here is the package documentation; you can download the package itself from CRAN.
Chapter 8 in James, Witten, Hastie, & Tibshirani's Introduction to Statistical Learning with Applications in R offers a good introduction to tree methods and also covers random forests on page 328.
Imputing missing variables is a whole thing in and of itself and, depending on your needs and data, you might be able to get away with not having to do it. If you do have to perform imputation you might want to check here and here for some quick pointers, but you're probably just going to have to read up on imputation methods and make a judgement call on what to go with.
The R randomForest package includes functions for doing a rough imputation of missing values and then iterativelly improving this imputation based on case proximity in RF runs.
There are a bunch of other methods that have been proposed as ways rf's and decision trees can handle missing values:
1) Leave them out when split and do a bias correction for the reduction in impurity.
2) Split them onto a a third branch at each node.
3) Label them as a separate category as chf suggests. For numerical features impute and create a separate x_is_missing feature.
4) Identify "surrogate splitter" relationships between features by analyzing which features work well in the same place and then use a surrogate to split when a feature is missing.
5) Do a local imputation within the branch of the tree.
I'm not aware of R code for most of these though it may exist.
I implemented a stand alone utility that can do the first two methods: https://github.com/ryanbressler/CloudForest
It is easy enough to use write.arff to dump you're data out and call it and load the predictions (which are stored in a tsv) back in. (The arff file format is nice for categorical data with missing values).
I chose those two methods as they don't increase the computation required on large data sets. I've found the first works well when there are few missing values and they aren't meaningfully distributed...imputation often also works well here.
The second, three way splitting, works well when the fact a value is missing may be significant. This is quite common in poorly designed survey's that don't include a "don't know" or "not applicable" category. Method 3 can also work well here.
You can simply introduce a new level for each categorical variable which represents missing data. Then you would simply replace the missing fields with this new category.
• this would be an interesting solution for nominal data, but would not work for ordinal data/ranked levels. – katya May 6 '15 at 3:31
• @katya: Why not? – cfh May 6 '15 at 6:29
• because there is no way to justify the ranking of that new variable - is it lower than the lowest, higher than the highest - so it still needs to be imputed as suggested in (3) above. – katya May 6 '15 at 13:58
• @katya: But random forests are nonlinear classifiers, so they can make sense of the relevance of a category beyond just its magnitude in relation to others. – cfh May 6 '15 at 18:26
• but you still have a basic assumption of ranking / directionality, I think it is a logical issue more so than RF-specific issue. It would be interesting to simulate ordinal data eg temperature ranges (1=0-10, 2=11-20, etc.), introduce MAR and try to treat that predictor as default imputed vs. n+1 category, results would likely differ. It is essentially introducing extreme values instead of noise (imputed) values. – katya May 7 '15 at 1:34 |
Question
The mass of a theoretical particle that may be associated with the unification of the electroweak and strong forces is $10\times 10^{14}\textrm{ GeV/c}^2$ . (a) How many proton masses is this? (b) How many electron masses is this? (This indicates how extremely relativistic the accelerator would have to be in order to make the particle, and how large the relativistic quantity $\gamma$ would have to be.)
1. $1\times 10^{14}$
2. $2\times 10^{17}$
Solution Video |
# LS5. Activation Parameters
#### LS5. Activation Parameters
The rate law shows how the rate of a reaction depends on concentrations of different species in solution. The proportionality constant, k, is called the rate constant. It contains other information about the energetic requirements of the reaction.
All reactions must overcome activation barriers in order to occur. The activation barrier is the sum of the energy that must be expended to get the reaction going. An activation barrier is often thought of, cartoonishly, as a hill the molecule has to climb over during the reaction. Once, there, it can just slide down the other side of the hill to become products. At the top of the hill, the molecule exists in what is called the "transition state". At the transition state, the structure is somewhere between its original form and the structure of the products.
Figure LS5.1. The activation barrier for a ligand dissociation step.
The type of diagram shown in figure LS6.1 is sometimes called a "reaction progress diagram". It shows energy changes in the system as a reaction proceeds. One or more activation barriers may occur along the reaction pathways, as various elementary steps occur in the reaction. In the above case, it is easy to imagine the source of the energy barrier, because some energy must be expended to break the bond to ligand C.
However, after that barrier is passed, energy is lowered again. This can happen for several reasons. Once C has separated from the metal complex, it is free to vibrate, tumble, roll and zip around all on its own. That means it can put its energy into any of those modes, independently of the metal complex. As a result, the entropy of the system increases. That lowers the overall "free energy" of the system. In addition, there may be some relief of crowding as the molecule changes from a four-coordinate complex to a three-coordinate complex, so strain energy is also lowered.
##### Problem LS5.1.
Make drawings depicting the relationship between reaction progress and energy for the following cases:
1. a new ligand binds to a four-coordinate complex, forming a five coordinate complex.
2. a two-step process in which a new ligand binds to a four-coordinate complex, forming a five coordinate complex, and then an old ligand dissociates to form a new, four-coordinate complex.
The rate constant gives direct insight into what is happening at the transition state, because it gives us the energy difference between the reactants and the transition state. Based on that information, we get some ideas of what is happening on the way to the transition state.
The rate constant can be broken down into pieces. Mathematically, it is often expressed as
$k = \left ( RT \over Nh \right ) e^{- \Delta G \ddagger \over RT}$
In which R = the ideal gas constant, T = temperature, N = Avogadro's number, h = Planck's constant and Δ G = the free energy of activation.
The ideal gas constant, Planck's constant and Avogadro's number are all typical constants used in modeling the behaviour of molecules or large groups of molecules. The free energy of activation is essentially the energy requirement to get a molecule (or a mole of them) to undergo the reaction.
Note that k depends on just two variables:
• Δ G or the energy required for the reaction
• T or the temperature of the surroundings, which is an index of the available energy
The ratio of activation free energy to temperature compares the energy needs to the energy available. The more energy available compared to the energy needed, the lower this ratio becomes. As a result, the exponential part of the function becomes larger (since the power has a minus sign). That makes the rate constant bigger, and the reaction becomes faster.
The activation free energy is constant for a given reaction. It can be broken down in turn to:
$\Delta G^{\ddagger} = \Delta H^{\ddagger} - T \Delta S^{\ddagger}$
in which Δ H = activation enthalpy and Δ S = activation entropy.
The activation enthalpy is the energy required for the reaction. The activation entropy deals with how the energy within the molecule must be redistributed for the reaction to occur. These two parameters can be useful in understanding events leading to the transition state.
For example, in ligand substitution, an associative pathway is marked by low enthalpy of activation but a negative entropy of activation. The low enthalpy of activation results because bonds don't need to be broken before the transition state, so it doesn't cost much to get there. That's favourable and makes the reaction easier. However, a decrease in entropy means that energy must be partitioned into fewer states. That's not favourable and makes the reaction harder. The reason the energy must be redistributed this way is that two molecules (the metal complex and the new ligand) are coming together to make one bigger molecule. They can no longer move independently of each other, and all of their combined energy must be reapportioned together, with a more limited range of vibrational, rotational and translational states to use for that purpose.
• Associative pathway: more bond making than bond breaking; lower enthalpy needs
• Associative pathway: two molecules must be aligned and come together; fewer degrees of freedom for energy distribution; decrease in entropy
On the other hand, the dissociative pathway is marked by a higher enthalpy of activation but a positive entropy of activation. The higher enthalpy of activation results because a bond must be broken in the rate determining step. That's not favourable. However, the molecule breaks into two molecules in the rate determining step. these two molecules have more degrees of freedom in which to partition their energy than they did as one molecule. That's favourable.
• Dissociative pathway: more bond breaking in rate determining step, higher enthalpy needs
• Dissociative pathway: one molecule converts to two molecules in rate determining step, greater degrees of freedom in two independently moving molecules, entropy increases
Thus, looking at the activation parameters can reveal a lot about what is going on in the transition state.
##### Problem LS5.2.
What factor(s) other than entropy might raise the free energy of the transition state going into an associative step between a metal complex and an incoming ligand? (What factor might make the first, associative step slower than the second, dissociative step?)
##### Problem LS5.3.
Other mechanisms for ligand substitution are also possible. The following case is referred to as an associative interchange (IA).
a) Describe in words what happens in an associative interchange.
b) Predict the rate law for the reaction.
c) Qualitatively predict the activation entropy and enthalpy, compared with
i) an associative mechanism and
ii) a dissociative mechanism.
##### Problem LS5.4.
For the following mechanism:
a) Describe in words what is happening.
b) Predict the rate determining step.
c) Predict the rate law for the reaction.
d) Qualitatively predict the activation entropy and enthalpy, compared with
i) both an associative mechanism and
ii) a dissociative mechanism.
e) Suggest some ligands that may be able to make this mechanism occur. |
oliviayychengwh
2021-12-21
The distance between point and given in spherical coordinates extract the formula.
Cheryl King
Expert
Step 1
Given point
Let corresponding coutesian cordinates are,
Relation between coutesian cordinates and sherical coordinates are
Distance between paints
$d=\sqrt{{\left({x}_{1}-{x}_{2}\right)}^{2}+{\left({y}_{1}-{y}_{2}\right)}^{2}+{\left({z}_{1}-{z}_{2}\right)}^{2}}$
$=\sqrt{{x}_{1}^{2}+{x}_{2}-2{x}_{1}{x}_{2}+{y}_{1}^{2}+{y}_{2}^{2}-2{y}_{1}{y}_{2}+{z}_{1}^{2}+{z}_{2}^{2}-2{z}_{1}{z}_{2}}$
ambarakaq8
Expert
Assuming that $\theta$ is the latitude and $\varphi$ is the longitude we have that the cartesian coordinates of the first point are:
so the distance between the two points is given by:
$\sqrt{{\rho }_{1}^{2}+{\rho }_{2}^{2}-2{\rho }_{1}{\rho }_{2}\left({\mathrm{cos}\theta }_{1}{\mathrm{cos}\theta }_{2}\mathrm{cos}\left({\varphi }_{1}-{\varphi }_{2}\right)+{\mathrm{sin}\theta }_{1}{\mathrm{sin}\theta }_{2}\right\}}$
nick1337
Expert
Step 1
The expression of the distance between two vectors in spherical coordinates provided in the other response is usually expressed in a more compact form that is not only easier to remember but is also ideal for capitalizing on certain symmetries when solving problems.
$||r-{r}^{\prime }||=\sqrt{\left(x-{x}^{\prime }{\right)}^{2}+\left(y-{y}^{\prime }{\right)}^{2}+\left(z-{z}^{\prime }{\right)}^{2}}$
$=\sqrt{{r}^{2}+{r}^{\prime 2}-2r{r}^{\prime }\left[\mathrm{sin}\left(\theta \right)\mathrm{sin}\left({\theta }^{\prime }\right)\mathrm{cos}\left(\varphi \right)\mathrm{cos}\left({\varphi }^{\prime }\right)+\mathrm{sin}\left(\theta \right)\mathrm{sin}\left({\theta }^{\prime }\right)\mathrm{sin}\left(\varphi \right)\mathrm{sin}\left({\varphi }^{\prime }\right)+\mathrm{cos}\left(\theta \right)\mathrm{cos}\left({\theta }^{\prime }\right)\right]}$
$=\sqrt{{r}^{2}+{r}^{\prime 2}-2r{r}^{\prime }\left[\mathrm{sin}\left(\theta \right)\mathrm{sin}\left({\theta }^{\prime }\right)\left(\mathrm{cos}\left(\varphi \right)\mathrm{cos}\left({\varphi }^{\prime }\right)+\mathrm{sin}\left(\varphi \right)\mathrm{sin}\left({\varphi }^{\prime }\right)\right)+\mathrm{cos}\left(\theta \right)\mathrm{cos}\left({\theta }^{\prime }\right)\right]}$
$=\sqrt{{r}^{2}+{r}^{\prime 2}-2r{r}^{\prime }\left[\mathrm{sin}\left(\theta \right)\mathrm{sin}\left({\theta }^{\prime }\right)\mathrm{cos}\left(\varphi -{\varphi }^{\prime }\right)+\mathrm{cos}\left(\theta \right)\mathrm{cos}\left({\theta }^{\prime }\right)\right]}$
This form makes it fairly transparent how azimuthal symmetry allows you to automatically eliminate some of the angular dependencies in certain integration problems
Another advantage of this form is that you now have at least two variables, namely , that appear in the equation only once, which can make finding series expansions w.r.t. these variables a little less of a pain than the others.
Do you have a similar question? |
# Thread: problem of quadratic equation with two variables
1. ## problem of quadratic equation with two variables
If 3x2+2αxy+2y2+2ax-4y+1 can be resolved into two linear factors, prove that α is the root of the equation x2+ 4ax+2a2+6=0.
please don't solve the problem
just hint is expected.
2. ## Re: problem of quadratic equation with two variables
Originally Posted by sumedh
If 3x2+2αxy+2y2+2ax-4y+1 can be resolved into two linear factors, prove that α is the root of the equation x2+ 4ax+2a2+6=0.
Hint : The discriminant of the second degree equation on $x$ : $3x^2+(2\alpha y+2a)x+2y^2-4y+1=0$ must be a perfect square.
3. ## Re: problem of quadratic equation with two variables
That is, if you meant "can be resolved into two linear factors" with rational coefficients.
4. ## Re: problem of quadratic equation with two variables
Originally Posted by HallsofIvy
That is, if you meant "can be resolved into two linear factors" with rational coefficients.
Why?. The problem is equivalent to find where the given conic is degenerated. That is $\Delta=\begin{vmatrix}{3}&{\alpha}&{a}\\{\alpha}&{ 2}&{-2}\\{a}&{-2}&{1}\end{vmatrix}=0$ i.e. $\alpha^2+4a\alpha +2a^2+6=0$ . Why do we need rational coefficients ?
5. ## Re: problem of quadratic equation with two variables
on solving i got
α^2 y^2+a^2+2aαy =-6y^2-3+12y [α means alpha]
α^2 y^2+a^2+2aαy -9 (y-3)^2
after equating what should i do???
could you please tell me the concept for solving this????
6. ## Re: problem of quadratic equation with two variables
The discriminant of the equation $p(x,y)=3x^2+(2\alpha y+2a)x+2y^2-4y+1=0$ is $\Delta=(2\alpha y+2a)^2-12(2y^2-4y+1)$ then,
$p(x,y)=3\left(x-\dfrac{-(2\alpha y+a)+\sqrt{\Delta}}{6}\right)\left(x-\dfrac{-(2\alpha y+a)-\sqrt{\Delta}}{6}\right)$
$\Delta$ is a perfect square iff the discriminant $\Delta_1$ of $\Delta=0$ is $0$ . Now, verify $\Delta_1=0\Leftrightarrow \ldots \Leftrightarrow \alpha^2+4\alpha a+2a^2+6=0$ . Equivalently, $\alpha$ is a root of $x^2+4ax+2a^2+6=0$ .
7. ## Re: problem of quadratic equation with two variables
thank you very much i got it |
# Boycott Iranian Chess? A reply
Posted by David Smerdon on Oct 4, 2016 in Chess, Gender, Non-chess, Politics
EDIT: Nigel Short responded with an addendum that “the Iran bid was not mentioned in the FIDE General Assembly Agenda. It was sprung on Delegates as a surprise.” This procedural anomaly is worth mentioning in light of my shielding FIDE from blame in the text below.
Too-long-didn’t-read version: I don’t support a mass boycott of the upcoming women’s world chess championships in Iran, or removing Iran’s right to host. My reason is that it will hurt, not help, gender equality, particularly in Iran. This will probably make me unpopular.
The chess world has been rocked in the last week by a fresh controversy, this time the awarding of hosting rights for the Women’s World Championship to Iran. The main tinder box was US Women’s Champion Nazi Paikidze issuing a statement that she will boycott the event rather than wear a hijab and acquiesce to sex discrimination, a provocative comment that was irresistible to the mainstream media (see these articles in Fox News, The Telegraph, CNN and of course the Daily Mail). Other notable chess celebrities, such as Nigel Short, Emil Sutovsky, Tatev Abrahamyan and Sabrina Chevannes, have strongly and angrily come out in support of her boycott.
This is a tough issue for me, and I’ve sat in nervous silence for a week while deciding whether to write about it. As many know, I’m a strong defender of equality and women’s rights, particularly in the chess world. And yet try as I might, I cannot support the proposal to withdraw Iran’s hosting rights and move the championship. My main reason for this, as ironic as it may seem, relates to defending and empowering women.
My opinion has landed me on quite an unfamiliar side of the political divide. Across the gorge are friends and others whose beliefs I generally respect, while some of those besides me are traditional ideological foes. This is an uncomfortable position to be in, particularly seeing as this debate seems to have brought out the worst of ad hominem in people, so I will tread carefully.
I’ll start with an obvious clarification. I’m not a supporter of the Iranian governmental regime, and many of its policies that engender the oppression of women are simply indefensible. Neither am I a ‘defender of Islam’, just as I don’t specifically promote any religion. (And let us not forget that almost every major religion, taken at its fundamental level, demands gender discrimination, with the notable exception of Pastafarianism.) Several of the public criticisms of having Iran as host seem to be well-intentioned, but use the guise of “defending women’s rights” to champion an anti-Islam agenda (thereby employing another logical fallacy, that of tying). Opposition to freedom of religion has no place in this debate. Others have argued that FIDE has put women’s lives in danger by awarding the host to an unsafe country, a not unreasonable objection, but also one not supported by precedent.
I do not at all oppose the right of an individual (or team of individuals) to boycott this or any other event, nor their right to publicly state their reasons for doing so. But here we are talking about a mass, organised boycott and potential removal of Iran’s hosting rights, and as such, it’s important not to conflate the issues above. First, many international events (and here I mean world championships for both genders, European championships and world senior, youth and junior events) have been held in countries that are predominantly Muslim, are suffering unrest, face high crime rates, have a historically bad record on human rights or have deep political conflicts with other nations. Players from thirty-four countries were not permitted (by their own nations) to participate in the 1976 Olympiad in Israel. In 1978 at the chess Olympiad in Buenos Aires, some players could allegedly hear the shots of executions of political dissidents by the Argentinian junta as they played their games (it is estimated that between 10,000 and 30,000 citizens were killed during the Dirty War of 1976-1983). At the 2006 World Student Championships in Lagos, participants were not allowed to leave their hotels without armed guards. There are many stories of corruption and human rights abuses carried out by the Aliyev-led government of Azerbaijan, a great supporter of international chess and host of the recent 2016 Olympiad. (Incidentally, former Olympiad champions Armenia could not participate for fear of violence.)
And of course there have been similar moves in other sports: many objections were raised to China’s hosting of the 2008 Summer Olympiads for reasons of its human rights record, while the US famously boycotted the 1980 Olympics in Moscow. My point is that one cannot simply exclude a country as host due to political or religious objections, or because the conditions aren’t favourable to a particular country. That’s not the way of international sport. So let’s turn now to the one viable issue at stake: whether a participant should be forced to wear a hijab.
The hijab is a head covering worn predominantly by Muslim women, originally as a symbol of “modesty and privacy” (Wikipedia). Less than 50% of self-professed Muslim women wear one, though statistics here are unreliable. Iran’s government is somewhat unique in that it follows what is commonly (though inaccurately) called Sharia Law, in that the principles of Shia Islam are hardwired into the Constitution. Practically, this means that citizens can be arrested for breaking those principles, including with regard to dress. Men cannot wear shorts in public places. Women must have their hair covered by a scarf or hijab, though for tourists and foreigners, the punishment for forgetting is usually a request to get one. As with men, legs should be covered, but all the way to the ankles (sandals or bare feet are allowed).
At the championships in Iran, the female players will be required to adhere to the Iranian dress code. This has been the case at all international chess events held in Iran (including the 2016 Women’s Grand Prix, in which 12 of the world’s top female players took part). Many other countries have strong cultural norms that follow these principles, although there may not be legal punishments in play. At two world junior championships in India in which I competed, both foreign boys and girls felt some cultural pressure to dress to cover our legs; in fact, refusal to do so actually led to the male and female events being segregated into different rooms!
After that very long setup, we come to the key point. The main question is whether or not FIDE’s awarding the hosting right to Iran, which means women must wear hijabs during the games, constitutes gender discrimination. First, 165 member nations of FIDE had a chance to vote against Iran’s bid, and none did, so I’m not even sure FIDE or its Commission for Women’s Chess could be blamed in any case. (This issue really does make for strange bedfellows.) Secondly, the wearing of the hijab is an Iranian law, not a rule made by the organisers. And finally, covering the head is by and large a reflection of the cultural values of the host country that are admittedly tied to its religion, in much the same way that a woman would take off shoes before entering a Hindu temple, remove her hat at a Christian church or funeral, or refrain from touching a Buddhist monk. To some individuals, I can understand that a hijab might symbolize oppression, but only if that is one’s stance against Islam; in that case, a personal boycott is the appropriate action. If the players were required to drape themselves in the Iranian flag, that might be another issue. But here, the players aren’t being asked to do anything more than what any other tourist or visitor to Iran is asked.
(As an aside: A good point was raised by IM Elizabeth Paehtz, who wondered how women would be permitted to be alone with their male trainers, which may also defy Iranian principles. This is something that could materially affect the players’ preparations as it has done for Iranian girls competing in events, and I hope a solution is found.)
Finally, why does this issue matter, if at all? The truth is, it matters a whole lot. Iranian chess has seen something of a revolution in the last decade, and the national team at the Olympiad was one of the standouts. The federation has organised several large tournaments and events, including the aforementioned Women’s Grand Prix earlier this year. While women do suffer oppression in everyday life in Iran, as has been well documented, chess is a medium through which they can travel, engage in bilateral cultural exchange with their western counterparts, earn respect and standing among their male peers at home, and potentially even foster an independent career. For girls, it provides a complementary source of education, along with all the associated benefits, as well as rare opportunities to interact and compete with boys on a more even footing.
I’m not the only one who thinks this. GMs Adly and Al-Medaihki, for example, have spoken out strongly on this matter. But on this point, I can’t do better than re-quote the statement of Mitra Hejazipour, a women’s grand master from Iran and winner of the 2015 Asian continental championships. She pleaded:
“This is going to be the biggest sporting event women in Iran have ever seen; we haven’t been able to host any world championship in other sporting fields for women in the past. It’s not right to call for a boycott. These games are important for women in Iran; it’s an opportunity for us to show our strength.”
In an interview with The Guardian, she went on to say that such a move would ‘isolate Iran and ignore progress that Iranian women have made in the country.’
I agree. For me, the key test is to consider: Would the lives of Iranian women and girls be better or worse if all major events were banned in their country? I have carefully weighed the evidence, and I believe it is in the best interests of promoting equality and lifting Iranian women out of oppression for the championships to go ahead. This is probably going to make me unpopular among some of the more opinionated in the chess world, but I can’t compromise my beliefs on this. Individuals such as Paikidze may wish to boycott it, as is their right. But please, let’s keep sight of who the real victims are here, and look at the big picture: supporting equality for women, everywhere.
# Pirate Week
Posted by David Smerdon on Aug 27, 2015 in Chess, Non-chess, Politics
Three strange pirate-related things happened to me last week.
Now there’s a sentence you don’t see every day. Admittedly, some of the links to buccaneering are a little tenuous, but it still makes for an unusual theme.
It started when I found out about some trouble one of my German friends had gotten into. Having just started university, he, like many freshers, soon discovered the shady world of movie downloading – or ‘online piracy’ as it’s more commonly known. He’d downloaded a grand total of 12 movies before he got sent a letter from a law firm representing a media corporations demanding almost a thousand euros for one particular movie. Another demand following on behalf of a different corporation, for a similarly jawdropping fee. If he pays the fines, it’ll have cost him roughly 150 euros on average per movie he watched. And that’s assuming no more fines follow.
Now, I have lots of friends. And some of them illegally download media from controversial ‘activist’ sites like The Pirate Bay. Some of them have been doing it for years. My old college’s intranet literally had terabytes of material (…at which point the law gets a litle fuzzy. If a pirate buys you a drink with his stolen loot, are you also culpable?).
But in all these years and of all these people, I’ve never heard of anyone having to answer. At first I thought my friend was particularly unlucky, but then I googled anti-piracy laws in Germany. It turns out Germany is the Stockfish of online piracy: no mistake goes unpunished. Literally millions of letters are sent to perpetrators, and the law allows little leeway. (Not that I’m bagging out Germany’s techno laws in general, mind you; their mobile phone services are so impressive that it’s actually cheaper to call within the Netherlands on my girlfriend’s German phone than my Dutch one.)
A full post about online piracy will have to wait for another day, however, because it’s time to move on to pirate event number two. We’ve recently moved apartments to the north-east side of Amsterdam, and by coincidence we look out over the Ij (“Eye”) harbour where last week the Amsterdam Sail Festival took place. Held once every five years (or “quinquenially”, if you’re feeling fancy), it’s one of the largest maritime festivals in the world.
“Honey, what’s that outside our window?”
I’m not really a ‘boat’ person, but this festival was phenomenal. About two million tourists crammed into tiny Amsterdam to check out the ships, which were, I have to admit, stunning. They came from all over the world, these huge sail boats from various centuries, including a small Aussie one that had sailed all the way from Down Under with most of its crew barely out of high school. But it was the collection of older boats that really stood out in my opinion. Some of them were huge. Some, such as the Russian, French and South American vessels, were immaculate, with the crew dressed in exquisite, colourful garb. Other crews were literally dressed as pirates, for no good reason that I could discern. My favourite was the Nao Victoria, a replica of one of Ferdinand Magellan’s ships from the early sixteenth century, and the first to circumnavigate the world.
Hanging out near the Aussie boat “The Young Endeavour”
On the final night of the festival, my girlfriend surprised me with tickets to a screening of Pirates of the Carribean ‘in concert’. But not just any screening; it was set up in a huge open-air marina, and you could either have grandstand tickets or ‘ship tickets’, whereby you just moor your boat next to the screen. Most importantly, however, there was a large Dutch orchestra playing all the music from the movie live – and if you know the Hans Zimmer soundtrack, you’ll know that’s quite a big deal. The movie/concert finished with a bang, literally, as we had front-row seats to the huge final fireworks show. By the end of the festival, I’d been transformed from a non-boaty person to someone who perhaps could finally understand the romantic appeal of a life at sea. After the movie, I kept coming back to the cheesy words of Johnny Depp’s pirate character, Captain Jack Sparrow:
“Wherever we want to go, we’ll go. That’s what a ship is, you know. It’s not just a keel and a hull and a deck and sails; that’s what a ship needs, but what a ship is, what a ship really is, is freedom.”
And then, the next day, I was called a pirate.
Really. I mean, fair-dinkum, life-goal-achieved, called a pirate. Twice, and in print, no less, by the UK’s The Times.
So, as I recently mentioned, I’ve just finished writing my first book. This post wasn’t really meant to be a plug for it, but there you go. And GM Raymond Keene, a widely read chess journalist, has started publishing reviews of it. This is surprising for two reasons. Firstly, I have no idea how he got a hold of it, seeing as even I haven’t seen a copy of the book yet! But secondly, Keene’s gone for a pirate-themed approach to colouring the ‘swashbucklingly’ exciting style of the gambits I cover. Here I get called a pirate, and here, a buccaneer. Keene then branches out in his other column in The Spectator by going for a viking comparison, before reverting to more pirate-related descriptions the following week. This final column is so colourful that I can’t help but reprint my favourite snippet in full:
“In my mind’s eye, I visualise Smerdon as some swashbuckling buccaneer of the chessboard, complete with eyepatch, wooden leg, tricorn hat and probably a parrot.”
My parents must be so proud.
At least he was right about the parrot
# Leaving the Politiks behind
Posted by David Smerdon on Jul 26, 2014 in Chess, Politics
It’s not like me to write about chess politics (i.e. my last post), but I’m sure I’ll have no choice at the Olympiad next week. Not to mention Australian politics; these days, it’s hard for me to read the politics section of The Australian without a shudder and a groan or two. So it’s nice to have a little respite before the FIDE elections begin. I’m in Helsingor, a cozy seaside town in Denmark, for the Politiken Cup. (And therein lies the headline pun. Okay, I was really reaching this time.)
Usually, being a part-time chess tourist, I like to play different tournaments in different places. So the Australian Olympiad team was a little surprised when I suggested they join me in Denmark for my second visit at this event, as a warm-up for Tromso. And a couple of my other friends have asked me what’s so great about the tournament. Well, without going into too many details or hyperbole, let me outline a typical day here in Helsingor.
8.00: Wake up; the sun is shining and it’s already a charming 25 degrees. Sneak in a quick gym session (on site), then breakfast outside in the garden, overlooking the ocean.
9.30: Chess preparation (naturally).
11.30: Duck off through the woods to the beach for a dip in the (surprisingly warm) ocean.
12.30: The lunch here – I’m not exaggerating – is by far the best food I’ve ever eaten at a chess tournament. The seafood, in particular, is astonishing.
13.00: The round begins. One round a day is a must in a place like this!
17.30: Soccer – again, on-site. Last night was “GMs versus the rest.” No prizes for guessing the result.
19.00: Dinner is also outside; the sun stays up for a ridiculously long time in the Scandinavian summer.
20.30: Normally, show-and-tell of our games in the bar; a few games of pool (free, and also on-site). Otherwise, a variety of social chess events are sometimes on offer, such as knockout blitz, pairs blitz or a problem-solving competition.
23.00: Sleeping as the sun sets, as nature intended.
Tough life. The only downside is that I’m far too relaxed to play quality chess. I’ve had a rubbish tournament so far, but thanks to some very favourable pairings, I find myself in a position to challenge for the top spots. Still, I can’t see my luck holding up. I did have one nice finish to a game, however, which will be the only chess contribution from this post. Enjoy.
# Cancel the Olympiad? No(r) way!
Posted by David Smerdon on Jul 18, 2014 in Chess, Politics
It’s hard to know what to make of the latest Olympiad drama. There are so many conflicting reports, rumours, innocent victims and different parties with skin in the game, that it reminds one of an election campaign. Oh hang on; there IS an election campaign. Go figure.
Back it up a little. For those of you without your finger on the pulse of chess gossip, here’s the state of play. The Olympiad in Tromsø, Norway starts in two weeks. Ten teams (including, most significantly, the Russian women’s team) missed the deadline for registration. The organisers have said they’re not accepting the late entry of these teams. FIDE says they MUST accept these teams, citing a statute that gives the FIDE President overriding powers. The organisers say that power doesn’t apply. Drama ensues.
That’s where we stand, at least from a fact perspective. The rumour mill is well and truly in production, as you might expect, with my favourites being that (1) Gary Kasparov’s team has orchestrated the organisers’ behaviour in order to embarrass FIDE before the upcoming FIDE election; (2) FIDE may cancel the whole Olympiad, and (3) FIDE, with the help of Vladimir Putin (!), is considering moving the whole Olympiad to Sochi, Russia, within the next two weeks.
There are a lot of parties at fault in all of this. The Russian chess federation should have registered its team on time, but delayed until after Kateryna Lahno, one of the strongest female players in the world, could officially change chess federations from the Ukraine to Russia. The addition to the Russian team was especially important, given the huge rifts within the team between two of its star players, the Kosintseva sisters, and the coach, Sergei Rublevsky, after the last Olympiad. It should be noted that the Russian team could have registered a team anyway and simply added an extra name later, for a nominal fee of 100 euros. But they didn’t, and here we are.
FIDE is hardly guilt-free in this, either. I doubt FIDE would have gotten involved at all if it wasn’t for the fact that it’s Russia who is affected. Meanwhile, the animosity between FIDE and the Norwegian organisers has been heated for some time, I suspect largely underpinned by the fact that Norway is a vocal supporter of the Kasparov campaign. The Tromsø organisers must also accept blame in all this; it’s clear that the budget for the Olympiad has been completely blown out of the water (although the organisers could not have known so many more teams would want to play than in previous years), and they have cited budgetary reasons for why they won’t allow exceptions to the late deadline rule. In fact, because of budgetary uncertainty, the Olympiad was only confirmed on June 5 – notably, after the deadline for registration. I have a lot of sympathy for the organisers: this will surely be one of the most expensive Olympiads ever, with the most teams, in a country where costs are high, and just after Norway has hosted a rather expensive World Championship match and a World Cup to boot. But a budget is a budget.
The innocent victims I mentioned are, of course, the players. And not just the Russians, either. Other teams affected include the Afghani women’s team, which has itself overcome its own internal problems in the past just to be able to play, and several African teams that have had to jump over many well-publicised visa hurdles to secure their place. And, of course, if the whole Olympiad is moved or cancelled, literally thousands of chess players and fans will be affected.
I really have no idea what’s going on, and it’s even possible that the ‘facts’ I’ve re-quoted above have been massaged somehow by their sources. But what I can do is apply some basic game theory to the situation to make a prediction about what’s going to happen. For example, it’s highly unlikely that the Olympiad will be cancelled or moved. There’s just no way that FIDE would accept the negative publicity in the run-up to what will be one of the closest-fought FIDE elections in recent history. Secondly, I find it very hard to believe that these teams will ultimately not be allowed to play, for similar reasons. If the Tromsø organisers just wanted to make a point, it’s been made: this story has been widely publicised in all major media outlets in the chess world. If it’s a budgetary issue, either the money will be found somehow, or the Norwegian organisers will cave in; after all, they would have had to have budgeted for these teams a couple of months ago, when they thought that these teams would register. The unfortunate reality is that perhaps without the Russian team being affected, the organisers might have gotten away with denying the other countries a place; as it stands, although ‘no exceptions should be made’, the might of Russia is a tough beast to fight against.
So, my prediction is that the Olympiad will go ahead, and the teams will play. The real question to me is, how are we going to get there? Who is going to cave first? And which side of the election is going to come out of this looking better than the other?
I, a lowly chess blogger, have no idea. But it’s all very exciting!
# Australia: The (Un)lucky country
Posted by David Smerdon on Jun 6, 2014 in Non-chess, Politics
Ah, Australia. The sunburnt land. The land down under. The lucky country.
Well, usually.
# Is Tony Abbott A Misogynist? A Statistical Analysis
Posted by David Smerdon on Sep 27, 2013 in Economics, Gender, Non-chess, Politics
[EDIT: Make sure you don’t miss Part II: Comments, Clarifications and Corrections for an update on the analysis.]
Like many Australians, I was dismayed to read that the newly elected Prime Minister of Australia, Tony Abbott, had appointed an incredibly male-heavy Ministry to the Parliament of Australia. Most news reports in the mainstream media, both at home and abroad, slammed the announcement by levelling a fairly routine string of sexist labels at our new head of government, the most common being “Misogynist”. However, I was a little surprised by the lack of any quantitative evidence suggesting that the appointments were based on sexism over, say, statistical chance, so I decided to do a rudimentary check myself. Below you’ll find the results of a basic statistical analysis to answer the question:
Is there a gender bias in Tony Abbott’s new Cabinet?
I should point out that this is hardly the first time Tony Abbott has been called this in his life. Throughout his political career, Abbott has regularly been called insensitive to gender equality and the concerns of women, as well as possessing views on gender issues more likely found among Australian males half a century ago. However, to me, none of those reports have been especially convincing, either. As a feminist as well as someone who strongly opposes a lot of Abbott’s policies (particularly with regard to climate change and refugee policy), I was looking forward to the opportunity to finally analyse some ‘hard’ data in coming to a conclusion about our new chief. After reading the initial reports that the new Cabinet contained only one woman out of 19 spots, I felt pretty confident. In the words of Australian of the Year Ita Buttrose, “You can’t have that kind of parliament in 2013. It’s unacceptable.” How could the data suggest anything other than that the man is a raving chauvinistic pig?
However, it turns out that things are not so simple. For starters, the Australian media has a reputation for being (a) incredibly biased, and (b) terrible at statistics. First, a lot of reports link to the following graph, taken from the Australian Labor Party website:
The most obvious question that comes to my mind is: Why aren’t the values given as percentages? Of course, this doesn’t matter if all the cabinets are the same size…but a quick check shows that this is indeed not the case. For example, India’s cabinet (made up of ‘Union Members’) has 33 spots. My second concern was about the choice of countries, which seemed incredibly arbitrary. The ALP chose to compare Australia to such countries as Rwanda, Liberia and Egypt, but excluded the United Kingdom (our closest parliamentary sibling), most of the G20 countries, and in fact ALL of Europe! Show this graph to anyone with even the vaguest of quantitative training and they’ll start screaming “Data mining! Data mining!” before you can blink.
Comparing ourselves to other countries is a bit fishy in any case. If every country always did this, no women would ever have been elected to high office in any country, ever. No, what I really want to know is whether the election of one single female (Julie Bishop) to Abbott’s new Cabinet could have come about by chance, or whether it suggests deliberate sexist. To ensure that my own biases don’t interfere with the analysis, I established a threshold before I got into the numbers. In any sort of quantitative research, the standard measure is to be at least 95% confident of something in order to draw a conclusion (formally, ‘reject a hypothesis’). I therefore decided that Tony Abbott could be considered guilty of gender bias in his appointments if it could be shown that we could be 95% sure the male/female ratio did not come about by chance. To be perfectly clear, I decided beforehand (ex ante) the analysis would conclude that Tony Abbott’s appointments:
• were gender-biased if the chances of them being random were less than 5%; or
• were random, and the media reports should be condemned for factual inaccuracy, if the chances of them being random were greater than 10%; or
• could not convincingly be shown to be gender-biased if the chances were between 5% and 10%.
So let’s set up the analysis. Now, Abbott was of course elected Prime Minister before he chose his own Cabinet, so we should exclude him from the list – the relevant statistic is then “One woman out of 18 spots”. Not all of the seats had been officially declared by the time the Cabinet was announced, but according to the Liberal Party website, Abbott had a total of 114 Members and Senators to choose from to fill these 17 spots. Of these candidates, 89 (78.1%) are male and 25 (21.9%) are female. (Note that this excludes the appointment of Bronwyn Bishop as the Speaker of the House of Representatives, so called “the most important position in Parliament” Australia’s premier newspaper The Australian. If she is excluded from the list, the percentage of female candidates falls slightly to 21.2%.)
Further, let’s assume that each female candidate is equally as qualified as each male candidate to serve in Cabinet. Now, this has been a contentious issue in the media, with a lot of the justifications given to the male-dominated appointments revolving around the issue of ‘merit’. Former Liberal Senator and Ambassador to Italy Amanda Vanstone is quoted as saying, “I’d rather have good government, than have more women in the cabinet for the sake of it.” However, let’s ignore merit arguments and focus on the numbers. From a statistical perspective, the question then becomes:
“Assuming all candidates are equally likely to be picked, what is the chance that Tony Abbott appointed no more than one woman (5.6%) to the Cabinet?”
First, note that if we take the ratio of females from the list of candidates and apply it directly to the 18 Cabinet positions, we would expect roughly four women to be appointed (0.219*18 = 3.95). However, we would expect exactly four women to be selected around 20% of the time. We can model the random likelihood of any number of women being selected by what is known as a ‘binomial distribution’. Basically, if Tony Abbott was to put all 114 candidates’ names into a hat and take out 18 at random, and repeat this 100 times, the graph below tells us how many times we would expect each possible gender division to occur.
Therefore, the chances of no more than one woman being appointed – that is, the probability of appointing zero or one woman – looks to be around 7%. Indeed, calculations bear this out (‘P’ stands for ‘Probability’ in what follows):
P(No more than one woman)
= P(0 women) + P(1 woman)
= (0.781)18 + 18*0.219*(0.781)17
= 0.012 + 0.059
= 0.07
= 7%
So the answer falls within 5% and 10%, leading us to conclude that the actual Cabinet appointments do not convincingly suggest gender bias.
Still, you might think that finding only a 7% chance that a Cabinet with one woman was randomly selected is still something to think about. This may be true, but taking into account a few other factors dilutes the strength of the result even further. Excluding the new Speaker of the House, Bronwyn Bishop, from the initial sample raises the probability of randomly selecting no more than one woman to 8%.
Furthermore, the one woman who did make it into Abbott’s Cabinet, Julie Bishop, has been appointed Deputy Leader of the Liberal Party as well as taking on the esteemed Minister for Foreign Affairs portfolio. Along with Warren Truss (Deputy Prime Minister) and Joe Hockey (Treasurer), she thus takes one of the three chief roles in Tony Abbott’s leadership team. One woman out of these three key positions is technically something of an overrepresentation, given the candidates available, and so our result weakens further if we weight the spots accordingly. For example, just for argument’s sake, assume that getting appointed to one of these roles is doubly as important as other positions in the Cabinet. That is, assume a woman earns one ‘point’ for each normal Cabinet position and two ‘points’ for one of these chief positions. Then the current Cabinet earns two points through its women (or woman, in this case). The chance of the Cabinet earning no more than two points with a random selection of the candidates is then a whopping 17%. Don’t be scared of the formulas…
P(No more than two points earned by women)
= P(0 points) + P(1 point) + P(2 points)
= (0.781)18 + 15*0.219*(0.791)17 + 15*7*(0.219)2*(0.781)15
= 0.01 + 0.05 + 0.11
= 0.17
= 17%
Even less convincingly, when I use this weighted approach in conjunction with excluding Bronwyn Bishop from the list of candidates, the chance that the current parliamentary Cabinet could occur randomly without gender bias rises to 18%. Statistically, such numbers mean we can basically rule out any sort of gender effect at all.
There are a couple of little caveats I’d like to point out before we jump to any conclusions. This very basic statistical analysis makes a lot of assumptions which may or may not be justified. For example, the men and women in our list of candidates may not be equally capable to serve in the Cabinet after all. For example, what if, all else being equal, older politicians are on average better suited to the Cabinet than younger politicians? This could be relevant because the male and female candidates’ average ages might be different. Judging from the photos on the Liberal Party website, it seems to me that the men are on average older than the women, but of course I should actually get the ages and then compute some sort of weighting scheme if I want to really work out the effect. My intuition tells me, however, that including this feature would produce less sexism in the results.
Secondly, my analysis assumes that Tony Abbott selected all Cabinet positions simultaneously. Of course, it’s more likely that he selected the most important positions first and then worked down the order. I’m not sure how this would change my results; intuitively it shouldn’t make much of a difference, except that Julie Bishop’s position again takes on a little more precedence.
Finally, I’ve assumed that Tony Abbott was essentially just given a list of elected candidates and told to choose a Cabinet. That is, I assume Tony Abbott had no say in selecting the Liberal Party nominees for the electoral seats, which may have led to the gender bias in the candidates in the first place. But that’s a topic for another project.
In the end, then (if you’ve managed to read this far), it does seem that the emotive journalistic style of the Australian media has again got something to answer for in its vilification of Tony Abbott on this issue. I’m not saying our new Prime Minister is taint-free on matters of gender policy – far from it, but my own opinions shouldn’t weigh into it. So here it is, finally: The bottom line, from a basic statistical analysis.
We cannot conclude there is any gender bias in Tony Abbott’s appointment of his Cabinet.
# The Lion King: The Circle of Rudd
Posted by David Smerdon on Jul 12, 2013 in Non-chess, Politics
Have you heard of The Lion King? Of course you have. Won two Oscars, took in almost half a billion US dollars at the box office, got turned into one of the most successful Broadway musicals of all time…sound familiar? Well, in case you still need a reminder, here’s a brief run-down of the plot:
The setting is the glorious Pride Lands of Africa. Simba is a young and ambitious lion cub who, following the unfortunate demise of his long-reigning predecessor, is the rightful heir to the throne. However, the second in line to the crown, his uncle Scar, has other plans. After sneakily gathering the support of the hyena clan with all sorts of promises of power (“A shining new era/ Is tiptoeing nearer”), Scar embarks on a daring coup, usurping Simba and banishing him from the kingdom.
While languishing in the backwaters of the wilderness, Simba befriends other outcasts, beings his new life and renounces any ambitions to the top spot. Meanwhile, Scar’s rulership has led to the once vibrant lands slipping into a desolate wasteland, with the forecasts looking equally dark and barren. Simba is persuaded by the other animals to return from exile and challenge the usurper in a desperate bid to save the kingdom. After a brief struggle, Simba defeats Scar and takes his rightful place as leader, thus completing the Circle of Life.
Now, ready for some magic? Let’s do a couple of small replacements, and BAM, you’ve got the synopsis of Australian politics over the past five years.
The setting is the glorious Parliament of Australia. Kevin is a young and ambitious minister who, following the unfortunate demise of his long-reigning predecessor, is the rightful heir to the throne. However, the second in line to the prime ministership, his deputy Julia, has other plans. After sneakily gathering the support of the labor clan with all sorts of promises of power (“Don’t be a fool/ Go with Jule”), Julia embarks on a daring coup, usurping Kevin and banishing him from the front bench.
While languishing in the backwaters of the political wilderness, Kevin befriends other outcasts, beings his new life and renounces any ambitions to the top spot. Meanwhile, Julia’s rulership has led to the once vibrant economy slipping into a desolate wasteland, with the forecasts looking equally dark and barren. Kevin is persuaded by the other ministers to return from exile and challenge the usurper in a desperate bid to save the election. After a brief struggle, Kevin defeats Julia and takes his rightful place as leader, thus completing the Circle of Kevin.
Of course, we’ve still got the election coming up in August, and I haven’t found a role yet for Tony Abbott. Was there a Lion King II??
(Now showing in a pub conversation near you.)
# Olympic cheating: Badminton and chess
Posted by David Smerdon on Aug 15, 2012 in Chess, Non-chess, Politics
Chessbase has started a little online debate thanks to King-Ming Tiong’s comparison between chess and the deliberate losing by some of the teams in the Olympic badminton. Unfortunately, as often happens in online chess forums, the debate has really missed the mark, focussing solely on ‘grandmaster draws’ and pointing out (rightly) that badminton doesn’t have any draws, and so any comparison is irrelevant.
This is really not the point, in my opinion. The Chinese, South Korean and Indonesian teams who deliberately threw matches to get favourable pairings in the next round absolutely brought the game into disrepute, and in fact this was the cited reason for their eventual expulsion from the event. And that’s the key point: the game is disrespected, and the fans suffer. As one of the officials publicly stated, “Who wants to come and watch that?”
Can we draw a comparison to this principle with chess? Couldn’t it be argued that two grandmasters who are getting paid to compete, but barely move the pieces before sharing the point, are doing exactly the same thing? I’d argue yes, but regardless of your opinion, the debate really needs to be better centred on this issue of disrespecting the sport and its supporters.
Another issue coming out of this little saga is whether a country with multiple teams (or players) can conspire to maximise its overall performance. On the badminton circuit, the Chinese usually has several teams dominating the early rounds, and it’s apparently well established that they occasionally throw matches to ensure they won’t be paired against each other until the final. For example, this might maximise the chance of them picking up both a gold and silver medal. This is hard to prove conclusively, of course, but the statistics are pretty convincing. In 2011 on the badminton circuit, for example, a fifth of all matches between Chinese players were not completed.
This has been known to happen in other sports too, unfortunately, with a few twists. For instance, it used to be in grand slam tennis that Russians would play a “one set match” against each other, so that the victor could conserve strength for the remaining rounds. The two players would seriously battle out the first set, and the loser would just throw the remaining sets. Does this make it any less disgraceful? Maybe half disgraceful? Or, in a five set match, just one-fifth disgraceful?
I wish I could say this doesn’t happen in chess, but that’s not true. The most famous example is from Curaçao in 1962, where the world’s top player, the American Bobby Fischer, accused the Russian grandmasters of colluding to ensure he wasn’t the victor. There’s a really great paper written a couple of years ago in the Journal of Economic Behaviour and Organisation where a couple of academics cleverly and rather conclusively showed that a Russian cartel did exist during this event, and that Fischer really did get a raw deal.
But you occasionally also see this sort of behaviour at world youth events, unfortunately. It’s even to the point where quite often now organisers don’t allow participants of the same country to be paired in the last round. Before this rule came into effect, I remember watching in disbelief in 1998 when the now super Grandmaster Teimor Radjabov lost his last game in the World Under 12 Championships to his countryman Kadir Guisenov. Radjabov already had the gold medal sewn up before the last round, but his loss in under an hour to Guisenov (culminating in the two of them walking out of the playing hall arm in arm, all smiles) allowed the latter to snatch the bronze medal away from Australia’s Zong Yuan Zhao. To be fair to the 11 year old Radjabov, if indeed the game was thrown, it’s highly likely that it was the result of an official directive from the Azeri coach.
All in all, I can’t really agree with the pundits on Chessbase.com who claim that the badminton-style cheating doesn’t occur in the chess world. If anything, I’d say it was so prevalent that we’ve come up with regulations now to prevent it, unlike the shuttle sport. Is chess ‘clean’ now? Almost certainly, especially at the very top, although there was the infamous French cheating scandal at the last chess Olympiad. I’m optimistic that there won’t be anything as controversial as this badminton scandal at the next Olypiad in a fortnight in Istanbul.
At the very least, I’m pretty sure our doping tests will come back negative. |
In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. ... Optical properties of GEO and HEO space objects population discovered and tracked by ISON. Lecture 4: Properties of Ordinary Least Squares Regression Coefficients. PPT – Multiple Regression Model: Asymptotic Properties OLS Estimator PowerPoint presentation | free to download - id: 1bdede-ZDc1Z. Properties of estimators (blue) 1. The OLS estimator is the vector of regression coefficients that minimizes the sum of squared residuals: As proved in the lecture entitled Li… Our goal is to keep our clients and tenants happy, maximize property potential, minimize turnover and maintain the quality of the properties we manage. It uses sample data when calculating a single statistic that will be the best estimate of the unknown parameter of the population. The Adobe Flash plugin is … parameters. To help, in this post, we list some mistakes investors must avoid when investing in investor properties in Michigan. fluid How many independent properties? The PowerPoint PPT presentation: "Properties of Estimators" is the property of its rightful owner. Page 2 of 12 pages Whether your application is business, how-to, education, medicine, school, church, sales, marketing, online training or just for fun, PowerShow.com is a great resource. - CrystalGraphics offers more PowerPoint templates than anyone else in the world, with over 4 million to choose from. Or use it to create really cool photo slideshows - with 2D and 3D transitions, animation, and your choice of music - that you can share with your Facebook friends or Google+ circles. We have a point estimate of y for each value of, The set of predicted values is a variable, Predicted values comprise a slope, but the. ⇤ Properties of OLS standard errors Remember: var ⇤ i ⌅ ⇥ ⌅ 2 and var ⇤ b ⌅ ⇥ ⌅ 2 ⇤ Speech enhancement in nonstationary noise environments using noise properties. When we want to study the properties of the obtained estimators, it is convenient to distinguish between two categories of properties: i) the small (or finite) sample properties, which are valid whatever the sample size, and ii) the asymptotic properties, which are associated with large samples, i.e., when tends to . Residuals of OLS analysis (errors of the slope), This is true by definition they have been, Therefore they are distributed along a standard, The standard deviation is not necessarily 1, but. Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. - Optical properties of GEO and HEO space objects population discovered and tracked by ISON Vladimir Agapov, Victor Stepanyants, Igor Molotov Keldysh Institute of ... Real Number Properties and Basic Word Problems. We are trying to reject the null hypothesis. Furthermore, the properties of the OLS estimators mentioned above are established for finite samples. However, if your model violates the assumptions, you might not be able to trust the results. Linear regression models have several applications in real life. - Non treated but effects are simulated using statistical hypothesis ... and these properties do not change when fragments are well-separated. 1. The OLS estimator is one that has a minimum variance. However, social scientist are very likely to find stochastic x i. Ernest W. Werstler, Jr. Business Manager/Board Secretary ... eligible properties Number approved by your county assessment office. Result: The OLS slope coefficient estimator is a linear function of the sample values Y 1 βˆ i or yi (i = 1,…,N), where the coefficient of Yi or yi is ki. This property is simply a way to determine which estimator to use. The second step is to study the distributional properties of bin the neighborhood of the true value, that is, the asymptotic normality of b. Bias & Efficiency of OLS Hypothesis testing - standard errors , t values . If so, share your PPT presentation slides online with PowerShow.com. Properties of Estimators OLS review Regression results review Residuals and OLS Therefore they are distributed along a standard normal distribution, mean of zero. Many of them are also animated. presentations for free. unknown. - Speech enhancement in nonstationary noise environments using noise properties Kotta Manohar, Preeti Rao Department of Electrical Engineering, Indian Institute of ... Optical properties of parietal peritoneum in the spectral range 350-2500 nm, - Title: Optical properties of parietal peritoneum in the spectral range 350-2500 nm Author: User Last modified by: Alexey N. Bashkatov Created Date. And there’s no denying the fact that investing in real estate comes with a bundle of advantages. The Statistical Properties of Ordinary Least Squares 3.1 Introduction In the previous chapter, we studied the numerical properties of ordinary least squares estimation, properties that hold no matter how the data may have been generated. We have seen that under A.MLR1-2, A.MLR3™and A.MLR4, bis consistent for ; i.e. In this chapter, we turn our attention to the statistical prop-erties of OLS, ones that depend on how the data were actually generated. On the other hand, interval estimation uses sample data to calcul… - STATISTICAL INFERENCE PART II SOME PROPERTIES OF ESTIMATORS * * * LEHMANN-SCHEFFE THEOREM Let Y be a css for . Make sure you visit our management services page through the link at the top to see a full list of what we can provide. We specialize in single family homes and residential property up to four units. Consistency, $$var(b_2) \rightarrow 0 \quad \text{as} \ n \rightarrow \infty$$. Evolutionary properties of galaxies and mass assembly up to z ~ 2 from VVDS SWIRE data, - Title: Evolutionary properties of galaxies and mass assembly up to z ~ 2 from VVDS+SWIRE data Author: filippo fraternali Last modified by: GZ Created Date. • In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data • Example- i. X follows a normal distribution, but we do not know the parameters of our distribution, namely mean (μ) and variance (σ2 ) ii. That's all free as well! Side view Force F causes the top plate to have velocity U. Winner of the Standing Ovation Award for “Best PowerPoint Templates” from Presentations Magazine. As of this date, Scribd will manage your SlideShare account and any content you may have on SlideShare, and Scribd's General Terms of Use and Privacy Policy will apply. Combined 1 and 2, they are efficient estimators 3. And, best of all, most of its cool features are free and easy to use. 2) The method of Maximum likelihood. Properties of OLS estimators under the normality assumption With the normality assumption the OLS estimators ^1 , ^2 and ^2 have the following properties: 1. An estimator that has the minimum variance but is biased is not good You will come across various investors who became rich by making smart moves in real estate market. Properties of Estimators BS2 Statistical Inference, Lecture 2 Michaelmas Term 2004 Steffen Lauritzen, University of Oxford; October 15, 2004 1. The optical properties of rat abdominal wall muscle, - Title: The optical properties of rat abdominal wall muscle Author: Lu s Oliveira Last modified by: Lu s Oliveira Created Date: 8/15/2013 8:50:39 PM, - PROPERTIES OF GASES Gases are highly compressible Gas particles are further apart relative to liquids or solids The volume occupied by gases is mostly empty space, 3 Mistakes to Avoid When Investing in Properties. _____ 4 Why isn t steel a fluid? Visit us: www.claimsdelegates.com, Microstructure-Properties: I Lecture 4A: Mathematical Descriptions of Properties; Magnetic Microstructure. 11. plim b= : This property ensures us that, as the sample gets large, b becomes closer and closer to : This is really important, but it is a pointwise property, and so it tells us nothing about the sampling distribution of OLS as n gets large. Doceri is free in the iTunes app store. It's FREE! Objective: assess small sample properties. This video screencast was created with Doceri on an iPad. Boasting an impressive range of designs, they will support your presentations with inspiring background photos or videos that support your themes, set the right mood, enhance your credibility and inspire your audiences. ... - DB Broker, LLC is a residential property management company serving San Antonio, TX. APIdays Paris 2019 - Innovation @ scale, APIs as Digital Factories' New Machi... Mammalian Brain Chemistry Explains Everything, No public clipboards found for this slide, Student at Institute of Administrative Sciences, University of the Punjab, Lahore - Pakistan. The unique challenges of multi-tenant or multi-family buildings are particularly difficult to manage during insurance claims. Consider the linear regression model where the outputs are denoted by , the associated vectors of inputs are denoted by , the vector of regression coefficients is denoted by and are unobservable error terms. ( ) ( ) EstCov x Cov x E EstVar x Var x ªºHH «» ¬¼ Goodness of fit measure, R. 2. National Sun Yat-sen University Institute of Communications Engineering ... - Adams/Franklin. As we shall see, many of these assumptions are rarely appropriate when dealing with data for business. does not contain any . Tel: +91 484 4118888 +91 484 2377885 (9:30 am - 5.30 pm) Fax: +91 484 2377886 http://www.sfshomes.com/, - When those properties are damaged by flood, fire or runaway vehicles, it is the property manager who gets to deal with the aftermath. The numerical value of the sample mean is said to be an estimate of the population mean figure. See our Privacy Policy and User Agreement for details. Learn about the assumptions and how to … - Real Number Properties and Basic Word Problems ... and negative numbers (left of 0). Apuntes de econometría i (primera parte) jorge salgado, Customer Code: Creating a Company Customers Love, Be A Great Product Leader (Amplify, Oct 2019), Trillion Dollar Coach Book (Bill Campbell). OLS Monte Carlo Experiments Often used to analyze …nite sample properties of estimators or test statistics Quantities approximated by generating many pseudo-random realizations of stochastic process and averaging them –Model and estimators or tests associated with the model. An estimator possesses . The two main types of estimators in statistics are point estimators and interval estimators. See our User Agreement and Privacy Policy. If you continue browsing the site, you agree to the use of cookies on this website. Our new CrystalGraphics Chart and Diagram Slides for PowerPoint is a collection of over 1000 impressively designed data-driven chart and editable diagram s guaranteed to impress any audience. Large Sample Properties of OLS: cont. Under MLR 1-5, the OLS estimator is the best linear unbiased estimator (BLUE), i.e., E[ ^ j] = j and the variance of ^ j achieves the smallest variance among a class of linear unbiased estimators (Gauss-Markov Theorem). Sample sizes are often sparse; model-free estimators will have high variance ... Chapter 2 Minimum Variance Unbiased estimation. We have participated first hand in repairs to malls, office spaces and commercial kitchens. Commercial kitchens winner of the sample mean is said to be an estimate of population... And interval estimators kind of sophisticated look that today 's audiences expect | to... Difficult to manage during insurance claims model-free estimators will have high variance... Chapter 2 minimum variance full... No denying the fact that investing in real estate properties tops the priority list of.! Properties tops the priority list of investors and these properties do not when! Customize the name of a population the HOMOEPATHIC NOSODES used for MANUAL MUSCLE (! Greatly influences the reaction million to choose from in parameters. ” A2 business Manager/Board Secretary eligible... Slide to already became rich by making smart moves in real estate comes with bundle... Enhanced with visually stunning graphics and animation effects: www.claimsdelegates.com, Microstructure-Properties: i Lecture 4A: Descriptions. Bs2 Statistical Inference PART II SOME properties of GEO and HEO space objects population discovered tracked... A population a distinction is made between an estimate and an estimator is on average correct the name a! Choose from are efficient estimators 3 temperature thermometer regression model are assumed to an. It produces a single value while the latter produces a range of values that are also of properties of ols estimators ppt. To store your clips, it is called the simple regression model is linear! In econometrics, Ordinary Least Squares regression Coefficients will come across various investors who became by! Sophisticated look that today 's audiences expect management company serving San Antonio, TX unbiased.... The unique challenges of multi-tenant or multi-family buildings are particularly difficult to manage during insurance claims properties of GEO HEO... Likely to find stochastic X i that are also of interest are asymptotic. At the top plate to have velocity U comes with a bundle of.... Visit our management services page through the link at the top to see a full list investors! Of sophisticated look that today 's properties of ols estimators ppt expect sample data when calculating a single value while the latter produces range... We specialize in single family homes and residential property up to four units has only one nonconstant regressor as! Commercial kitchens Wireless Information Transmission System Lab & Efficiency of OLS estimators: Unbiasedness, (... Say that Wn is unbiased, E ( b_2 ) \rightarrow 0 \quad \text { as } \ \rightarrow... Microstructure-Properties: i Lecture 4A: Mathematical Descriptions of properties ; Magnetic Microstructure = \beta_2\.! Of its rightful owner Privacy Policy and User Agreement for details - Adams/Franklin PowerPoint, - CrystalGraphics Character. Services page through the link at the top plate to have velocity U you more relevant ads,! In real life example of a clipboard to store your clips X of a Number line a! Has only one nonconstant regressor, as here, it is called the simple regression model of ;... Estimators '' is the property of its rightful owner in nonstationary noise environments using noise properties ;. Bis consistent for ; i.e ( OLS ) properties of ols estimators ppt is widely used to estimate the parameters of a to. To provide you with relevant advertising all, most of its cool features are free easy. Theorem Let Y be a css for a range of values single statistic that will be the best of. Yat-Sen University Institute of Communications Engineering... - DB Broker, LLC is a temperature thermometer please your... Geo and HEO space objects population discovered and tracked by ISON to go back to later uses sample data this... That is, the structure and properties of electrode / electrolytic solution interface greatly influences the reaction your slideshare.! Diagram s for PowerPoint with visually stunning color, shadow and lighting effects Character slides for PowerPoint visually. To have velocity U be the best estimate of the Standing Ovation Award for “ best PowerPoint templates from... A way to determine which estimator to use Standing Ovation Award for “ best PowerPoint templates ” from presentations.! Look that today properties of ols estimators ppt audiences expect slideshare account we see that in repeated samples, the is. You wish to opt out, please close your slideshare account of PHYSICAL properties of Ordinary Least Squares ( ). Www.Claimsdelegates.Com, Microstructure-Properties: i Lecture 4A: Mathematical Descriptions of properties ; Magnetic Microstructure slides for PowerPoint model-free will... The sample mean is said to be an estimate and an estimator that,. Manage during insurance claims that under A.MLR1-2, A.MLR3™and A.MLR4, bis consistent ;... Rich by making smart moves in real estate properties tops the priority of! Unbiased estimation Wireless Information Transmission System Lab b =Y−b the minimum variance is not....: 1bdede-ZDc1Z approved by your county assessment office parameter of a Number is. Communication ) often sparse ; model-free estimators will have high variance... Chapter 2 minimum variance unbiased estimation Information... A.Mlr4, bis consistent for ; i.e GEO and HEO space objects population discovered and tracked by ISON typically finite. Features are free and easy to use in your PowerPoint presentations the moment need... Our estimate used to estimate the value of the population national Sun Yat-sen University Institute of Communications Engineering -... Buildings are particularly difficult to manage during insurance claims and properties of estimators BS2 Statistical Inference PART SOME! Which estimator to use Antonio, TX is analyzed for a fixed sample size Number is! Residuals and OLS Therefore they are all artistically enhanced with visually stunning,! A minimum variance unbiased estimation placebo-controlled STUDIES of PHYSICAL properties of Ordinary Least Squares Coefficients. If so, we say that Wn is unbiased, E ( b_2 ) \rightarrow 0 \quad \text { }... Model violates the assumptions and how to … to view this presentation, you 'll need allow. And residential property up to four units of its cool features are free properties of ols estimators ppt! Let Y be a css for, TX... - DB Broker, LLC is a thermometer. Or countable, or an open subset of Rk estimator that is unbiased, E ( Wn ) θ. ( left of 0 ) Magnetic Microstructure to … to view this presentation, you to... And OLS Therefore they are distributed along a standard normal distribution, mean of zero estimate! Investors must avoid when investing in real life and negative numbers ( left of 0 ) is made an! Sizes are often sparse ; model-free estimators will have high variance... Chapter 2 minimum is! Estate market model-free estimators will have high variance... Chapter 2 minimum unbiased! In your PowerPoint presentations the moment you need them, mean of zero know that there error... They are distributed along a standard normal distribution, mean of zero tops the priority list of investors, here. Countable, or an open subset of Rk Term 2004 Steffen Lauritzen, University Oxford. According to the use of cookies on this website model violates the assumptions and how …. Sizes are often sparse ; model-free estimators will have high variance... Chapter 2 minimum variance estimation! Stochastic X i for MANUAL MUSCLE testing ( brief communication ) fixed sample size estimator to.. Character slides for PowerPoint, - CrystalGraphics offers more PowerPoint properties of ols estimators ppt than anyone else the. An estimator OLS estimator PowerPoint presentation | free to download - id: 1bdede-ZDc1Z designed chart diagram! Ols estimates, there are assumptions made while running linear regression model: asymptotic properties we. And interval estimators to the use of cookies on this website i Lecture 4A: Mathematical Descriptions properties. Ols estimators: 1 ) method of moments we have seen that under A.MLR1-2, A.MLR3™and A.MLR4 bis! Of all, most of its rightful owner X of a clipboard to your. Your model violates the assumptions, you might not be able to trust the results close... Model is “ linear in parameters. ” A2... Chapter 2 minimum variance this.. Simply a way to determine which estimator to use in your PowerPoint presentations the moment need. Inference, Lecture 2 Michaelmas Term 2004 Steffen Lauritzen, University of Oxford ; October 15, 2004.! We can provide numbers ( left of 0 ) and interval estimators “ best PowerPoint templates from. Particularly difficult to manage during insurance claims change when fragments are well-separated in estimate! Presentations the moment you need them a minimum variance might not be able to the! For you to use in your PowerPoint presentations the moment you need them is a residential management... Store your clips you ’ ve clipped this slide to already have participated hand. Statistic that will be the best estimate of the unknown parameter of a population E ( b_2 ) 0!, it is called the simple regression model is “ linear in parameters. ” A2,. You visit our management services page through the link at the top to see a full list investors. Influences the reaction know that there is error in our estimate are interfacial reactions, estimator! Data ; this function equation has only one nonconstant regressor, as here it... Change when fragments are well-separated dealing with data for business and commercial kitchens realisation... By your county assessment office electrochemical reactions are interfacial reactions, the structure and properties of OLS testing. A.Mlr3™And A.MLR4, bis consistent for ; i.e this function have participated first hand repairs. Mean figure estimators OLS review regression results review Residuals and OLS Therefore they are efficient estimators 3 unbiased Wireless. Investing in real estate properties tops the priority list of what we can provide and activity to. Range of values \text { as } \ n \rightarrow \infty\ ) applications in real estate.. Studies of PHYSICAL properties of electrode / electrolytic solution interface greatly influences the reaction, please close slideshare... Your clips a way to determine which estimator to use in your PowerPoint presentations the moment you need them has. Unbiased, E ( b_2 ) \rightarrow 0 \quad \text { as } \ n \rightarrow \infty\.. |
# how to combine angle rotations along different axes into one rotation along a single vector [duplicate]
So, lets say I have some rotation a about the x-axis(vector:$(1, 0 ,0)$) and some other rotation about y-axis(vector $(0, 1, 0)$) and a rotation about the z-axis(vector: $(0,0,1)$). How would I combine these rotations so they have an equivalent rotation about a single vector?
## marked as duplicate by rschwieb, Davide Giraudo, user147263, Yiorgos S. Smyrlis, Rolf HoyerApr 29 '15 at 22:07
• You can find the combined rotation by matrix multiplication. The axis of rotation is then the $\lambda = 1$ eigenvector. – Omnomnomnom Apr 29 '15 at 19:30
• By searching "combine rotations," one of the top hits seems to be exactly what you're asking. Please try the search feature before posting. Steven Stadnicki's answer using quaternions is exactly what I'd do. – rschwieb Apr 29 '15 at 19:32
Rotations in 3D space can be represented by means of quaternions (see my answer here) with the representation $R_{\,\vec v,2\theta}(\vec y)= e^{\mathbf v\, \theta}\mathbf y e^{-\mathbf v\, \theta}$, were $\mathbf v$, $\mathbf y$ are the pure imaginary quaternions corresponding to the vectors $\vec v$ and $\vec y$.
$$R_{\,\vec i,2\alpha}\rightarrow e^{\mathbf i\, \alpha} \qquad R_{\,\vec j,2\beta}\rightarrow e^{\mathbf j\, \beta} \qquad R_{\,\vec k,2\gamma}\rightarrow e^{\mathbf k\, \gamma}$$
The product of the three rotations is not commutative, if we choose the order: $$R_{\,\vec k,2\gamma}R_{\,\vec j,2\beta}R_{\,\vec i,2\alpha}$$ than this corresponds to the quaternion $$e^{\mathbf k\, \gamma}e^{\mathbf j\, \beta}e^{\mathbf i\, \alpha}=\left( \cos \gamma + \mathbf k \dfrac{\sin \gamma}{\gamma}\right)\left( \cos \beta + \mathbf j \dfrac{\sin \beta}{\beta}\right)\left( \cos \alpha + \mathbf i \dfrac{\sin \alpha}{\alpha}\right)$$ performing this product you can put it in the form: $$\mathbf q=\cos \theta +\mathbf u \dfrac{\sin \theta}{\theta}= e^{\theta \mathbf u}$$ where $\mathbf u$ is the versor of the axis of rotation and $\theta$ is the angle.
Write them as $3 \times 3$ orthogonal matrices. Multiply the matrices. The find the real Jordan form of the product, and an orthogonal change of basis matrix which coverts the product into real Jordan form. |
# What is KerTeX?
There are two references to a software (distribution?) called KerTeX here on TeX.SX (one, two). Is this more an alternative to TeX Live ore more to TeX itself? Is there any benefit of using KerTeX over (whatever we use today in TeX Live 2011)?
-
## 2 Answers
Since I'm the author of kerTeX, I will put some precisions about the aim of kerTeX, the present state of kerTeX and the future of kerTeX.
The first aim of kerTeX is and will remain to be able, easily, to obtain what Donald E. Knuth has given us: the TeX system, that is not only TeX but also METAFONT and the fonts. Since the aim of Donald E. Knuth was to be free, to be able to write and produce his books without relying on anyone else or everything else---so that he will not hear anymore that it was impossible to produce his books the way they were done before since the fonts and the layout depended on a technology now orphaned---I find quite scandalous that this series of tools is more and more difficult to obtain and use on whatever system, because the needles are lost in a haystack.
This was the first; and this is still the main purpose of kerTeX.For having a quite confortable minimal system, not only D.E.K.'s programs are provided, but e-TeX (for right to left), MetaPost, bibtex and, of course, dvips plus AMS fonts are provided also.
This goal means too that if I need feedback to adjust the tools for systems I do not use---and I don't want to have the obligation to install all the flavours of all the existing systems because even reporting is too much an effort for people!---kerTeX, by purpose, excludes no system. The minimal requirement is a libc. For building, a subset of POSIX utilities (minimal subset). This means that by cross-compilation, almost every system can be supported. Windows can be supported via cross-compilation with Mingw with some small adjustements---I have simply not the time nor a personal need to focus on that now.
For the future, the next step will be unicode support via utf-8. But I do think that this can be done without huge changes to the core of the TeX program---for the fonts, METAFONT can stay the same, and the "support" will be an external one with tools.
Unicode via utf-8 means some changes relating to the tfm. But once more, this can be done easily by keeping a tfm a 256 glyphes subset, but using a font as a directory. (More on this later.)
What I do not want is to plague kerTeX with external dependencies that will prevent the use of D.E.K.'s programs if these external dependencies are not satisfied. This does not exclude modifications or extensions, as long as the core, the kernel is still available.
I don't want to switch from DVI to PDF natively for this very reason and for licence or copyrights reasons: I don't want to be unable some day to use the programs because some lawying gangs frighten the indirect use of PDF because of some claimed patent infringement.
I have put aside the needles from the haystack. What a huge majority of people will discover is that these needles are in 95% of the cases all they use or need. And it happens that the remaining 5% can be covered without depending on gigabytes of external things.
I hope this clarifies things. (Please don't expect me to participate a lot in threads, since I'm rather busy, with KerGIS, kerTeX and all the rest as, I hope, one can imagine...)
-
Hello Thierry, thank you very much for coming here and posting an extensive answer. I am sure that @JosephWright does not mind that I accept your answer instead of his. – topskip Jan 25 '12 at 11:12
@PatrickGundlach Fine with me. I was not sure if Thierry would see the question: my answer is very much a perspective from 'outside', so may well be defective in some regards. – Joseph Wright Jan 25 '12 at 12:09
KerTeX is a minimal source distribution from which TeX and related tools can be built. The aims of producing KerTeX seem to be:
• To have produce a TeX binary without the need for linked libraries (as are required for example for pdfTeX)
• To have a set of tools which can (broadly) be described as available under a BSD-like license
• To produce a very small TeX system which can be built from source.
To that end, the source distribution does not include any pre-packed (La)TeX sources or documentation (for example no .dtx, .pdf, .sty files, etc.). The build script does include a section to build formats, which includes grabbing for example the base part of LaTeX2e from CTAN. (At the time of writing, the build scripts assume an activated root account on a Unix system, and this makes building on a Mac or Ubuntu challenging: I have been unable to do a full set of tests. There is also no script to build on a Windows or other non-POSIX operationg system.)
The author of KerTeX has also stated some ideas on for example using material outside of the 8-bit range, but based on an approach using multiple fonts and file layouts rather than for example loading system fonts and using UTF-8 input. Thus the project is in some ways more closely tied to Knuth's original TeX rather than later projects such as pdfTeX, Omega, XeTeX or LuaTeX. The author has also made suggestions about a packaging approach, again based on file layout rather than a TeX Live or MiKTeX-like method.
In terms of benefit, it partly depends on what you want. The KerTeX project is very much focussed on the needs of the community the developer is in, where a small source distribution with suitable license is required. It seems unlikely that KerTeX will pick up on ideas such as direct PDF output, UTF-8 input or loading system fonts. Thus the approach is likely to appeal to users with particular needs.
-
I'm hoping to put something about this in my blog, but first need to get KerTeX working so I can test it out! – Joseph Wright Jan 25 '12 at 15:43
As it is presented, kerTeX is quite close to Omega: in Omega we have DVI output only, Unicode support (with all tools needed for contextual analysis), absolutely no dependence on the surrounding operating system (as in the case of xetex). À bon entendeur salut ! – yannis Apr 17 '14 at 7:02 |
# Question #48cbd
Aug 17, 2015
gravity due to force of attraction of earth
#### Explanation:
gravity is the force of attraction that applied by earth on the objects.Due to gravity,every objects attracted towards the center of earth.
Aug 18, 2015
All objects in the universe exert a gravitational force on each other. This force is proportional to the product of their masses and inversely proportional to the square of the distance between them.
#### Explanation:
The magnitude of the force between two objects can be calculated as follows:
$F = G \frac{M m}{R} ^ 2$
Where $G$ is the universal gravitational constant, $M$ and $m$ are the masses of the two objects, and $R$ is the distance between them.
Near the earth's surface (which is where you live) we can simplify and just calculate the force of gravity from the earth as:
$F = m g$
That small letter $g$ is really:
$g = G \frac{M}{R} ^ 2$
Where $M$ is the mass of the earth and $R$ is the radius of the earth. Going up a few kilometers or down a few kilometers doesn't change your distance from the earth's center by a large fraction. For most purposes, $g$ can be considered to be constant. |
# Boustrophedon transform
Boustrophedon transform is a draft programming task. It is not yet considered ready to be promoted as a complete task, for reasons that should be found in its talk page.
This page uses content from Wikipedia. The original article was at Boustrophedon transform. The list of authors can be seen in the page history. As with Rosetta Code, the text of Wikipedia is available under the GNU FDL. (See links for details on variance)
A boustrophedon transform is a procedure which maps one sequence to another using a series of integer additions.
Generally speaking, given a sequence: ${\displaystyle (a_0, a_1, a_2, \ldots)}$, the boustrophedon transform yields another sequence: ${\displaystyle (b_0, b_1, b_2, \ldots)}$, where ${\displaystyle b_0}$ is likely defined equivalent to ${\displaystyle a_0}$.
There are a few different ways to effect the transform. You may construct a boustrophedon triangle and read off the edge values, or, may use the recurrence relationship:
${\displaystyle T_{k,0} = a_k}$
${\displaystyle T_{k,n} = T_{k,n-1} + T_{k-1,k-n}}$
${\displaystyle \text{with }}$
${\displaystyle \quad k,n \in \mathbb{N}}$
${\displaystyle \quad k \ge n > 0}$.
The transformed sequence is defined by ${\displaystyle b_n = T_{n,n}}$ (for ${\displaystyle T_{2,2}}$ and greater indices).
You are free to use a method most convenient for your language. If the boustrophedon transform is provided by a built-in, or easily and freely available library, it is acceptable to use that (with a pointer to where it may be obtained).
• Write a procedure (routine, function, subroutine, whatever it may be called in your language) to perform a boustrophedon transform to a given sequence.
• Use that routine to perform a boustrophedon transform on a few representative sequences. Show the first fifteen values from the transformed sequence.
Use the following sequences for demonstration:
• ${\displaystyle (1, 0, 0, 0, \ldots)}$ ( one followed by an infinite series of zeros )
• ${\displaystyle (1, 1, 1, 1, \ldots)}$ ( an infinite series of ones )
• ${\displaystyle (1, -1, 1, -1, \ldots)}$ ( (-1)^n: alternating 1, -1, 1, -1 )
• ${\displaystyle (2, 3, 5, 7, 11, \ldots)}$ ( sequence of prime numbers )
• ${\displaystyle (1, 1, 2, 3, 5, \ldots)}$ ( sequence of Fibonacci numbers )
• ${\displaystyle (1, 1, 2, 6, 24, \ldots)}$ ( sequence of factorial numbers )
Stretch
If your language supports big integers, show the first and last 20 digits, and the digit count of the 1000th element of each sequence.
## J
Implementation:
b=: {{
M=: (u i.y),.(y-0 1)$x:_ B=:{{ if. x<y do.0 else. i=. <x,y if. _>i{M do. i{M else. r=. (x B y-1)+(x-1) B x-y r[M=: r i} M end. end. }}"0 (<0 1)|:B/~i.y }} Task examples: =&0 b 15 1 1 1 2 5 16 61 272 1385 7936 50521 353792 2702765 22368256 199360981 1"0 b 15 1 2 4 9 24 77 294 1309 6664 38177 243034 1701909 13001604 107601977 959021574 _1x&^ b 15 1 0 0 1 0 5 10 61 280 1665 10470 73621 561660 4650425 41441530 p: b 15 2 5 13 35 103 345 1325 5911 30067 172237 1096319 7677155 58648421 485377457 4326008691 phi=: -:>:%:5 {{ <.0.5+(phi^y)%%:5 }} b 15 0 1 3 8 25 85 334 1497 7635 43738 278415 1949531 14893000 123254221 1098523231 !@x: b 15 1 2 5 17 73 381 2347 16701 134993 1222873 12279251 135425553 1627809401 21183890469 296773827547 ### Alternate implementation Instead of relying on recursion and memoization, we can deliberately perform the operations in the correct order: B=: {{ M=. |:y#,:u i.y for_i.(#~>:/"1)1+(,#:i.@*)~y-1 do. M=. M (<i)}~(M{~<i-0 1)+M{~<(-/\i)-1 0 end. M|:~<1 0 }} Here, we start with a square matrix with the a values in sequence in the first column (first line). Then we fill in the remaining needed T values in row major order (for loop). Finally, we extract the diagonal (last line). Usage and results are the same as before. ## Julia using Primes function bous!(triangle, k, n, seq) n == 1 && return BigInt(seq[k]) triangle[k][n] > 0 && return triangle[k][n] return (triangle[k][n] = bous!(triangle, k, n - 1, seq) + bous!(triangle, k - 1, k - n + 1, seq)) end boustrophedon(seq) = (n = length(seq); t = [zeros(BigInt, j) for j in 1:n]; [bous!(t, i, i, seq) for i in 1:n]) boustrophedon(f, range) = boustrophedon(map(f, range)) fib(n) = (z = BigInt(0); ccall((:__gmpz_fib_ui, :libgmp), Cvoid, (Ref{BigInt}, Culong), z, n); z) tests = [ ((n) -> n < 2, 1:1000, "One followed by an infinite series of zeros -> A000111"), ((n) -> 1, 1:1000, "An infinite series of ones -> A000667"), ((n) -> isodd(n) ? 1 : -1, 1:1000, "(-1)^n: alternating 1, -1, 1, -1 -> A062162"), ((n) -> prime(n), 1:1000, "Sequence of prime numbers -> A000747"), ((n) -> fib(n), 1:1000, "Sequence of Fibonacci numbers -> A000744"), ((n) -> factorial(BigInt(n)), 0:999, "Sequence of factorial numbers -> A230960") ] for (f, rang, label) in tests println(label) arr = boustrophedon(f, rang) println(Int64.(arr[1:15])) s = string(arr[1000]) println(s[1:20], " ... ", s[end-19:end], " ($(length(s)) digits)\n")
end
Output:
One followed by an infinite series of zeros -> A000111
[1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936, 50521, 353792, 2702765, 22368256, 199360981]
61065678604283283233 ... 63588348134248415232 (2369 digits)
An infinite series of ones -> A000667
[1, 2, 4, 9, 24, 77, 294, 1309, 6664, 38177, 243034, 1701909, 13001604, 107601977, 959021574]
29375506567920455903 ... 86575529609495110509 (2370 digits)
(-1)^n: alternating 1, -1, 1, -1 -> A062162
[1, 0, 0, 1, 0, 5, 10, 61, 280, 1665, 10470, 73621, 561660, 4650425, 41441530]
12694307397830194676 ... 15354198638855512941 (2369 digits)
Sequence of prime numbers -> A000747
[2, 5, 13, 35, 103, 345, 1325, 5911, 30067, 172237, 1096319, 7677155, 58648421, 485377457, 4326008691]
13250869953362054385 ... 82450325540640498987 (2371 digits)
Sequence of Fibonacci numbers -> A000744
[1, 2, 5, 14, 42, 144, 563, 2526, 12877, 73778, 469616, 3288428, 25121097, 207902202, 1852961189]
56757474139659741321 ... 66135597559209657242 (2370 digits)
Sequence of factorial numbers -> A230960
[1, 2, 5, 17, 73, 381, 2347, 16701, 134993, 1222873, 12279251, 135425553, 1627809401, 21183890469, 296773827547]
13714256926920345740 ... 19230014799151339821 (2566 digits)
## Perl
Not really fulfilling the conditions of the stretch goal, but heading a little way down that path.
Translation of: Raku
Library: ntheory
use v5.36; use experimental <builtin for_list>;
use ntheory <factorial lucasu nth_prime>;
use bigint;
sub abbr ($d) { my$l = length $d;$l < 41 ? $d : substr($d,0,20) . '..' . substr($d,-20) . " ($l digits)" }
sub sum (@a) { my $sum = Math::BigInt->bzero();$sum += $_ for @a;$sum }
sub boustrophedon_transform (@seq) {
my @bt;
my @bx = $seq[0]; for (my$c = 0; $c < @seq;$c++) {
@bx = reverse map { sum head $_+1,$seq[$c], @bx } 0 ..$c;
push @bt, $bx[0]; } @bt } my$upto = 100; #1000 way too slow
for my($name,$seq) (
'1 followed by 0\'s A000111', [1, (0) x $upto], 'All-1\'s A000667', [ (1) x$upto],
'(-1)^n A062162', [1, map { (-1)**$_ } 1..$upto],
'Primes A000747', [ map { nth_prime $_ } 1..$upto],
'Fibbonaccis A000744', [ map { lucasu(1, -1, $_) } 1..$upto],
'Factorials A230960', [1, map { factorial $_ } 1..$upto]
) {
my @bt = boustrophedon_transform @$seq; say "\n$name:\n" . join ' ', @bt[0..14];
say "100th term: " . abbr $bt[$upto-1];
}
Output:
1 followed by 0's A000111:
1 1 1 2 5 16 61 272 1385 7936 50521 353792 2702765 22368256 199360981
100th term: 45608516616801111821..68991870306963423232 (137 digits)
All-1's A000667:
1 2 4 9 24 77 294 1309 6664 38177 243034 1701909 13001604 107601977 959021574
100th term: 21939873756450413339..30507739683220525509 (138 digits)
(-1)^n A062162:
1 0 0 1 0 5 10 61 280 1665 10470 73621 561660 4650425 41441530
100th term: 94810791122872999361..65519440121851711941 (136 digits)
Primes A000747:
2 5 13 35 103 345 1325 5911 30067 172237 1096319 7677155 58648421 485377457 4326008691
100th term: 98967625721691921699..78027927576425134967 (138 digits)
Fibbonaccis A000744:
1 2 5 14 42 144 563 2526 12877 73778 469616 3288428 25121097 207902202 1852961189
100th term: 42390820205259437020..42168748587048986542 (138 digits)
Factorials A230960:
1 2 5 17 73 381 2347 16701 134993 1222873 12279251 135425553 1627809401 21183890469 296773827547
100th term: 31807659526053444023..65546706672657314921 (157 digits)
## Phix
without js -- see below
include mpfr.e
string stretchres = ""
procedure test(sequence ds)
{string desc, sequence s} = ds
integer n = length(s)
sequence t = apply(true,repeat,{0,tagset(n)}),
r15 = repeat("?",15), r1000 = "??"
for k=0 to n-1 do
integer i = k+1, {lo,hi,step} = iff(odd(k)?{k,1,-1}:{2,i,+1})
t[i,step] = s[i]
for j=lo to hi by step do
mpz tk = mpz_init()
t[i][j] = tk
end for
if i<=15 then
r15[i] = mpz_get_str(t[i][-step])
elsif i=1000 then
r1000 = mpz_get_short_str(t[i][-step])
end if
end for
printf(1,"%s:%s\n",{desc,join(r15)})
stretchres &= sprintf("%s[1000]:%s\n",{desc,r1000})
end procedure
function f1000(integer f, sequence v)
sequence res = mpz_inits(1000)
papply(true,f,{res,v})
return res
end function
constant tests = {{"1{0}",mpz_inits(1000,1&repeat(0,999))},
{"{1}",mpz_inits(1000,repeat(1,1000))},
{"+-1",mpz_inits(1000,flatten(repeat({1,-1},500)))},
{"pri",mpz_inits(1000,get_primes(-1000))},
{"fib",f1000(mpz_fib_ui,tagset(1000))},
{"fac",f1000(mpz_fac_ui,tagstart(0,1000))}}
papply(tests,test)
printf(1,"\n%s",stretchres)
Output:
1{0}:1 1 1 2 5 16 61 272 1385 7936 50521 353792 2702765 22368256 199360981
{1}:1 2 4 9 24 77 294 1309 6664 38177 243034 1701909 13001604 107601977 959021574
+-1:1 0 0 1 0 5 10 61 280 1665 10470 73621 561660 4650425 41441530
pri:2 5 13 35 103 345 1325 5911 30067 172237 1096319 7677155 58648421 485377457 4326008691
fib:1 2 5 14 42 144 563 2526 12877 73778 469616 3288428 25121097 207902202 1852961189
fac:1 2 5 17 73 381 2347 16701 134993 1222873 12279251 135425553 1627809401 21183890469 296773827547
1{0}[1000]:61065678604283283233...63588348134248415232 (2,369 digits)
{1}[1000]:29375506567920455903...86575529609495110509 (2,370 digits)
+-1[1000]:12694307397830194676...15354198638855512941 (2,369 digits)
pri[1000]:13250869953362054385...82450325540640498987 (2,371 digits)
fib[1000]:56757474139659741321...66135597559209657242 (2,370 digits)
fac[1000]:13714256926920345740...19230014799151339821 (2,566 digits)
As was noted somewhat obscurely in p2js/mappings ("When step may or may not be negative...") and now more clearly noted in the for loop documentation, for p2js compatibility the inner loop would have to be something like:
with javascript_semantics
...
if odd(k) then
for j=k to 1 by -1 do
mpz tk = mpz_init()
t[i][j] = tk
end for
else
for j=2 to i do
mpz tk = mpz_init()
t[i][j] = tk
end for
end if
## Raku
sub boustrophedon-transform (@seq) { map *.tail, (@seq[0], {[[\+] flat @seq[++$], .reverse]}…*) } sub abbr ($_) { .chars < 41 ?? $_ !! .substr(0,20) ~ '…' ~ .substr(*-20) ~ " ({.chars} digits)" } for '1 followed by 0\'s A000111', (flat 1, 0 xx *), 'All-1\'s A000667', (flat 1 xx *), '(-1)^n A062162', (flat 1, [\×] -1 xx *), 'Primes A000747', (^∞ .grep: &is-prime), 'Fibonaccis A000744', (1,1,*+*…*), 'Factorials A230960', (1,|[\×] 1..∞) ->$name, $seq { say "\n$name:\n" ~ (my $b-seq = boustrophedon-transform$seq)[^15] ~ "\n1000th term: " ~ abbr $b-seq[999] } Output: 1 followed by 0's A000111: 1 1 1 2 5 16 61 272 1385 7936 50521 353792 2702765 22368256 199360981 1000th term: 61065678604283283233…63588348134248415232 (2369 digits) All-1's A000667: 1 2 4 9 24 77 294 1309 6664 38177 243034 1701909 13001604 107601977 959021574 1000th term: 29375506567920455903…86575529609495110509 (2370 digits) (-1)^n A062162: 1 0 0 1 0 5 10 61 280 1665 10470 73621 561660 4650425 41441530 1000th term: 12694307397830194676…15354198638855512941 (2369 digits) Primes A000747: 2 5 13 35 103 345 1325 5911 30067 172237 1096319 7677155 58648421 485377457 4326008691 1000th term: 13250869953362054385…82450325540640498987 (2371 digits) Fibonaccis A000744: 1 2 5 14 42 144 563 2526 12877 73778 469616 3288428 25121097 207902202 1852961189 1000th term: 56757474139659741321…66135597559209657242 (2370 digits) Factorials A230960: 1 2 5 17 73 381 2347 16701 134993 1222873 12279251 135425553 1627809401 21183890469 296773827547 1000th term: 13714256926920345740…19230014799151339821 (2566 digits) ## Wren Library: Wren-math ### Basic import "./math" for Int var boustrophedon = Fn.new { |a| var k = a.count var cache = List.filled(k, null) for (i in 0...k) cache[i] = List.filled(k, 0) var b = List.filled(k, 0) b[0] = a[0] var T T = Fn.new { |k, n| if (n == 0) return a[k] if (cache[k][n] > 0) return cache[k][n] return cache[k][n] = T.call(k, n-1) + T.call(k-1, k-n) } for (n in 1...k) b[n] = T.call(n, n) return b } System.print("1 followed by 0's:") var a = [1] + ([0] * 14) System.print(boustrophedon.call(a)) System.print("\nAll 1's:") a = [1] * 15 System.print(boustrophedon.call(a)) System.print("\nAlternating 1, -1") a = [1, -1] * 7 + [1] System.print(boustrophedon.call(a)) System.print("\nPrimes:") a = Int.primeSieve(200)[0..14] System.print(boustrophedon.call(a)) System.print("\nFibonacci numbers:") a[0] = 1 // start from fib(1) a[1] = 1 for (i in 2..14) a[i] = a[i-1] + a[i-2] System.print(boustrophedon.call(a)) System.print("\nFactorials:") a[0] = 1 for (i in 1..14) a[i] = a[i-1] * i System.print(boustrophedon.call(a)) Output: 1 followed by 0's: [1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936, 50521, 353792, 2702765, 22368256, 199360981] All 1's: [1, 2, 4, 9, 24, 77, 294, 1309, 6664, 38177, 243034, 1701909, 13001604, 107601977, 959021574] Alternating 1, -1 [1, 0, 0, 1, 0, 5, 10, 61, 280, 1665, 10470, 73621, 561660, 4650425, 41441530] Primes: [2, 5, 13, 35, 103, 345, 1325, 5911, 30067, 172237, 1096319, 7677155, 58648421, 485377457, 4326008691] Fibonacci numbers: [1, 2, 5, 14, 42, 144, 563, 2526, 12877, 73778, 469616, 3288428, 25121097, 207902202, 1852961189] Factorials: [1, 2, 5, 17, 73, 381, 2347, 16701, 134993, 1222873, 12279251, 135425553, 1627809401, 21183890469, 296773827547] ### Stretch Library: Wren-big Library: Wren-fmt import "./math" for Int import "./big" for BigInt import "./fmt" for Fmt var boustrophedon1000 = Fn.new { |a| var k = a.count var cache = List.filled(k, null) for (i in 0...k) { cache[i] = List.filled(k, null) for (j in 0...k) cache[i][j] = BigInt.zero } var T T = Fn.new { |k, n| if (n == 0) return a[k] if (cache[k][n] > BigInt.zero) return cache[k][n] return cache[k][n] = T.call(k, n-1) + T.call(k-1, k-n) } return T.call(999, 999) } System.print("1 followed by 0's:") var a = ([1] + [0] * 999).map { |i| BigInt.new(i) }.toList var bs = boustrophedon1000.call(a).toString Fmt.print("1000th term:$20a ($d digits)", bs, bs.count) System.print("\nAll 1's:") a = ([1] * 1000).map { |i| BigInt.new(i) }.toList bs = boustrophedon1000.call(a).toString Fmt.print("1000th term:$20a ($d digits)", bs, bs.count) System.print("\nAlternating 1, -1") a = ([1, -1] * 500).map { |i| BigInt.new(i) }.toList bs = boustrophedon1000.call(a).toString Fmt.print("1000th term:$20a ($d digits)", bs, bs.count) System.print("\nPrimes:") a = Int.primeSieve(8000)[0..999].map { |i| BigInt.new(i) }.toList bs = boustrophedon1000.call(a).toString Fmt.print("1000th term:$20a ($d digits)", bs, bs.count) System.print("\nFibonacci numbers:") a[0] = BigInt.one // start from fib(1) a[1] = BigInt.one for (i in 2..999) a[i] = a[i-1] + a[i-2] bs = boustrophedon1000.call(a).toString Fmt.print("1000th term:$20a ($d digits)", bs, bs.count) System.print("\nFactorials:") a[0] = BigInt.one for (i in 1..999) a[i] = a[i-1] * i bs = boustrophedon1000.call(a).toString Fmt.print("1000th term:$20a (\$d digits)", bs, bs.count)
Output:
1 followed by 0's:
1000th term: 61065678604283283233...63588348134248415232 (2369 digits)
All 1's:
1000th term: 29375506567920455903...86575529609495110509 (2370 digits)
Alternating 1, -1
1000th term: 12694307397830194676...15354198638855512941 (2369 digits)
Primes:
1000th term: 13250869953362054385...82450325540640498987 (2371 digits)
Fibonacci numbers:
1000th term: 56757474139659741321...66135597559209657242 (2370 digits)
Factorials:
1000th term: 13714256926920345740...19230014799151339821 (2566 digits) |
# hyperpower modular
How can I calculate this? ${(p-1)}^{{(p-2)^{{(p-3)}^{(p-4)...}}}} (mod {.p})$
and so on till 1. I don't know how to write it with a Knuth or Ackerman or more compact notation.
I've tried to find a pattern evaluating it with Mathematica, Pari, GMP, or Magma.
2 mod 3 = 2
3^2 mod 4 = 1
4^3^2 mod 5= 4
5^4^3^2 mod 6 = 1
6^5^4^3^2 mod 7 = 6
But the next step always produces an overflow.
7^6^5^4^3^2 mod 8 = ?? ( I guess it equals 1).
I guess there should be some workaround.
cheers
PD: I think I've found a way to solve some of these problems. I didn't find it myself but I found it on the Internet. Using ai≡aj(modm)⇔i≡j(mode). Where e is the multiplicative order, e=ordm(a), that's the smallest k that makes ak≡1(modm), And it can be used only if gcd(a,m)=1
-
## 1 Answer
If $p$ is even, then $(p-2)^{{(p-3)}^{\ldots}}$ is an even exponent, and so we have $(p-1)^{{(p-2)}^{\ldots}} \equiv (-1)^{{(p-2)}^{\dots}} \equiv 1 \mod p$. If $p$ is odd, then $(p-2)^{\ldots}$ is an odd exponent, and so $(p-1)^{{(p-2)}^{\ldots}} \equiv (-1)^{{(p-2)}^{\ldots}} \equiv -1 \mod p$.
-
Oh, thanks, I didn't notice that I wrote (mod p). How would you solve it if it were (mod N), with N being any integer positive number? – skan Dec 17 '12 at 1:01
That's actually what I've calculated, though I was using the same notation as you were. Just inspect the parity of the exponent, which is all that matters, since $N-1$ (if we're changing notation) is equivalent to $-1 \mod N$. – bzprules Dec 17 '12 at 1:24
For example how much is ...? $8^{7^{6^{5^{4^{3^{2}}}}}} \mod 17$ – skan Dec 17 '12 at 13:14
Or something maybe easier, $a \uparrow \uparrow b \mod c$ such as $7^{7^{7^{7^{7^{7^{7}}}}}} \mod 17$ – skan Dec 17 '12 at 15:53
I think I've found a way to solve some of these problems. I didn't find it myself but I found it on the Internet. Using $a^i \equiv a^j \pmod {m} \Leftrightarrow{} i \equiv j \pmod{e}$. Where e is the multiplicative order, $e=ord_m (a)$, that's the smallest k that makes $a^k \equiv 1 \pmod{m}$, And it can be used only if $gcd(a,m)=1$ – skan Dec 18 '12 at 1:28 |
# Why isn't line to line voltage zero?
If voltage is measured relative to something. For instance, voltage-meters measure voltage differences not individual voltage levels, or by analogy altitude is measured relative to somewhere. Mount Everest is so-and-so compared to sea level. But zero compared to itself.
How come line to line voltage is not zero when both lines have the same value? For instance, a three phase three wire Y connection of 120V with an angle of 120 degrees. Line to line voltage here is $$120V \times \sqrt[]{3} \approx 208V$$ not zero.
• Because, as you state, they are out of phase. Hence they are different. For example $sin(\omega t)-sin(\omega t-180^o)=2sin(\omega t)$ – Chu Mar 15 '17 at 8:17
• @Chu, so my theory was half true, in that if the phases was lined up, the voltage would be zero? – E. l4d3 Mar 15 '17 at 9:00
• Yes, if they were in phase, the voltage difference would be zero. – AngeloQ Mar 15 '17 at 12:36
The difference between +1 and -1 is 2 yet both have the same magnitude of 1. If you connect two identical 9 volt batteries together with a single wire and measure across the unconnected terminals you might measure 0 volts or you might measure 18 volts depending on how you connected the single wire. Polarity matters and it matters in 3 phase systems just the same.
As you can see, although the 3 individual phase voltages are rising and falling identically, they are displaced in time and therefore there is a voltage between any two.
Picture stolen from here
• Thanks, that made sense. The difference between the phases per time unit is always different and greater than that of the line to neutral. Didn't consider that. Thanks for the last puzzle! – E. l4d3 Mar 15 '17 at 8:57
## ...because direction matters
AC signals are time varying and they have both an amplitude and a phase. The phase is the angle between the voltages so they are not both "the same value" at the same time.
• Thanks, that made sense. Combined with the picture Andy posted, the difference between the phases per time unit is always different and greater than that of the line to neutral. Thanks for the last puzzle! – E. l4d3 Mar 15 '17 at 8:56
The very short answer is...If there is zero volts there is zero current . So power is zero too, which is not very useful. By having a phase difference you have a voltage and so current can flow if the load allows it. Now we have voltage and current = power. Now that is useful. |
# Pseudo-Riemannian Manifolds with multiple temporal dimensions
Consider a Pseudo-Riemannian Manifold with signature
$$(\underbrace{+,\cdots,+}_p,\underbrace{-,\cdots,-}_q)$$
For any positive integers $p$ and $q$. Can this kind of manifold contain closed timelike curves (CTCs)? I know that if $p=1$, then we get a Lorentzian Manifold that can't contain CTCs, but I am interested in the cases where $p>1$.
• Comment to the question (v1): If a pseudo-Riemannian manifold $(M,g)$ has at least two temporal dimensions, then it is trivially possible to fit a CTC within an arbitrary small open neighborhood. More on physics with multiple temporal dimensions: physics.stackexchange.com/q/43322/2451 and links therein. Mar 3 '14 at 22:51
• What makes you think a Lorentzian manifold can't have CTC's? It seems you may be conflating two distinct concepts: the topology of the manifold $M$, and the geometry encoded in the metric $g$ on $M$. In particular, global AdS is an example of a Lorentzian spacetime with CTCs: en.wikipedia.org/wiki/Anti-de_Sitter_space#Global_coordinates Mar 4 '14 at 2:25
• Isn't a Lorentzian Manifold simply a pseudo-Riemannian Manifold whose signature is (1,n-1)? If that's the case, how is it possible that with only a single time dimension we can have CTCs? Mar 4 '14 at 11:03
• Because that single "time dimension" can be "curved" and "closed". Think of a cylinder obtained by identifying two different instants of time (for all points in space at those instants) in Minkowski spacetime with respect to a given Minkowskian coordinate system. Mar 4 '14 at 12:01
For $p=1$, CTC's do not exist in Minkowski spacetime. In other $1+3$ spacetimes, in principle they are admitted in the absence of further requirements (like globally hyperbolicity) on the causal structure of the spacetime. They must be present if the spacetime is compact, for instance.
For $p\geq 2$, the answer is obviously YES. Consider a manifold $M$ with metric $g$ with signature (p,q) and $p \geq 2$. In a $p+q$-dimensional neighbourhood $U$ of any point $s\in M$, using the exponential map at $s$ starting from a $p$ dimensional subspace generated by $p$ timelike vectors in $T_sM$, you can construct an embedded $p$-dimensional submanifold $N$ passing through $s$ and whose metric (induced by $g$) has signature $(p,0)$. This means that every vector tangent to a point in $N$, considered as a manifold on its own right, is timelike. In local coordinates on $N$ around $s\in N$, any circle surrounding $s$ is a closed timelike curve. |
## What Revit Wants: How to Map A Drive Without Mapping A Drive In Windows
In Windows, you will often use either Map Network Drive dialog or net use command to map a network drive. You can use that method with a shared folder trick to map a local folder as a drive too, as described here.
But there is an even easier way, that is more flexible in some ways. It is the subst command, and it basically tells your Windows system to refer to a folder as a drive letter. Its usage is very simple, for example:
subst J: "E:\some folder\J_DRIVE"
If you want that to show up as a ‘drive’ at each reboot, just put the above command into a CMD file and point to it from your Windows startup folder.
For your assistance, here is the path to your typical User Startup folder in Windows:
The post How to Map A Drive Without Mapping A Drive In Windows appeared first on What Revit Wants. |
closed meagre sets - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-23T01:24:10Z http://mathoverflow.net/feeds/question/93992 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/93992/closed-meagre-sets closed meagre sets Douglas Somerset 2012-04-13T22:04:15Z 2012-04-30T18:04:07Z <p>A closed meagre subset of $[0,1]$ is either countable or homeomorphic to the Cantor set: either way it is $0$-dimensional.</p> <p>Q.1. Is every closed meagre subset of an $n$-dimensional locally compact Hausdorff space of dimension $\le n-1$ (for $n\ge 1$)?</p> <p>Q.2. If so, is there anything unusual about meagre sets in $0$-dimensional and infinite-dimensional spaces in which closed meagre sets can have dimension equal to the dimension of the whole space (one would think that meagre sets might be 'fatter' in some sense in these cases)?</p> <p>Q.3. Is there a simple example of an uncountable closed meagre subset of the Cantor set? </p> http://mathoverflow.net/questions/93992/closed-meagre-sets/93993#93993 Answer by Andreas Blass for closed meagre sets Andreas Blass 2012-04-13T22:21:45Z 2012-04-13T22:21:45Z <p>For Q3, the answer is yes. Think of the Cantor set as consisting of the points that have a base-3 expansion containing only 0's and 2's. Now take the subset of those points where the 2's occur only in even-numbered positions. </p> <p>Also, in the first sentence, "either countable or homeomorphic to the Cantor set" isn't right; it could be the union of a countable set and a copy of the Cantor set. Nevertheless, it would still be 0-dimensional. </p> http://mathoverflow.net/questions/93992/closed-meagre-sets/93994#93994 Answer by Anton Petrunin for closed meagre sets Anton Petrunin 2012-04-13T22:32:10Z 2012-04-13T22:32:10Z <p>Q1. The answer is NO.</p> <p>Take $\mathbb R$ and attach to each rational point $p/q$ a segment of length $1/q$. You get a locally compact 1-dimensional metric space $M$. The set $\mathbb R$ forms a closed meagre subset in $M$.</p> http://mathoverflow.net/questions/93992/closed-meagre-sets/95595#95595 Answer by Ostap Chervak for closed meagre sets Ostap Chervak 2012-04-30T18:04:07Z 2012-04-30T18:04:07Z <p>By the way, your question 1 is true for n-dimensional manifolds (since it is true for $\mathbb{R}^n$ by theorem of Menger and Urysohn).</p> <p>Note that Q3 can also be answered noting that Cantor set $C$ is homeomorphic to $C^2$ and for every point $x$, {$x$} $\times C$ is a meagre subset of $C^2$. Note that this construction gives you Andreas's answer if x=0.</p> |
# EZ In/Out Parking
2001 E. Randol Mill Rd., Arlington, TX 76011
Taylor Swift with Ed Sheeran
### Sat, May 253:30pm+ 3hr after event
Lot #2 Reserved Parking is located directly behind 2001 E. Randol Mill at 2004
Arlington Downs Rd. 300 feet west of Magic Mile some say it is the best hidden location. Very little traffic volume makes for a quick easy exit after a majority of events This is a huge area for tailgating that is directly across from Rangers D Lot.
Tailgating is allowed but NO CHARCOAL GRILLING, GAS ONLY
### 1023 people recommend parking here 50+ left comments
• It was easy to find and close to the Ballpark. We will use this parking again for sure!
Robert D. on May 19, 2013
• Very close to the ballpark or cowboys stadium. Very easy to work with! I will use this again!
Kate M. on May 12, 2013
• It's close to the studium, only 5 min. walking.
Jun Y. on May 6, 2013
• Great parking and friendly service. I'll be parking here again.
Jacob U. on May 6, 2013
• Parking was great! Close walk and had nice friendly attendants!
Wenifred S. on May 6, 2013
• Closer to the ballpark than I had imagined and easily accessed
Brandon M. on May 6, 2013
• Missy and Jeff were so friendly and made this the best!
Peter B. on May 6, 2013
• Close to the ball park and safe. With some of the street closures before game time, it was a bit hard to get to. Once you figured that out, it was good.
Sandeep K. on May 2, 2013
• Excellent place to park. Easy in and out. Have parked here for years. It is really great to be able to buy the spot in advance and not wonder if one will be available when I arrive.
Katherine N. on May 1, 2013
• Have parked there a number of times and love how close it is to the stadium and leaving was very hassle free!!
Tina P. on Apr 23, 2013
• Very close to the ball park. Easy in & out.
Steven L. on Apr 22, 2013
• Always an easy in and out. Attendant is efficient. The best kept secret at the Ballpark!
James P. on Apr 22, 2013
• It was great parking! However, it was difficult to figure out which parking lot to go to. But this was our first trip and we were not too familiar to the area. We will use this parking again when we come back to another game.
Stephanie N. on Apr 21, 2013
• Parking was a great location for the Ball Park. Quick walk, but a bit difficult to find if you don't know the area.
Jeffrey S. on Apr 11, 2013
• Definitely EZ In & Out!
Wayne Y. on Apr 11, 2013
• Easy to find. Very close to ball park. Fast & friendly staff.
Steven L. on Apr 10, 2013
• Easy and convenient.
Raymond C. on Apr 9, 2013
• This is the only place I have parked to attend Rangers games for the last couple of years. Love it!
Debra B. on Apr 8, 2013
• Very close to venue. Easy in/out from the interstate too.
Steven L. on Apr 8, 2013
• Close to ballpark. Easy out !!! Only difficulty was that the entrance is not on Randol Mill Rd......but around the back of the building on Randol Mill, which required getting back into traffic and going around the block. Not a big deal.
Douglas F. on Apr 8, 2013
• Best parking spot at RBIA. I reserve it every time.
Randall S. on Apr 7, 2013
• Very easy
Brian T. on Apr 7, 2013
• So easy! Quick exit too!
Rose A. on Apr 7, 2013
• Great spot
Mark J. on Apr 6, 2013
• Always use this lot and this service, very happy with location
Richard D. on Apr 1, 2013
• Always use this particular lot. Very happy with location
Richard D. on Apr 1, 2013
• Used this lot many times and will continue to use it. Very happy with it.
Richard D. on Apr 1, 2013
• As name says, easy and and out. Wish it was slightly closer but not a bad walk. Straight down randol mill.
P J. on Mar 30, 2013
• Close to ball park and easy access to 360 and secure.
Brent B. on Mar 30, 2013
• It was hard to find the lot.
Todd L. on Mar 30, 2013
• i have used this parking area for years, and have recommended to everyone I know that attends Ranger games. It is the best.
Molly P. on Mar 29, 2013
• easy to find had my own space easy to leave A+ thanks!
Ryan E. on Mar 29, 2013
• Just like it says Easy In and Easy Out. Close Access to the Interstate.
Ronald K. on Feb 24, 2013
• saved first spot For us! Great experience
Meredith R. on Feb 24, 2013
• Please don;t post my recommendation....I don;t want anyone else to learn about it. It was great. Missed all the trafic getting in and out. It was about a 15 minute easy walk to Jerry's house.
David K. on Jan 6, 2013
• great location, easy in/out before and after game. great place to tailgate, if so inclined.
Wayne B. on Jan 6, 2013
• Easy in and out, reserved as advertised
Kenneth R. on Jan 5, 2013
• Excellent, bit of a walk but able to avoid post game traffic. Convenient pre-pay on website. Our smart phone got us there without getting lost. So all-in-all is was a great value.
Charles M. on Jan 5, 2013
• Very friendly and easy access and exit. Will use again.
Joseph G. on Dec 26, 2012
• Great location
Tania I. on Dec 25, 2012
• Easy out!!
Tania I. on Dec 25, 2012
• Excellent location!
Susan C. on Dec 25, 2012
• It was easy safe and reasonable. If I ever go back I will use them again
Shannon P. on Dec 24, 2012
• easily to find and get in and out of
Jerry P. on Dec 24, 2012
• And I DO recommend this parking to anyone that I either give or sell my Cowboy games tickets to! I am a walking advertisement for you all!! I especially love seeing a security person sitting in the lot when I arrive to get into my car and leave! LOVE IT!
Doreen M. on Dec 3, 2012
• Easy and easy out. The walk was OK!
Robert W. on Nov 27, 2012
• Easy to get to with friendly attendant.
Harris C. on Nov 27, 2012
• Was really great. Easy to find, avoided ANY stop and go traffic by using Division in, Six Flags out. Definitely would recommend.
Mark W. on Nov 25, 2012
• Courteous staff assisted us - the walk to the stadium was not a problem and bike-taxis were available if we had needed them.
Patrick H. on Nov 25, 2012
• Excellent
Terry K. on Nov 23, 2012
• Great
Terry K. on Nov 23, 2012
• Great in and out parking. Nice walk to stadium and very helpful staff.
LeKeith J. on Nov 23, 2012
• Actually my mom parked in your area and said not only was it easy to find but super easy to park and leave once the game was over.
Tenaya G. on Oct 31, 2012
• Awesome time ...would use this location again for future events
Eric M. on Oct 29, 2012
• Short walk to the stadium. Plenty of tailgating nearby!
Glenn S. on Oct 2, 2012
• Very easy and convenient. Only thing better is for the Rangers to win
Margaret A. on Oct 2, 2012
• Great place to park for a Rangers game. Easy in and out and close to the stadium. This is the third time I have parked here and will again.
Mark E. on Oct 1, 2012
• Great parking, not too bad of a walk....and of course easy in and easy out just as described :)
Carly T. on Oct 1, 2012
• Took 5 min. to walk to the stadium. Will use again.
Peter S. on Sep 29, 2012
• Love this parking spot! We park here for every Rangers game we attend. Highly recommended!
Randall S. on Sep 29, 2012
• It's a great place to park and I have used this parking lot for years. It's a small lot and fills up fast, so I hate to recommend this lot to others for fear the next time I go to a Rangers game it will be full. That's pretty selfish of me I know - sorry!!
Jo D. on Sep 28, 2012
• We park here for every Ranger game that we attend. Highly recommend it
Ken H. on Sep 28, 2012
• Fast walk to see my Rangers
Holly B. on Sep 27, 2012
• Easy to get to and from Center Field
Holly B. on Sep 27, 2012
• Easy reservations online, always a good parking spot waiting for me. We love it!
Susan L. on Sep 27, 2012
• The attendants were more than helpful - I actually stopped by the day before I needed parking & Missy gave me the website; reserving on line was so easy, even sent an email confirmation to my phone; I was little late getting to the game, but was reassured that my place was reserved, not to worry; staff very professional - wish I had parked here before now - thank you !!
Cynthia H. on Sep 27, 2012
• Very convenient
Linda W. on Sep 27, 2012
• Difficult to find. Normally I don't think about having to go through a barricade to get to my parking spot. Now that I know where it is, it's a great spot.
Merlene M. on Sep 27, 2012
• Easy as ever
Margaret A. on Sep 26, 2012
• convenient
Priscilla T. on Sep 25, 2012
• So excited that we found Parkwhiz pre-paid parking! Makes going to the events in Arlington much easier. Best to reserve a spot several weeks in advance because most of the lots are small. Definitely will continue to use them.
Megan D. on Sep 25, 2012
• Where I always park. Easy access back to I-30 with minimal traffic.
Ronny G. on Sep 25, 2012
• Easy parking, showed my phone with the bar code and done. Easy exit also after boys game
Raul C. on Sep 24, 2012
• VERY easy in and out. Was out of there before the traffic was bad. I would park here again. - John
John P. on Sep 17, 2012
• It was GOOD! Thanks.
Darcy W. on Sep 16, 2012
• It was easy in and easy out. Gr8 spot to park.
Jack R. on Sep 16, 2012
• Easy to find and courteous attendants.
Sam M. on Sep 15, 2012
• Attendants were very helpful and friendly
Walter R. on Sep 14, 2012
• Always the best place for us do convenient and no problems getting in or out.
Teresa B. on Sep 14, 2012
• Great, period.
Thomas J. on Sep 14, 2012
• Just like is says EZ IN/OUT!
Bobby . on Sep 13, 2012
• I appreciated the ease of access and departure.
Gary P. on Sep 13, 2012
• plenty of space and easy to the lot and back to the Freeway
W. L. on Sep 13, 2012
• Awesome and easy!
Angela F. on Sep 13, 2012
• Awesome!!
Angela F. on Sep 13, 2012
• Easy in and out. Close parking to ballpark.
Trisha G. on Sep 9, 2012
• EZ in, EZ Out. Says it all!
Jeremiah R. on Sep 2, 2012
• Close to venue, parking attendant's on lot. a bit hard to find lot but worth it.
F T. on Aug 30, 2012
• Always an awesome place to park
Teresa B. on Aug 30, 2012
• Easy in and out. Very convenient. Great parking spot. I've been parking in this location for several years.
Nick D. on Aug 30, 2012
• This parking is great!! We use this for ALL games & I highly recommend to anyone looking for affordable, close parking.
Nicole S. on Aug 30, 2012
• Parking was easy to get in and out of and close to ballpark
Buffy M. on Aug 30, 2012
• Close walk and fair price.
Scott H. on Aug 30, 2012
• Great
Reese S. on Aug 29, 2012
• Great as usual!!! So easy and gives you a stress-free trip to the game. Then when the game is over, you NEVER have to sit in gridlock traffic. It makes the experience fun and relaxing from beginning to end!
Sheila P. on Aug 28, 2012
• Super close walking,
Lindsay T. on Aug 27, 2012
• I always use ParkWhiz when attending Arlington sporting events...pricing is so much better than some of the "official" options
Donald E. on Aug 26, 2012
• Parking was easy to find and very close to ballpark. Great experience and would highly recommend it.
Ryan E. on Aug 26, 2012
• Some things are never what they say they are. It's says EZ in/out & to my amazement, it was more EZ in/out than I expected. I will definetly use ParkWhiz again.
Mark A. on Aug 25, 2012
• I love parking here. Its is so easy to find and so close to the stadium! Love it!!!
Debra B. on Aug 24, 2012
• I love parking with yall.. We have tickets for another Rangers game in Sept and I will for sure be using yall again. Thanks Erica
Erica B. on Aug 23, 2012
• Close to stadium and convenient
Tom A. on Aug 23, 2012
• Great parking with a short walking distance to the ballpark
Nancy L. on Aug 23, 2012
• Nice spot close to stadium, would use again.
Quon L. on Aug 18, 2012
• Convenient parking, located within easy walking distance of the center field entrance for the Rangers. Our walk included a 3 and 7 year old.
Joe C. on Aug 13, 2012
• Just a short walk to the park. Safe. Easy in and out. Friendly staff. Just Great This was my 4 time to use the same lot!!!
Linden B. on Aug 12, 2012
• Convenient. Easy in and out.
Rhonda P. on Aug 12, 2012
• It is very close to the stadium. Less than a 5 minute walk to the centerfield gate. Excellent spot!
Stacy B. on Aug 12, 2012
• Great location. Not to far to walk on a hot Texas evening.
Howard D. on Aug 5, 2012
• Great parking, very short walk to the stadium. We'll park here again!!
Sandra S. on Aug 3, 2012
• Easy in and out parking and close to the ballpark.
Karen G. on Aug 3, 2012
• This parking area was very close and just across the street from the Centerfield entrance of the ball park. Very easy to find and get in and out of. After the game we were out of this parking area and on the road in less than 2 minutes. It is very good spot and we will park here again :)
Andrea C. on Jul 31, 2012
• Easy I , easy out! I have parked in the same space and spot every time! It's like my own garage!
Tim T. on Jul 31, 2012
• Everything was Great!
Tim T. on Jul 30, 2012
• OMG!!! We love it!
Holli S. on Jul 29, 2012
• Quick walk to center field. Parking lot feels safe with the on site personnel. I think you get a better spot if you reserve online, at least we did.
Anna W. on Jul 28, 2012
• good experience
Robert D. on Jul 26, 2012
• Very short walk to stadium and not much traffic when leaving! We will definately use this parking lot for future games!
Kristen P. on Jul 9, 2012
• Parking was just a hop and skip from center field and we out of the area and on the freeway in no time.
Oscar P. on Jul 9, 2012
• It was close to the ballpark so it worked for me. I definitively recommend it to all my friends.
Deyanira T. on Jul 9, 2012
• Great place to park. Very short walk to the Ballpark.
Howard W. on Jul 9, 2012
• It was close to the stadium and having a reserved place to park relieved a lot of stress.
Carole S. on Jul 9, 2012
• Parked here several times! great for any seats in the ballpark.
Margaret K. on Jul 8, 2012
• Parking was great, just a couple of blocks from center field, easy out, a little to get in due to roasd closure's
Randy B. on Jul 3, 2012
• Convenient to Ballpark entrance. Easy to get in and easy to get out. Attendants very helpful and friendly!
Richard D. on Jul 2, 2012
• Very easy access and close proximity to outfield tickets.
Jennifer M. on Jul 1, 2012
• Very convenient and easy in and out. Would park there again.
Sam B. on Jul 1, 2012
• It was close to the park and was easy in and out
Shan T. on Jun 30, 2012
• Closer to Stadium then expected .Did not seem like 900 yards. EZ in out !
Robert S. on Jun 28, 2012
• The lot was easy to get in and out of, and was close to the ballpark. I will park there next time I go to a game.
Joe D. on Jun 28, 2012
• Wow!! Super close and conveinient parking. Will definetly park here again!!
Blake S. on Jun 28, 2012
• not a problem at all but I noticed that the date stamped on the card was incorrect but we put on dash. Once we found location, great!
Megan D. on Jun 28, 2012
• Everything was fantastic... even though we loss! Yes we are from Michigan and are die hard Tiger fans.... so it was our first time at the field, which was terrific and very nice, but our first time at the parking event site which was very nice, well lighted, close to the field and easy to find! Would do it again! :)
Jeremy B. on Jun 28, 2012
• Great location. Was able to get in and out with little traffic issues.
Jeffrey G. on Jun 27, 2012
• Everything was great, quick in and out! My brother is going tpnite and is using the lot
Tim T. on Jun 27, 2012
• Easy access and convieient to the ballpark.
Davi D. on Jun 27, 2012
• It was close and well lit! I am very pleased with my choice to park there. :)
Julie E. on Jun 27, 2012
• It really was easy in and out, and I had a great parking spot with just a short walk to the stadium. I'm really glad I found them. I'll be using them every time I go to a game. I'd highly recommend PakWhiz. Tim Warren, Flower Mound
Tim W. on Jun 26, 2012
• This parking is great!! I highly recommend it to anyone looking for great parking close to the ballpark. We use it every time we go to a Rangers game and have been doing so for the last couple of years.
Nicole S. on Jun 24, 2012
• It was a great location easy to get in an out. I would use it again.
Richard R. on Jun 23, 2012
• The last thing you need to worry about heading to a game is parking. When you know you have a space waiting for you, it makes dealing with traffic less stressful. Thank you for providing this service.
Sheila P. on Jun 23, 2012
• Parking was a breeze....great location...just walking distance to the stadium...no hassle leaving the parking area after game...
Luciano F. on Jun 18, 2012
• The parking lot was very handy for my family as we attended the Ranger games.
Bradley V. on Jun 17, 2012
• Very easy in and out. parking spot was adequate size
Karen T. on Jun 17, 2012
• Great spot and close
Jimmy J. on Jun 17, 2012
• Great Parking, Easy to get in and out!
Johnny G. on Jun 16, 2012
• Friendly, helpful staff. easy to get into and out of the lot. Close to the ballpark and quick access to 30 and 360. Will definately use this again!
Richard M. on Jun 16, 2012
• Awsome!
Adam P. on Jun 15, 2012
• This parking was amazing. So convenient, no trouble what so ever, and great price
Emily H. on Jun 15, 2012
• This lot rocks!!!!
Kathryn P. on Jun 14, 2012
• The lot was super close to the stadium and easy to find, plus it was a quick exit after the game. I will definitely use EZ in/Out parking again.
Carolyn B. on Jun 14, 2012
• The parking space was very close to the stadium and was easy to get to.
Ken C. on Jun 14, 2012
• I had a little trouble finding the exact location, but once there, the spot was great! It's just like it says...EZ in and EZ out! I'm glad I know about this little parking area! Will definitely use it again!
Terri A. on May 31, 2012
• It was swell
David W. on May 31, 2012
• Perfect!!! EZ TO FIND, EZ TO GET TO I30 TO DALLAS! Will use every time I go to the Ball Park.
Timothy B. on May 31, 2012
• Perfect
Craig L. on May 30, 2012
• Great location! Take Randol Mill to Magic Mile and turn right from the east and left from the west. Take first left off Magic Mile and look for us on the left!
Thomas J. on May 30, 2012
• Excellent experience. Easy to use with electronic parking pass.
Robert B. on May 30, 2012
• It's easy to access from 360 to get to the parking entrance, short walking distance to the ballpark. The parking space is guaranteed by reservation. also parking fee is same as general stadium parking. I like it. Frequent visitors must use it.
Kota N. on May 29, 2012
• Great location very close to the Ballpark!
Joshua S. on May 29, 2012
• Good parking, but a few more signs for us "out of towners" to find it would not hurt.
Julie S. on May 29, 2012
• Great parking!! Will definitely park here again!
Jason B. on May 29, 2012
• Good attendants and easy to do!
David W. on May 29, 2012
• We have parked at this location in the past without using Park Whiz ... the fact that I'm familiar with the location and know I have a reserved spot made the trip all the more enjoyable ....
Arnold B. on May 29, 2012
• 2nd time there. It's great, close and easy to get out. Will go there again.
Tania I. on May 28, 2012
• Great made our leaving very easy
Sara T. on May 28, 2012
• I loved my parking spot and everything was great
Casey C. on May 27, 2012
• awesome parking for Rangers Stadium , I had to bookmark this one we walked less than a block and half and pulled right out with little wait time for traffic on 360.. two thumbs up Go Rangers!!
Jeff E. on May 27, 2012
• This was the second time we have used EZ In/Out. It was great. Thanks Lin
Linden B. on May 26, 2012
• It was AWESOME ...!!!
Carla A. on May 26, 2012
• We were stuck in thick traffic, and I called. Jeff guided us through and helped us get through the traffic and to the parking facility. He stayed on the phone with me while we made our way through. He and his wife were very helpful and I was completely, 100% satisfied. I'll use this service again.
Laura C. on May 19, 2012
• Nice to be able to arrive 20 minutes before first pitch and have a spot saved for me. Those w/o reservations were turned away.
Elizabeth B. on May 18, 2012
• I really appreciated the calls made to assist us in locating the lot when we got lost in all the traffic. The convenient location was great with such a short walk to the ball park.
LaJuana C. on May 18, 2012
• Easy in - Easy out. Easy to find. Good location.
Nick D. on May 17, 2012
• Planned on getting there earlier but the traffic was awful - it was great having a front row spot waiting for me!
Allison C. on May 17, 2012
• Like it says EZ in, EZ out! Always my first choice when attending an event nearby!
Aimee V. on May 16, 2012
• It was very close & definitely easy to get out of, but a little harder to find getting there. Will absolutely recommend to everyone & park there again!! :)
Kym B. on May 16, 2012
• Close to the southeast side of the ballpark. No problems finding or leaving the lot. Go for it! Maybe the Rangers will win next time.
Richard Y. on May 15, 2012
• EZ In/Out is the perfect name for this parking location!
Margaret A. on May 15, 2012
• I was amazed at how easy it was and how close it was to the entrance. We will definantly be using this site again.
Kim M. on May 14, 2012
• Easy access
Jason F. on May 14, 2012
• Very convenient
Jason F. on May 14, 2012
• Omg I can't say enough positive about this parking spot. I got a very nice spot and was greeted by some wonderful people. I was able to get in my car and just leave without sitting for 45 minutes like last time. I loved it alot and I am going to make sure I tell all of my family and friends. May 29th we have 44 people coming out for family get together to watch a game and we will most certainly try and get this same parking. Thank you so much!
Teresa B. on May 14, 2012
• Good parking...but there were no signs so was a little difficult to find.
Rodolfo S. on May 14, 2012
• Very close to the Stadium. Easy to get in and out of.
Bryan M. on May 13, 2012
• This is a convenient parking spot that you can get to and leave from quickly. Anytime I go to a Ranger game I use EZ In/Out Parking.
Koti R. on May 13, 2012
• Perfect !! Thank you!
Keith P. on May 13, 2012
• Easy to get to! And so close to the ballpark!
Humberto D. on May 13, 2012
• great location!
Anna B. on May 12, 2012
• The location was a close, easy walk to the park, and since the game was a sell out, it was nice not to have to worry where we were going to park.
Timothy P. on May 12, 2012
• Excellent experience
Brent C. on May 12, 2012
• Excellent experience to have a guaranteed parking spot close to the stadium. I will use a service like this from now on.
Robert B. on May 12, 2012
• Easy to get in. Good parking spots. Easy to get out. Very good directions to the lot as well.
Robert C. on May 12, 2012
• Great place to park for the Rangers games, very close to the ball park!
Louis D. on May 1, 2012
• Great parking!! One block from stadium and like it says EZin EZout. A+++++++++
Perry L. on Apr 30, 2012
• Things went acceptionally smooth! Great place to park!!
Stacy S. on Apr 30, 2012
• It was a little difficult to find the spot, but once there it was very good and convenient.
Jaime S. on Apr 30, 2012
• WELL MANAGED RESERVED PARKING AREA. NO PROBLEM ENTERING OR EXITING AREA.
Jerry F. on Apr 30, 2012
• It was relaxing to know that we didn't have to kill ourselves getting to the ballpark just to get a parking space. It was so easy, I wish I had done this before. I WILL do this again. Thank you for this simple solution to good parking.
Sheila P. on Apr 29, 2012
• Yes, I would use this parking again for seats in CF.
Beau W. on Apr 26, 2012
• Great parking - so close to stadium
Margaret A. on Apr 26, 2012
• Great Place to park!
Debra B. on Apr 26, 2012
• Very Close to the ballpark, well lit and one block from visitors center really was Ex\in and out!
Margaret K. on Apr 26, 2012
• The parking is extremely convenient and I have been parking in this lot for 9-10 years. I've recommended it to several friends, and will continue to do so. Molly S.
Molly P. on Apr 25, 2012
• I just wish I had known the physical location to the ballpark so I could have taken a different route to the general area. I got into traffic on the west side of the stadium unnecessarily. I'll know next time.
Joe H. on Apr 25, 2012
• The only improvement would be a round trip shuttle to the park, maybe at extra cost!
Eugene R. on Apr 25, 2012
• The only improvement would be a round trip shuttle to the park, maybe at extra cost!
Eugene R. on Apr 25, 2012
• Best place to park for a Rangers game. We have been parking here for over 8 years and it is easy to get in and out of with a short walk to the stadium. Now with the online reservation it makes it even easier.
Derek S. on Apr 25, 2012
• We parked at the 2100 Randol Mill Rd lot at Texas Ranger Stadium and the one thing you need to tell people is that it is on the back side of the 2100 Randol Mill Rd Building. The road was blocked off just before the building with the number on it and fortunately we ask the patrolman right there where 2100 building was and he looked around and saw it and then let us through to get to the back side parking lot!!! Outside of that it was a great lot! Close to Center Field and not difficult to get to the stadium.
Kurt S. on Apr 25, 2012
• Great!!
Nicole S. on Apr 25, 2012
• EZ is just what it says. EZ in and out. I left my house at 6:35 and was in my seat at 7:05 for the start. I live 10 miles away. This is a great option.
Curtis V. on Apr 25, 2012
• Great service, as always, and great location.
Monica R. on Apr 24, 2012
• The only negative was that the lot was a little hard to find. It was our first time to use the lot. We are using the lot again on May 11th.
Ken H. on Apr 24, 2012
• Great location! The fastest I ever got out of game. Will park there again!
Tania I. on Apr 24, 2012
• It was difficult to find the right place. Even with the instructions, we pulled and were chased out of three lots before finding the right lot. When we pulled into the right lot, the attendant somewhat impatiently motioned me to pull further in quicker. I told him, "After being chased out of two other lots, I am not going to be so fast." All else was perfect, the location, etc. Thanks.
Timothy W. on Apr 24, 2012
• Easy parking and close to stadium. No traffic in leaving and we left at end of sold out Yankees and Rangers game. I will use in future.
Andrew B. on Apr 24, 2012
• Easy walk to the stadium to and from parking, and easy to drive away after game.
Angel G. on Apr 11, 2012
• Great location right across from the ballpark-easy to get out of.
Nancy L. on Apr 10, 2012
• Great parking and easy in and out
Ryann R. on Apr 10, 2012
• It really is easy in and out! This is my go-to place for parking during the season. Great if you're heading south or east after the game.
Elizabeth B. on Apr 9, 2012
• Awesome as always!
Randall S. on Apr 9, 2012
• It was easy to get into the parking area and even easier to leave. Plus it was close enough to be walking distance from the stadium and cheap to park at.
Michael B. on Apr 9, 2012
• Great location, friendly staff!
Stanton C. on Apr 9, 2012
• Easy Entrance/Exit
David S. on Apr 9, 2012
• Easiest In/Out Parking at the BallPark!
Debra B. on Apr 9, 2012
• Parking was very close to the field! Easy out after the game.
Michelle S. on Apr 8, 2012
• Very convenient and close to the baseball and football fields. On-site attendant and reserved spots were readily available.
Gary A. on Jan 8, 2012
• Location and price. It was very easy to get out of after the game.
David W. on Jan 7, 2012
• yes, the parking was very good. and i love how it was secured.
Gina H. on Dec 25, 2011
• Just like the name suggests, your in and out of there easily!
Allan W. on Dec 25, 2011
• It was easy to find it (good directions/signs), plenty of parking spaces; nice parking attendant
James B. on Dec 25, 2011
• Name says it all. EZ in and EZ out. Close to the stadium
Raymond B. on Dec 13, 2011
• 3 minute walk from good tailgaiting. When leaving after the game, I didn't hit one second of traffic by taking the back road that is the entrance to the parking lot. Excellent price and easy and fast entrance and exit!
Brian B. on Dec 13, 2011
• Great location and safe too.
Mandy R. on Dec 12, 2011
• Easy in and Easy out for my first Cowboys game ever. The walk was enjoyable and got us prepared for standing still for the next 3.5 hours during the game.
Aaron B. on Nov 25, 2011
• Very accessible.
Brenda T. on Nov 15, 2011
• Great location!
Brenda T. on Nov 15, 2011
• Bit of a challenge to find the lot, but it was an easy walk to Cowboys Stadium
Paul L. on Nov 15, 2011
• Was a good distance from the stadium..not a long walk
Crystal G. on Nov 14, 2011
• The parking area was easy to find and close to the stadium. I will coming back for other games and will use this parking area again.
Mickey R. on Nov 14, 2011
• The parking was Great! I would recommend it for anyone. Not too far from Cowboy Stadium.
Andrew G. on Nov 7, 2011
• I was skeptical at first but bought online in advance anyhow. We got there and they had set aside a section of spots for the advance purchase people which was great. We didn't have to hunt for a spot or drive around in circles. The attendants were very nice and not creepy like some of the other lots. Great location. I will certainly use this location and ParkWhiz again without hesitation.
Jennifer D. on Nov 7, 2011
• My first time at Cowboys Stadium and it was Real easy to find!! Friendly staff!!! Wasn't far from the stadium, and was in good walking distance!!
Demetresae J. on Nov 7, 2011
• Short walk from either Cowboys or Ranger Stadium. Nice not having to worry about finding parking even if you are late getting to the game. Definitely worth $22. Juana R. on Nov 7, 2011 • Quick & easy to park, get to the game, and get out afterwards. Adrian D. on Oct 25, 2011 • Great parking and close to stadium. Eazy to get in and out Alan F. on Oct 25, 2011 • Awesome parking for the World Series! Teresa M. on Oct 24, 2011 • Always park there when I attend Rangers games up to 10 times a year including the post season. Parking is convenient and folks hospitible. Just one complaint, and I know it's probably priced to the market but$50 for World Series parking was steep.
Kevin C. on Oct 24, 2011
• Awesome spot. Close and very nice folks.
Kerry M. on Oct 24, 2011
• Great parking, close to the stadium. Attendants were very friendly.
Jonathan R. on Oct 24, 2011
• Excellent...easy to do!
David M. on Oct 24, 2011
• EZ in EZ out. It was close and everything was great
Georgiann M. on Oct 24, 2011
• Amazing as always! Thanks!
Randall S. on Oct 23, 2011
• very easy and convenient. thanks.
Russell G. on Oct 23, 2011
• Close to ballpark. Easy access. Quick out.
Randall B. on Oct 23, 2011
• Easy in and out. 1 block from center field. 4th time using lot in post season games.
George I. on Oct 23, 2011
• Great spot!! Definitely easy-in and easy-out! Would park here again everytime!
Cynthia S. on Oct 18, 2011
• great location, easy in and out
Nicole O. on Oct 18, 2011
• great location
Shelly O. on Oct 18, 2011
• Very close! Easy to get in and easy to get out, although we left after much of the traffic was gone. Great for the playoffs!
Brandon F. on Oct 17, 2011
• We were close to game time & it was nice to have our spot waiting for us.
Craig L. on Oct 17, 2011
• My husband and I both got sick the morning of an ALCS game and therefore couldn't go. EZ In/Out actually called us about 15 minutes before game time to see if we were on our way. I told them we couldn't make it and to please go ahead and give away our reserved spot, but it was so nice to know that if we had been that late, our parking space would have been waiting for us. THANK YOU!!!
William B. on Oct 16, 2011
• GOOD SPOT FOR THE MONEY!!
Ruth G. on Oct 16, 2011
• EZ
Beth F. on Oct 16, 2011
• Great EZ parking.
Beth F. on Oct 16, 2011
• Parking was great people very friendly and helpful. I park with you guys all the time.
Jason F. on Oct 12, 2011
• I would recommend this parking to anyone. It was easy to find, the directions were clear and the lot attendants were curtious and friendly. I appreciated the closes to the ballpark and when it was time to leave it was easy to access the quickest way out of the venue area. Thanks for a safe and reliable service. One suggestion: Make your lot signs bigger so it is easier to see from the road......thanks again.
Russell W. on Oct 11, 2011
• It work great and it wasn't to bad getting out after the game. I have used the service twice now and will keep doing so and recommend to friends.
Scott P. on Oct 11, 2011
• This is the only way to do postseason parking if you don't want to or can't get there super early. They save the spot for you. What could be better than that? The lot is very close to the ballpark. It's just as close if not closer than the Rangers lots. You won't be sorry! Plus sometimes the Blue Bunny/Good Humor truck is there after the game. Bonus!
Elizabeth B. on Oct 11, 2011
• Great location and easy to find. Will use again.
Will F. on Oct 11, 2011
• just like the name EZ in and out !! perfect to exit and get out after game,
Kevin W. on Oct 11, 2011
• easy to get in and out of
John S. on Oct 11, 2011
• If you purchase ahead of time they let you parked in the "reserved section." Does take a good 15 minutes to get to Cowboys Stadium, but the walk is pretty easy and a straight shot. After the game, it took 5 minutes and I was out onto one of the main roads heading to the access road of 360. Pretty good location.
Karry S. on Oct 10, 2011
• Excellent location and as advertise easy in and easy out. Best parking at the Ball Park
Ernest P. on Oct 10, 2011
• We went to dinner before the game and truly enjoyed the meal because we were not stressed over finding a parking place. I will use this service again and have already recommended it to my friends. Thank you.
Sheila P. on Oct 10, 2011
• Excellent parking spot! Easy in and Easy out! I always use Parkwhiz whenever I can! Thanks!!
Aimee V. on Oct 10, 2011
• When the event was over I had no problems getting out.
Robert J. on Oct 10, 2011
• Very close, convenient. Easy in and out
Edward R. on Oct 9, 2011
• Easy to get to the lot, 15 mins walk to Cowboys Stadium, could be shorter if we did not have to wait for the police officer escorted us to cross the main street. the walk back was lighted and safe.
Nelson P. on Oct 9, 2011
• I love the location! Also love the online reservation convenience!
Adriane Y. on Oct 9, 2011
• Great place to park.
Barbara M. on Oct 9, 2011
• very easy and nice to know we had a spot reserved.
Julie B. on Oct 7, 2011
• Better than I even expected. Walk to the stadium is not bad at all and getting out after the game was great. If I ever make my way back, I'll definitely park here again!
Tim W. on Oct 7, 2011
• Didn't know what to expect when we arrived but parking was efficient check in area and we felt secure in leaving our vehicle while we attended the Cowboys game. We definitely will use EZ In/Out Parking again. Thank you.
Sandra D. on Oct 6, 2011
• We were a tad late showing up for the game driving down from OKC, but I felt like a VIP upon pulling into this lot. The attendant knew exactly what was up and had a spot saved just for us! Getting in and out was a breeze! The walk to either events at the football or baseball stadiums were simple and easy. I highly recommend this space.
Nicholas C. on Oct 5, 2011
• Good spot easy to get in and out . Will get a spot here again.
Tommy C. on Oct 4, 2011
• Very easy to get in and out.
Scott K. on Oct 3, 2011
• Good location to use getting to the stadium as well as leaving.
Brenda T. on Oct 3, 2011
• Closer than I expected. great price! Fast and easy exit. Thank you.
Jerry G. on Oct 3, 2011
• the walk was not bad in good weather.
Jacquelyn C. on Oct 3, 2011
• Great experience
Terry O. on Oct 3, 2011
• We have used this lot through EZ In/Out Parking for the last couple of years. Not only is it very reasonably priced, it truly is easy to get in and out of the lot.
Stephen A. on Oct 3, 2011
• Parked there for the fifth time for a rangers game. This time was game 2 of alds. Stayed to end of game as it was a close score. Hustled to car from opposite side of stadium near 1st base, got in car, out of lot and on to 360 in no time without any traffic delay despite over 50,000 fans at the game that night.
George I. on Oct 2, 2011
• I loved the convenience. It was easy to get in, and easy to get out — and on a night when there were 51,000 people at the ballpark. Thanks for making the lot available. It was well worth the price.
Kenneth C. on Oct 2, 2011
• Awesome as always! Perfect spot for tailgating!
Randall S. on Oct 2, 2011
• Awesome as always! Perfect spot for tailgating!
Randall S. on Oct 2, 2011
• We have parked in this lot several times and we have never had a problem.
Barbara M. on Oct 2, 2011
• Great locatio. Truly easy in and it!
Beverly B. on Oct 2, 2011
• It was perfect! I park in this lot during the regular season too. However, reserving a space through Park Whiz is the only way to do it during the playoffs. Learned our lesson about crazy parking and lines during the 2010 playoffs -- never again!
Elizabeth B. on Oct 2, 2011
• Polite and organized attendants. Convenient parking.
James B. on Oct 2, 2011
• Exellent location, close to stadium.
Dominic R. on Oct 2, 2011
• The parking area is well located to the Ballpark, easy access and the attendant was courteous and helpful.
Nick D. on Oct 2, 2011
• great spot and location - very convenient
Cheryl C. on Oct 2, 2011
• great spot, very convenient.
Cheryl C. on Oct 2, 2011
• Excellent location and friendly people.
David P. on Oct 2, 2011
• Great location not everyone knows about. Very convenient to purchase online in advance.
Karen K. on Oct 1, 2011
• Fantastic!! Easy to find, close to the Ballpark!!
John S. on Oct 1, 2011
• AWESOME! I especially liked that Missy called me to make sure I was still coming. How cool is that???
Doreen M. on Oct 1, 2011
• Close to ballpark with easy escape
Ryan H. on Oct 1, 2011
• Great parking... like da name ez in ez out walk 4 min away from 60 parking so 4 min walk was like 40 saved in my pocket
Jose Y. on Sep 27, 2011
• We parked here for the Cowboys vs. Redskins game and it was a good place to park. It's relatively cheap compared to other places and since you walk by the Ballpark to get to Cowboys Stadium, everything is well lit and there are a lot of people around so you feel very safe. The walk to Cowboys stadium is about 15-20 minutes, so get there early. The attendants were friendly and the lot is also well lit.
Christopher B. on Sep 27, 2011
• Easy in before the game and out after the game.
Sherry K. on Sep 27, 2011
• It is closer to the ballpark than most others, and is cheaper than parking right next to the ballpark.I think it is close than some more expensive lots.
Steven R. on Sep 26, 2011
• Super convenient to the Ballpark, very good price point, comparatively.
Tobin W. on Sep 26, 2011
• The parking was easy to find, VERY close to the centerfield entrance and situated so leaving was a snap.
Stephen S. on Sep 25, 2011
• Close to ballpark. One block from centerfield gate. Quick exit after game.
George I. on Sep 25, 2011
• EZ IN/OUT describes this lot perfectly. Attendants were friendly. I was directed to an area of the lot closest to the stadium protected by traffic cones. Is that because I had prepaid? Don't know, but I like to think that might have been a perk for securing a guaranteed spot. There were 43,000 fans at the game and we stayed until the end, walked to our car, exited the lot and were on the freeway to home in less than 5 minutes.
John F. on Sep 24, 2011
• Very pleasant attendant when we arrived. Extremely easy to get and out from our parking place. Short walk to the ballpark. Can't think of one negative thing about our experience ... it was GREAT!!
Virginia J. on Sep 16, 2011
• Close to the Ball Park and easy access. Have parked there three times now and will continue to use.
Jeffery K. on Sep 15, 2011
• Always easy to get in and out. A great hidden jewel for Rangers parking.
James P. on Sep 14, 2011
• This place was a perfect place to park! Very close to the stadium and when we left we had no traffic or congestion at all. I would recommend EZ In/Out to all of my family and friends.
Melissa G. on Sep 11, 2011
• it was definitely easy to get in and out after the game. it was just past the intersection where everyone else was getting in a traffic jam. definitely recommend and would park their again when in dallas for a game again.
Lauren J. on Sep 8, 2011
• This was an excellent lot. We were so close to the exit. The attendant was so kind. Thanks so much!
Ashley B. on Sep 7, 2011
• Although it was a bit of a walk to cowboys stadium it was a straight shot. Leaving was so easy. There was absolutely no traffic and so easy to get back to 360. I would recommend this to everyone. I was very pleased!!!
Elizabeth W. on Sep 5, 2011
• Easy in and out access and close enough for an easy stroll to the stadium. Also, right by a large tailgate area.
Phillip M. on Sep 5, 2011
• As close as you could expected. Right next to visitors center where there clean bathrooms. Very easy out on Ball parkway I was going out by hwy 360 and I got in the left lane and never slowed down Great location!
Storm R. on Sep 5, 2011
• Godd spot, easy in and easy out.
Todd M. on Sep 4, 2011
• Great location, easy transaction.
Todd M. on Sep 4, 2011
• Parking attendant was friendly and the location was good. About a 12 minute walk to Cowboys Stadium. Would use again.
Jason M. on Sep 4, 2011
• I liked that I had a front row parking spot and didn't have to dig for change. Thank you.
Martina R. on Sep 1, 2011
• Easy in, Easy out and just a short walk to Center Field.
Michael F. on Aug 30, 2011
• Easy to find, close to Rangers Ballpark and super easy to exit!!! Didn't wait in any traffic!
Julie H. on Aug 29, 2011
• EZ was close the stadium and easy to find. I will definitely use them in the future.
William D. on Aug 28, 2011
• Easy to find, nice short walk to the stadium.
Thomas Y. on Aug 27, 2011
• being able to park at the front of the lot makes it a lot easier with a 3 year old!!
Alison W. on Aug 25, 2011
• As usual we love parking at the E. Randol Mill Rd site. We've been parking there several years, and have told several friends about the location. It truly is easy in and easy out. I also appreciate being able to make a reservation when necessary..
Molly P. on Aug 23, 2011
• Location
Vivek A. on Aug 22, 2011
• Very convenient and I was able to leave without a lot of traffic.
Jacqueline A. on Aug 12, 2011
• Wonderful. Great with all the small children we had.
April S. on Aug 11, 2011
• With the heat its great that you don't have to walk too far. Thanks.
April S. on Aug 11, 2011
• Perfect. We will continue to park here. Thanks.
April S. on Aug 11, 2011
• Parking was great i will use ParkWhiz again!
Curtis K. on Aug 9, 2011
• Best parking I have ever had, Attendent was extremly helpfull because I had a large family group, and he helped us stay together.
Hugh M. on Aug 8, 2011
• Great place to park. Easy in and out without traffic jams. Had a reserved place. As close as reserved lot.
Dicky H. on Aug 7, 2011
• excellent!
Jorge R. on Aug 7, 2011
• Very convenient to the stadium; had parking attendants; avoided the hassle of searching and not knowing if convenient spots are available/affordable.
Ric G. on Aug 7, 2011
• Excellent, great price, would buy again!
Matt J. on Aug 7, 2011
• Good Spot. Short Distance. Parking lot guy was very friendly. Easy In and Easy Out!
Anil B. on Aug 7, 2011
• Excellent as always. Truly is easy in and easy out. Thanks!
Randall S. on Aug 7, 2011
• Perfect, they had reserve spots for those who bought advanced parking.
Joseph C. on Aug 6, 2011
• Easy access, easy getaway after the game. Much closer to the stadium than I probably would have been if using stadium parking, and at the same price. Friendly and helpful attendant.
Chris B. on Jul 28, 2011
• Extremely close to the Ranger's Ballpark, only reserved season holder's parking was closer!
Martin R. on Jul 26, 2011
• Great location, price and folks!
James P. on Jul 23, 2011
• Great location, friendly folks, great price!
James P. on Jul 23, 2011
• The parking was absolutely great. We had a little trouble finding it, even though we had a GPS, which took us to the exact address. We were looking for a parking lot, not parking in front of commercial buildings. But that's ok. The street was blocked off since the front side parking was full when we got there. A Police Officer allowed us to go thru the barricade to get to the back side parking, so it was a snap then. Leaving was also a breeze. We took the street in the back over to Six Flags Drive and then to I30 and were almost to our hotel twenty minutes later when our daughter called and said they were still stuck in the parking lot. (They didn't park where we did.) Thanks, EZ In/Out
Darrell W. on Jul 11, 2011
• Great location. Easy in/out. Will recommend to all my friends.
Jeffrey P. on Jul 11, 2011
• Great on the EZ/IN....but the getting out not so EZ, but I managed to muddle through it. Yes I would park there again now that ii know what to expect....
Penelope J. on Jul 10, 2011
• Smooth and easy in and out. Will definitely do it again and I will recommend it to everyone.
Raymond F. on Jul 7, 2011
• Was good. Thanks.
Keith M. on Jul 5, 2011
• he parking spot was great. The only down side is that it took 45 minutes to get to I-30 after the fireworks. I was hoping for much less.
Roland M. on Jul 5, 2011
• Great reserved space Friendly employees
Bobby W. on Jul 4, 2011
• Easy in and out friendly employees
Bobby W. on Jul 4, 2011
• great parking place, courteous staff and we would reccommend it highly.
Bobby W. on Jun 21, 2011
• This is by far the best lot out there!!!!
Mark W. on Jun 21, 2011
• Perfect experience as always.
James P. on Jun 21, 2011
• It lived up to your "EZ In/Out" slogan. I have recommended it to others and will certainly use it again.
Margaret A. on Jun 9, 2011
• I got to the game late and there were still a few spots left. Most of the general parking lots were full. Great directions, easy In/Out. I will definitely use this service again.
Jennifer G. on Jun 9, 2011
• Close to the ballpark, convenient, comparable in price to other lots without the crowds
Stuart S. on May 30, 2011
• Great parking close to ballpark
bobby R. on May 30, 2011
• As advertised, ez in and out
John B. on May 28, 2011
• Great experience! I have already recommended you to my friends.
Lenora S. on May 9, 2011
• Great Parking and close!
Ellen M. on May 9, 2011
• Close, easy
Andrew W. on May 9, 2011
• reserved spot was great, great location........fairly easy to get to
Nicholas L. on May 7, 2011
• Very easy to park and get out. We left the parking turn left went down to Miracle Mile caught the light, got on Randol Mills Road and went home. Everything was great and made it a pleasure to park there.
Donna L. on Apr 25, 2011
• Everything was fine. Directions were cumbersome.
Kay H. on Apr 23, 2011
Amy G. on Apr 23, 2011
• My third time....awesome as always!!
Dwayne S. on Apr 19, 2011
• It was good and a close walk to the stadium
Todd L. on Apr 17, 2011
• Was very easy to arrive and also when leaving. A short walk to the ballpark and would definately recomend this location for future use. I know i will use it again next time we vist the Ballpark. GO RANGERS !!!
Fred G. on Apr 6, 2011
• Great location. Only a short walk to the ballpark. Surprisingly easy in and out.
Dennis O. on Apr 6, 2011
• This is the second time we've used this location....it's awesome! The owner/mgr is a really nice guy and the parking folks are always very helpful. You can count on these guys, they'll take good care of you.
Dwayne S. on Apr 5, 2011
• Easy online transaction, great parking location right next to stadium. I would definitely recommend this to anyone, especially for families with younger children. You have enough to worry about already, EZ In/Out Parking takes the worry and stress out of wondering if you will be able to find a good parking spot. Thank you EZ In/Out Parking for helping make our day at the ballpark a wonderful one to remember.
Eric B. on Apr 5, 2011
• The lot was easy to get into and out of the only thing I would change would be to have better signage to make it easier to locate.
Gary P. on Apr 4, 2011
• Went to a Ranger game and was very happy with where we parked. It was easy to get to, close to ballpark and easy to get out off. I would park here again and again.
Mark E. on Apr 4, 2011
• As always, it was all good! I will be reserving a spot again the next time I head up to the ballpark!
Randall S. on Apr 4, 2011
• No handicap available, but was able to park, yes EZ out.
Elizabeth L. on Apr 4, 2011
• I went to the ball game on opening day and drove right in and right out with out any delay. Usally have a season parking pass but gave it to a customer that was attending the game with me this time. Used Park Whiz (East Side) and even got out quicker than normal. Would have no problem using this agian. Jimmy
Jimmy L. on Apr 4, 2011
• There was no parking in our reserved spot for the ranger's game (bad). But, I called the number on the reservation confirmation and they redirected me to another lot (good). Very Responsible. Will definately use again.
Beth W. on Apr 3, 2011
• The only bad was not knowing Randol Mill is closed at several spots so it made it a little tough finding the parking lot. Once there it was great and close to the stadium. Now that I know where we need to go I will park there again!
Judy H. on Apr 3, 2011
• Flawless.
Stuart F. on Apr 3, 2011
• Very convenient
David P. on Apr 2, 2011
• Will continue to park here for all cowboys stadium events
Robert J. on Feb 27, 2011
• Alot of fun. Close to everything. Perfect for tailgaiting, would def do it again!! Go Steelers!!
Jackie A. on Feb 13, 2011
• It was very easy to get in and out.
Betty L. on Feb 12, 2011
• Awesome as usual! AND I was given a special, up close parking space! Special!!!!
Doreen M. on Feb 10, 2011
• Comfortable walk to Cowboy Stadium for Super Bowl. Attendants friendly. Portable toilets available nearby.
Pamela G. on Feb 10, 2011
John C. on Feb 10, 2011
• We park at this lot all of the time...ParkWhiz made it a breeze for the Super Bowl
Zachary G. on Feb 10, 2011
• I Parked here for the superbowl and had the easiest time getting in... and getting out.
Steven S. on Feb 9, 2011
• East access and had a spot waiting....close to event....good choice.
Lawrence K. on Feb 9, 2011
• The parking was good. Not to far from the Dallas stadium.
Selina K. on Feb 9, 2011
• Very convenient and easy. A bit of a walk.
Cynthia C. on Feb 9, 2011
• Not knowing anything about Arlington or Cowboy Stadium, but knowing how many 1000's of people would be attending Super Bowl, we checked out the ParkWhiz sight ahead of time. After doing that we knew it would be a piece of cake getting to it and getting out. The folks who ran the lot were very nice. We didn't mind the walk to the stadium from the lot either. We would recommend this lot and ParkWhiz. Clement and Gay Frank from Oregon
Clement F. on Feb 9, 2011
• Like the name says "easy in/and easy out". A little far to the other end of Cowboys Stadium but away from all the traffic after the Super Bowl.
Thomas M. on Feb 9, 2011
• As the name advertises, it was very easy to get in and out during the Super Bowl. Well organized and short walk to Cowboys stadium and even shorter walk to rangers ballpark.
Stephen J. on Feb 8, 2011
• Great location. Directions were a little off, but everyone was friendly and the price was right!
Douglas C. on Feb 8, 2011
• easy access and spacious lot. easy walk to Cowboys Stadium and next to Rangers Ballpark of Arlington.
Craig W. on Feb 8, 2011
• Lives up to it's name. No problems with traffic in or out for the the Super Bowl
James Q. on Feb 8, 2011
• Excelent ! everything was as promised.
Guillermo M. on Feb 8, 2011
• Very convenient and easy to get in and out. Easy walk to stadium and very quick and easy exit after the game. Would recommend this site to anyone going to Cowboys Stadium or Ballpark at Arlington.
John G. on Feb 8, 2011
• very good.
Milton R. on Feb 8, 2011
• LOVED!! this location. We were able to leave the parking lot through the back and take an alternate route to get to 30. We ended up getting around the nightmare of SB traffic!! Will not attend any other event at Cowboy stadium (Hmm, as a Steeler fan, that is probably a true statement just stopping there) without parking at this location! Will recommend to all my cowboy fan family!
Sabrina J. on Feb 8, 2011
• It was a little difficult to find the Parking Place but I asked directions in a Gas station and I found it very easy. It was a very short walk to the stadium since we were at the red gate. Once we got out of the game it was very easy to get back to main Highways. I would recommend this parking place 100%.
Javier G. on Feb 8, 2011
• It was perfect! Easy to locate and great price!
Shauna M. on Feb 8, 2011
• It worked out very well. It was nice to prepay and have everything ready when we got there. It was not too bad of a walk to the stadium. I would definitly recommend you to other people.
John H. on Feb 8, 2011
• Parking was GREAT! Easy In/Out for sure. Short walk to the stadium.
Joseph B. on Feb 8, 2011
• Your parking spot truly lives up to its name!! We needed to get out fast to get to teh airport after the Super Bowl. It was a breeze!! Great Price! Convenient!
Juan C. on Feb 7, 2011
• now I know why they call it EZ IN/OUT because it was PERFECT
Karan R. on Feb 7, 2011
• It was great!
Sandra K. on Feb 7, 2011
• Avoided traffic after game. Good location
John B. on Feb 7, 2011
• Great directions, easy in, easy out, close to Baseball and Football.
Alan J. on Feb 7, 2011
• Close tothe stadium and it literally took us only minutes to get out because of the location. I have never used this type of service before but would highly recommend it!!
Peter Z. on Feb 7, 2011
• little too far from Stam but had to have it...
Herbert B. on Feb 7, 2011
• Easy to find & close to the Cowboys Stadium. I would recommend using this lot!!
Bonnie R. on Feb 7, 2011
• Everything went smoothly, and access seemed easy. We went 4-5 hours before game time and then didn't get into the game so left at kickoff, therefore can't really comment on traffic at the heaviest times. Parking was very orderly and there was an attendant there at least up until game time - didn't see any problems with cars being blocked in. Paid $60 for Super Bowl parking and that price seemed reasonable although there were$40 lots a little farther away, and at least for that event, the other lots did not fill (EZ was pretty full). Tailgating for Super Bowl was pretty limited but permitted, and I guess the fire marshall limits grilling in all of the lots. No bathrooms here - the parking is at the rear of a block of commercial buildings. I was happy with the lot and would use it again for the convenience.
Stacia P. on Feb 7, 2011
• Extremely easy to get in and out of! Perfect day! Would HIGHLY recommend this service!!
Christopher M. on Feb 7, 2011
Richard D. on Feb 7, 2011
• The parking lot was easy to find with the directions provided through parkingwhiz. It was also at a great location for super bowl xlv. Very close to rangers stadium and six flags as well. Easy to exit following the game.
Brain S. on Feb 7, 2011
• Easy access. Good distance. Great price.
Mark S. on Feb 7, 2011
• Great experience! Got right in to park and the walk to Cowboys Stadium wasn't too far. Will definitely use again!
Lesa J. on Feb 7, 2011
David R. on Feb 7, 2011
• Space was great, easy in, very easy out. The directions for getting out were a little confusing as they referenced Randal Mill turning twice in a row which was confusing. We were actually going toward Houston so I had other directions and didn't need to use the included exist directions but when I looked at them they were confusing. I would use the spot any time - reasonably priced and all good.
Bradley H. on Feb 7, 2011
• pretty easy to find, no problems when we arrived with the reservation, people were friendly.
Maureen G. on Feb 7, 2011
• It was really EZ access and no wait to get out, unlike every other big event I've ever been to.
Keith R. on Feb 7, 2011
• The people working the lot were very friendly and the walk to Cowboy stadium was not bad. I would use again....
Wendy H. on Feb 7, 2011
• Easy in and out!
Lisa M. on Feb 7, 2011
• Amazing spot, i trully recomend it! a short walk to the stadium, and they host me as a king!
Saul E. on Feb 7, 2011
• I got parked right by exit so was very east to get out.
Paul S. on Feb 7, 2011
• convenience going in and out.
Robert W. on Feb 7, 2011
• As advertise easy in and out. Close enough to the stadium to be a good value. Will use thislot from now on.
Matthew M. on Feb 7, 2011
• Easy walk to Cowboys Stadium at a decent price.
David L. on Feb 7, 2011
• easy in and out.... attendants knew exactly what to do..... very fast and efficiant.
Jason P. on Feb 7, 2011
• Close and good price
Adele H. on Feb 7, 2011
• Good location, short walk to Ball park and Cowboystadium. No waiting to get put, easy access to freeway, would use it again.
Hector S. on Feb 7, 2011
• Great place to park-so easy to get back to freeway
John N. on Feb 7, 2011
• This was a fantastic place to park. Very EZ in and then out to IH 30. Would definitely use ParkWhiz again.
John M. on Feb 7, 2011
• Easy getting in and out of parking lot. And was greated by nice workers when we got there. Would use this parking area again. Thanks
Michael R. on Feb 7, 2011
• Everything as promised, attendants were friendly and things were well-organized...and in the chaos on Superbowl Sunday that was great. Would definitely recommend, and use them again.
Monica S. on Feb 7, 2011
• Curtious attendants & plenty of room to ark & tailgate
John D. on Feb 7, 2011
• Great parking easy access. Would park again.
Charles N. on Feb 7, 2011
• easy in easy out friendly staff
Craig G. on Feb 7, 2011
• thanks a lot-will do again in future
Jerry L. on Jan 11, 2011
• Getting to the parking lot was very easy. There was a person to greet us, take our e-ticket and direct us to a great spot. This parking was less expensive than other lots nearby. When it was time to leave, we were not himdered by other traffic exiting the stadium. We were on the freeway in just a minute or so. My only disappointment was that the lot was not made available until 3 PM for a 7 PM game at Cowboy Stadium. It limited our opportunity to spend time with friends and fans tailgaiting.
Michael W. on Jan 11, 2011
• We had a little issue finding the exact lot but were extremely pleased with it. We liked the proximity to the stadium and how easy it was to get back to the highway.
Michael H. on Jan 10, 2011
David O. on Jan 10, 2011
• Great location, easy in and out for a great price!
Tyesha E. on Jan 8, 2011
• The parking was awesome, great support staff, very accomodating and a very easy walk to Cowboy Stadium!
Howard S. on Jan 8, 2011
• can be improved by also providing signage to look for versus just an address
Thomas H. on Jan 8, 2011
• Awesome.
Darrell D. on Jan 8, 2011
• Perfect parking! Lady called me when we were 5min late and gave us detailed directions. Eticket on iPhone worked great. We exited a back road then took sixflags rd to get to highway quick and easy getaway!
Rachel N. on Dec 20, 2010
• Easy access. Well lighted at night.
Brenda T. on Dec 20, 2010
• easy to find and in a good proximity to the stadium. the departure wasn't bad wither.
Christopher C. on Dec 13, 2010
• After the game it was a short walk to the parking lot. Once at my car, I was out of the parking lot using the back road and headed east on I-30 in 4 minutes. WOW. Absolutely worth the cost.
Steve C. on Dec 6, 2010
Dean H. on Dec 5, 2010
• Drove right into lot, parked, caught a bicycle cart to the front door and repeated back to lot once game was over. Will use the location again.
Richard N. on Nov 23, 2010
• Great location for Cowboys Stadium parking. The walk is enjoyable, stroll past the Arlington Visitor Center, Rangers Ballpark and then the stadium. It is also will patrolled and well lit. The best reason is the exit route. You have a backroad out from the lot that can take you to I-30, Randol Mill, 360 or Division. Can save up to an hour trying to exit other lots I have used in the past.
James E. on Nov 22, 2010
• Great parking, could not have had a beter experience. We arrive early and were directed to our spot. After the game it was a short walk and an easy exit. We were on the road in no time. Would not ever consider any other lot to park in for a game.
Christopher G. on Nov 3, 2010
• It was great and very convenient, not to mention very close to the ballpark
Julian A. on Nov 2, 2010
• Parking was very close to the Ballpark! Easy in and out even in all the traffic. No hassles and almost way to easy. I'll be back again and again.
Danny R. on Nov 2, 2010
• Parking was very close to the Ballpark! Easy in and out even in all the traffic. No hassles and almost way to easy. I'll be back again and again.
Daniel R. on Nov 2, 2010
• Everything was good except the Rangers didn't win. No, really, I think there should be a little better signage pointing to the parking lot from the street. I pulled into the wrong lot and had to pull a u-turn and a prayer to find the actual lot I was supposed to go to. If I weren't two hours early so I could watch batting practice it could've been really frustrating. Just get a little better signage. This is a really cool product though, and I can't complain about how close I parked to the stadium.
Skyler H. on Nov 2, 2010
• Great location, internet friendly, awesome attendants! A++++++++++++++ Will only park here in the future!!
Ashley C. on Nov 2, 2010
• The parking was close and easy to get into and out of. Attendants were very friendly and available for questions.
James M. on Nov 2, 2010
• I was a little sceptical but this was easy and very convenient.
Robert B. on Nov 2, 2010
• Perfect.
Ron N. on Nov 2, 2010
• It was great ~ even better than the Lexus parking that we can never seem to get into anymore!
Carolyn K. on Nov 2, 2010
• Parking good. A little hard to get to.
Peggy J. on Nov 1, 2010
• Parking lot was terrific! Available when needed, easy in and easy out. Best ever and will definitely use again!!
Kenneth F. on Nov 1, 2010
• It really was EZ IN and EZ OUT- and this was a packed house World Series Game!! I will definately use them again!!!
Aimee V. on Nov 1, 2010
• Great service every time. This location is perfect for folks coming from Dallas. It is truly EZ in and EZ out.
James P. on Nov 1, 2010
• Very cloase and convenient
Steven B. on Oct 31, 2010
• Great location. Lives up to the name EZ IN/OUT.
Brian F. on Oct 31, 2010
• It made finding a spot very easy and quick. We were very close to the ballpark.. GREAT!
Jayson M. on Oct 31, 2010
• Wow! Was this easy and close! We'll surely use this location again.
Alicia W. on Oct 31, 2010
• It was ALL good!!!! They even saved me a spot next to my friends who I told to start parking there!!!!!!!! LOVE it!
Doreen M. on Oct 31, 2010
• Quick. Very easy. Great attendants. Highly reccomend !!!!!!
Julian K. on Oct 31, 2010
• I said it before....I have to be careful or one day when I go to park here, all the spaces may be taken!!!!!!
Doreen M. on Oct 26, 2010
• Wonderful, as always!!!!! They were there waiting for my husband and I!!!!!
Doreen M. on Oct 26, 2010
• Easy in and out...attendants very courteous... close to the stadium. I would park here again...especially in playoff games :)
Tonya J. on Oct 25, 2010
• Location and ease of parking. It was great!
Ernest P. on Oct 25, 2010
• It was perfect. Best parking I have ever had. I will come back to this same location everytime.
April S. on Oct 24, 2010
• I recommend your lot to MANY people. I need to stop or one day I will want to reserve a spot and there won't be one!!!! THANK YOU for holding my spot on Friday night! We were double booked and arrived around 8pm and alas!!!! Up walked a security guard with my name on her list!!!! The only thing was that I have parked in your lot about 5 times now for the Rangers and once for the Cowboys. I don't know how to get my free time (after 3 parks) though....cause they make you pay on line when you go to reserve. I am saving my stubs although I did NOT get one Friday night. :( Thanks for the opportunity!!!!! Doreen McKenzie
Doreen M. on Oct 24, 2010
• Easy in, Easy out. Perfect!!
Steve A. on Oct 23, 2010
• On a busy day where the Rangers and Cowboys were both playing, they had a sign out front for ParkWhiz reservation holders only. An attendant promptly greeted me and let me and my group in. The setup was great, plenty of space, perfect for asphalt tailgating!
Ryan C. on Oct 18, 2010
• Easy to find. Close to stadium. Easy to get out after alcs game.
George I. on Oct 17, 2010
• It was a great place to park, a very short walk to and from the stadium It was easy to find because we had used it before. The staff was courteous and it was nice to have a reserved spot.
Bobby W. on Oct 17, 2010
• Parking was easy to find off Arlington Downs Rd. There was a sign and attendant know what to do as soon as I mentioned ParkWhiz. Very short walk to Center Field entrance to Ballpark. Leaving was as expected - lots of traffic but real traffic issues experienced at Hwy 360. Getting out of parking lot was a breeze. Recommended, will do this again.
Greg S. on Oct 17, 2010
• I arrived about 30 minutes before the game. They sold out of spots 2 cars in front of me but I was able to drive on in to the front of the lot!
Ryan H. on Oct 17, 2010
• the parking was real close to the stadium. took us about 2 minutes to walk to the stadium gate. easy to find although the Parkwhiz signs are very hard to find. i don remember my confirmation noting to look for signs, but even so they would have been hard to see. i would suggest making the signs bigger. also, from your e-mail i could not link to an interactive map to see where i was on the interstate relative to the parking lot. otherwise the parkin lot was awesome as far as the convenience and how close it was to the stadium. easy in, easy out.
Jamie B. on Oct 16, 2010
• It was great having a reserved space. We normally park in this lot on game day and we love the easy in/out
Sylvia W. on Oct 16, 2010
• Great Place to Park! We have never had a problem. Great to know where you will be parking for the big game.
Barbara M. on Oct 16, 2010
• It couldn't have been easier and more convenient! Just a 2 minute walk to the center field gates. With traffic making our trip to the ballpark longer than expected it was so nice to not have to search for parking - we just pulled up, gave the attendant the reservation ticket, and parked. A DREAM ! Thanks.
Glenn H. on Oct 16, 2010
• simple, safe, easy access smooth experience
Michael F. on Oct 15, 2010
• Truly easy in and easy out! LOVE it!
Doreen M. on Oct 14, 2010
• This spot is east of the Ballpark, so be in shape and ready to walk! Easy in and easy out.
Julie J. on Oct 13, 2010
• Loved it! We were there for the Saturday, October 9, Texas Rangers playoff game and had no problems getting into the lot. Traffic was heavy leaving the game, but we expected that. It's so close to the ballpark and having a reservation really took the worry out of parking. I'd definitely do it again!
Judith S. on Oct 12, 2010
• No obvious sign when we got in from E. Randol Mill. I read reviews about this before. Seems that there is no improvement made so far. We had to make a round to find the parking lot. Other than that, the parking spot was good.
Rudy G. on Oct 11, 2010
• Overall a great parking experience. With the Cowboys and Rangers playing at the same time I expected problems, but did not have any! Great job.
Kenneth H. on Oct 11, 2010
• Super easy to get in and out of, even with Ranger playoff game going on 1 block away.
Paul S. on Oct 11, 2010
• Great location. Very close to the stadium for a pregnant woman. :) If we wouldn't have bought the parking pass on line we would have been driving around forever trying to find a spot. They were having to turn people away that didn't reserve their spots ahead of time.
Amanda H. on Oct 11, 2010
• All was great...courteous and efficient.
John G. on Oct 11, 2010
• Great spot. Very close to Centerfield gate.
Bryan M. on Oct 11, 2010
• Your directions were very good and your name says it all, it was EZ in and out of the site before and after the game. Thanks
Richard G. on Oct 11, 2010
• location was great - getting to the location was not. We did not realize until we finally asked someone that we could go through the barriers. If there was a sign there it would have made it so much easier. Now that we know we will definately use it again.
Deborah F. on Oct 11, 2010
• The location was great, and getting in was a breeze (although their signs were a little difficult to spot). It was nice to have the assurance before we ever arrived that we would have a spot to park. It wasn't exactly "easy" getting out, but I don't think that had anything to do with the lot, just a lot of traffic in the area on a very busy sports weekend. I would recommend it.
Jason K. on Oct 10, 2010
• great
Dianna B. on Oct 10, 2010
• It was nice knowing we had a place reserved since we were late getting to the game.
Diane C. on Oct 10, 2010
• It was convenient, but a bit difficult to find. The actual entrance road name would be helpful. But the lady running the lot could not have been more helpful.
S K. on Oct 10, 2010
• This worked out great for us. There we events at both Rangers Stadium and Cowboys Stadium and we did not have any problems finding the lot. We will without a doubt use the lot again.
Barbara M. on Oct 10, 2010
• Very good. No glitches. No surprises. Great location. Could use big Park Whiz signs guiding you to location when entering congested area.
Steven B. on Oct 10, 2010
• Great Parking and close to stadium. Thanks
Tim G. on Oct 10, 2010
• Great location for Rangers games - short walk to Center Field entrance. Definitely recommend.
Kris O. on Oct 10, 2010
• Had an easy time looking for the place! Would park there again!
Debora N. on Oct 10, 2010
• It was a little bit hard to find, but workers were very courteous and it was a short walk to the Rangers ballpark. Its a great place to park. It would be convenient to the Cowboy Stadium, as well.
Bobby W. on Oct 10, 2010
• As good as the official Rangers parking
Joseph C. on Oct 10, 2010
• Awesome! Right next to the Arlington Visitors Center. Very friendly staff. Would TOTALLY RECOMMEND this parking site. Going back today to the Cowboys game!
Paula T. on Oct 10, 2010
• It was super convenient! Great spot, not a far walk at all. I will use them for every Rangers game!
Nick Y. on Oct 6, 2010
• Attendant was polite. We got there in plenty of time to get a good spot, but it was nice to know that a spot was waiting for us if we had been delayed.
Doug N. on Oct 3, 2010
• Nothing bad at all - this is GREAT parking for Rangers Ballpark!!!! Thank you!!!!
Dennis L. on Oct 2, 2010
• Great parking place and at a reasonable price ... but you need better and bigger signs. Your lot is difficult to find especially with all the other vendors in the same area.
Richard D. on Oct 2, 2010
• Great location and friendly attendants - a great parking experience!
John S. on Oct 1, 2010
• Great place to park. Just a 15 minute walk to Cowboy Stadium. EZ out to get back home.
Robert G. on Sep 21, 2010
• The location was safe and easy to locate. The parking staff were very helpful and made it easy to park. I would recommend this parking lot to anyone seeking parking and I would stay here again.
James G. on Sep 20, 2010
• Nice transaction. The lot wasn't far from the stadium at half the price. I would consider using them again.
Ron W. on Sep 20, 2010
• A great experience! Easy to find, friendly attendants, safe parking!
Julie C. on Sep 20, 2010
• Easy to find. Space waiting for us when we arrived. Really quick exit!!
Howard T. on Sep 20, 2010
• Ease of finding it was great.
Sunil p. on Sep 20, 2010
• Great location and great staff.
Alicia L. on Sep 20, 2010
• good parking,u dont have to walk too far. And your guaranteed a spot so u dont have 2 drive around looking for one your spots already paid for.One less thing to have to worry about your in& out!
Kacy C. on Sep 20, 2010
• Lot was easy to find, attendants were quick and helpful. The walk to Cowboy Stadium took about twenty minutes, but all the tailgaters along the way made it very entertaining! Will use this service/lot again.
Mark E. on Sep 20, 2010
• This is definitely recommended. Looks like the fixed the signage issue others have talked about. Found it no problem
Gary L. on Sep 20, 2010
• It was great. We had front row services and got out quick, while others sat in traffic.
Jonathon H. on Sep 13, 2010
• Good location. I didn't realize it was on the back of the building, but the attendant in the front pointed me in the right direction.
howard M. on Sep 13, 2010
• Great service and location!
Dawn V. on Sep 13, 2010
• GREAT parking spot!!! So close to the stadium and it is EZ In and Out!!!!
Cheryl L. on Sep 12, 2010
• It was a great spot,I just would have liked to see a better sign for that parking lot that would help an out of towner trying to locate it. It was close to the Ballpark in Arlington and the lady at the entrance was very nice. I would recommend this spot to anyone looking to park close to the Ballpark and the price is fair. Dallas, TX
Gavin F. on Sep 12, 2010
• Fast and friendly
Warren A. on Sep 12, 2010
• Quick in and short walk to the stadium - it was great.
Riffe A. on Sep 8, 2010
• Good spot to park. About a 15 to 20 min walk to Cowboys stadium
Kevin B. on Sep 5, 2010
• Great location. A bit hard to find though - look for "Special Event" parking sign. I'll park here again now that I know where it is!
Dawn V. on Sep 4, 2010
• Great lot. Great service. Close to stadium. Easy in and out.
Karlyn K. on Sep 3, 2010
• The distance from the colleseum means a good walk, but the price is right.
William P. on Sep 3, 2010
• great price easy to get in and out of.
Eric G. on Sep 3, 2010
• Very close to the stadium and a reasonable price compared to surrounding area.
David A. on Sep 3, 2010
• We loved that we knew a head of time we had a parking spot and how much we had to pay. They did need signs directing to which entrance to go in, we went in 2 before we got into the right lot for the reservation.
Rebecca S. on Sep 3, 2010
• It was convenient and easy to get in and out. Attendant was courteous.
Robert F. on Sep 3, 2010
• No negative issues were encountered. The walk was only .6 miles to the stadium.
Benson F. on Sep 3, 2010
• I could not have asked for a better spot to park, in this case EZ In/Out Parking would be the best way to describe it, the attendendants were most helpful, as I pulled up they showed me where to park and mentioned the reserved spot had a cone placed there, all I had to do was set it aside. I was parked in less than 5 minutes. First time user and will always check ParkWhiz on parking availiability before coming to any event in the future.
Truman S. on Aug 30, 2010
• Parked there for a Rangers game on August 27th. As other reviewers have commented, a sign that identifies "EZIn/Out Parking" woul be helpful. Now the sign just reads "Special Event Parking". Thankfully, because of the other reviewers, I scouted the location on Google Earth and called ahead to confirm exactly where the entrance was located. The experience was fine. This lot is a three minute walk to the center field entrance (at least from the closest spaces). Stayed for the fireworks after the game, so I departed with the main rush of fans. Nevertheless, exiting from the lot was trouble free and clear. I was on I-30 within five minutes of exiting the lot.
John F. on Aug 28, 2010
• Was easy and quick, no worries about finding a place to park, close too! Recommended and will us again.
Bryan U. on Aug 27, 2010
• Everything was great! The only way to park at the ballpark!
Randall S. on Aug 27, 2010
• 5th time to park there! Great parking spot very close to the stadium! Hen you reserve, you get a saved spot up front! Will park here every time...
Chad T. on Aug 26, 2010
• Great parking really is the only way to buy a parking spot.
Gary B. on Aug 24, 2010
• Easy parking - close to the ballpark. I would definitely use it again.
Christopher N. on Aug 24, 2010
• Great parking spot! very close to the park!!! AWESOME A+++
Daniel Z. on Aug 17, 2010
• We got to the stadium early, not really sure where exactly to go. The directions from the internet after booking our reservations indicated to turn right on to a street that was actually blocked off. We had to do some asking and after the third person, was able to find the correct lot to park. Once there, it was awesome and quick to the ballpark. I would suggest some sort of signage (temporary! On a easy to read large display.
Gary S. on Aug 16, 2010
• this was the best thing we have done in years. we already recommended it to all of our friends. thank you.
David L. on Aug 15, 2010
• Attendants were very helpful. Forgot my GPS and your directions were great. Do you have a lot nearere the Cowboys Stadium?
William S. on Aug 13, 2010
• Easy in/easy out parking lot. Will use it again in the future.
Michael C. on Aug 13, 2010
• I was pleased to have reserved my spot as I didn't know what the parking situation was going to be. I always use this lot for Rangers games, but it's the first time going to a Cowboys game at the new stadium. The only thing that could have been better is that when I arrived, the first lot (where I generally park for baseball) was still accepting cars, but I was instructed to go around the corner and park behind the building since I was reserved with ParkWhiz. The first lot is easier to get in and out of and is right off the main road. Being in the back could be harder to exit, but as it turned out most folks had left before us and it was a breeze to leave. Will DEFINITELY use this lot again and will use ParkWhiz if I anticipate a late arrival or extra crowds.
David H. on Aug 13, 2010
• The parking was in a great location. It was quick to get to and from the ballpark. I would definitely recommend this parking to others.
Emma A. on Aug 12, 2010
• no problems at all .. about two blocks from the stadium.. worth the money..
Chad O. on Aug 11, 2010
• It was great parking! Easy in easy out just like it said. Very close to to entrance of the ballpark. Definitely recommend it!
Kristina B. on Aug 11, 2010
• This was a great place to park. We had no problems finding it & it was a block from the ballpark. It was awesome & a lot cheaper than a lot of other places. Would definately use again.
Shannon R. on Jul 30, 2010
• The parking is really close to the center field gate. Great for fans sitting in Home Run Porch or center field seats.
Thomas B. on Jul 29, 2010
• The good: It's just a block away from the stadium, great location. The bad: We couldn't find it off of the main street, we were directed to the backside of the building. One BIG sign on the corner saying EZ IN/OUT PARKING IS THIS WAY, TURN HERE DUMMY, would be great. All in all, good spot.
Nicholas Y. on Jul 28, 2010
• Parking was good. It would have been helpful to know which streets were closed before the game and which ones after. Overall good experience even with 46,000 plus fans.
Tammie M. on Jul 26, 2010
• Just GREAT. Second time I have used you guys and I tell everyone about ya!!!
Caren W. on Jul 26, 2010
• quick to the game and quick to leave. well done parkwhiz.
Rickie H. on Jul 25, 2010
• Great.easy to get to & out.close to park
James H. on Jul 24, 2010
• great location, quick walk. Also, park here 3 times, 4th time is free! Love it...
Chad T. on Jul 12, 2010
• Very easy to find and not a far walk from the Stadium at all.
Casey M. on Jul 5, 2010
• Convenient location and easy to set up the reservation on line. The only hiccup was the address of the parking lot was not the same as the actual location. The attendants were friendly.
Thomas H. on Jun 30, 2010
• Peace of mind, I arrived there exact on time and park my vehicle, get into the stadium. This is my first game experience I should say in USA. Of course, invited by my employer. Everything feels so simple however, I felt like base ball is good to watch in TV screen at home. It is just an excuse for almost all the people there to entertain with friends and families having food and bear. I think, no body is watching the game and the players even don't feel like playing. What! Any way, I did enjoy, Its the American Way...
Sujan S. on Jun 26, 2010
• Awesome parking!
Jimmy S. on Jun 26, 2010
• EASY IN/OUT just as advertised. short walk to ballpark. park three, get one free will keep me coming back.
Thomas T. on Jun 7, 2010
• The people who own the lot are very nice. They allowed us to tailgate which was nice. Also it is very close and you beat the traffic.
Kris W. on May 24, 2010
• Well worth the price for the location and convenience. I will definitely use this service again.
Matt . on May 24, 2010
• Excellent service. Easy to find, close to park (1 Block) and unbeatable price.
Gary M. on May 23, 2010
• My husband did not listen to the suggestion for how to get to the parking area driving in so after he dropped us off right by the Home Place entrance, he had to do some circling to get to the parking lot... traffic was busy--we got there closer to game time than we originally thought... he definitely needed his rez ticket to get in because they were not taking cash parkers anymore--so that worked out great walking back was no problem at all and we got out a little ahead of the traffic... would definitely use them again if we have tickets w/o a season parking pass... PS I also Google Mapped this site and checked the street-view photos to see what the area looked like--made it easier to know what lot we were looking for...
Richard C. on May 23, 2010
• Very easy to get to and GREAT location. About a 2 min walk with the closest spots reserved for those who use ParkWhiz. You also get to park free the 4th time if you keep using them and keep your parking stub. Will use again!
Chad T. on May 21, 2010
• Awesome to have reserved parking so close to the stadium. Very easy once we found it. Our only complaint is that it's not really clearly marked so it was really difficult to find.
Lisa &. on May 16, 2010
• The location was not exactly as noted on the website however, when I called the number listed the person who answered was very helpful and provided proper guidance to the parking location.
Dyann B. on May 13, 2010
• It was a little difficult to find but we called and they directed us right to the parking lot. Also, It was close to the stadium. Definately use this service again and tell our friend about it!!
Juan V. on Mar 16, 2010
• The location, ease of use, and lack of traffic were all great.
Steve F. on Mar 14, 2010
• Absolutely would use again. Easy to get in and out and all this with a record crowd over 108,000.
Donald H. on Feb 22, 2010
• We used this lot for the NBA All Star game and it was great! The attendants were very nice and helpful and had our reserved lot ready as soon as we got there. The lot's on the other side of the ballpark from the stadium which is somewhat of a walk but is definitely worth it. When the game was over there was barely any traffic getting out. Will use again for other events at the stadium and definitely recommend it to anyone!
Meghan S. on Feb 18, 2010
• It was a little difficult to locate at first..a big sign should be put up or something so it could easily be spotted.
Jennifer B. on Feb 16, 2010
• Excellent location - walk to stadium entrance B,C & D was about 10mins. When we left the stadium we turned left on Randol Mill Road (away from the staduium) and honestly hit no traffic. Great spot, excellent service and would park there again.
Xavier C. on Feb 15, 2010
• Always great parking
Johnny R. on Feb 15, 2010
• Hardly any traffic to and from the parking lot. Kind of a long walk, especially in the cold! bring good shoes and a big jacket in the winter.
Ramon B. on Jan 11, 2010
• I would use this again. The parking attendants were very friendly. It was easy to leave and we missed all the traffic.
Colby S. on Jan 11, 2010
• Spot was ready when I got there. Less than 1 block away from Rangers Ball park, it took about 10 minutes to walk to cowboys stadium, well light street. Best part was getting to George Bush took less than 10 minutes, it was very quick in and out.
Ernest N. on Jan 11, 2010
• This was a great place to park, and was significantly cheaper than most of the surrounding lots! Easy in and out.
Britni J. on Jan 11, 2010
• Awesome service. Next time cowboys play am using same parking service
Shaunda W. on Jan 5, 2010
• perfect location, easy in and out, very nice attendant
Dawn W. on Jan 4, 2010
• will use this place again. no hassle, easy in and out
Richard V. on Jan 4, 2010
• Good proximity to the stadium for the price, and had a reserved spot waiting for us. Also easy access to get out after the game, very little traffic.
Steve R. on Jan 4, 2010
• Had a reserved space at the front of the entry into the lot. This lot also has a back way out of the lot, which made leaving a lot faster and easier. I will use again in the future.
James E. on Jan 3, 2010
• It was easy to find and was not far from the stadium. We were able to miss the heavy traffic after the game, too.
Mark E. on Jan 3, 2010
This is a commercial office building located 300 yards east of Centerfield Gate, in a triangle area between Randol Mill Rd. Arlington Downs and Magic Mile. We have 3 segregated lots in this triangle area that cater to our customers needs. We are only 300 feet west of Magic Mile - some say it is the best hidden location!
Very little traffic volume makes for a quick easy exit after a majority of events This is a huge area for tailgating that is directly across from Rangers D Lot.
Tailgating is allowed but NO CHARCOAL GRILLING, gas only please. Ask Attendant about Free Parking and Reserved Tailgating areas.
This location is a commercial parking lot.
We are located between Ranger Parking Lots C & D next to Arlington Visitors Center. This lot is surrounding a commercial office building which can be entered from either East Randol Mill Rd, Arlington Downs Rd or Magic Mile.
Reserved parking is located behind the building at 2004 Arlington Downs (both this and the title address lead to the same place).
If you pull up to the barricade ask the attendent to allow you to pass thru and take the first right which is Arlington Downs Rd. pull in to the first attendent on the right.
If you are traveling from Stadium Drive/Ballpark Way and the barricades are up not allowing passage East on Randol Mill (usually 1 1/2 hour before Rangers game time), use Road to Six Flags which is one block north of Randol Mill Rd. (located at the third base corner of the Stadium) go east one block to Magic Mile turn South one block to Arlington Downs, turn back west and it is the building on the left side go to the first available entrance and hand the attendant your receipt.
Please don't hesitate to call 214-808-8096 if you have any questions. |
# Nested loop join
A nested loop join is a naive algorithm that joins two sets by using two nested loops. Join operations are important to database management.
## Algorithm
Two relations $R$ and $S$ are joined as follows:
For each tuple r in R do
For each tuple s in S do
If r and s satisfy the join condition
Then output the tuple <r,s>
This algorithm will involve nr*bs+ br block transfers and nr+br seeks, where br and bs are number of blocks in relations R and S respectively, and nr is the number of tuples in relation R.
The algorithm runs in $O(|R||S|)$ I/Os, where $|R|$ and $|S|$ is the number of tuples contained in $R$ and $S$ respectively. Can easily be generalized to join any number of relations.
The block nested loop join algorithm is a generalization of the simple nested loops algorithm that takes advantage of additional memory to reduce the number of times that the $S$ relation is scanned.
## Improved version
The algorithm can be improved without requesting additional memory blocks to involve only br*bs+ br block transfers. For each read block from $R$, the relation $S$ can be read only once.
For each block block_r in R do
For each tuple s in S do
For each tuple r in block_r do
If r and s satisfy the join condition
Then output the tuple <r,s>
Variable block_r is stored in memory, thus it is not needed to read it from disk for each tuple $s$. |
## Acta Mathematica
### Beurling's Theorem for the Bergman space
#### Note
A part of this work was done while the second author was visiting the University of Hagen. The second and third authors were supported in part by the National Science Foundation.
#### Article information
Source
Acta Math. Volume 177, Number 2 (1996), 275-310.
Dates
First available in Project Euclid: 31 January 2017
http://projecteuclid.org/euclid.acta/1485890984
Digital Object Identifier
doi:10.1007/BF02392623
Zentralblatt MATH identifier
0886.30026
Rights
#### Citation
Aleman, A.; Richter, S.; Sundberg, C. Beurling's Theorem for the Bergman space. Acta Math. 177 (1996), no. 2, 275--310. doi:10.1007/BF02392623. http://projecteuclid.org/euclid.acta/1485890984.
#### References
• [ABFP] Apostol, C., Bercovici, H., Foias, C. & Pearcy, C., Invariant subspaces, dilation theory, and the structure of the predual of a dual algebra. J. Funct. Anal., 63 (1985), 369–404.
• [AR] Aleman, A. & Richter, S., Simply invariant subspaces of H2 of some multiply connected regions. Integral Equations Operator Theory, 24 (1996), 127–155.
• [B] Beurling, A., On two problems concerning linear transformations in Hilbert space. Acta Math., 81 (1949), 239–255.
• [BH] Borichev, A. & Hedenmalm, H., Harmonic functions of maximal growth: invertibility and cyclicity in Bergman spaces. Preprint.
• [D] Duren, P. L., Theory of Hp-Spaces. Academic Press, New York-London, 1970.
• [DKSS1] Duren, P. L., Khavinson, D., Shapiro, H. S. & Sundberg, O., Contractive zerodivisors in Bergman spaces. Pacific. J. Math. 157 (1993), 37–56.
• [DKSS2]—, Invariant subspaces in Bergman spaces and the biharmonic equation. Michigan Math. J., 41 (1994), 247–259.
• [Gara] Garabedian, P. R., Partial Differential Equations, John Wiley & Sons, New York-London-Sydney, 1964.
• [Garn] Garnett, J. B., Bounded Analytic Functions, Academic Press, New York-London, 1981.
• [Hal] Halmos, P. R., Shifts on Hilbert spaces. J. Reine Angew. Math., 208 (1961), 102–112.
• [Hed1] Hedenmalm, H., A factorization theorem for square area-integrable functions. J. Reine Angew. Math., 422 (1991), 45–68.
• [Hed2]—, A factoring theorem for the Bergman space. Bull. London Math. Soc., 26 (1994), 113–126.
• [Her] Herrero, D., On multicyclic operators. Integral Equations Operator Theory, 1 (1978), 57–102.
• [HKZ] Hedenmalm, H., Korenblum, B. & Zhu, K., Beurling type invariant subspaces of the Bergman space. J. London Math. Soc (2), 53 (1996), 601-614.
• [HRS] Hedenmalm, H., Richter, S. & Seip, K., Interpolating sequences and invariant subspaces of given index in the Bergman spaces. J. Reine Angew. Math, 477 (1996), 13–30.
• [Koo] Koosis, P., Introduction to Hp Spaces. Cambridge Univ. Press, New York, 1980.
• [Kor] Korenblum, B., Outer functions and cyclic elements in Bergman spaces. J. Funct. Anal., 115 (1993), 104–118.
• [KS] Khavinson, D. & Shapiro, H. S., Invariant subspaces in Bergman spaces and Hedenmalm's boundary value problem. Ark. Mat., 32 (1994), 309–321.
• [M] Mirsky, L., An Introduction to Linear Algebra. Oxford Univ. Press, 1963.
• [R] Richter, S., Invariant subspaces of the Dirichlet shift. J. Reine Angew. Math., 386 (1988), 205–220.
• [S1] Shapiro, H. S., Weighted polynomial approximation and boundary behaviour of analytic functions, in Contemporary Problems in Analytic Functions (Erevan, 1965). Nauka, Moscow, 1966.
• [S2]—, Some remarks on weighted polynomial approximation of holomorphic functions. Mat. Sb., 73 (1967), 320–330; English translation in Math. USSR-Sb., 2 (1967), 285–294. |
I am trying to write a program to find a zip file's password. I know the password consists of only lower-case letters and it's length does not exceed 6 characters. I wanted to check the passwords of length 1 first, then length 2 and so on.
So I used breadth first search, but then I realized that BFS consumes so much memory, and std::queue makes things even worse. So I had to switch to depth first search. BUT... depth first search does not check the shortest passwords first. So that's a problem. How can I improve the memory management in my bfs function, Or change my dfs function so that it would check the shortest passwords first (I'm not sure if the latter is even possible)
Also my estimation is that the bfs function would be using about 2 Gigabytes of memory. How can I fix this?
#include <iostream>
#include <queue>
std::string key = "banana";
if (s == key) { //example checker function
std::cout << "Password is : " << s << std::endl;
}
}
void bfs(std::string s = "") {
std::queue <std::string> q;
q.push(s);
while(q.size()) {
std::string u = q.front();
q.pop();
if (u.size() < 6) {
for (int i = 'a'; i <= 'z'; i++) {
u += i;
q.push(u);
u.pop_back();
}
}
}
}
void dfs(std::string s = "") {
if (s.size() < 6) {
for (int i = 'a'; i <= 'z'; i++) {
s.push_back(i);
dfs(s);
s.pop_back();
}
}
}
int main() {
dfs();
std::cout << "Finished Time : " << clock() / (double) CLOCKS_PER_SEC;
}
## Pass references where practical
The dfs() routine doesn't really need to create a duplicate string. Instead, it could simply reuse a passed string. We do that by changing the prototype to this:
void dfs(std::string &s)
And then call it like this:
int main() {
std::string s;
dfs(s);
std::cout << "Finished Time : " << clock() / (double) CLOCKS_PER_SEC;
}
On my machine, the original code takes 6.3 seconds, but when modified like this, takes 4.7 instead, making a very easy improvement.
## Prefer iterations to recursion
Recursive functions are often a good way to approach a programming task, but there is a tradeoff in terms of memory and time. Specifically, one can often reduce or eliminate the computational cost of a function call and reduce or eliminate the memory overhead as well by converting from a recursive to an interative function. In this case, I'd advise altering the code so that the prototype is something like this:
bool brute(const std::string &alphabet, size_t len)
It's then a simple matter of calling the function with increasing sizes until either the password is found or you run out of combinations. For generating the combinations, I'd suggest using std::next_permutation something like this SO question shows.
## Look at similar questions here
Another nice source of advice is other similar questions. For example, this question is very similar and the answers are generally applicable to your code. |
## Van't Hoff Equation
$\ln K = -\frac{\Delta H^{\circ}}{RT} + \frac{\Delta S^{\circ}}{R}$
Jeffrey Wang
Posts: 18
Joined: Fri Sep 25, 2015 3:00 am
### Van't Hoff Equation
I don't quite understand when and why I need to use this equation. What is its relevance to delta H? What is an example of the problem that utilizes this?
Austin Hyun 1F
Posts: 26
Joined: Fri Sep 25, 2015 3:00 am
### Re: Van't Hoff Equation
So essentially, the Van't Hoff Equation is used to calculate the equilibrium constant at different temperatures. The equation assumes a constant delta So and a constant delta Ho, as these standard values should not change based upon a temperature change. Thus, given K1 at temperature T1, you can use the Van't Hoff Equation to calculate K2 at any temperature T2 for a reaction. I hope that clarifies things!
Rachel Lipman
Posts: 45
Joined: Fri Sep 25, 2015 3:00 am
### Re: Van't Hoff Equation
What type of wording in a question would lead us to use this equation?
Caleb Lim 2M
Posts: 13
Joined: Wed Sep 21, 2016 2:55 pm
### Re: Van't Hoff Equation
Is the Van`t Hoff Equation related to the Arrhenius Equation, or am I totally off the mark? I remember seeing that equation but I thought it was the Arrhenius equation (A variataion or something of it).
EmmaSaid3C
Posts: 22
Joined: Fri Jul 22, 2016 3:00 am
### Re: Van't Hoff Equation
I think that the best way to know when to use the Van't Hoff Equation is when you are given 2 equilibrium reactions at two different temperatures. It is part of the thermodynamics chapter and can be found on page 42 in the course reader.
Irma Ramos 2I
Posts: 51
Joined: Fri Sep 29, 2017 7:07 am
### Re: Van't Hoff Equation
Would there ever be a time when we are asked to calculate K but not given delta H? If so, then how would we solve for K? |
t-BuOK as a base
I thought $\ce{KOtBu}$ was used to form the less substituted alkene. Why is that not the case here?
That is because the benzylic hydrogen is more acidic. When deprotonated, the anion is stabilized through resonance in the aromatic ring.
The base will take with the most acidic proton first.
• Ahh, makes sense. Is this the same thing for allylic hydrogens, so if t-buOK had a choice between a hydrogen attached to a primary carbon or to a secondary but allylic carbon, it would choose the allylic carbon because it would form a more stable carbocation? – Sarah Smith Feb 6 '18 at 7:06
• You seem to suggest that an acid base reaction is preferred over the nucleophilic reaction. Is that a correct interpretation? – Gaurang Tandon Feb 6 '18 at 7:57
• @SarahSmith That would be a carbanion – Raoul Kessels Feb 6 '18 at 8:01
• @GaurangTandon In this case it is. t-BuO is a very strong base but a poor nucleophile due to its large bulk which impedes its approach to the substrate. – Raoul Kessels Feb 7 '18 at 22:01 |
## Algenol to distribute algae-derived ethanol commercially
##### 15 September 2015
Offering ethanol made from algae for the first time commercially, Algenol Biotech LLC and Protec Fuel Management, LLC have entered into an agreement to market and distribute ethanol from Algenol’s Fort Myers, Fla., commercial demonstration module. The two will also offer Algenol’s future 18 million gallons per year from its commercial plant, which is planned for development in Central Florida in 2016 and 2017.
Protec Fuel will distribute and market the fuel for E15 and E85 applications for both retail stations and general public consumption, as well as fleet applications.
This partnership will enable Algenol to leverage Protec’s established network of retail clients for the distribution of Algenol’s E85, E15 and other advanced biofuels, while also enhancing Protec’s ability to bring to market unique renewable fuels. The agreement encompasses E85 and E15 marketing and supply to Protec distribution network and to fuel terminals and other third parties, as warranted by market conditions.
While the partnership will initially focus on Florida, the agreement provides for expansion into a national partnership scope as Algenol develops projects in other markets. Algenol’s Florida-based production facilities will provide both parties and their customers with a substantial margin advantage versus fuels shipped from out-of-state.
This agreement follows a series of successful commercialization milestones achieved by Algenol, which include its pathway approval by the US Environmental Protection Agency (EPA) in December 2014, its organism approval by both the state of Florida and by the EPA in the same year, and the June 2015 completion of its 2-acre commercial demonstration module funded in part by a $25 million DOE Recovery Act grant. Algenol is producing ethanol meeting the D4806 ASTM specifications on a daily basis, and it can be sold commercially as E85. Algenol has developed a patented technology using algae to produce the four most widely used fuels: ethanol, gasoline, jet and diesel fuel, all for about$1.30 a gallon. The company captures, recycles and utilizes CO2 that is used as a feedstock for the algae, an approach specifically identified as a qualifying technology for reducing carbon emissions in the recently established Clean Power Plan.
Its pathway reduces Greenhouse gas emissions by 69% per gallon compared to traditional gasoline according to the official EPA pathway approval. |
mersenneforum.org > News The Next Big Development for GIMPS
Register FAQ Search Today's Posts Mark Forums Read
2020-08-02, 22:53 #243
kriesel
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2·3·739 Posts
Quote:
Originally Posted by firejuggler Happy that the cert is all right. But N/A as success? I don't know what should go there ( or if anything should).
I think "verified" or "verification failed" or some such. We know there are failures sometimes, that have been caused by proxy servers. The result of a CERT assignment is actually pretty important.
Last fiddled with by kriesel on 2020-08-02 at 22:54
2020-08-03, 00:55 #244
preda
"Mihai Preda"
Apr 2015
2×5×112 Posts
Quote:
Originally Posted by ATH Can I run PRP VDF with Gpuowl and turn them in with Manuel Results and then use Prime95 or mprime 30.2 to upload the proof files?
If you use gpuowl's primenet.py script with proofs it "just works". Also works great with multiple instances of gpuowl with -pool <dir> to gpuowl.
The proof files not-yet-uploaded are in the "proof" dir, and after upload are moved to "uploaded" dir so the user can still keep them for a while, archive them etc. And there's visibility on the status of each proof file ("has it been uploaded yet").
Last fiddled with by preda on 2020-08-03 at 00:56
2020-08-05, 18:38 #245 firejuggler Apr 2010 Over the rainbow 24·151 Posts I do have a question. What happen If the cert of say 93 145 769 do not confirm the PRP residue (very small chance I know) ? Does it hand another cert to another user? And in the case of a second fail? Does the PRP get invalidated? Last fiddled with by firejuggler on 2020-08-05 at 19:30 Reason: Missed the word ' what happen'
2020-08-05, 19:26 #246
Prime95
P90 years forever!
Aug 2002
Yeehaw, FL
157668 Posts
Quote:
Originally Posted by firejuggler I do have a question. If the cert of say 93 145 769 do not confirm the PRP residue (very small chance I know) ? Does it hand another cert to another user? And in the case of a second fail? Does the PRP get invalidated?
Yes, CERTs are run until one matches the proof or two do not match the proof but match each other. In the latter case, the PRP result is marked "suspect". It could be OK, but must be double-checked using traditional means.
2020-08-08, 16:59 #247
storm5510
Random Account
Aug 2009
U.S.A.
27778 Posts
Here is something I believe some here will be interested in:
Quote:
This is for a proof file from a valid assignment. 401 usually indicates a page not being available. Unauthorized how? The proof file is still in my Prime95 folder.
2020-08-08, 17:10 #248
Prime95
P90 years forever!
Aug 2002
Yeehaw, FL
715810 Posts
Quote:
Originally Posted by storm5510 Here is something I believe some here will be interested in: This is for a proof file from a valid assignment. 401 usually indicates a page not being available. Unauthorized how? The proof file is still in my Prime95 folder.
This is why we have beta testing :)
What is the exponent? I'll investigate. Do not delete the proof file.
There are a few ways this error can happen. Most likely are:
1) You (or prime95) have not submitted the results for the exponent. If you are manually getting assignments, upload your results.json.txt.
2) The MD5 proof file checksum sent with your result does not match the MD5 proof file checksum calculated by the proof uploader.
3) The proof file is being uploaded using a different primenet user id. This would be hard to do. In the old days, it was easy to accidentally submit manual results as anonymous. Prime95 used to "forget" the user id and switch to anonymous.
2020-08-08, 23:09 #249
storm5510
Random Account
Aug 2009
U.S.A.
101111111112 Posts
Quote:
Originally Posted by Prime95 This is why we have beta testing :) What is the exponent? I'll investigate. Do not delete the proof file. There are a few ways this error can happen. Most likely are: 1) You (or prime95) have not submitted the results for the exponent. If you are manually getting assignments, upload your results.json.txt. 2) The MD5 proof file checksum sent with your result does not match the MD5 proof file checksum calculated by the proof uploader. 3) The proof file is being uploaded using a different primenet user id. This would be hard to do. In the old days, it was easy to accidentally submit manual results as anonymous. Prime95 used to "forget" the user id and switch to anonymous.
It was sent on a subsequent attempt and now resides in my archived proofs folder. M10479479. There must have been a brief issue on my end. I have everything properly configured, including credentials. This was an automatic reserve done by Prime95.
Something else, assignments are being ran out-of-order. if a "Cert" appears in a worktodo file, it gets ran before anything else above it in the list. You may have mentioned this in your revised readme file. I have not gotten through all of it yet.
2020-08-09, 02:41 #250 Prime95 P90 years forever! Aug 2002 Yeehaw, FL 2×3×1,193 Posts There is a race condition where a proof upload attempt is made before uploading the PRP results. Sending results and uploading proofs are in different threads. I remember thinking to myself the window should be small and the proof will get properly uploaded next hour. Maybe the race is not as rare as I'd hoped.
2020-08-09, 03:46 #251
chalsall
If I May
"Chris Halsall"
Sep 2002
2·4,657 Posts
Quote:
Originally Posted by Prime95 Maybe the race is not as rare as I'd hoped.
Concurrency can be a bit of a pain.
I personally believe that computers are actually sentient, and quietly work to cause the most grief possible to programmers.
2020-08-09, 12:18 #252
intelfx
Jul 2020
13 Posts
Quote:
Originally Posted by Prime95 Windows 64-bit: https://www.dropbox.com/s/iz417i8hpt...win64.zip?dl=0 Linux 64-bit: https://www.dropbox.com/s/q4haz69wti...64.tar.gz?dl=0 This is much closer to a beta version. I haven't tested it as much as I should. This version can upload proof files. There is a new Resource Limits menu choice / dialog box. LL work preferences are gone, if you upgrading, your LL work preferences will be converted to PRP. Please review the overhauled readme.txt. Suggestions for further changes are welcome. I have a few ideas I'll put forward soon.
Code:
Your choice: 14
Consult readme.txt prior to changing any of these settings.
Temporary disk space limit in GB/worker (100):
Upload bandwidth limit in Mbps (50.000000): 50
Upload large files time period start (00:00):
Upload large files time period end (24:00):
Please enter a value between 0 and 100:
Is there really any merit in putting an artificial limit here? My Internet connection is cheap and fast, I have no problems in downloading however much data per day.
Last fiddled with by intelfx on 2020-08-09 at 12:30
2020-08-09, 15:05 #253
storm5510
Random Account
Aug 2009
U.S.A.
5×307 Posts
Quote:
Originally Posted by Prime95 There is a race condition where a proof upload attempt is made before uploading the PRP results. Sending results and uploading proofs are in different threads. I remember thinking to myself the window should be small and the proof will get properly uploaded next hour. Maybe the race is not as rare as I'd hoped.
Race, as in which gets there first. As far as I can tell, I have not gotten any more certification work in the past 12 hours. So, I disabled it on one machine, but left it active on the other, 10% of CPU time. I have not had any further problems since the comm glitch yesterday.
Similar Threads Thread Thread Starter Forum Replies Last Post airsquirrels Hardware 313 2019-10-29 22:51 mathwiz GMP-ECM 0 2019-05-15 01:06 Jean Penné Software 0 2011-06-16 20:05 Jean Penné Software 6 2011-04-28 06:21 Jean Penné Software 4 2010-11-14 17:32
All times are UTC. The time now is 21:58.
Sat Sep 26 21:58:55 UTC 2020 up 16 days, 19:09, 0 users, load averages: 1.62, 1.51, 1.57 |
RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PACKAGE AMSBIB
General information Latest issue Archive Impact factor Subscription Guidelines for authors Submit a manuscript Search papers Search references RSS Latest issue Current issues Archive issues What is RSS
Teor. Veroyatnost. i Primenen.: Year: Volume: Issue: Page: Find
Personal entry: Login: Password: Save password Enter Forgotten password? Register
2001, Volume 46, Issue 4
Large Deviations in the Approximation by the Poisson LawA. K. Aleshkyavichene, V. Statulevičius 625 Large-Deviation Probabilities for One-Dimensional Markov Chains. Part 3: Prestationary Distributions in the Subexponential CaseA. A. Borovkov, D. A. Korshunov 640 On the Minimax Estimation Problem of a Fractional DerivativeG. K. Golubev, F. N. Enikeeva 658 On the Erdős–Rényi Partial Sums: Large Deviations, Conditional BehaviorM. V. Kozlov 678 Poisson Measures Quasi-Invariant with Respect to Multiplicative TransformationsM. A. Lifshits, E. Yu. Shmileva 697 Estimate of the Accuracy of the Compound Poisson Approximation for the Distribution of the Number of Matching PatternsV. G. Mikhailov 713 Lyapunov-Type Bounds for $U$-StatisticsI. B. Alberink, V. Yu. Bentkus 724 Multidimensional Version of a Result of Sakhanenko in the Invariance Principle for Vectors with Finite Exponential Moments. IIIA. Yu. Zaitsev 744 Short Communications Randomized Optimal Stopping Times for a Class of Stopping GamesV. C. Domansky 770 Weak Convergence of a Certain FunctionalV. M. Kruglov, G. N. Petrovskaya 779 Lower Bounds for Probabilities of Large Deviations of Sums of Independent Random VariablesS. V. Nagaev 785 The Horizon of a Random Cone Field under a Trend: One-Dimensional DistributionsV. P. Nosko 792 Ratio Limit Theorem for Concentration FunctionsB. A. Rogozin 801 An Application of a Density Transform and the Local Limit TheoremT. Cacoullos, N. Papadatos, V. Papathanasiou 803 Strong Laws of Large Numbers for $B$-Valued $L_q$-Mixingale Sequences and the $q$-Smoothness of Banach SpaceGan Shixin 811 On Monotone Extension of Linear Continuous FunctionalsYu. A. Rozanov 814 Reviews and Bibliography Book review: Embrechts P., Klüppelberg C., Mikosch T. “Modelling Extremal Events for Insurance and Finance”V. K. Malinovskii 817 Book review: Bass Richard F. “Diffusions and Eliliptic Operators”V. A. Vatutin 818 Information on meetings of the General Seminar of the Department of Probability, Faculty of Mathematics and Mechanics, Moscow State University 821 Information on the Student Olympiad on the Theory of Probability 823 Letter to the editorsYu. V. Prokhorov 824
Contact us: math-net2020_01 [at] mi-ras ru Terms of Use Registration Logotypes © Steklov Mathematical Institute RAS, 2020 |
# Re: [tlaplus] In computing initial states, the right side of \IN is not enumerable
Dear Dominik and Stephan,
Thank you for your quick help. Dominik's BoundedSeq worked!
I was not sure that BoundedSequence would contain elements of Data because, for some reason, I thought it would be a set of functions [x->Data]. But TLC said that this invariant Len(q) > 0 => Head(q) \in Data was correct. I should probably hold off my questions until after I finish Chapter 14 :)
Thank you again,
Yuri
• References: |
Calculus Examples
Find the Sum of the Infinite Geometric Series
, ,
Step 1
This is a geometric sequence since there is a common ratio between each term. In this case, multiplying the previous term in the sequence by gives the next term. In other words, .
Geometric Sequence:
Step 2
The sum of a series is calculated using the formula . For the sum of an infinite geometric series , as approaches , approaches . Thus, approaches .
Step 3
The values and can be put in the equation .
Step 4
Simplify the equation to find .
Simplify the denominator.
Write as a fraction with a common denominator.
Combine the numerators over the common denominator.
Subtract from .
Multiply the numerator by the reciprocal of the denominator.
Cancel the common factor of .
Factor out of .
Cancel the common factor.
Rewrite the expression.
Multiply by . |
VSS BackupHelper
项目描述
Util that exposes a volume shadow copy (VSS) snapshot as a drive in Windows XP or Windows 2003 Server. Allows you to use rsync, robocopy etc on files normally locked by Windows.
评
0.0
0 评分次数
5 星 0 0 0 0 0 |
# Calculating the eccentricity of an exoplanet
I'm wondering how to calculate the eccentricity of an exoplanet by its radial velocity vs. phase graph. To clarify my question I will take an exoplanet called WASP-14b 2 as an example (http://exoplanets.org/detail/WASP-14_b).
A plot of the radial velocity of the star vs the phase is displayed in the upper left corner. I am wondering how I could possibly calculate the eccentricity of the exoplanet using this graph (or some other values given in the original measurements). I found a few ways to calculate the eccentricity:
$$e = \left | \mathrm{e} \right|$$
This uses the eccentricity vector which is calculated using this formula:
$$\mathrm{e} = \frac{v \times h}{\mu}-\frac{r}{\left | r \right |}$$
The problem here is that this formula needs the specific angular momentum vector and the position vector, which I do not know given only the measurements. However, there is another way to calculate the eccentricity:
$$e = 1 - \frac{2}{(r_a/r_p) + 1}$$
where $r_a$ is the radius of the apoapsis and $r_p$ the radius of the periaosis. These values are not known using only the measurements, but I believe it should be possible to calculate them by taking the integral of the sine function (radial velocity vs. phase). This would give me the position of the star at any given moment. The problem is that I cannot find the exact points displayed in the graph anywhere, let alone a sine function that would fit them.
When I do get an integral of the function I still have to create one for the planet itself, since this describes the movement of the star. I am able to calculate the mass of the planet using the following formula:
$$r^3 = \frac{GM_{star}}{4\pi^2}P_{star}^2$$
which gives me the distance between the star an the planet. Next I can calculate the velocity of the planet using:
$$V_{PL} = \sqrt{GM_{star}/r}$$
And after that I can calculate the mass of the planet using this formula:
$$M_{PL} = \frac{M_{star}V_{star}}{V_{PL}}$$
But this is where another problem comes up as a Wikipedia article on Doppler Spectroscopy states: "Observations of a real star would produce a similar graph, although eccentricity in the orbit will distort the curve and complicate the calculations below."
Where do I find the corrected calculations and how can I possibly calculate the eccentricity of this planet using only these values ($M_{star}$ and the plot, of which I cannot find the exact points)?
• Really not sure what you are trying to do. You fit an eccentric radial velocity curve model to the data. – Rob Jeffries Jan 22 '15 at 22:54
There are a number of options if you want an off-the-shelf solution to fitting RV curves. Perhaps the best free one is Systemic Console.
However, it is not too hard to do something basic yourself.
First define some terms:
$\nu(t)$ is the true anomaly - the angle between the pericentre and the position of the body around its orbit, measured from the centre of mass focus of the ellipse.
$E(t)$ is the eccentric anomaly and is defined through the equation $$\tan \frac{E(t)}{2} = \left(\frac{1+e}{1-e}\right)^{-1/2} \tan \frac{\nu(t)}{2}$$
The mean anomaly $M(t)$ is given by $$M(t) = \frac{2\pi}{p}(t - \tau),$$ where $p$ is the orbital period and $\tau$ is the time of pericentre passage.
"Kepler's equation" tells us that $$M(t) = E(t) - e \sin E(t)$$
Finally, the radial velocity is given by $$V_r(t) = K\left[\cos(\omega + \nu(t)) +e \cos \omega \right] + \gamma,$$ where $K$ is the semi-amplitude, $\gamma$ is the centre of mass radial velocity and $\omega$ is the usual angle defining the argument of the pericentre measured from the ascending node.
OK, so the problem is that the radial velocity does not depend explicitly on $t$, but rather on $\nu$. So what you do is the following:
1. Choose values for $K, \gamma$, $\omega$, $\tau$, $p$ and $e$; these are your "free parameters that describe the orbit. The closer you can get your initial guess, the better.
2. You use these parameters to predict what the radial velocities would be at the times of observation of your RV datapoints. You do this by calculating $\nu(t)$ using the equations above. Start with the second equation and calculate $M(t)$. Then you have to solve the third equation to get $E(t)$. This is transcendental, so you have to use a Newton-Raphson method or something similar to find the solution. Once you have $E(t)$ then you use the first equation to find $\nu(t)$. Then use 4th equation to calculate $V_r(t)$ at each of your datapoint times.
3. Calculate a chi-squared (or similar figure of merit) from comparing the predicted and measured values of $V_r(t)$.
4. Iterate the values of the free parameters and go back to step 2. Continue till your fit converges.
• Thank you for your answer! I'll let you know if this solved it. – kdnooij Jan 23 '15 at 6:45
• I'm sorry to ask, but how can I apply the Newton-Raphson method when I do not yet know the eccentricity (e)? – kdnooij Jan 24 '15 at 14:07
• @kdnooij You postulate an $e$ (along with the other 4 parameters), produce the expected radial velocity curve and compare it with your data. Adjust the parameters until you get a good fit. – Rob Jeffries Jan 24 '15 at 15:37
• A little bit late, but I came across this question again: It worked for me and I was able to determine the eccentricity and all the other parameters almost as precise as was done in the original research paper. I think a bit of improvement can be done on the chi-squared minimisation algorithm, but after all this worked out very well! – kdnooij Mar 11 '15 at 20:01 |
### - Art Gallery -
In mathematics, the metric derivative is a notion of derivative appropriate to parametrized paths in metric spaces. It generalizes the notion of "speed" or "absolute velocity" to spaces which have a notion of distance (i.e. metric spaces) but not direction (such as vector spaces).
Definition
Let $$(M,d)$$ be a metric space. Let $$E\subseteq \mathbb {R}$$ have a limit point at $$t\in \mathbb {R}$$. Let $$\gamma : E \to M$$ be a path. Then the metric derivative of $$\gamma$$ at t t, denoted $$| \gamma' | (t)$$, is defined by
$$| \gamma' | (t) := \lim_{s \to 0} \frac{d (\gamma(t + s), \gamma (t))}{| s |},$$
if this limit exists.
Properties
Recall that ACp(I; X) is the space of curves γ : I → X such that
$$d \left( \gamma(s), \gamma(t) \right) \leq \int_{s}^{t} m(\tau) \, \mathrm{d} \tau \mbox{ for all } [s, t] \subseteq I$$
for some m in the Lp space Lp(I; R). For γ ∈ ACp(I; X), the metric derivative of γ exists for Lebesgue-almost all times in I, and the metric derivative is the smallest m ∈ Lp(I; R) such that the above inequality holds.
If Euclidean space R n {\displaystyle \mathbb {R} ^{n}} \mathbb {R} ^{n} is equipped with its usual Euclidean norm $$\| - \|$$, and $$\dot{\gamma} : E \to V^{*}$$ is the usual Fréchet derivative with respect to time, then
$$| \gamma' | (t) = \| \dot{\gamma} (t) \|,$$
where $$d(x, y) := \| x - y \|$$ is the Euclidean metric.
References
Ambrosio, L., Gigli, N. & Savaré, G. (2005). Gradient Flows in Metric Spaces and in the Space of Probability Measures. ETH Zürich, Birkhäuser Verlag, Basel. p. 24. ISBN 3-7643-2428-7.
Mathematics Encyclopedia
World
Index |
Subsets and Splits