text
stringlengths
100
356k
# Difference between manometer and barometer Barometer Manometer A barometer is an instrument used to measure the air pressure as it varies with distance either above or below sea level. A manometer is used for measuring the liquid pressure with respect to an outside source which is usually considered to be the earth’s atmosphere. Types of Barometer: Mercury barometer Aneroid barometer Types of manometer: U-tube Manometer Enlarged Leg Manometer Well Type Manometer Inclined Tube Manometer
ToughSTEM ToughSTEM A question answer community on a mission to share Solutions for all STEM major Problems. Cant find a problem on ToughSTEM? 0 In compressing the spring in a toy dart gun, 0.2 J of work is done. When the gun is fired, the spring gives its potential energy to a dart with a mass of 0.04 kg. What is the darts kinetic energy as it leaves the gun? What is the darts speed? Edit Community 1 Comment Solutions 0 Work Done in Compressing spring = Work done by spring while returning to natural length Work Done on Dart = Work Done on spring during compression Work done on Dart = 0.2 J = KE of the dart (Because Change in Kinetic energy equals work done) $\frac{1}{2} m v^2 = W$ $\frac{1}{2} \times 0.04 \times v^2 = 0.2$ $v = \sqrt {10} m/s$ $v = 3.16 m/s$ Edit Community 1 Comment Close Choose An Image or Get image from URL GO Close Back Close What URL would you like to link? GO α β γ δ ϵ ε η ϑ λ μ π ρ σ τ φ ψ ω Γ Δ Θ Λ Π Σ Φ Ω Copied to Clipboard to interact with the community. (That's part of how we ensure only good content gets on ToughSTEM) OR OR ToughSTEM is completely free, and its staying that way. Students pay way too much already. Almost done!
Does finite+reduced fibers+connected fibers imply isomorphism? Suppose I have a morphism of Noetherian schemes over a field $k$ (if one needs this then assume $k$ is algebraically closed) $f:C'\rightarrow S$ which is finite with geometrically connected and reduced fibers. Is $f$ an isomorphism? - Note that this is true if $S$ is normal, by Zariski's main theorem. – Damian Rössler Nov 26 '12 at 8:07 Also note that if you do not assume that $S$ is surjective, then any closed immersion gives a counterexample (Jason Starr's answer is a closed immersion but you can see that he assumed that you wanted the morphism to be surjective). – Damian Rössler Nov 26 '12 at 8:10 3 Answers No: $\text{Spec} k[\epsilon]/\langle \epsilon \rangle$ mapping to $\text{Spec} k[\epsilon]/\langle \epsilon^2 \rangle$. - Though any closed embedding will give a counter example, it is true if you ask whether it is an isomorphism to the (scheme theoretic) image. In other words, the map is an isomorphism from $C'\to f(C')$ where $f(C')$ is thought of as the scheme theoretic image (which makes sense since the map is assumed to be finite), with $k$ algebraically closed. - Or, perhaps more to the point, the map from the normalization to a cusp on a curve. Note that such a map is an isomorphism on points without being an isomorphism of schemes. - Fiber above the cusp is not reduced. – René Nov 26 '12 at 9:39 Yes. You are right. I was thinking only about the isomorphism on points, not the fiber! – Ray Hoobler Nov 28 '12 at 17:24
# When will the last total solar eclipse occur? Oct 29, 2017 The last total solar eclipse visible from Earth will occur in about 558 million years time. #### Explanation: Solar eclipses occur because the angular diameter of the Moon can be greater than the angular diameter of the Sun. This means that when the Sun, Moon and Earth are in alignment, the Moon's disc can completely cover the Sun's disc. The Moon is moving away from the Earth at a rate of about 3.8cm per year. Eventually the Moon's angular diameter will always be smaller than the Sun's angular diameter and total solar eclipses will no longer occur. Let's do the calculations. I will use angular radius rather than angular diameter for convenience. I will also round some of the values as accuracy is not too important. The Sun has its smallest angular radius when it is at aphelion. The distance of the Earth from the Sun at aphelion is 152,000,000km. The radius of the Sun is 700,000km. The angular radius of the Sun at aphelion is: $\frac{700 , 000}{152 , 000 , 000} = 0.0046$ radians or ${0.26}^{\circ}$ The Moon has its largest angular radius at perigee. The distance from the Earth to the Moon at perigee is 356,400km. The radius of the Moon is 1,737km. The angular radius of the Moon at perigee is: $\frac{1 , 737}{356 , 400} = 0.0048$ radians or ${0.28}^{\circ}$ Clearly total eclipses are possible at present. Now the next thing is to calculate the perigee distance which make the maximum angular radius of the Moon equal to the minimum angular radius of the Sun. This distance is: $\frac{1 , 737}{0.0046} = 377 , 600 k m$ The distance the Moon needs to move away from the Earth is then: $377 , 600 - 356 , 400 = 21 , 200 k m$ Given that the distance the Moon moves away from the Earth in a year is $3.8 c m = 0.000038 k m$. The the time it will take for the Moon to be at that distance is: $\frac{21 , 200}{0.000038} = 558 , 000 , 000 y e a r s$ So, in about 558 million years time, no more total solar eclipses will be visible from Earth.
# zbMATH — the first resource for mathematics The essential spectrum of two-dimensional Schrödinger operators with perturbed constant magnetic fields. (English) Zbl 0564.35021 The author considers the Schrödinger operator $$L:=((1/i)\nabla -a)^ 2\upharpoonright C_ 0^{\infty}({\mathbb{R}}^ 2)$$ when the vector potential a is smooth and the vector field $$curl a$$ tends to a positive number $$B_ 0$$ at infinity. He shows that the essential spectrum of the closure of L consists of the odd multiples of $$B_ 0$$ by establishing the following interesting result about commutators. Theorem. Let P and Q be symmetric operators in a Hilbert space which are defined on a dense domain $$\Omega$$ which is left invariant of P and Q. Suppose that $$P^ 2+Q^ 2$$ is essentially self-adjoint and $$i(PQ- QP)u=(1+K)u$$ (u$$\in \Omega)$$ for some K which is relatively compact with respect to $$P^ 2+Q^ 2$$. Then $$\sigma_ e(\overline{P^ 2+Q^ 2})$$ is either empty or consists of the positive odd integers. $$\{$$ Reviewer’s remark. The spectrum of [may be totally different when curl a tends to zero at infinity. See K. Miller and B. Simon, Phys. Rev. Lett. 44, 1706-1707 (1980)$$\}$$. Reviewer: H.Kalf ##### MSC: 35J10 Schrödinger operator, Schrödinger equation 35P05 General topics in linear spectral theory for PDEs 47A10 Spectrum, resolvent ##### Keywords: Schrödinger operator; essential spectrum; commutators; curl a Full Text:
• Browse all Study of hard double-parton scattering in four-jet events in $pp$ collisions at $\sqrt{s} = 7$ TeV with the ATLAS experiment The collaboration No Journal Information, 2016 Abstract (data abstract) CERN-LHC. Inclusive four-jet events produced in proton--proton collisions at a centre-of-mass energy of sqrt{s} = 7 TeV are analysed for the presence of hard double-parton scattering using data corresponding to an integrated luminosity of 37.3 pb^-1, collected with the ATLAS detector at the LHC. The contribution of hard double-parton scattering to the production of four-jet events is extracted using an artificial neural network, assuming that hard double-parton scattering can be approximated by an uncorrelated overlaying of dijet events. For events containing at least four jets with transverse momentum p_T > 20 GeV and pseudorapidity eta < 4.4, and at least one having p_T > 42.5 GeV, the contribution of hard double-parton scattering is estimated to be f_{DPS} = 0.092 ^{+0.005}_{-0.011} (stat.) ^{+0.033}_{-0.037} (syst.). After combining this measurement with those of the inclusive dijet and four-jet cross-sections in the appropriate phase space regions, the effective overlap area between the interacting protons, sigma_{eff}, was determined to be sigma_{eff} = 14.9 ^{+1.2}_{-1.0} (stat.) ^{+5.1}_{-3.8} (syst.) mb. This result is consistent within the quoted uncertainties with previous measurements of sigma_{eff}, performed at centre-of-mass energies between 63 GeV and 8 TeV using various final states, and it corresponds to 21^{+7}_{-6}% of the total inelastic cross-section measured at sqrt{s} = 7 TeV. The distributions of the observables sensitive to the contribution of hard double-parton scattering, corrected for detector effects, are also provided. • #### Table 1 Data from Figure 10(a) 10.17182/hepdata.73908.v1/t1 Normalized distribution of the variable $\Delta^{p_{\mathrm{T}}}_{34}$, defined in Eq (16) of the paper, in data after unfolding to particle level. • #### Table 2 Data from Figure 10(b) 10.17182/hepdata.73908.v1/t2 Normalized distribution of the variable $\Delta\phi_{34}$, defined in Eq (16) of the paper, in data after unfolding to particle level. • #### Table 3 Data from Figure 11(a) 10.17182/hepdata.73908.v1/t3 Normalized distribution of the variable $\Delta^{p_{\mathrm{T}}}_{12}$, defined in Eq (16) of the paper, in data after unfolding to particle level. • #### Table 4 Data from Figure 11(b) 10.17182/hepdata.73908.v1/t4 Normalized distribution of the variable $\Delta^{p_{\mathrm{T}}}_{13}$, defined in Eq (16) of the paper, in data after unfolding to particle level. • #### Table 5 Data from Figure 11(c) 10.17182/hepdata.73908.v1/t5 Normalized distribution of the variable $\Delta^{p_{\mathrm{T}}}_{23}$, defined in Eq (16) of the paper, in data after unfolding to particle level. • #### Table 6 Data from Figure 11(d) 10.17182/hepdata.73908.v1/t6 Normalized distribution of the variable $\Delta^{p_{\mathrm{T}}}_{14}$, defined in Eq (16) of the paper, in data after unfolding to particle level. • #### Table 7 Data from Figure 12(a) 10.17182/hepdata.73908.v1/t7 Normalized distribution of the variable $\Delta^{p_{\mathrm{T}}}_{24}$, defined in Eq (16) of the paper, in data after unfolding to particle level. • #### Table 8 Data from Figure 12(b) 10.17182/hepdata.73908.v1/t8 Normalized distribution of the variable $\Delta\phi_{12}$, defined in Eq (16) of the paper, in data after unfolding to particle level. 
• #### Table 9 Data from Figure 12(c) 10.17182/hepdata.73908.v1/t9 Normalized distribution of the variable $\Delta\phi_{13}$, defined in Eq (16) of the paper, in data after unfolding to particle level. • #### Table 10 Data from Figure 12(d) 10.17182/hepdata.73908.v1/t10 Normalized distribution of the variable $\Delta\phi_{23}$, defined in Eq (16) of the paper, in data after unfolding to particle level. • #### Table 11 Data from Figure 13(a) 10.17182/hepdata.73908.v1/t11 Normalized distribution of the variable $\Delta\phi_{14}$, defined in Eq (16) of the paper, in data after unfolding to particle level. • #### Table 12 Data from Figure 13(b) 10.17182/hepdata.73908.v1/t12 Normalized distribution of the variable $\Delta\phi_{24}$, defined in Eq (16) of the paper, in data after unfolding to particle level. • #### Table 13 Data from Figure 13(c) 10.17182/hepdata.73908.v1/t13 Normalized distribution of the variable $\Delta y_{12}$, defined in Eq (16) of the paper, in data after unfolding to particle... • #### Table 14 Data from Figure 13(d) 10.17182/hepdata.73908.v1/t14 Normalized distribution of the variable $\Delta y_{34}$, defined in Eq (16) of the paper, in data after unfolding to particle... • #### Table 15 Data from Figure 14(a) 10.17182/hepdata.73908.v1/t15 Normalized distribution of the variable $\Delta y_{13}$, defined in Eq (16) of the paper, in data after unfolding to particle... • #### Table 16 Data from Figure 14(b) 10.17182/hepdata.73908.v1/t16 Normalized distribution of the variable $\Delta y_{23}$, defined in Eq (16) of the paper, in data after unfolding to particle... • #### Table 17 Data from Figure 14(c) 10.17182/hepdata.73908.v1/t17 Normalized distribution of the variable $\Delta y_{14}$, defined in Eq (16) of the paper, in data after unfolding to particle... • #### Table 18 Data from Figure 14(d) 10.17182/hepdata.73908.v1/t18 Normalized distribution of the variable $\Delta y_{24}$, defined in Eq (16) of the paper, in data after unfolding to particle... • #### Table 19 Data from Figure 15(a) 10.17182/hepdata.73908.v1/t19 Normalized distribution of the variable $\phi_{1+2} - \phi_{3+4}$, defined in Eq (16) of the paper, in data after unfolding to... • #### Table 20 Data from Figure 15(b) 10.17182/hepdata.73908.v1/t20 Normalized distribution of the variable $\phi_{1+3} - \phi_{2+4}$, defined in Eq (16) of the paper, in data after unfolding to... • #### Table 21 Data from Figure 15(c) 10.17182/hepdata.73908.v1/t21 Normalized distribution of the variable $\phi_{1+4} - \phi_{2+3}$, defined in Eq (16) of the paper, in data after unfolding to...
+0 # help 0 29 1 23,90*15/16 Guest Jun 14, 2017 Sort: #1 +6463 0 23,90*15/16 $${\color{red}23.90\cdot\frac{15}{16}}=\frac{239}{10}\cdot\frac{15}{16}=\frac{239\cdot3\cdot5}{5\cdot32}\\ \color{blue}=\frac{717}{32}=22\frac{13}{32}=22.40625$$ ! asinus  Jun 14, 2017 ### 7 Online Users We use cookies to personalise content and ads, to provide social media features and to analyse our traffic. We also share information about your use of our site with our social media, advertising and analytics partners.  See details
# Designing a Black Dayan Zhanchi to a Steam Punk Cube #### fat10000y ##### Member Hey guys, So I just started to learn to solve the 3x3x3 last week, and got so addicted... I already bought like 6 cubes just to tested out which one I like best. So far I like the Dayan Guhong best, and decided to sacrifice my Zhanchi for my art project. So I wanted to create a "steam punk" version of the cube. It took me around 5-6 hours total, and I'm finally finished today, just wanted to share the joy~~~ Check it out and let me know what you guys think!^^ https://flic.kr/s/aHsjXpg2QA Thanks~~~ Sylvia #### Attachments • 363.5 KB Views: 103 Nice! #### SweetSolver ##### Member That looks amazing! Very impressive, well done. Last edited: #### sk8erman41 ##### Member That is awesome! Great job, I am really impressed. Really cool concept and execution. #### Arti ##### Member Great work! Is it solvable? Or just a display piece? #### fat10000y ##### Member Great work! Is it solvable? Or just a display piece? Thanks all for the good comments! I'm so grateful^^ It is solvable. I tried couple times already. But it's hard for me because I don't remember which color belongs to which side. lol It is definitely not a speedcube tho ... #### DeeDubb Looks really nice! Sad you had to use a Zanchi though there's like $3 cubes you could have sacrificed. #### megaminxwin ##### Current Clock NR Holder Whoa... that's really really cool! #### brian724080 ##### Member That looks really nice! Thanks all for the good comments! I'm so grateful^^ It is solvable. I tried couple times already. But it's hard for me because I don't remember which color belongs to which side. lol It is definitely not a speedcube tho ... How is it not a speedcube? Isn't it a Zhanchi? #### DeeDubb ##### Member How is it not a speedcube? Isn't it a Zhanchi? I'm guessing the mods probably messed with it's turning ability. #### Rocky0701 ##### Member Looks really nice! Sad you had to use a Zanchi though there's like$3 cubes you could have sacrificed. I think the fact that it is a Zhanchi makes it even cooler! I am shocked that it only took you 5-6 hours. #### applemobile ##### Member Where you you put the water? #### fat10000y ##### Member I'm guessing the mods probably messed with it's turning ability. Yeah, I used a Zhanchi because I wanted to be a smoothly cube. It still turns smoothly, but since I had one of the clock hands sticking out, it kind of blocks the way. But the clock hand is movable, so I'll just have to move it away a bit so it won't block. It is more of a novelty cube I guess ^^ Just to look at, and play a little.. not for speed solving.. lol #### DAoliHVAR ##### Member dayum make a tutorial or smth i know most of us wont have the drawing capabilities but i'm willing to do it i have a gorillaz picture cube made with a zhanchi but i could do another with my sulong or smth great job! #### Mikel That looks pretty cool! #### wrathofgods54 ##### Member wow, that looks awesome #### Blurry ##### Member Wow. That looks amazing, I'd pay for that to be a display piece. Good Look on your grading and be sure to inform us #### Destro ##### Member Cool,can u make a tutorial on how to make one? #### LarryLunchmeat ##### Member That's awesome, great work! #### mati1242 ##### Member Your work is quite a piece of art I'd say. Looks amazing, but I have a question that might seems stupid for some of you so yeah here it comes : Could someone please explain me what "steam punk" means ?
One hundred and twenty students take an exam which is marked out of $100$ (with no fractional marks). No three students are awarded the same mark. What is the smallest possible number of pairs of students who are awarded the same mark? If you liked this problem, here is an NRICH task which challenges you to use similar mathematical ideas.
# Peak Oil and World Population Discussion in 'Earth Science' started by jmpet, Feb 8, 2011. 1. ### Me-Ki-GalBannedBanned Messages: 4,634 I am sure everyone is going to jump right on that one Billy . Great Idea though 3. ### TrippyALEA IACTA ESTStaff Member Messages: 10,890 Thermal Depolymerization on Wiki I have a long term plan regarding dairy effluent, but it is (at best) in its nascent stages. 5. ### Me-Ki-GalBannedBanned Messages: 4,634 Are you Australian Trippy ? We got the carousel equipment from a company in Australia . Westphalia was the branding I believe . I probably got the plans still . State of the art as far as milk production goes , but it is the collection method of the circular pit and gutter system that makes it all wash and work efficiently for a short crew of Dairy Farmers . Cows have a way of shitting all over everything . Aussies are smart fuckers when it comes to milking the cows and scraping away the poop. State of the Art . Thanks for the link 7. ### Me-Ki-GalBannedBanned Messages: 4,634 Looks like gas prices will have to go up more before there is financial viability for the polymer stuff to take hold . I see a cross roads coming though , so it seems to be worth an investment of time and energy . To be in the right place at the right time Trip 8. ### Billy TUse Sugar Cane Alcohol car FuelValued Senior Member Messages: 23,198 “… A Thermal Depolymerization demonstration plant was completed in 1999 in Philadelphia by Thermal Depolymerization, LLC, and the first full-scale commercial plant was constructed in Carthage, Missouri, about 100 yards (91m) from ConAgra Foods' massive Butterball turkey plant, where it is expected to process about 200 tons of turkey waste into 500 barrels(21,000 US gallons or 80 m³) of oil per day. … The CWT company claims that 15 to 20% of feedstock energy is used to provide energy for the plant. The remaining energy is available in the converted product. Working with turkey offal as the feedstock, the process proved to have yield efficiencies of approximately 85%; in other words, the energy contained in the end products of the process is 85% of the energy contained in the inputs to the process (most notably the energy content of the feedstock, but also including electricity for pumps and natural gas or woodgas for heating). If one considers the energy content of the feedstock to be free (i.e., waste material from some other process), then 85 units of energy are made available for every 15 units of energy consumed in process heat and electricity. This means the "Energy Returned on Energy Invested" (EROEI) is (6.67) … By comparison, the current processes used to produce ethanol and biodiesel from agricultural sources have EROEI in the 4.2 range, when the energy used to produce the feedstocks is accounted for (in this case, usually sugar cane, corn, soybeans and the like). These EROEI values are not directly comparable, because these EROEI calculations include the energy cost to produce the feedstock, whereas the above EROEI calculation for thermal depolymerization process (TDP) does not. …” From: http://en.wikipedia.org/wiki/Thermal_depolymerization SUMMARY: Dead bodies can be converted to fuel and other useful / valuable products at considerable profit and with (6.67 / 4.2) = 1.59 times greater energy efficiency than making alcohol or diesel fuel from vegitable imputs that must be grown and take up land that can be used for food production! 
Furthermore doing this reduces or totally eliminates the air and soil pollution current body disposal techniques produce. PS: some years ago, I was swimming in the near shore ocean, when suddenly my body was being given dozens of soft blows every second. I immediately realized a dense school of small fish was swimming past me. Less than one second later I worried that a shark or two was feeding on them, and might like me more. In two seconds or so I knew the best thing I could do would be to continue strong efficient swimming. In ten seconds or so the impacts stopped and I began to consider death by shark. It would be quick, and possibly only painful for a faction of a minute. (From films of pride of lionesses making a kill I think intense pain / huge unusual neural stimulation, can cause loss of consciousness.) After a couple of minutes I had concluded that when dead, the best thing to do with my fresh body (if no parts were required by others) would be to sink it in a coastal sea about 100 meters deep (feed some fish)*. Now I think converting it to profit and helping others via Thermal Depolymerization, if available, is more reasonable. ------------ * That only seems fair - I have eaten several times my weight in fish already - it should be their turn when I die. Last edited by a moderator: Jul 21, 2011 9. ### Me-Ki-GalBannedBanned Messages: 4,634 Great Billy ! I will stuff you in Me truck as soon as you have been liquefied. I hope I don't fight your relatives as they may want to burn you too! 10. ### ElectricFetusSanity going, going, goneValued Senior Member Messages: 18,477 A) Billy I don't think those energy return ratios take into account energy wasted to make the feedstocks for hydrous pyrolysis while for ethanol they do thus bring ethanol's ratios down significantly. B) Dead bodies and stuff represent a tiny portion of our organic matter waste and even if we include agriculture waste (the largest amount of organic waste that we produce by far) we could only replace a minority petroleum usage. C) There is considerable disbelief of their energy return ratios and product quality. In short hydrous pyrolysis end up producing a large amount of secondary products, such as natural gas, reducing primary product yield (octane if we want directly replace gasoline) of course optimizing the process to produce a specific product had been in constant researched, but its very feedstock dependent. I'm for recycling as many of our waste streams back into useful products as possible, logistically it will not be able to replace geological petroleum, in combination with other energy inputs its a winner but alone its not going to grow fast or take a significant percentage of the market. 11. ### TrippyALEA IACTA ESTStaff Member Messages: 10,890 Seconded. I had (have) a couple of schemes for multiple waste streams. My focus on Dairy effluent is purely because that's the predominant form of agriculture locally, and utilizing it would have dual benefits (if I pay dairy farmers to give me their effluent, it becomes a valuable resource to them, so they will institute bettter eflfuent management practices, and clean up the environment). However Hydrocarbons of any form was never going to be my sole marketable product. 12. ### Billy TUse Sugar Cane Alcohol car FuelValued Senior Member Messages: 23,198 That is true and clearly stated. The cost of growing the organic matter (the body) is not included, but the cost of fertalizer, etc. for growing corn is. 
Currently thoushand of dollars are paid to dispose of the organic matter - all than cost saving should be treated as additional saving for the ecologically better process. I am surprised at your conventional attitude Electric. Are you also dropping your "trans human" ideas? Ceasing to be rational and letting conditioned emotions overrule your normal logic? Again true but we already have an energy source mix, no need to demand that one source meet all of the energy requirements. Why not use what we have, especially when it is more ecologically friendly? As I read the article, they used the combustable gases as the sole fuel - I.e. they were the 15% percent of the total source energy not recovered in the end products. Are not the steady and dependable supply of bodies a "waste stream" that currently is not recycled for man's benefit and a great cost per pound to disposes of? You are not being consistent with your stated policy. Even it it supplied only 0.000,001% of the market's needs it should be done, if economically feasible as it eliminates a very significant dollar and ecological cost. 13. ### ElectricFetusSanity going, going, goneValued Senior Member Messages: 18,477 They were with there very specific feedstock, other feedstockes will produce a very different ratio. It cannot produce alkanes longer then the source organic matter carbon chains, and they were using animal fat (turkey by-product) which is 16-22 carbons long. If we were to use say cellulose we would not get much bigger the hexane on account the sugar molecules of cellulose only 5 and 6 carbons long. The ligin in cellulose will carbonize to solid carbon. FT synthesis can fix all of that with the drawback of being more complex and expensive. No, they are dependable but they are cheaper in more countries to bury or worse send to 3rd world countries to be salvaged, pollute and toxify the people over there. Aah, if economically feasible, generally if its supplies such a small percentage of the market that not a very good sign of economic feasibility. 14. ### Billy TUse Sugar Cane Alcohol car FuelValued Senior Member Messages: 23,198 That may well be true, but I was not suggesting that dead trees be used as the high energy content source. I suggested dead human bodies be used to make less damage to the enviroment (no formalhide leaking into the soil or CO2 released without displacing oil, etc.)... Turkeys, pigs, and people are all going to yield about the same set of simpler chemicals. wrong on two counts: (1) No one is suggesting shipping dead bodies to Africa, etc. & (2) the processes REDUCES the pollution, especially in places like India where bodies are placed on huge huge stacks of more CO2 producing firewood for cremation, but even if done with natural gas or an electric furnace, cremation is a CO2 source and waste of energy instead of a source of fuel to reduce the petroleum which is now burnt. Also this process is currently used to destroy toxins, like dioxens etc. that are hard to chemically destroy. "... A Thermal Depolymerization demonstration plant was completed in 1999 in Philadelphia by Thermal Depolymerization, LLC, and the first full-scale commercial plant was constructed in Carthage, Missouri, about 100 yards (91m) from ConAgra Foods' massive Butterball turkey plant, where it is expected to process about 200 tons of turkey waste into 500 barrels (21,000 US gallons or 80 m³) of oil per day. ..." (At $100/barrel, that oil is earning$50,000 /day for plant owner. 
No reason why a couple of dozen bodies per day from the local area could not be thrown in also for greater yield.)
# PracticeM 最近の更新履歴 yyasuda's website ## 全文 (1) ### Practice Questions for Midterm Subject: Advanced Microeconomics I (ECO600E) Professor: Yosuke YASUDA 1. True or False Answer whether each of the following statements is true (T) or false (F). You do NOT need to explain the reason. (a) A binary relation % is said to be a preference relation if it is “complete” and “transitive.” (b) If consumer’s choice satis…es the weak axiom of revealed preferences, we can always construct a utility function which is consistent with such choice behav- iour. (c) If a consumer problem has a solution, then it must be unique whenever the consumer’s preference relation is convex. (d) Suppose % is represented by utility function u( ). Then, u( ) is concave if and only if % is convex. 2. Sets Prove the followings (DeMorgan’s Law): (S \ T )c = Sc[ Tc (S [ T )c = Sc\ Tc Hint: You should use the de…nitions of union, intersection, and complement of sets. Drawing …gures (Venn diagrams) is not enough. 3. Cocavity Construct a monotone function f : R2+ ! R, which is quasi-concave but NOT a concave function. 4. Preferences Suppose % is a preference relation on X. That is, % satis…es completeness and transitivity. Then, show the followings. (a) For any x; y; z 2 X, if x y and y % z, then x % z. (b) For any x; y; z 2 X, if x y and y z, then x z. 1 (2) where and are de…ned as follows: a b , a % b and b % a a b , a % b and not b % a 5. Choice Consider a consumer problem. Suppose that a choice function x(p; !) satis…es Walras’s law and WA. Then, show that x(p; !) is homogeneous of degree zero. 6. Lagrange’s Method You have two …nal exams upcoming, Mathematics (M) and Japanese (J), and have to decide how to allocate your time to study each subject. After eating, sleeping, exercising, and maintaining some human contact, you will have T hours each day in which to study for your exams. You have …gured out that your grade point average (G) from your two courses takes the form G= 4 7(2 pJ +pM), where J (/ M ) is the number of hours per day spent studying for Japanese (/ Math- ematics). You only care about your GPA. Then, answer the following questions. (a) What is your optimal allocation of study time? (b) Suppose T = 10. If you follow this optimal strategy, what will be your GPA? 7. Kuhn-Tucker Condition Consider the following problem: Maximize W (x; y) = ln(x) + ln(y) subject to the following constraints: x a; y 0; x + y 10 where a is a non-negative parameter. Then, answer the following questions. (a) Solve this problem by using Kuhn-Tucker conditions (you can assume second order conditions are satis…ed), and derive the maximum derive function M (a). (b) Now substitute a = 2. Derive the bordered Hessian and verify that your solution is a global maximum. 2 Updating... ## 参照 Updating... Scan and read on 1LIB APP
AMBIENT # Hỏi đáp Unit 3 Tiếng Anh lớp 9 phần Speak Lý thuyết ## Danh sách hỏi đáp (32 câu): • ### Complete the sentences with will or will not ,In a few years everyone.... know how to use the internet. bởi thi trang 23/11/2018 complete the sentences with will or will not 1,In a few years everyone..................know how to use the internet Theo dõi (0) • ### READ THE PASSAGE . WRITER T FOR TRUE SENTENCES , F FOR FALSE. It is great to have pen pals. In my opinion, friendship is among the meaningful relations in our life. 28/11/2018 READ THE PASSAGE . WRITER T FOR TRUE SENTENCES , F FOR FALSE SENTENCES.WHY? It is great to have pen pals.In my opinion,friendship is among the meaningful relations in our life.Many of my friends have pen pals and they correspond regularly.A friend of mine once got a letter from the school mailbox and told me a lot of interesting things about her pen pul.That made me excited and I was eager to have one.Threefore,I got online and did some chatting.I was lucky to get to know a very nice Australian girl.Her name's Jenny.She and I are the same age and we have a lot of things in common.Although my English was not very good at first,we were able to understand each other quiet well.My English has improved a lot.Jenny has never been to Viet Nam,I will take her to some interesting palaces,especially our World Heritage Sites,such as Ha Long Bay,Hoi An Ancient Town,My-Son Tower.Through her mail,she tells me about her country.Thanks to Jenny,I know more and more about Australia.I hope to be an exchange student in Australia some day and we will be able to met. 1.The writer's English is now better than before.__________ 2.The writer has never been Australia before._____________ 3.Jenny has visited many World Heritage Sites as Viet Nam.______________ 4.The pen pal mentioned is as old as the writer.________________ 5.They correspond in Vietnamese.______________ 6.The writer has met Jenny recently.___________________ Theo dõi (0) • ### điền giới từ: He often goes to work.........motorbike,but yesterday he went .... his friend's car. bởi Tra xanh 19/12/2018 điền giới từ 1.He often goes to work.........motorbike,but yesterday he went.............his friend's car 2..........mistake, he put ........a black shoe and brown shoe 3.Mrs Smith is ............ charge..........this class 4.I'm very gratefull...............you ...............all your support 5.He insisted on complaining............the boss.........the bad manner of the salesgirl. Theo dõi (0) • ### Rewrite these sentences using "wish" mệnh đề ước muốn bởi Thanh Truc 23/01/2019 Rewrite these sentences using "wish" mệnh đề ước muốn 1) i don't know your new address. -> i wish.. 2) she doesn't have enough money to buy the beautiful house. -> she wishes... 3) she can't apply for the job because she isn't good at English. -> she wishes.. 4) I'm sorry that i don't finish my housework. -> i wish... 5) it's too dark for me to read.-> i wish... 6) we are going out but it is raining heavily now.-> i wish... 7) i can't go to the place because i don't have a map. -> i wish... 8) i hate living the city. It's very hot. -> i wish... 9) what a pity! I don't know his name. -> i wish... 10) you can't see my doctor now. He has just gone out. -> i wish... 11) i didn't go to the market yesterday. -> i wish... 12) last sunday, it was raining. -> i wish... 13) i had a lot of friends. -> i wish.. Theo dõi (0) • ### Not until darkness fell………. he hadn’t done half of this work. 10/03/2019 Not until darkness fell………. he hadn’t done half of this work. 
• A. that he realized • B. that he didn’t realize • C. did he realize • D. didn’t he realize Theo dõi (0) • ### Chọn một từ có phần gạch chân phát âm khác các từ còn lại: A. hottest B. hostel C. hour D. happy bởi Anh Trần 27/07/2019 I. Chọn một từ có phần gạch chân phát âm khác các từ còn lại. Khoanh tròn A, B, C hoặc D ứng với từ chọn như ví dụ (câu 0) đã làm. (0.4p) 0. A. hottest B. hostel C. hour D. happy 1. A. starts B. books C. hopes D. rains 2. A. floor B. moon C. soon D. food II. Chọn một từ có trọng âm chính rơi vào vị trí âm tiết khác các từ còn lại. Khoanh tròn A, B, C hoặc D ứng với từ chọn như ví dụ (câu 0) đã làm. (0.6p) 0. A. mother B. brother C. machine D. beauty 1. A. deny B. prefer C. protect D. visit 2. A. active B. consist C. section D. happy 3. A. travel B. admit C. exchange D. relax Theo dõi (0) • ### give the correct form of verb? bởi My Hien 21/08/2019 1. if you are hungry, I (make) you something to eat. 2. if I see Mai, I (invite) her our for dinner. 3. Ill visit Nga if I (go) to Xuan Thanh villge. 4. if she (ask) me, Ill help her. 5. if you (not get) to bed now, you cant get up early tomorrow morning. 6. I dont think I will join you if it (keep) raining like this. 7. I (not take) the bus if it is too late. 8. if you go on playing truant, the teacher (not let) you sit the final exam. 9. if I (see) Tom, I will tell him. 10. if you (go) away, please write to me. Theo dõi (0) • ### Chuyển câu trực tiếp sang gián tiếp mother said Nam, why don't you go to bed? bởi Lê Minh Hải 30/09/2019 chuyển câu trực tiếp sang gián tiếp : Mother said:' Nam, why don't you go to bed'' Theo dõi (0) • ### Give the correct form of word the story begins with a of the author is native village (describe)? 30/09/2019 1) Chia từ trong ngoặc : a) The villagers wellcomed the visitors .........(warm) b) She looks ........... in her new coat ( attract ) c) He did not tell me the .............(true) d)Nam said he would leave the ..........day (follow) e ) The story begins with a .......... of the author is native village (describe) 1) hoàn thành câu dùng từ gợi ý : a) somewhere /Jim /I /a few /remembered /months /meeting /ago. b)Da Lat /she /to /to /take /decided /a bus. c) walking /her dog /free /in /she /with /enjoys /her /time Theo dõi (0) • ### Write a short paragraph about the topic how to protect the environment our natural environment and surrounding provides us with everything that we ever need? 09/10/2019 Write a short paragaph about the topic how to protect the environment . HELP ME !!!!! Theo dõi (0) • ### Fill in the gap how do you usually get to your home? bởi Van Tho 09/10/2019 a. What are you going to do on your vacation? b. What is your hometown like? c. Who lives there? d. How far is it from here to your home? e. How do you usually get to your home? f. How often do tou go to your home? g. Do you love your hometown? h. Which one do you prefer, the country or the city? 1. .......................................................................... I usually travel by train. 2.......................................................................... I'm not sure. I think it's bout 850 km. 3. ......................................................................... I'm going home. 4. ......................................................................... I prefer the country. 5. ......................................................................... Oh yes, I really love it. 6. ......................................................................... It's a small beautiful village. 7. 
......................................................................... My grandparents, my parents 8. ......................................................................... Twice a year. Theo dõi (0) • ### Rewrite without changing the meaning the composition was so bad that i couldn't read it? 09/10/2019 1. The composition was so bad that I couldn't read it (too..to) 2. The book is so interesting that we have read it many times (such...that) 3. He drives too fast for me to call (so..that) 4. The food is too hot for the old woman to eat ( enough ..to) 5. This folk song is simple . Everybody can sing it 6. The little girl looks miserable. We all feel sorry for her The litte girl...................................................................... Theo dõi (0) • ### Viết lại câu he said that he was sorry he hadn't told me the truth before? bởi trang lan 11/10/2019 * Viết lại câu 1. He said that he was sorry he hadn't told me the truth before (apologized) ~> 2. If only I had attended the professor's lecture ( regret) ~>​ ​* Viết lại câu sao cho nghĩa không đổi ​1. Last year she rang much better than she does now ​~> Now she ................ ​2. She asked Michael what he liked about her new dress (( Câu này đề nó ghi thế , nhưng mình nghĩ what phải là how chứ nhỉ ? Nếu what đúng thì chuyển sao ? ​~> " What ............ ​3. I would rather she didn't go to the movie tonight ~> I'd prefer Theo dõi (0) • ### Write a letter, using the following words or phrases how family i well parents? bởi hà trang 11/10/2019 X- Write a letter, using the following words or phrases : A/ Dear Tom, I am very pleased to receive your letter 2 days ago. 1- How / family ? / I / well / parents. .................................................................................................................................................................. 2- Live / countryside / North / Vietnam. .................................................................................................................................................................. 3- Life / quiet / peaceful / people / friendly / honest. .................................................................................................................................................................. 4- Like / come / see / summer ? .................................................................................................................................................................. 5- Look forward to / see / soon. .................................................................................................................................................................. Sincerely, Tam. B/ Dear Tom and Alex, 1- I / glad / when / know / you / going / visit / my country. .................................................................................................................................................................. 2- You / not tell / me / when / you / come. .................................................................................................................................................................. 3- I / be going to / finish / second semester / the college / and / I / hope / .................................................................................................................................................................. 4- When / exactly / you / coming ? 
.................................................................................................................................................................. 5- You / go / air ? .................................................................................................................................................................. 6- Tell / date and time / you arrive / so that / I / get / the airport / receive you. .................................................................................................................................................................. 7- Not live / a hotel. Stay / my home. .................................................................................................................................................................. 8- It / be small / but / we / arrange / you / have / your own room. .................................................................................................................................................................. 9- I / tell / my parents / your arrival / and / we / be / look forward / see you. .................................................................................................................................................................. 10- This / be / good chance / us / show / hospitality / you / and I / take / around / city. .................................................................................................................................................................. Hope to hear you soon. Love, Hoang. C/ 1- Tom / Jack / go / movies / last week. .................................................................................................................................................................. 2- Film / be good / but / it / be longer / they thought. .................................................................................................................................................................. 3- When / come out / the cinema / last bus / had gone. .................................................................................................................................................................. 4- They / not know / how / get home. .................................................................................................................................................................. 5- Tom / want / get a taxi / but / Jack / not agree. .................................................................................................................................................................. 6- Final / they / start / walk home. .................................................................................................................................................................. 7- It / be / very long walk. .................................................................................................................................................................. 8- They / be getting / tired / suddenly / a car / stop / next to them. .................................................................................................................................................................. 9- To their surprise / it / be / neighbor of theirs. .................................................................................................................................................................. 10- They / be / happy / see / him / because / get a ride home. 
.................................................................................................................................................................. Theo dõi (0) • ### Give the correct form of word Paris is for the Eiffel Tower(fame)? bởi Mai Anh 12/10/2019 Bài 1 . 1. It’s…………………………to cross the avenue. (danger) 2. The country’s…………………………resources include forests, coal and oil. (nature) 3. Paris is…………………………for the Eiffel Tower. (fame) 4. Last night, the TV program was very ………………………….(interest) Theo dõi (0) • ### Fill in the gap how would you buy something to eat at the restaurant in foreign country if you country language? 12/10/2019 How would you buy something to eat at the restaurant in foreign country if you (1).. country language? In the most countries , you would have to (2).. or point to items on the menu and take your chances. This is not true in Japan In the Japan, restaurant windows or showcase display samples of every food the restaurant (3).. To make a selection, a customer simply looks the food over ,(4) ..a decision ,and points to the desired item The samples might look (5).. to eat, but you'd better not try them .These mouth-watering dishes are made of plastic! The Japanese first (6).. fake food in the 1920s to introduce people (7) .. unfamiliar Western dishes .Now fake also introduces Western to underfamiliar or exotic Japanese dishes. Thefake food isn't inexpensive to make . A single shrimp might cost $2 and a larger dish of food$15 ,however, unlike real food ,the plastic food (8).. forever 1. a. don't know b. didn't know c.haven't known d. hadn't known 2. a.go hungry b. went hungry c.go hungried d. went hungried 3. a.to serve b.serving c.serves d. severed Theo dõi (0) • ### Nối A với B bring into one place or group? bởi Sasu ka 12/10/2019 A B 1. grocery 2. collect 3. entrance 4. part time 5. reach 6. shrine 7. gather 8. feed 9. sightseer 10. route a. bring into one place or group b. arrive at a place c. give food to eat d. bring things together e. where people buy food, small things f.way from a place to another g. shorter or less than standard time h. place where sacred things are kept i. where you into a place j. person who goes around to see objects or places of interest Theo dõi (0) • ### Choose the correct answer i have broken my pencil? bởi con cai 12/10/2019 choose the underlined part among A,B,C or D that needs correcting. 1. I have broken my pencil . May I borrow one of your? A B C D 2. john used to going to school by bus. Now he goes by bicycle. A B C D 3. We get used to live in the countryside before we moved to this city. A B C D 4. When I was on holiday last summer , I was doing to the beach every day. A B C D 5. Do you ever wish you live in a castle? A B C D Theo dõi (0) • ### Rewrite without changing the meaning they often went swimming in the afternoon? 12/10/2019 1.they often went swimming in the afternoon -they used to....................................... 2.she is typing her assignment - her assignment ............................. 3. the teacher will explain the lesson until all the students understand it -the lesson................................... 4. they sell jeans all over the world - jeans.................................... 5. peple are going to build a new to library in the area - a new library.................................. 6. the clown made us laugh a lot - we ............................................ 7.my father used to take us to the circus when we lived in the city - we ................................................ 
8. i usually got up late lats year , but this year i often get up early -i used to 9. you have to finish your homework in time your home work ............................... 10. I last saw her three year ago - i haven't Theo dõi (0) • ### Rewrite without changing the meaning it's no good repairing that old calculator? bởi Lê Minh Trí 12/10/2019 Viết lại câu sao cho nghĩa ko đổi 1.it's no good repairing that old calculator 2she had just got up when someone rang the doorbell 3.she has never entered such a huge building as that 4.they started painting 5 years ago Theo dõi (0) • ### Rewrite without changing the meaning it's not a good idea to use the village's land to build new roads? bởi Hoai Hoai 13/10/2019 Complete the second sentence so that it has a similar meaning to the first meaning to the first one, using the words in brackets 1. It's not a good idea to use the village's land to build new roads (woudn't) -> I.......................................................... 2. You should visit the historical places of the area (worth) -> It is........................................................ 3. He suggested seeing Trang An, a natural wonder of our area (visit) 4. The sleepy villages are expected to mushroom into crowded towns two years (supposed) -> The sleepy villages.................................... 5. It is important to educate children to preserve traditional values (neccesary) -> It is............................................................ Rewrite the following sentences using adjective or adverbs showing degree of change. 1. the school facilities have been improved a lot within the last three years. -> There have been........................................... 2. there is a minor increase in the number of children going to school this year. -> the number.................................................... 3. The number of nuclear families in the countryside has risen little by little. -> the number..................................................... Theo dõi (0) • ### Complete the sentences with the correct form of the verb in brackets Terry has decided (look for) a new job? bởi Tuấn Huy 13/10/2019 Complete the sentences with the correct form of the verb in brackets. 1. Terry has decided(look for) a new job. 2. Will you two please stop(argue)? I'm trying to work. 3. Jill's hair looks nice. Has she had it(cut)? 4. Did you leave the cooker on? I can smell something(burn). 5. It's Tom's birthday today. Did you remember(post) his card? 6. Could you let me(know) what time the meeting starts? 7. What do you enjoy(do) in your spare time? 8. The teacher told the students(meet) at the station at 9 a.m. Theo dõi (0) • ### Give the correct form of word nobody owned up (take) the bag? bởi Spider man 13/10/2019 24. Nobody owned up ………… (take) the bag. Theo dõi (0) • ### Rewrite without changing the meaning it’s not worth making him get up early? 14/10/2019 Viết lại câu với nghĩa tương đương: 1.It’s not worth making him get up early 2. I don’t live in the courtryside anymore 3.As soon as I left the house, he appeared 4.This is the first time I have seen so many people crying at the end of the movie. 5.Nana often cried when she meets with difficulties. 6.I prefer going shoping to playing volleyball. 7. I like do collecting stamps. 8.She studies hard because she wants to pass the final examination. Theo dõi (0) • ### Rewrite without changing the meaning we didn’t have enough rain, so we could not grow rice? bởi Sam sung 14/10/2019 We didn’t have enough rain, so we could not grow rice. 
If 2. Ann took the job because she didn't know how difficult it was. If 3. I can’t go to the party because they don’t invite me. If 4. Today is not Sunday, so we can’t go swimming. If 11. You (play) football on the street because it’s very dangerous. (choose the correct verb form, using should or shouldn’t) 14. Today isn’t Sunday, so I can’t go to the cinema with you. If 15. He doesn’t get good health because he doesn’t get up early. If 16. We don’t visit you very often because you live so far away. If 17. They want to buy the house, but they haven’t got enough money. If 28. He wasn’t given that job because he didn’t know information. If 29. It’s raining, so we stay home. If 33. I don’t ride the bus to work every morning because it’s always so crowded. If 39. We got lost because we didn’t have a map. If 40. She missed the first train this morning so she was late for the meeting If she 46. You got into so much trouble because you didn’t listen to me. If 48. Mai is sick now because she didn’t follow the doctor’s orders. If 30. Some people were not careful when walking in Cuc Phuong national Park, so they got lost. If ………………………………………………. 35. He didn’t prepare for the interview, so he didn’t get the job. If ……………………………. ……………………………………. Theo dõi (0) AMBIENT ?>
# Algebra Examples Find All Integers k Such That the Trinomial Can Be Factored Find the values of and in the trinomial with the format . For the trinomial , find the value of . To find all possible values of , first find the factors of . Once a factor is found, add it to its corresponding factor to get a possible value for . The factors for are all numbers between and , which divide evenly. Check numbers between and Calculate the factors of . Add corresponding factors to get all possible values. Since divided by is the whole number , and are factors of . and are factors Add the factors and together to get . Add to the list of possible values. Since divided by is the whole number , and are factors of . and are factors Add the factors and together to get . Add to the list of possible values. Since divided by is the whole number , and are factors of . and are factors Add the factors and together to get . Add to the list of possible values. Since divided by is the whole number , and are factors of . and are factors Add the factors and together to get . Add to the list of possible values. Since divided by is the whole number , and are factors of . and are factors Add the factors and together to get . Add to the list of possible values. Since divided by is the whole number , and are factors of . and are factors Add the factors and together to get . Add to the list of possible values. We're sorry, we were unable to process your request at this time Step-by-step work + explanations •    Step-by-step work •    Detailed explanations •    Access anywhere Access the steps on both the Mathway website and mobile apps $--.--/month$--.--/year (--%)
# Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with 1/n Parameters ### Abstract Recent works have demonstrated reasonable success of representation learning in hypercomplex space. Specifically, “fully-connected layers with Quaternions” (4D hypercomplex numbers), which replace real-valued matrix multiplications in fully-connected layers with Hamilton products of Quaternions, both enjoy parameter savings with only 1/4 learnable parameters and achieve comparable performance in various applications. However, one key caveat is that hypercomplex space only exists at very few predefined dimensions (4D, 8D, and 16D). This restricts the flexibility of models that leverage hypercomplex multiplications. To this end, we propose parameterizing hypercomplex multiplications, allowing models to learn multiplication rules from data regardless of whether such rules are predefined. As a result, our method not only subsumes the Hamilton product, but also learns to operate on any arbitrary nD hypercomplex space, providing more architectural flexibility using arbitrarily $1/n$ learnable parameters compared with the fully-connected layer counterpart. Experiments of applications to the LSTM and Transformer models on natural language inference, machine translation, text style transfer, and subject verb agreement demonstrate architectural flexibility and effectiveness of the proposed approach. Type Publication In ICLR Our paper received the Outstanding Paper Award (8 out of 860 accepted papers).
Paper Info Reviews Meta-review Author Feedback Post-Rebuttal Meta-reviews # Authors SeokHwan Oh, Myeong-Gee Kim, Youngmin Kim, Hyuksool Kwon, Hyeon-Min Bae # Abstract In this paper, we present a scalable lesion-quantifying neural network based on b-mode-to-quantitative neural style transfer. Quantitative tissue characteristics have great potential in diagnostic ultrasound since pathological changes cause variations in biomechanical properties. The proposed system provides four clinically critical quantitative tissue images such as sound speed, attenuation coefficient, effective scatterer diameter, and effective scatterer concentration simultaneously by applying quantitative style information to structurally accurate b-mode images. The proposed system was evaluated through numerical simulation and phantom and ex-vivo measurements. The numerical simulation shows that the proposed framework outperforms the baseline model as well as existing state-of-the-art methods while achieving significant parameter reduction per quantitative variables. In phantom and ex-vivo studies, the BQI-Net demonstrates that the proposed system achieves sufficient sensitivity and specificity in identifying and classifying cancerous lesions. SharedIt: https://rdcu.be/cyhU6 # Reviews ### Review #1 • Please describe the contribution of the paper The paper describes a new neural network for quantitative ultrasound (QUS) of lesions. The input is RF ultrasound data and the output is QUS maps of speed-of-sound, attenuation coefficient, effective scatter diameter and effective scatterer concentration. Training is done on k-wave simulations and tests are done on physical ultrasound phantoms and ex vivo bovine muscle with artificial lesions to mimic cancer. Results are compared to “ground truth” which is unclear. • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting. QUS is a very hot topic in ultrasound and of interest to many MICCAI researchers. This is because quantitative ultrasound reduces the dependence on operator expertise which improves access to ultrasound capabilities in multiple clinical applications. This includes cancer detection which of huge importance. The four chosen metrics are of general interest. The proposed network appears to work so it is promising to have a complete neural network approach to QUS generation. • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work. The main weakness is the lack of validation on real tissue in vivo for a specific clinical task. The use of k-wave simulations is a good start but human tissue is well known to be non-linear and unlike simple simulations. The physical ultrasound phantoms are promising but they are not described in enough detail to fully understand them, nor are they likely to match human tissue. Finally, the imitation cysts are also not described clearly and also not likely to match cancerous lesions. This means it is hard to evaluate the success of the proposed method. Furthermore, such QUS methods can be measured with classical techniques which does not appear to have been done. 
The “ground truth” is still unclear to me: is it provided by the manufacturer (it looks like CIRS phantoms in Fig 3)? The main weakness of replacing a classical measure of QUS with a neural network is confidence and repeatability of performance, which have not been addressed in the current paper. • Please rate the clarity and organization of this paper Good • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance The simulations and phantoms are not described in enough detail to be able to replicate the results. • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html The paper would be improved by including results on human tissue in vivo with ground truth provided by independent repeated measurements so that the level of uncertainty of the ground truth can be provided. This is critical for a paper proposing to use a neural network to provide quantitative measurements. Also defend why a plane wave pulse-echo sequence is used since focused beamforming is far more common. It is not clear why the high acquisition rate of plane-wave imaging would be helpful. The Introduction should also make it clear that these QUS measurements can also be done with standard algorithms and there is a body of literature on improvement of classical methods (which have the advantage of explainability over NN approaches). Also, if the focus is on cancer, then a cancer-specific approach to QUS is needed, i.e. describe how the QUS will be used and what accuracy is needed. The last sentence of “The proposed system … shows high potential for clinical purpose, especially in early detection and differential diagnosis of cancer.” is not justified by the results of this paper since no real cancer images were used. probably reject (4) • Please justify your recommendation. What were the major factors that led you to your overall score for this paper? There is no comparison to the easy-to-implement classical QUS measurements, so it is hard for the reader to have confidence in the results from the proposed NN. The impact is also limited by not using any real cancer images. • What is the ranking of this paper in your review stack? 4 • Number of papers in your stack 5 • Reviewer confidence Very confident ### Review #2 • Please describe the contribution of the paper The paper represents a nice application of the style transfer paradigm to give a potentially clinically useful and novel method for increasing specificity and perhaps sensitivity as well. They implement a novel feed-forward neural style transfer framework and a B-mode multi-resolution content encoder (images from RF data), fed into a quantitative image decoder to yield a spatially and contrast accurate image reconstruction. Four relevant tissue characteristic parameters are recovered with substantial improvement over more standard methods using simulated data. Phantom and ex-vivo measurements were also used to verify the accuracy of this method, which is usable with presently available transducers in-clinic. 
• Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting. – the author presents a neural network based on B mode image to quantitative information NN transfer. The quantitative tissue characteristic are speed of sound (SOS), attenuation coefficient (AC) effective scatterer diameter (ESD) and effective scatter concentration (ESC). These measurements have clinical importance in determining whether a lesion is malignant or benign. The image is obtained by developing a B mode-data-to-quantitative-imaging network (BQI-Net) which performs multi-variable quantitative image reconstruction with enhanced specificity and sensitivity and other well-known image metrics. This creates clinically informative quantitative parameter images thus enhancing diagnostic capability. The architecture is multilevel. And consists of 1) B mode contents encoder extracting geometric image information from B-mode ultrasound images generated from RF signals, 2) a style encoder extracting designated quantitative information (SOS etc.) and 3) a decoder synthesizing a quantitative image from the encoders’ output. The B-mode content encoder gives semantic contents of tissue geometry using multiple resolution. First a standard B mode image is formed then successive content features are found by pooling layers after convolutional layers – at decreasing resolution 128 by 128 to 16 by 16 by successively halving. 2) the style encoder uses the conditional instance normalization defined by first shifting by the mean and scaling by the standard deviation of the input RF signal from the Beam former. This is followed by scaling and shifting by suitable factors and finally 3) the quantitative image decoder translates the contents from the B-mode content encoder into the quantitative image using the output of the Style encoder. This is done at all four of the resolutions: 128, 64, 32, 16 square resolution. This B-mode to quantitate image translation is achieved using spatially adaptive demodulation (SPADE) followed by a series of residual convolution blocks, all respecting the appropriate level of resolution. The associated scale and shift factors are learned . The multi-resolution subnetworks generate a detailed image superior to standard up-sampling methods. Appropriate regularizing terms and ADAM are used with known learning rate. Dropout with a probability of retention = 0.5 is used for better generalization. Numerical simulation, phantoms and ex-vivo measurements with a 5 MHz Verasonics linear array, of bovine muscles with insertions imitating cyst, benign and malignant tissue were used. The results of the numerical simulation showed consistent superiority to three other encoder-decoder pairs. One of them also had subnetworks for multi-scale representation. The phantom tests and results were compared with standard imaging methods and showed improvement, The results with the ex-vivo muscle measurements were impressive. The SOS, atten, ESD and ESC for the cyst were close to ground truth. • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work. 
It would have been valuable to know how the ground truth for SOS, AC, ESD, ESC were established for these cases. Also there are literature values available for bovine muscle speed of sound and attenuation – it would have been useful to compare the values obtained from the BQI net with these literature values. Also it would have been useful to see the performance of the other standard ED networks on the phantom and bovine data. • Please rate the clarity and organization of this paper Very Good • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance reproducible results, meet requirements • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html See 3 and 4 above. The paper is well written. The science is good. The idea appears novel. This is a nice application of the style transfer paradigm. strong accept (9) • Please justify your recommendation. What were the major factors that led you to your overall score for this paper? See 3 above. The paper is well written and this appears to be a novel application of the style transfer paradigm. The results of the simulations, phantom images and ex-vivo imaging are all well done. The description of the BQI-Net structure is good. • What is the ranking of this paper in your review stack? 2 • Number of papers in your stack 5 • Reviewer confidence Very confident ### Review #3 • Please describe the contribution of the paper This work presents a novel style-transfer based neural network for multi-variable ultrasound quantitative reconstruction. The proposed framework consists of a style encoder conditioned on the parameter (AC, SoS, ESD, ESC) label, a content encoder for extracting B-mode content and a decoder to estimate quantitative parameter map. The method achieves good results in quantifying lesions on ex-vivo and phantom data. • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting. 1) Using conditional style encoding and decoding to enable multi-variable quantitative imaging in one framework is an interesting and novel idea. The method also utilizes the B-mode geometric content to better localize the lesion location and shape. Compared to other widely used encoder-decoder based architectures, the proposed method archives better performance in estimating lesion shapes, while having less amount of network parameters. 2) The method is evaluated on both phantom and ex-vivo data with insertions imitating lesions and has achieved nice results in quantifying lesions in the demonstrated example. • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work. 1) Presenting quantitative parameter estimation problem as style transfer is confusing. For example, it is hard to understand what is “quantitative style”. 
2) The geometric contents extracted from the B-mode images help to better localize the lesion locations and shapes. However, it is not evaluated nor discussed how the network will perform for the inclusions, which do not have clear geometric shape in B-mode, e.g. stiff inclusion. • Please rate the clarity and organization of this paper Good • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance The network architecture is described in the paper. The authors will also release the training code later, people should be able to train the proposed method on their own dataset. Important training hyper-parameters are defined in the paper. The models and training procedures of the competing methods are however not given. The training data are simulated and simulation parameters are specified. With the provided description, other people could simulate training data with a similar distribution. Since the test phantom/ex-vivo data are private, it is hard to exactly reproduce the evaluation results in the paper. • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html 1) It is hard to understand the term quantitative style, e.g. SoS style, AE style. To my understanding, the conditional “style encoder” extracts task-dependent information (estimation of SoS or AE etc.) and these information are used to supervise parameter map estimation from B-mode contents. The method is actually not aimed to do style transfer (appearance/texture matching using gan loss or style loss). This aspect should be made clear in the paper. 2) It is worth discussing in the paper how the network performs on the inclusions, which do not have clear geometric shape in B-mode, e.g. stiff inclusion, and verify, if this could be a potential limitation of the proposed framework. 3) The network is trained on simulated images. How is the overall generalization ability to unseen real data? 4) In the simulated training set, do the lesions differ in all of four parameters from the background regions? Probably accept (7) • Please justify your recommendation. What were the major factors that led you to your overall score for this paper? This paper presents a novel idea of doing multivariable US quantitative reconstruction using the style transfer techniques. The proposed method is well evaluated on simulated, phantom and ex-vivo data. However, the presentation of the quantitative imaging problem as style transfer is in my opinion misleading and the discussion on the potential method limitation is missing. • What is the ranking of this paper in your review stack? 1 • Number of papers in your stack 2 • Reviewer confidence Confident but not absolutely certain ### Review #4 • Please describe the contribution of the paper The paper demonstrates the novel BQI-Net that endows the B-mode image with one of quantitative ultrasound parameters by condition. BQI-Net is first constructed from HR-Net and then incorporated with CIN and SPADE to normalise and renormalise the quantitative parameters. The experiments indicate BQI-Net was capable of differentiating cancerous lesions and furthermore enabled identification of benign/malignant lesions. 
• Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting. 1) Transferring a B-mode image to multiple quantitative ultrasound images is somewhat an interesting application for the conventional ultrasonography. 2) The proposed BQI-Net framework is relatively novel in terms of the conditional input style and the multi-resolution representations of the content. Normalisation and re-normalisation techniques with condition facilitate the succeed of BQI-Net in this process. 3) The experimental results provide a series of thorough analyses to BQI-Net. Especially, the phantom and the ex-vivo results reflect clinical significance. • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work. 1) It is doubtful that BQI-Net trained on the simulated data is generalisable to the clinical data. Indeed, the simulation models constituted by only a few ellipses may result in limited modes for the trained BQI-Net. 2) Not sure which factors principally cause the performance boost of BQI-Net over the baselines. Indeed, Table 2 implies the total number of network weights in BQI-Net is 144M, more than those in the U-Net and the HR-Net. Moreover, number of training data used in BQI-Net may be 4 times of those for the baselines. 3) Lack of error bars in the evaluation metrics shown in Table 2. Which level does the BQI-Net outperform the others in? 4) The reference images (B-mode and elastography) for the reconstruction of breast phantoms in Fig. 3 look vague and hypointense, which may hinder the assessment to the BQI-Net reconstructions. • Please rate the clarity and organization of this paper Good • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance The checklist has truly reflected the reproducibility of the paper. However, the following items may be still substantial to show in the paper: • A way to access the pre-trained models or the evaluation codes; • A way to the dataset; • A detailed plan and a comprehensive list for training both the baselines and BQI-Net; for example, how many training pairs were the baselines and BQI-Net fed in, respectively? • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html In addition to Point 4 and Point 6, I have some more concerns: 1) Is there a situation where the proposed BQI-Net may fail? 2) The font size in all the figures looks too small to read. 3) Please kindly consider using standard mathematical notations in the main text. For example, in Eq.3, it is vague to understand how to minimise an operator $G$. borderline reject (5) • Please justify your recommendation. What were the major factors that led you to your overall score for this paper? 
The paper presents a novel neural network, and a series of experiments have potential in differentiating the types of cancerous lesions. However, training the network on simulation data may hinder the generalisability. Not sure which factors principally contribute to the performance boost of BQI-Net over the baselines. It is hardly reproducible with no full access to the source codes and datasets. • What is the ranking of this paper in your review stack? 2 • Number of papers in your stack 6 • Reviewer confidence Confident but not absolutely certain # Primary Meta-Review • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal. 1. The method uses B-mode images to guide a AE network conditioned on 4 quantitative outputs, which the submission calls “style transfer”. An encoder extracts a quantitative parameter style from the raw data, which is then applied on B-mode image content. As some reviewers noted, this seems to assume that the contrast and features exist in B-mode image, where the pixel values need to be adjusted for the quantitative information. I suggest the authors to comment on this aspect. 2. Reviewers all ask about how the reported groundtruth values were obtained, both for the phantom but mainly for the ex-vivo samples, and how accurate these are. 3. The ex-vivo experimentation has been described with very little information. 4. For the numerical phantom results, the authors should also report standard deviations among the test results as well as statistical significance of the statements and conclusions made. 5. As asked in the reviews, could the authors also comment on which aspects of the proposed BQI-net are thought to help achieve the reported results? It would be great to substantiate any such hypotheses with ablation experiments. 6. Method input in Fig.2 says “beamformed RF” which probably should be raw pre-beamformed RF as it is then shown to enter a delay-and-sum beamforming process. Overall the paper touches on an important imaging question, with a holistic learning-based approach. I would suggest the authors consider and carefully respond to concerns and questions of the reviewers for the further consideration of this submission. • What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). 5 # Author Feedback <Reviewer #1, #3> Q1. The ground truth of the measurement is unclear. A1. Phantom experiments: The ground truth values of the breast and thyroid phantom are provided by the phantom manufacturer (CIRS Inc.). Ex-vivo experiments: The ground truth ESD and ESC are the actual diameter and concentration of the added scatter. In order to measure the reference AC and SOS of the phantom, two ultrasound systems were configured (probes facing each other) to gather the transmission data through the phantom. The AC and SOS are acquired by measuring attenuated amplitude and arrival time of traversed ultrasound waves (after calibrating the setup with just water). The reference AC and SOS values are measured 5 times for each insertion and the standard deviations are 0.032dB/cm/MHz and 2.75m/s, respectively. <Reviewer #1, #6> Q2. Concerns on the generalization of simulated data. 
Is the neural network applicable to real clinical data. A2. Through t-Stochastic Neighbor Embedding (T-SNE) data analysis, we verified that the trained synthetic data distribution includes that of real measurements gathered from bio-mimic phantoms, and breast cancer patients. To supplement the clinical usefulness of the study, additional experiments were performed in patients with benign and malignant breast cancer, and are introduced in : ** external link removed by PCs <Reviewer #1> Q3. Describe why a plane wave pulse-echo is used rather than focused beamforming. A3. In this study, quantitative features are obtained by analyzing reflected signals of multi-angle ultrasonic plane waves. If the multi-angle transmission is implemented by using conventional focused beamforming, intersectional regions insonified by multi-angle incident waves will be highly limited and the field of view will be reduced. As such, multi-angle plane waves are a proper choice for the chosen ROI [Feigin M et al., 2019]. <Reviewer #6> Q4. What contributes to the accuracy of the BQI-Net is unclear. A4. The accuracy of the BQI-Net is due to the boundary information provided by the B-mode contents. When the B-mode contents encoder and SPADE module are removed from BQI-Net, the ablated network becomes identical to HR-Net in Table 2, which demonstrates 26% reduction in RMSE and lower SSIM. Quantitative assessments are also provided in supplementary C, and verify that utilization of B-mode image enhances precise description of lesion shape. The BQI-Net and baseline models are trained with an identical number of input and label pairs for fair comparison. <Reviewer #5> Q5. How the network performs for the inclusions which do not have a clear geometric shape in B-mode. A5. In BQI-Net, the B-mode image is not the only factor that determines the geometric shape of the quantitative image. Rather the B-mode is used as a supplementary information to enhance the precision of lesion shape. Figure 3.b shows representative results where the B-mode has an unclear boundary. In this case, the B-mode can not provide clear lesion boundary, but the BQI-Net retrieves the shape of the lesion from the raw RF data. However, precision boundary delineation, in this case, is compromised. Last phantom measurement in Supplementary C proves that the BQI-Net demonstrates comparable performance with HR-Net, when the B-mode does not provide lesion geometry. <Reviewer #6> Q6. The authors should also report standard deviation. A6. In the final manuscript, we will gladly add standard deviation in the tables. Q7. The ex-vivo experimentation has been described with very little information. A7. We will add more analysis/discussion of ex-vivo experiments including how ground truth of inclusion is acquired, and reconstruction of bovine muscle quantitative value compared to literature values as reviewer #3 suggested. In the final manuscript, we will correct “beamformed RF” and font size in Fig.2 and other details that were pointed out # Post-rebuttal Meta-Reviews ## Meta-review # 1 (Primary) • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. 
Please make sure that the authors, program chairs, and the public can understand the reason for your decision. Several comments are addressed in the rebuttal. Although there still stays the concern of how generalizable and applicable the introduced methods would be for in-vivo imaging and pathology, I believe the submission has value and can invoke interesting discussions at MICCAI. • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal. Accept • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). 3 ## Meta-review #2 • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision. The authors propose a style-transfer based network to estimate quantitative information from RF data. The network learns the mapping between channel data and the associated quantities from simulations. The trained network is then applied to simulations, phantom and ex-vivo data, being able to recover physical properties with good accuracy. The authors have addressed the main concerns raised by the reviewers satisfactorily, particularly clarifying some unclear aspects including choice of parameters and details about the data used. The rebuttal to one major question (generalization from simulations to real data) is not quite satisfactory though, however the authors promise to include results that they have which prove this. • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal. Accept • What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). 79 ## Meta-review #3 • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision. The rebuttal fails to address the improvement over prior QUS methods or classical QUS methods (Rev 1 comment). All the baseline architectures are either developed for computer vision applications or have not been previously used for QUS generation. Therefore, although important, the baseline comparison does not provide any value to judge the success of the proposed method over prior QUS methods. The authors are also not discussing this concern in their rebuttal. • After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal. Reject • What is the rank of this paper among all your rebuttal papers? 
Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers). 10
Growth mode control of the free carrier density in SrTiO3−δ films @article{Ohtomo2007GrowthMC, title={Growth mode control of the free carrier density in SrTiO3−$\delta$ films}, author={Akira Ohtomo and Harold Y. Hwang}, journal={Journal of Applied Physics}, year={2007}, volume={102}, pages={083704} } • Published 2007 • Physics • Journal of Applied Physics We have studied the growth dynamics and electronic properties of SrTiO3−δ homoepitaxial films by pulsed laser deposition. We find that the two dominant factors determining the growth mode are the kinetics of surface crystallization and of oxidation. When matched, persistent two-dimensional layer-by-layer growth can be obtained for hundreds of unit cells. By tuning these kinetic factors, oxygen vacancies can be frozen in the film, allowing controlled, systematic doping across a metal-insulator… Expand Metal-to-insulator transition in anatase TiO2 thin films induced by growth rate modulation We demonstrate control of the carrier density of single phase anatase TiO2 thin films by nearly two orders of magnitude by modulating the growth kinetics during pulsed laser deposition, under fixedExpand Dramatic mobility enhancements in doped SrTiO3 thin films by defect management • Materials Science, Physics • 2010 We report bulk-quality n-type SrTiO3 (n-SrTiO3) thin films fabricated by pulsed laser deposition, with electron mobility as high as 6600 cm2 V−1 s−1 at 2 K and carrier density as low as 2.0×1018 cm−3Expand Surface and Interface Engineering and Observation of Quantum Transport in ZnO Based Heterojunctions • Chemistry • 2008 We have observed the quantum Hall-effect (QHE) of ZnO/MgxZn1−xO bilayers grown on ScAlMgO4 substrates by using pulsed-laser deposition. Two-dimensional electron gas was spontaneously formed in theExpand Narrow growth window for stoichiometric, layer-by-layer growth of LaAlO 3 thin films using pulsed laser deposition • Materials Science • 2016 Abstract We study the structure and surface morphology of the 100 nm homoepitaxial LaAlO 3 films grown by pulsed laser deposition in a broad range of growth parameters. We show that there is a narrowExpand Two components for one resistivity in LaVO3/SrTiO3 heterostructure. • H. Rotella, +6 authors W. Prellier • Materials Science, Medicine • Journal of physics. Condensed matter : an Institute of Physics journal • 2015 A series of 100 nm LaVO3 thin films have been synthesized on (0 0 1)-oriented SrTiO3 substrates using the pulsed laser deposition technique, and the effects of growth temperature are analyzed.Expand Growth control of oxygen stoichiometry in homoepitaxial SrTiO3 films by pulsed laser epitaxy in high vacuum • Materials Science, Physics • Scientific reports • 2016 This work shows that, through proper control of the plume kinetic energy, stoichiometric crystalline films can be synthesized without generating oxygen defects even in high vacuum, and expands the utility of pulsed laser epitaxy of other materials as well. Expand Growth diagram of La0.7Sr0.3MnO3 thin films using pulsed laser deposition An experimental study was conducted on controlling the growth mode of La0.7Sr0.3MnO3 thin films on SrTiO3 substrates using pulsed laser deposition (PLD) by tuning growth temperature, pressure, andExpand Fractionally δ-doped oxide superlattices for higher carrier mobilities. 
• Materials Science, Physics • Nano letters • 2012 By using fractional δ-doping to control the interface's composition in La(x)Sr(1-x)TiO(3)/SrTiO (3) artificial oxide superlattices, the filling-controlled 2D insulator-metal transition can be realized. Expand Tailoring the Hole Mobility in SnO Films by Modulating the Growth Thermodynamics and Kinetics • Chemistry, Materials Science • 2019 Obtaining semiconducting properties that meet practical standards for p-type transparent oxide semiconductors is challenging due to the balance between the defects that generate hole and electronExpand Direct Nanoscale Analysis of Temperature-Resolved Growth Behaviors of Ultrathin Perovskites on SrTiO3. • Materials Science, Medicine • ACS nano • 2016 A comparison of temperature-dependent surface structures of SrRuO3 and SrTiO3 films suggests that the peculiar growth mode switching from a "layer-by-layer" to "step-flow" type in a SrRu O3 films arises from a reduction of surface migration barrier, caused by the change in the chemical configuration of the interface between the topmost and underlying layers. Expand References SHOWING 1-10 OF 25 REFERENCES Growth mode mapping of SrTiO3 epitaxy • Physics • 2000 We have mapped the growth mode of homoepitaxial SrTiO3 thin films as a function of deposition rate and substrate temperature during pulsed laser deposition. The transition from layer by layer growthExpand GROWTH-RELATED STRESS AND SURFACE MORPHOLOGY IN HOMOEPITAXIAL SRTIO3 FILMS • Physics • 1996 The lattice parameter and surface morphology of homoepitaxial SrTiO3 films were found to depend on the ambient oxygen pressure during growth. The homoepitaxial layers were grown by pulsed laserExpand Step-flow growth of SrTiO3 thin films with a dielectric constant exceeding 104 The use of SrTiO3 films in cryogenic high-frequency applications has been limited by the low dielectric constant er of thin films (≈103) when compared to the bulk value of over 104. We show that theExpand Oxidation kinetics in SrTiO3 homoepitaxy on SrTiO3(001) Using an oblique-incidence optical reflectivity difference technique, we investigated kinetic processes in SrTiO3 homoepitaxy on SrTiO3(001) under pulsed-laser deposition conditions. Depending uponExpand OBSERVATION OF THE FIRST-ORDER RAMAN SCATTERING IN SRTIO3 THIN FILMS We have studied lattice dynamic properties of SrTiO3 thin films from 5 to 300 K using metaloxide bilayer Raman scattering. First-order zone-center optical phonons, symmetry forbidden in singleExpand Surface depletion in doped SrTiO3 thin films • Physics • 2004 Strong effects of surface depletion have been observed in metallic La-doped SrTiO3 thin films grown on SrTiO3 substrates by pulsed-laser deposition. The depletion layer grows with decreasingExpand Field-effect transistor on SrTiO3 with sputtered Al2O3 gate insulator • Physics • 2003 A field-effect transistor has been constructed that employs a perovskite-type SrTiO3 single crystal as the semiconducting channel. This device functions as an n-type accumulation-mode device. TheExpand Atomic-scale imaging of nanoengineered oxygen vacancy profiles in SrTiO3 • Chemistry, Medicine • Nature • 2004 The successful fabrication, using a pulsed laser deposition technique, of SrTiO3 superlattice films with oxygen doping profiles that exhibit subnanometre abruptness are reported, which open a pathway to the microscopic study of individual vacancies and their clustering, not only in oxides, but in crystalline materials more generally. 
Expand Artificial charge-modulationin atomic-scale perovskite titanate superlattices • Materials Science, Medicine • Nature • 2002 It is found that a minimum thickness of five LaTiO3 layers is required for the centre titanium site to recover bulk-like electronic properties, and this represents a framework within which the short-length-scale electronic response can be probed and incorporated in thin-film oxide heterostructures. Expand Nonstoichiometry in SrTiO3 • Materials Science • 1981 The defect chemistry of polycrystalline has been studied by means of the equilibrium electrical conductivity as a function of temperature, oxygen activity, Sr/Ti ratio, and impurity additions.Expand
# geometric progression

If the first and 8th terms of a G.P. are x^-4 and x^52, and the second term is x^t, then t:

1. Seven steps and 56 powers of x ... so 8 powers per step:
t = -4 + 8 = 4

2. Given: a = x^-4 and ar^7 = x^52.
Use substitution: (x^-4)r^7 = x^52, so r^7 = x^56 and r = x^8.
2nd term = ar = (x^-4)(x^8) = x^4.
Matching it with x^t ---> t = 4 (see the quick check below the Similar Questions list)

## Similar Questions

1. ### Math
The sixth term of an A.P. is 5 times the first term and the eleventh term exceeds twice the fifth term by 3. Find the 8th term?

2. ### geometric progression
The 3rd term of a g.p. is 10 and the 6th term is 80; find the common ratio. No 2: if the 2nd term of a g.p. is 4 and the 5th term is 1/16, the 7th term is _

3. ### maths
If the 8th term of an AP is 37 and the 5th term is 15 more than the 12th term, find the AP. Hence find the sum of the first 15 terms of the AP.

4. ### Math
The first term of a linear sequence is 3 and the 8th term is 31. Find the common difference and hence find the 20th term.

2. ### maths
The 8th term of an AP is 5 times the 3rd, while the 7th term is 9 greater than the fourth. Write the first five terms of the AP.

3. ### Math
The fifth term of an arithmetic progression is three times the second term, and the third term is 10. a) What is the first term, b) the common difference and c) the 15th term?

4. ### Geometry
The 8th term of a GP is 640. If the first term is 5, find the common ratio and the 10th term.

1. ### FURTHER MATHS
The 8th term of a linear sequence is 18 and the 12th term is 26. Find the first term, common difference and 20th term.

2. ### Maths (arithmetic progression)
If the 4th term of an A.P. is twice the 8th term, prove that the 10th term is twice the 11th term.

3. ### math
The 3rd term of a geometric progression is nine times the first term. If the second term is one twenty-fourth of the 5th term, find the 4th term. Solution: ar^2 = 9a, r = sqrt of 9, r = 3. Please help me on how to get the first term (a).

4. ### Mathematics
If the 8th term of an AP is 36 and the 16th term is 68, find: a) the first term b) the common difference c) the 20th term
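The following SymPy snippet (an editorial addition, not part of the original thread) checks the corrected answer to the G.P. question above; it assumes a positive symbol x so that the seventh root simplifies cleanly.

```python
from sympy import Rational, simplify, symbols

x = symbols('x', positive=True)
a = x**(-4)                        # first term
r = (x**52 / a)**Rational(1, 7)    # (eighth term / first term) = r**7
print(simplify(r))                 # x**8
print(simplify(a * r))             # x**4, i.e. t = 4
```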
# SUSY QM 1: Superalgebra A couple months ago, I stumbled across an amusing bit of academic woo: “Quantum Mind and Social Science.” The misrepresentations, false dichotomies and non sequiturs of that piece prompted me to wonder what a good litmus test for knowing quantum mechanics might look like. Joshua offered a simple criterion: be able to pick the Schrödinger Equation out of a line-up. At a slightly higher level, I suggested being able to describe in the Heisenberg picture the time evolution of a harmonic oscillator coherent state, and explaining why states of the hydrogen atom with the same n but different angular momentum number l are degenerate. You can’t discuss the relationship between classical and quantum physics without bringing up coherent states eventually, and a good grounding in the basics should include the Schrödinger and Heisenberg pictures. (That’s why I wrote problem 5 in this homework assignment.) The excited states of the hydrogen atom are our prototype for understanding how the periodic table works, and it’s often the first place one runs into the mathematics of angular momentum. Unfortunately, too many standard treatments of introductory QM say that hydrogen has “accidental degeneracies”: these states have the same energies as those states for no spectacularly interesting reason. But we are trained to associate degeneracies with symmetries — when two sets of eigenstates have the same eigenvalues, we expect some symmetry to be at work. So, is there a symmetry in the hydrogen atom above and beyond the familiar rotational kind, a symmetry which They haven’t been telling us about? I’d like to explore this topic over a few posts. First, I’ll build up some very general machinery for solving problems, and then I’ll apply those techniques to the hydrogen atom; by that point, we should have a fair amount of knowledge with which we can move in any one of several interesting directions. To begin, let’s familiarize ourselves with the behavior of a superalgebra. INTRODUCTION In fundamental quantum mechanics, we learn that an algebra of operators is defined by commutation relations among those operators. For example, the canonical operators of position and momentum have the commutator $[x,p] = i\hbar$. A more intricate case is the algebra of angular momentum operators, which we encountered when exploring the rotational symmetries of 3D space. To generalize this concept, we define an anticommutator, which relates operators in the same way as an ordinary commutator, but with the opposite sign: $\{A,B\} \equiv AB + BA.$ If operators are related by anticommutators as well as commutators, we say that they are part of a superalgebra. Let’s say we have a quantum system described by a Hamiltonian $\mathcal{H}$ and a set of $N$ self-adjoint operators $Q_i$, each of which commutes with the Hamiltonian. We shall call this system supersymmetric if the following anticommutator is valid for all $i,j=1,2,\ldots,N$: $\{Q_i,Q_j\} = \mathcal{H}\delta_{ij}.$ If this is the case, then we call the operators $Q_i$ the system’s supercharges. $\mathcal{H}$ will be termed the SUSY Hamiltonian, SUSY being a convenient abbreviation for whichever variation of “supersymmetry” is grammatically appropriate. A SUSY algebra is characterized by its number of supercharges, which we typically denote $N$. Because the $N = 2$ case exemplifies many properties of general SUSY theories, it is worthwhile to work it out in some detail. We require two supercharges, $Q_1 = Q_1^\dag$ and $Q_2 = Q_2^\dag$. 
The SUSY algebra we defined a moment ago implies the following relations: $Q_1 Q_2 = -Q_2 Q_1,\ \mathcal{H} = 2Q_1^2 = 2Q_2^2 = Q_1^2 + Q_2^2.$ It is sometimes more convenient to work with a “complex” supercharge that is not self-adjoint. (The convention we choose depends upon the given information we have to work with!) If we make linear combinations of our supercharges, $Q = \frac{1}{\sqrt{2}}(Q_1 + iQ_2),\ Q^\dag = \frac{1}{\sqrt{2}}(Q_1 - iQ_2),$ then the SUSY algebra implies $\{Q,Q^\dag\} = \mathcal{H}$. To make this a little more concrete, we can realize a specific incarnation of the superalgebra: let $H_1$ be some Hamiltonian of interest, and suppose that we can factor $H_1$ into the product of an operator and its adjoint: $H_1 = A^\dag A.$ Note that this is almost the form of the harmonic oscillator Hamiltonian, except for an energy shift: $H_{\rm SHO} = \hbar\omega\left(a^\dag a + \frac{1}{2}\right).$ So, this is not an unfamiliar form for a Hamiltonian. Swapping the order of the factors gives another operator, which you can verify is also Hermitian: $H_2 = AA^\dag.$ With $A$ in hand, define the two operators $Q = \left(\begin{array}{cc} 0 & 0 \\ A & 0 \\ \end{array}\right)$ and $Q^\dag = \left(\begin{array}{cc} 0 & A^\dag \\ 0 & 0 \\ \end{array}\right).$ Matrix arithmetic verifies that $\{Q,Q^\dag\} = \left(\begin{array}{cc} A^\dag A & 0 \\ 0 & AA^\dag \\ \end{array}\right),$ so we can say that the anticommutator of our two charges gives a Hamiltonian $\mathcal{H}$ which is block diagonal, $\mathcal{H} = \left(\begin{array}{cc} H_1 & 0 \\ 0 & H_2 \\ \end{array} \right).$ $H_1$ and $H_2$ can be considered two Hamiltonians acting on subspaces of the original Hilbert space associated with $\mathcal{H}$. PARTNER POTENTIALS What exactly is so special about operators of the forms $A^\dag A$ and $AA^\dag$? Given a Hamiltonian for some system, $H_1$, if it can be factored into the product of two operators $A^\dag A$, then we can construct another Hamiltonian $H_2 = AA^\dag$ which has almost exactly the same energy eigenvalue spectrum. These “isospectral” Hamiltonians may not describe the same physics, and their respective potentials $V_1(x)$ and $V_2(x)$ may look radically different. As usual, a degeneracy in the energy levels corresponds to a symmetry; in this case, the symmetry is the SUSY between our two Hamiltonians. First, let’s take a look at the eigenstates of Hamiltonian number 1. These states satisfy the relationship $H_1 \ket{\psi_n^{(1)}} = A^\dag A \ket{\psi_n^{(1)}} = E_n^{(1)} \ket{\psi_n^{(1)}}.$ Now, a surprising thing happens: the operator $A$ maps the eigenstates of Hamiltonian 1 into eigenstates of Hamiltonian 2. Look: $H_2 A\ket{\psi_n^{(1)}} = AA^\dag A \ket{\psi_n^{(1)}},$ but by the equation just above, this means that $H_2 A\ket{\psi_n^{(1)}} = E_n^{(1)} A\ket{\psi_n^{(1)}}.$ The same logic works in the opposite direction, connecting eigenstates of $H_2$ with those of $H_1$. The eigenstates behind door number 2 satisfy $H_2 \ket{\psi_n^{(2)}} = AA^\dag \ket{\psi_n^{(2)}} = E_n^{(2)}\ket{\psi_n^{(2)}},$ so by acting with the operator $A^\dag$, $H_1 A^\dag\ket{\psi_n^{(2)}} = A^\dag AA^\dag \ket{\psi_n^{(2)}} = E_n^{(2)}A^\dag \ket{\psi_n^{(2)}}.$ We have shown that $H_1$ and $H_2$ are isospectral. For every eigenstate of one, there lurks an eigenstate of the other with the same energy. 
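As a quick sanity check (not part of the original post), take $A = \sqrt{\hbar\omega}\,a$, where $a$ is the familiar lowering operator of the harmonic oscillator. Then $H_1 = A^\dag A = \hbar\omega\, a^\dag a$ has energies $E_n^{(1)} = n\hbar\omega$ with $n = 0, 1, 2, \ldots$, while $H_2 = A A^\dag = \hbar\omega\left(a^\dag a + 1\right)$ has energies $E_n^{(2)} = (n+1)\hbar\omega$. The two spectra match level for level, except that the zero-energy ground state of $H_1$, the state annihilated by $a$, has no partner in the spectrum of $H_2$; the next paragraph spells out this exception in general.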
One exception is important: if $A \ket{\psi_n^{(1)}} = 0,$ that is, if $H_1$ has a zero-energy ground state, the proof does not work, and there is no need for $H_2$ to have a zero-energy ground state. In fact, as we’ll see momentarily, only one of $H_1$ and $H_2$ may have a zero-energy ground state; for consistency, we usually arrange matters so that $H_1$ has the extra eigenstate. SUPERPOTENTIALS Most of the time, we find ourselves dealing with Hamiltonians of the form $H = \frac{p^2}{2m} + V(x),$ which, knowing that $p = -i\hbar \partial_x$, we can also write as $H = -\frac{\hbar^2}{2m} \partial_x^2 + V(x).$ If we want to factor this $H$ into an operator and its adjoint, we should probably start with an operator which is linear in the derivative of $x$, thus: $A = \frac{\hbar}{\sqrt{2m}} \partial_x + W(x).$ Here, $W(x)$ is some real function of $x$ which we shall call the superpotential. Taking the adjoint of $A$ flips the sign on the derivative; you should deduce why by observing that the momentum $p$ is observable and therefore self-adjoint. $A^\dag = -\frac{\hbar}{\sqrt{2m}} \partial_x + W(x).$ We can connect the superpotential to $H_1$’s ordinary potential, $V_1(x) = W^2(x) – \frac{\hbar}{\sqrt{2m}} \partial_x W(x).$ This relationship is known as the Riccati Equation. If we reverse the order of our operators, it turns out that $H_2$ is a Hamiltonian with a new potential $V_2(x)$, given by $V_2(x) = W^2(x) + \frac{\hbar}{\sqrt{2m}} \partial_x W(x).$ We recognize this as the Riccati Equation with a change of sign. $V_1(x)$ and $V_2(x)$ are known as partner potentials, related through the superpotential $W(x)$. With one more step, we can relate the superpotential to the ground state wavefunction. Note that the ground state of $H_1$ is annihilated by $A$, satisfying the relation $A\ket{\psi_0^{(1)}} = 0.$ Looking back at the form of $A$, we see that this is a first-order differential equation, and we can write its solution as an exponential. $\psi_0^{(1)}(x) \propto \exp\left[-\frac{\sqrt{2m}}{\hbar} \int_0^x W(y) dy\right].$ Note that the zero-energy ground state of $H_2$ would be annihilated by $A^\dag$ and would therefore be proportional to $\exp\left[\frac{\sqrt{2m}}{\hbar} \int_0^x W(y) dy\right].$ Only one of these two expressions can give a normalizable state: if one behaves nicely, the other will blow up. That’s why only one of the two partner Hamiltonians can have a ground state of zero energy. Often, these equations are shown in “natural units” where $\hbar = 2m = 1$. This can always be done by changing the units of $x$, and it makes successive steps in the calculations much cleaner. I think it nice to see the equations with all the original units in place at least once; in following sections, however, when units are not illuminating I will set unnecessary constants to unity. SUSY QM SERIES ## 3 thoughts on “SUSY QM 1: Superalgebra” 1. Jeremy Henty says: It’s not true in general that only one of H_1, H_2 can have a zero eigenvalue, consider A = (0 0 ; 1 0) , A^dag = (0 1 ; 0 0) , H_1 = (1 0 ; 0 0) , H_2 = (0 0 ; 0 1). (These are 2×2 matrices with the ‘;’ separating the rows.) 2. Jeremy Henty says: Whoops, OK, I see that H_1 and H_2 can’t both have zero eigenvalues when A is given by the superpotential Ansatz you write down later. Didn’t see that coming.
Lemma 94.14.2. Up to a replacement as in Stacks, Remark 8.4.9 the functor $p : \mathcal{G}\textit{-Torsors} \longrightarrow (\mathit{Sch}/S)_{fppf}$ defines a stack in groupoids over $(\mathit{Sch}/S)_{fppf}$. Proof. The most difficult part of the proof is to show that we have descent for objects. Let $\{ U_ i \to U\} _{i \in I}$ be a covering of $(\mathit{Sch}/S)_{fppf}$. Suppose that for each $i$ we are given a $\mathcal{G}|_{U_ i}$-torsor $\mathcal{F}_ i$, and for each $i, j \in I$ an isomorphism $\varphi _{ij} : \mathcal{F}_ i|_{U_ i \times _ U U_ j} \to \mathcal{F}_ j|_{U_ i \times _ U U_ j}$ of $\mathcal{G}|_{U_ i \times _ U U_ j}$-torsors satisfying a suitable cocycle condition on $U_ i \times _ U U_ j \times _ U U_ k$. Then by Sites, Section 7.26 we obtain a sheaf $\mathcal{F}$ on $(\mathit{Sch}/U)_{fppf}$ whose restriction to each $U_ i$ recovers $\mathcal{F}_ i$ as well as recovering the descent data. By the equivalence of categories in Sites, Lemma 7.26.5 the action maps $\mathcal{G}|_{U_ i} \times \mathcal{F}_ i \to \mathcal{F}_ i$ glue to give a map $a : \mathcal{G}|_ U \times \mathcal{F} \to \mathcal{F}$. Now we have to show that $a$ is an action and that $\mathcal{F}$ becomes a $\mathcal{G}|_ U$-torsor. Both properties may be checked locally, and hence follow from the corresponding properties of the actions $\mathcal{G}|_{U_ i} \times \mathcal{F}_ i \to \mathcal{F}_ i$. This proves that descent for objects holds in $\mathcal{G}\textit{-Torsors}$. Some details omitted. $\square$
# Primes in short intervals with a preassigned Frobenius Edited after mistake in the first version. It is known since Selberg that under the Riemann Hypothesis, given an $\epsilon>0$, there is a prime between $x$ and $x+O(x^\epsilon)$ for all $x$ in a set of asymptotic density one (Selberg's result is actually more precise: one can take $x+O(f(x) \log^2 x)$ where $f(x)$ is any function that tends to $+\infty$ with $x$). Here a set of density one is a subset $S \subset \mathbb R_+$ such that $\mu(S \cap [0,y])/y \longrightarrow 1$ when $y \longrightarrow \infty$. I have heard that this result has been generalized for primes in arithmetic progressions under GRH. I would like to know whether, more generally, this result has been generalized to primes in a Frobenian set. More precisely: given a fixed Galois number field $L/\mathbb Q$, $G=Gal(L/\mathbb Q)$, and $C$ a conjugacy class in $G$, is it true that for every $\epsilon>0$, there is a prime $p$ between $x$ and $x+O(x^\epsilon)$ such that $Frob_{p,Gal(L/\mathbb Q)} \in C$ for every $x$ in a set of asymptotic density one (under GRH, and Artin's conjecture if you wish)? I am looking for any reference discussing that question... If there is none available (as it seems at first glance from my search on MathSciNet), I would also be interested in the clearest references you know treating the case of arithmetic progressions. In any case, thanks... - You're both right. I will reformulate my question... – Joël Jun 7 '13 at 20:54 - I think the first result in AP was: K. Prachar, Über den Primzahlsatz von A. Selberg, Acta. Arith., 28 (1975), pp. 277–297. eudml.org/doc/205389 – v08ltu Jun 8 '13 at 3:07 One reference where a Hoheisel type result (right number of such primes in every interval $(x,x+x^{1-\delta})$ for some $\delta>0$) is proved unconditionally is the paper by Balog and Ono "The Chebotarev density theorem and some questions of Serre" (see http://www.mathcs.emory.edu/~ono/publications-cv/pdfs/062.pdf). Selberg's method is very robust (all it uses is that the number of zeros in intervals of length $1$ is bounded by some constant times log(conductor)), and it would be a simple matter to take the explicit formula and bound the variance of primes in short intervals on GRH.
# Commutator in the Darwin Term 1. Jun 25, 2017 ### Sigma057 1. The problem statement, all variables and given/known data I am trying to fill in the steps between equations in the derivation of the coordinate representation of the Darwin term of the Dirac Hamiltonian in the Hydrogen Fine Structure section in Shankar's Principles of Quantum Mechanics. $$H_D=\frac{1}{8 m^2 c^2}\left(-2\overset{\rightharpoonup }{P}\cdot \left[\overset{\rightharpoonup }{P},V\right]+\left[\overset{\rightharpoonup }{P}\cdot \overset{\rightharpoonup }{P},V\right]\right) =-\frac{1}{8 m^2 c^2}\overset{\rightharpoonup }{P}\cdot \left[\overset{\rightharpoonup }{P},V\right]$$ 2. Relevant equations What Shankar calls the "chain rule for commutators of product" I think he means $$[AB,C]=A[B,C]+[A,C]B$$. On the same page he mentions the identity $$\left[p_x,f(x)\right]=-i\hbar \frac{df}{dx}$$ 3. The attempt at a solution One way this equality could be satisfied is if $$\left[\overset{\rightharpoonup }{P}\cdot \overset{\rightharpoonup }{P},V\right]=\overset{\rightharpoonup }{P}\cdot \left[\overset{\rightharpoonup }{P},V\right]$$ In component form this means $$\left[P_x^2+P_y^2+P_z^2,V\right]=\left(\overset{\wedge }{x}P_x+\overset{\wedge }{y}P_y+\overset{\wedge }{z}P_z\right)\cdot \left[\overset{\wedge }{x}P_x+\overset{\wedge }{y}P_y+\overset{\wedge }{z}P_z,V\right]$$ $$=\left(\overset{\wedge }{x}P_x+\overset{\wedge }{y}P_y+\overset{\wedge }{z}P_z\right)\cdot \left(\overset{\wedge }{x}\left[P_x,V\right]+\overset{\wedge }{y}\left[P_y,V\right]+\overset{\wedge }{z}\left[P_z,V\right]\right)$$ Or $$\left[P_x^2,V\right]+\left[P_y^2,V\right]+\left[P_z^2,V\right]=P_x\left[P_x,V\right]+P_y\left[P_y,V\right]+P_z\left[P_z,V\right]$$ One way this equality could be satisfied is if $$\left[P_i^2,V\right]=P_i\left[P_i,V\right]$$ WLOG let's compute $\left[P_x^2,V\right]$ in the coordinate basis acting on a test function $\phi(x)$ $$\left[p_x^2,V\right]\phi =\left(p_x\left[p_x,V\right]+\left[p_x,V\right]p_x\right)\phi =p_x\left[p_x,V\right]\phi +\left[p_x,V\right]p_x\phi$$ $$=-i\hbar \frac{d}{dx}\left(-i\hbar \frac{dV}{dx}\right)\phi+\left(-i\hbar \frac{dV}{dx}\right)\left(-i\hbar \frac{d}{dx}\right)\phi = -\hbar^2\frac{d}{dx}\left(\frac{dV}{dx}\phi\right)-\hbar^2\frac{dV}{dx}\frac{d\phi}{dx}$$ $$= -\hbar^2\left(\frac{d^2V}{dx^2}\phi+\frac{dV}{dx}\frac{d\phi}{dx}\right) -\hbar^2\frac{dV}{dx}\frac{d\phi}{dx} =-\hbar^2\frac{d^2V}{dx^2}\phi-2\hbar^2\frac{dV}{dx}\frac{d\phi}{dx}$$ In comparison $$p_x\left[p_x,V\right]\phi = \left(-i\hbar\frac{d}{dx}\right)\left(-i\hbar\frac{dV}{dx}\right)\phi = -\hbar^2\frac{d}{dx}\left(\frac{dV}{dx}\phi\right) =-\hbar^2\left(\frac{d^2V}{dx^2}\phi+\frac{dV}{dx}\frac{d\phi}{dx}\right) =-\hbar^2\frac{d^2V}{dx^2}\phi-\hbar^2\frac{dV}{dx}\frac{d\phi}{dx}$$ I can't figure out where I've gone wrong. 2. Jun 25, 2017 ### TSny The notation is a bit confusing in the text. On the right side of the above equation, the text actually writes $$-\frac{1}{8 m^2 c^2}\left[\overset{\rightharpoonup }{P}\cdot \left[\overset{\rightharpoonup }{P},V\right] \right]$$ I think this is to be interpreted as a double commutator. 
$$\left[\overset{\rightharpoonup }{P}\cdot \left[\overset{\rightharpoonup }{P},V\right] \right] = \overset{\rightharpoonup }{P}\cdot \left[\overset{\rightharpoonup }{P},V\right] - \left[\overset{\rightharpoonup }{P},V\right] \cdot \overset{\rightharpoonup }{P}$$ They should probably have included another comma in the double commutator $$\left[\overset{\rightharpoonup }{P}\cdot , \left[\overset{\rightharpoonup }{P},V\right] \right]$$ 3. Jun 28, 2017 ### Sigma057 Thank you so much for your reply! Without it I would have probably skipped to the next equality and missed a valuable learning opportunity. I'll post my solution to the problem to assist future readers. With your clarification I'll rewrite the problem statement as $$H_D=\frac{1}{8 m^2 c^2}\left(-2\overset{\rightharpoonup }{P}\cdot \left[\overset{\rightharpoonup }{P},V\right]+\left[\overset{\rightharpoonup }{P}\cdot \overset{\rightharpoonup }{P},V\right]\right) =-\frac{1}{8 m^2 c^2}\left[\overset{\rightharpoonup }{P},\left[\overset{\rightharpoonup }{P},V\right]\right]$$ Using the convention $$\overset{\rightharpoonup }{A} \overset{\rightharpoonup }{B}\equiv \overset{\rightharpoonup }{A}\cdot \overset{\rightharpoonup }{B}$$ I will also make use of the chain rule for commutators of vector operator products, which follows easily from the chain rule for scalar operator products as a consequence of the definition of the dot product and the linearity of the commutator. $$\left[\overset{\rightharpoonup }{A}\cdot \overset{\rightharpoonup }{B},C\right]=\overset{\rightharpoonup }{A}\cdot \left[\overset{\rightharpoonup }{B},C\right]+\left[\overset{\rightharpoonup }{A},C\right]\cdot \overset{\rightharpoonup }{B}$$ I can now finally fill in the steps between these equations. $$H_D=\frac{1}{8 m^2 c^2}\left(-2\overset{\rightharpoonup }{P}\cdot \left[\overset{\rightharpoonup }{P},V\right]+\left(\overset{\rightharpoonup }{P}\cdot \left[\overset{\rightharpoonup }{P},V\right]+\left[\overset{\rightharpoonup }{P},V\right]\cdot \overset{\rightharpoonup }{P}\right)\right) =\frac{1}{8 m^2 c^2}\left(-\overset{\rightharpoonup }{P}\cdot \left[\overset{\rightharpoonup }{P},V\right]+\left[\overset{\rightharpoonup }{P},V\right]\cdot \overset{\rightharpoonup }{P}\right)=-\frac{1}{8 m^2 c^2}\left(\overset{\rightharpoonup }{P}\cdot \left[\overset{\rightharpoonup }{P},V\right]-\left[\overset{\rightharpoonup }{P},V\right]\cdot \overset{\rightharpoonup }{P}\right)=-\frac{1}{8 m^2 c^2}\left[\overset{\rightharpoonup }{P},\left[\overset{\rightharpoonup }{P},V\right]\right]$$
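As a supplement to the thread (an editorial addition, not posted by either participant), here is a short SymPy check of the one-dimensional version of the identity: acting on a test function $\phi(x)$, the double commutator $[p_x,[p_x,V]]$ reduces to $-\hbar^2 \frac{d^2V}{dx^2}\phi$, which is exactly the $\nabla^2 V$ structure expected in the Darwin term.

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', real=True, positive=True)
V = sp.Function('V')(x)
phi = sp.Function('phi')(x)

def p(f):
    """Momentum operator in the coordinate representation."""
    return -sp.I * hbar * sp.diff(f, x)

def comm_p_V(f):
    """The commutator [p, V] acting on a test function f."""
    return p(V * f) - V * p(f)

# [p, [p, V]] phi = p([p, V] phi) - [p, V] (p phi)
double_comm = sp.expand(p(comm_p_V(phi)) - comm_p_V(p(phi)))
print(sp.simplify(double_comm + hbar**2 * phi * sp.diff(V, x, 2)))  # prints 0
```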
# A roughness-dependent model ## Two-equation eddy viscosity model $\nu _t = C_{\mu} {{k^2 } \over \varepsilon }$ (1) where: $C_{\mu} = 0.09$ ## One-equation eddy viscosity model $\nu _t = k^{{1 \over 2}} l$ (2) ## Algebraic eddy viscosity model $\nu _t(y) = {C_{\mu}}^{{1 \over 4}} l_m(y) k^{{1 \over 2}}(y)$ (3) $l_m$ is the mixing length. ### Algebraic model for the turbulent kinetic Energy $k^{{1 \over 2}}(y) = {1 \over {C_{\mu}}^{{1 \over 4}}} u_\tau e^{\frac{-y}{A}}$ (4) $u_\tau$ is the shear velocity and $A$ a model parameter. ### Algebraic model for the mixing length, based on (4) [Absi (2006)] $l_m(y) = \kappa \left( A - \left(A - y_0\right) e^{\frac{-(y-y_0)}{A}} \right)$ (5) $\kappa = 0.4$, $y_0$ is the hydrodynamic roughness ### the algebraic eddy viscosity model is therefore $\nu _t(y) = \kappa \left( A - \left(A - y_0\right) e^{\frac{-(y-y_0)}{A}} \right) u_\tau e^{\frac{-y}{A}}$ (6) for a smooth wall ($y_0 = 0$): $\nu _t(y) = \kappa A \left( 1 - e^{\frac{-y}{A}} \right) u_\tau e^{\frac{-y}{A}}$ (7)
# Taylor and Marr's Proof - R. A. Johnson's Version ### Problem If the trisectors of the angles of a triangle are drawn so that those adjacent to each side intersect, the intersections are vertices of an equilateral triangle. ### Hint Morley's theorem or, as it is often referred to, Morley's Miracle has a long history and multiple proofs, many - if not all of which - have been documented at this site. Each of the proofs (and one that follows is no exception) sheds an extra light on Morley's wonderful discovery. Anyone who has an ambition to add to the collection, should exercise his or her ingenuity. Good luck and enjoy. ### Solution Let the trisectors adjacent to $A_{2}A_3$ be $A_2P_1$ and $A_3P_1,$ etc.; it is to be proved that $P_1P_2P_3$ is an equilateral triangle. Extend $A_2P_1$ and $A_1P_2$ to meet at $L.$ Draw the incircle of $\Delta A_1A_2L,$ whose center is obviously $P_3.$ Let $Q$ and $R$ denote its points of contact on $LA_2$ and $LA_1,$ respectively, and let $P_3R$ meet $A_1A_3$ at $K$, and $P_3Q$ meet $A_2A_3$ at $N$; let the tangent from $K$ to the circle touch it at $P,$ and meet $A_2L$ at $F.$ So, by the construction, $P_3R=RK,$ $\displaystyle P_3P=\frac{1}{2}P_3K,$ $\angle PP_3K=60^{\circ},$ $\angle P_3KP=30^{\circ},$ $\displaystyle\angle QP_3R=180^{\circ}-\angle QLR=120^{\circ}-\frac{2}{3}\angle A_3.$ It follows that \begin{align} \angle FNQ &=\angle FP_3Q=\frac{1}{2}QP_3P \\ &=\frac{1}{2}(\angle QP_3R-60^{\circ})=30^{\circ}-\frac{1}{3}\angle A_3. \end{align} Also, $\angle P_3NK=\angle P_3KN=\frac{1}{2}\angle QLR=30^{\circ}+\frac{1}{3}\angle A_3,$ implying $\angle FNK=\frac{2}{3}\angle A_3$ and $\angle FKN=\frac{1}{3}\angle A_3,$ so that $F,$ $K,$ $A_3,$ and $N$ are concyclic. Therefore, $F$ coincides with $P_1,$ and the tangent from $K$ passes through $P_1.$ By the same token, the tangent from $N$ to the same circle passes through $P_2.$ Also, by the construction, quadrilateral $P_3QLR$ (including the arc $QR$) and $\Delta P_3KN$ are symmetric with respect to the bisector $LP_3$ of $\angle A_1LA_2.$ By symmetry then, arcs $QP_1$ and $P_2R$ are equal, implying $\angle P_1P_3P_2=\angle PP_3R$ which, as has been shown, equals $60^{\circ}.$ It follows that $\angle P_1P_3P_2=60^{\circ}.$ The same obviously holds for the other two angles in $\Delta P_1P_2P_3.$ ### Acknowledgment The proof first appeared in the oft-quoted Taylor/Marr paper (Proceedings of Edinburgh Math. Society, 1914) which attributed it to W. E. Philip. It was also included in Johnson's book, Advanced Euclidean Geometry (Modern Geometry), pp. 253-254. I am grateful to Roger Smyth for bringing this proof to my attention.
# Autocorrelation Detection and Mitigation¶ A benchmark session needs to be long enough so that we can collect enough samples to calculate the CI at the desired confidence level. The more samples we have, the narrower the CI can be made. However, a crucial issue that is often overlooked in many published benchmark results is the autocorrelation among samples. Autocorrelation is the cross-correlation of a sequence of measurements with itself at different points in time. Conceptually, a high autocorrelation means that previous data points can be used to predict future data points, and that would invalid the calculation of CI no matter how large the sample size is. Most measurements in computer systems are autocorrelated because of the stateful nature of computer systems. For instance, most computer systems have one or more schedulers, which allocate time slice to jobs. The measured performance of such jobs would be highly correlated when they are taken within a single time slice, and would change significantly between time slices if the duration of a measurement unit is not significantly longer than the size of a time slice. The autocorrelation in the samples must be properly handled before we can go on to the next step to calculate the sample’s CI. Autocorrelation is measured by the autocorrelation coefficient of a sequence, which is calculated as the covariance between measurements from the same sequence as $R(\tau) = \frac{\operatorname{E}[(X_t - \mu)(X_{t+\tau} - \mu)]}{\sigma^2},$ where $$\tau$$ is the time-lag. The autocorrelation coefficient is a number in range $$[-1,1]$$, where $$-1$$ means the sample data are reversely correlated and $$1$$ means the data is autocorrelated. In statistics, $$[-0.1, 0.1]$$ is deemed to be a valid range for declaring the sample data has negligible autocorrelation [ferrari:78]. Subsession analysis [ferrari:78] is a statistical method for handling autocorrelation in sample data. $$n$$-subsession analysis models the test data and combines every $$n$$ samples into a new sample. Pilot calculates the autocorrelation coefficient of measurement data after performing data sanitizing, such as non-stable phases removal, and gradually increases $n$ until the autocorrelation coefficient is reduced to within the desired range. ferrari:78(1,2) Domenic Ferrari. Computer Systems Performance Evaluation. Prentice-Hall, 1978.
# Should I do many shallow quick cuts or a single deep slow one with my router? Most cutting tools have a certain removal rate of material at which they should operate. You don't want to go too slow because it takes forever. And if you go to fast the tool heats up too much. This question is concerned with the latter one on a router. I heard that the usual technique is to plunge the bit only a little and not the full depth of the cut. Then make several cuts plunging deeper every time. That makes sense. It reduces how much material is removed in one go. Couldn't I achieve the same thing by doing one single cut with the bit plunged to the final depth, cutting very slowly? The problems that I see with the first approach: • it only uses the top part of the bit, which will become more and more dull while the rest of the bit is not used that much • doing that same cut over an over again is could cause marks of the different cuts to be visible What's the better way of doing it? • Great question. One thing that I always do is hog out a big chunk of the material with the drill press first, then go full depth on the router table. It's kind of the best of both worlds, I've found. – dfife Apr 29 '15 at 16:09 Always make sure your bits are sharp and take small bites with the router. If the router bit travels too slowly, it can burn and/or burnish your workpiece. Heat from taking too large a bite or moving slowly can also destroy the temper on the router bit, causing it to dull faster and/or break. Carbide bits are more resistant to heat than high-speed steel bits, but both can dull and break under high heat. Also, larger-shank bits are able to dissipate the heat more readily than smaller-shank bits. For example, all else being equal, a bit with a 1/2" shank will stay cooler than the equivalent bit with a 1/4" shank. Often you should remove the bulk of the waste with a different tool first--for example, a saw or drill--then finish up with shallow passes with the router. Sometimes you might also use one router bit for the first "hogging out" pass, then switch to the bit that has the final profile you want. For example, when routing dovetails, a common recommendation is to remove the bulk of material with a straight bit, then finish up with the dovetail bit. • I was primarily concerned with the bit getting too hot, but sure enough this isn't good for the wood either – null Apr 28 '15 at 20:39 • I never thought of a shank as a heat sink. Great answer. – 3Dave May 11 '18 at 19:06 For cuts with a router, slow also can mean burnt material. If I have a complex profile to cut, I either break it into multiple passes with multiple bits, or I sneak up on it, going slightly deeper each pass. Some folks also make a deep pass at close to the final depth, then make another shallow, fast pass at final depth to finish up the cut. e.g. I've tried both with a router. Cutting a 3/8th in deep groove though a board is a lot more work and it is a lot easier to mess it up. I burned a lot more wood. You also have to push harder which means it is easier to wobble the router as you move it along. I generally go for 1/8" passes, for 1/2" or bigger bits. It doesn't really take long to make the pass, reset the bit to the next depth and run it again. Each time it is actually a little easier. I do find if you are going to go for broke, leaving enough to do a 'final' pass to clean everything up is a very good idea. 
• leaving enough to do a 'final' pass is a very valuable abvice – null Apr 28 '15 at 20:42 I always "knew" the answer, but I didn't know why. So I emailed the question to router expert Pat Warner. His web site contains a wealth of information on safe, efficient use of a router. His response was that in addition to creating burns and chatter in the cut, the deeper cut causes the motor to draw more amps, possibly even to the point of burning out the motor, if you don't break the bit first. The long bits are made for trimming the edge of a board, where you can take very light cuts. Pat recommends never making a cut deeper than 3/16" on an inside cut. • Pat unfortunately passed away in ‘17. His website is not available anymore but his books are. – dwery Jan 11 '20 at 18:58 One problem with a deep cut is the wood chips needs to be removed from the cut area, or it will be recut and the dust will add friction and heat. If you want to do a deep cut you can use a constant stream of compressed air to clear the wood chips and cool the cutter at the same time, which allows you to not burn the wood and also cut faster. More, shallow hits is generally a better philosophy. The other answers have made good points but something they haven't touched on is "chatter". If you use a single, deep pass (even with a slow "feed rate", that is the rate of travel of your router), due partially because of chip clearance problems and partially just because the bit will have a longer part of the edge "hitting" the timber with each revolution, the bit will tend to be pushed out away from the cutting face with each revolution. Each time the bit is pushed away from the timber you're going to end up with a judder mark left on the timber. This'll be lessened by using a spiral-cut router bit rather than a straight-fluted one, and also by taking shallower passes.
## Thursday, December 31, 2009 ### F(rench)-theory Ok, everybody made speculations about the meaning of the F in F-theory. Possibly the most accepted one was that it was due to Cumrumm Va(F)a. But an article appearing now in arxiv has shown it's real origin. The authors of the article are Adil Belha and Leila Medari. It is titled "Superstrings, Phenomenology and F-theory". the abstract reads: We give brief ideas on building gauge models in superstring theory, especially the four-dimensional models obtained from the compactification of F-theory. According to Vafa, we discuss the construction of F-theory to approach non-perturbative aspects of type IIB superstring. Then, we present local models of F-theory, which can generate new four-dimensional gauge models with applications to phenomenology. It is based on invited talks given by A. Belhaj in Oviedo, Rabat, Zaragoza. Untill here nothing seems to support my claim of the explanation of the origin of the name. But if you go and see the paper, available in: http://arxiv.org/pdf/0912.5295 one finds that it is written in French. That explains all it ;-). Fortunately I have a relatively good knowledge of French and I could make a quick reading of the article. It is a good introduction to the topic, from the very beguining explaining the basics of string theory, D-branes and all that. Later it explains the basics of F-theory, of local models and of local F-theory GUT models. All of it in a short article of 15 pages. Despite the name it doesn't dive too much into phenomenology. But still it gives a good introduction to many aspects of the subject for non initiated people. In that sense it is far better than the blog entry of Jackes Distler about the first big paper of Vafa. And, definitively, it looks like a good chance for Spanish people people interested in the subject but not speaking English and maybe speaking French. By the way, for those that didn't read the Spanish entry about the CDMS announcement just say that F-theory GUTS predicts that the LSP (lightest supersymmetric partner) is the gravitino, which is not a viable candidate for a WIMP. The CDMS two events finding (irrespective of how statistically significant it could be) is kind of a hint that the LSP is a WIMP (maybe a neutralino) so if confirmed the actual Vafa models of F-theory GUT would become invalidated. Possibly the experts on the subject could recook some aspects of the more phenomenological aspects of the theory (mainly the supersymmetry breaking mechanism) to fit the new data. But certainly the best aspect of the whole construction, reproducing the standard model and make concrete predictions, would go away. But, as Vafa said in the strings 2009 conferences. That's the bad point of making predictions, that they could be invalidated. If someone is interested in knowing it I must say that since the CDMS announcement I have decided to study in more detail what heterotic phenomenology can offer. It doesn't mean that F-theory is not interesting any more, but irrespectively of the CDMS I needed to pay more attention to heterotic theories. The CDMS is just a good excuse. Also I am reading (and in some cases rereading) a lot of articles in black holes (stringy and not stringy ones). You can read about it in my other blog (if you speak Spanish). Still I guess that I will also talk about the subject in this blog in a near future, when I have finished reading carefully a few bunch of articles. 
For example, today there is an article about the subject of B-h creation in particle collisions: http://arxiv.org/abs/0912.5481. Other interesting articles today in arxiv are: Unification of Residues and Grassmannian Dualities by Nima Arkani-Hamed, Jacob Bourjaily, Freddy Cachazo and Jaroslav Trnka. The article continuate the MHV program to give a twistorial technique to find scattering amplitudes. I must admit that although I recognize it's interest I am not following too much that developments. Still I think some readers can find it more attractive than me. Also I would note two papers in dark energy: Inverse problem - reconstruction of dark energy models Abstract: We review how we can construct the gravity models which reproduces the arbitrary development of the universe. We consider the reconstruction in the Einstein gravity coupled with generalized perfect fluid, scalar-Einstein gravity, scalar-Einstein-Gauss-Bonnet gravity, Einstein-$F(G)$-gravity, and $F(R)$-gravity. Very explicit formulas are given to reconstruct the models, which could be used when we find the detailed data of the development of the universe by future observations. Especially we find the formulas using e-foldings, which has a direct relation with observed redshift. As long as we observe the time development of the Hubble rate $H$, there exists a variety of models describing the arbitrary development of universe. The F(R) theories of the subject refers to approaches where one consider gravity theories with terms in the lagrangian that contain higher order terms in the curvature that appear as counterterms in the renormaliztion program of conventinal quantum gravity (the theory actually is not enormalizable because of the need of infinite diferent terms). There was recently a good review article about the subject and if I have time to read it I will post about that kind of theories. Also about dark energy is a paper by A. M. Polyakov: Decay of Vacuum Energy . Abstract: This paper studies interacting massive particles on the de Sitter background. It is found that in some cases (depending on even/odd dimensionality of space, spins, masses and couplings of the involved particles etc) the vacuum acts as an inversely populated medium which is able to generate the stimulated radiation. This "cosmic laser" mechanism depletes the curvature and perhaps may help to solve the cosmological constant problem. The effect is more robust in the odd dimensional space-time, while in the even case additional assumptions are needed. Polyakov is a very original thinker, and despite that sometimes it's ideas seems a bit non conventional it always worth reading him. Possibly there are more interesting papers in axiv today, but I'll stop here. Good new year to all readers. ## Friday, December 25, 2009 ### Relation betwen the Sokolov–Ternov effect and the Unruh effect I have been disucisong in my other (and in the miguis forum) the proposal of Crane to use a black hole as an starship impulsor, bases on his arxiv article: ARE BLACK HOLE STARSHIPS POSSIBLE?. You can read (if you understand spanish) the three post about the suject: 1 , 2 and 3. While discusing that papers I have ben reading in wikipedia about it's litle brother, the Unruh effect. As explained there in detaill that effect consist of the observance of thermal radiation by an acelerated observed of what is vaccum for an stationary observer. 
The temperature of the radiation is proportional to the aceleration: $$T=ha/4\pi^2ck$$ (k is the bolstman constant, the other quantities have their obvious meaning). To my surprise in the entry is mentioned that there is a claim that the radiation has been observed. In particular it has been claimed to be observed in the Sokolov–Ternov effect: the effect of self-polarization of relativistic electrons or positrons moving at high energy in a magnetic field. The self-polarization occurs through the emission of spin-flip synchrotron radiation. and, in particular: it was shown that if one takes an accelerated observer to be an electron circularly orbiting in a constant external magnetic field, then the experimentally verified Sokolov-Ternov effect coincides with the Unruh effect. This results date back to 2005, so they are not new at all. And I am almost sure that they are controversial or someone would have a nobel prize for it ;-). The whole thing is that despite I try to be informed, I have no idea about it. Maybe other readers of the blog also were unaware of it and they could be curious to know. ## Thursday, December 17, 2009 ### Dark matter live webcast Ok, a litle bit late, but still something is going on: Fermilab webcast in dark matter CDMSresults Or, if you prefer you can watch the other simultaneous conference: http://www-group.slac.stanford.edu/kipac/cdms_live.html As I am posting late just tell that the main announcement has been already made, two events. That means not a definitive discovering, because of statistical considerations, but certainlly something. Now they are preciselly discusing exactly how significant this is. Update: If you want to see a summary of the results by the CDMS team, get it here (it is a two pages pdf, without formulae, readable for most people). Quick summary, as said in CF: if these events are interpreted as signal, the lower bound on the WIMP mass for these recoil energies is roughly 0.5 GeV. I would add, a good guess (it gives the best possible cross-section) is a 70 GeV WIMP. DAMA claims of dark matter discovering, via inelastic dark matter (that is the WIMP has excited energy state) is compatible with CDMS results in a reasonable parameter range. I invite you to read the entries on the topic in many of the blogs in my link list (and possibly many others). Although not a discovering there will be a lot of discussion about these results in the near future. And new results are announced for the future, when the new superCDMS would be working. Update: You can see the recorded video of one of the conferences from this website: http://online.kitp.ucsb.edu/online/dmatter_m09/cooley/ The arxiv paper, still not submitted when I am posting this, is availabe here There are some discussion in the blogs about the actual relevance of the signal. The most accepted one is a 1.5 sigma result. The discrepancies differ in how to actually consider the background. The data of 1.3 goes with the blinded background (optimized background obtained without knowledge of the existence of the signals). If one use other background one could get as much as (almost) 3 sigmas, or as few as 0. By the way, the very use of "sigma" is more appropriate for gaussian distributions, but it is used commonly for non gaussian ones with the appropriated corrections. For the future I have read that before de superCDMS it is expected to have data from another experiment, the XENON100. They talk about "early in the 2010". 
It remains to see what "early" exactly means, and -more important- what the results are. If one wants to read an easy introduction to the detailss of how CDMS works one can read this entry in the old tomasso dorigo blog. Be aware that Dorigo dosn't like too much supersymmetry and it argues that the (previous) CDMS result convince him a little bit more about that. Curiously he hasn't any entry about this new CDMS dataset. I had not time to answer Matti to a question in the previous post. I leave here a link to his own view of these results as a compensation: http://matpitka.blogspot.com/2009/12/dark-matter-particle-was-not-detected.html#comments ## Tuesday, December 08, 2009 ### Se rumorea que se ha descubierto la materia oscura Pues si, pues sí. La famosa materia oscura que forma el noventa y tantos por ciento de la masa del universo cuya presencia se infiere por el comportamiento de la materia visible pero de la que no había evidencia directa parece que al final ha sido descubierta en uno de los numerosos experimentos de laboratorio que actualmente se dedican a su búsqueda. En realidad hay un grupo experimental italiano que responde a las siglas DAMA que llevan un tiempo diciendo haberla encontrado. Pero por una parte su evidencia es un tanto circunstancial, habiendo hallado variaciones estacionales de cierto tipo de eventos posiblemente relacionados con algunos candidatos posibles a materia oscura. Por otro lado experimentos con una sensibilidad igual, o superior, al DAMA no han encontrado nada. E realidad hay diferencias sutiles entre los diversos tipos de detectores y es posible -pero muy improbable- un cierto tipo de materia oscura que sea detectable por DAMA y no el resto de detectores. Pero no es DAMA lo que esta ahora en candelero ( a raíz de este post en el blog de Jesster, resonances) sino CDMS, siglas de cryogenic dark matter search. Este grupo ha puesto detectores en una mina de sodio enterrada profundamente en algún lugar de Minessota. En 2007 este grupo entrego un informe negativo dónde ponían unos límites experimentales a las características posibles que podía tener la materia oscura tipo WIMP (weakly interacting massive particles). Se esperaba que ya estuviese publicado el artículo con la nueva remesa de datos, más extensa y tomada con instrumentos de mejorada sensibilidad. Pero se han retrasado y han enviado l artículo a nature, y esta revista ha aceptado el artículo, lo que hace pensar que pueda ser importante. Nature es una de las pocas revistas que quedan actualmente que tiene un contrato de confidencialidad (o como quiera traducirse disclosure) y hasta el 18 de este mes no estará disponible el artículo. Posiblemente ese mismo día también haya otro artículo paralelo en arxiv (libre de descarga para todo el mundo por consiguiente). Realmente esta sería una estupenda noticia para todo el mundo, excepto tal vez el físico de cuerdas Cunrum Vafa y colaboradores, que en los últimos dos años habían desrrollado un excelente y elaborado modelo basado en teoría de cuerdas que reproducía el modelo standard de partículas sin aditamentos exóticos comunes en otros modelos fenomenológicos, y, aparte, hacía algunas predicciones. Entre ellas que la materia oscura esta formada principalmente por el gravitino (compañero supersimétrico del gravitón), que no es una partícula tipo WIMP. Si se confirma el hallazgo habría que ver si pueden reacomodar su modelo para incorporar este hallazgo sin destruir el resto de características buenas de su teoría. 
Por lo que yo tengo comprendido de la teoría F la mayoría de restricciones que utiliza para hacer predicciones se basa en su modelo de ruptura de la supersimetria. Allí usan un modelo de mediación gauge (una variante de algo conocido como modelo de guidicce-massiero usado en modelos de mediación gravitatoria), dónde el mensajero es un bosón asociado a una simetría gauge tipo Peccei-Quin,asociada al axión de la QCD. Es un modelo bastante minimalista dónde casi no hay "sector oscuro" supersimétrico y en ese sentido parece muy buena idea. Pero claro, si ahora deben acomodar un WIMP como partícula supersimétrica mas ligera deberían revisar las cosas -si ello es posible- y posiblemente ese mecanismo de ruptura de la supersimetría sea lo que mas se presta a ello. Otra posibilidad, que a mi me parece muy remota, es que ya que tiene un WIMP -el neutralino mas ligero (un neutralino es una combinación del zino, fotino y higgsino) como posible NLSP (aunque la mejor opcion es un stau)- tal vez haya un mecanismo extraño de decay que pueda llevar a que haya WIMPS sueltos por ahí, y que el gravitino siga siendo el LSP (y por tanto el componente mayoritario de la materia oscura). Por las características del CDMS no podría detectar el gravitino. En fin, estas son especulaciones rápidas, y posiblemente con mi aún paupérrimo entendimiento de esas partes de la teoría F quizás sean demasiado arriesgadas. Por si acaso le he preguntado a motl (que también ha posteado la noticia en su blog http://motls.blogspot.com/2009/12/cdms-dark-matter-directly-detected.html), y en su respuesta parece estar de acuerdo con de lo que yo digo. Como quiera que sea, le pese a quien le pese, si realmente se ha descubierto la materia oscura estamos ante un acontecimiento histórico. Es más, podría tener consecuencias para el experimento del LHC pues posiblemente este debería ser capaz de producir esta partícula recién observada, y así tendríamos una doble confirmación (aparte de una guía muy exacta de como afinar los detectores del LHC, lo cuál hará mas fácil la detección). ## Tuesday, November 24, 2009 ### Introducción a la supersimetría II: El modelo de Wess-Zumino Había escrito, hace ya tiempo, una entrada sobre supersimetría, esta. Continuo el tema introduciendo una realización de dicha supersimetría en términos de un lagrangiano sencillo, lo que se conoce como el modelo de Wess-Zumino. Quien no tenga muy recientes sus conocimientos de teoría cuántica de campos, y en particular los tipos posibles de spinores, puede leer sobre ello en esta entrada de mi otro blog. Este va a constar de dos campos, un campo escalar complejo $\phi$ formado por dos campos reales A y B, $\phi=(A+iB/\sqrt{2})$ y un campo spinorial de Majorana $\psi$. Ambos campos van a carecer de masa. El motivo para ello es que en la naturaleza no se ha observado la supersimetría, lo cuál indica que caso de existir, la supersimetría debe estar rota. Se supone que las partículas supersimétricas de las partículas conocidas habrán adquirido masa a trvés de un proceso de ruptura de esta supersimetría. Con estos ingredientes el término cinétco de nuestro lagrangiano será. 1.$L= \partial^{\mu} \phi^*\partial_{\mu}\phi ~ + ~ 1/2i\bar\Psi\displaystyle{\not} \partial \Psi$ Ese lagrangiano es invariante bajo una tranformación SUSY global: ## Monday, July 13, 2009 ### Strings 2009: the slides This year the annual conference in string theory, celebrated at Roma, has not had an internet live TV broadcast as it happened the last year. 
Because of that reason I didn't do a post about the topic. I have waited until the slides where out and I could have read some of them. The slides of conferences, if they are detailed enough, are a good thing because they are addressed to non specialists in that particular field, so they can be easily read, and they condense a great amount of information from various papers. You can get access to the lists of talks, wth the corresponding slides, here. I have read a few ones already. The first was the one given by Howava. I was greatly interested in reading how he defended his theory against the recent papers with showed the problems of renormalizability it seems to actually has, despite of being power counting renormalizable. Well, I didn't see any mention of it. The slide talks about the "foundational" papers on the subject and explains it's relation to the M2 brane of M-theory, to the CDT (causal dynamics triangulations) result that in the short length the effective dimension of space time is near 2, and that his theory resembles that, and a few other topics. I find specially curious that one of the motivations for his theory is that string theory violates Lorentz symmetry. Well, I am not sure why he says that, but certainly said without further explanation looks weird. It is a pity that there was not live streaming, nor non-live videos, of the talks so one can't see what questions people made him. About the F-theory GUT's there were three talks. One from Vafa. It's ppt (than not pdf) is very schematic and without some previous knowledge on the subject I am not sure how much information one can get from it. Anyway, if one reads the papers I cited in my post about F-theory for non experts maybe he could get a much better understanding. Vafa makes a decent work explaining the two foundational papers, the paper in cosmology, and the paper in LHC footprints, that I have read. It also talks about some papers I haven't read, as for example the ones in gauge mediation (although I had read some resumes of the results). The conclusions seem to be that there are two clear predictions from their models. One, in cosmology, is that the dark matter candidate is the gravitino. that rules out models on WIMPS and implies that ATIC, PAMELA and similar results that seems to indicate an anomalous ratio of positrons over electrons over certain ranks of energies would have astrophysical origins. Or not exit at all. Recent results from FERMI/GLAST seem to contradcit ATIC and PAMELA (see, for example this post by Jester, in resonances blog) would agree with this prediction. The other prediction mentioned on the slide is that there will be some charged track on the LHC leaving the detector. It would be due to the NLSP whose lifetime, 10^1-4 secs, is long enough to allow it scape from the detector. There are two more talks about F-theory. One by Sakura Schafer-Namek. I have read it but from all the part related to spectral covers I coudn't get any useful informrmation. I simply don't know enough form that mathemathical topic. The other paper in F-theory is the one by Jonathan Heckman. It is centred in flavor hierarchies for quarks and leptons. Well, an interesting topic for sure, but not my favourite one. Anyway the slide is good enough to get some general idea of the topic from it. Another paper I read is the one of Strominger about the KERR/CFT correspondence. About that topic I only had read a paper dated from the last summer. 
Well, I am not sure if too much progress has been achieved so far neither I have clear whether the whole field is terribly significant, but possibly that is my fault. Possibly the most awaited paper was the one from Nima-Arkani-Hamed about twistors and the S-Matrix. There are rumorology out there saying that it's not a paper in string theory but an attempt to create some kind of supersymmetric GUT diferent from string theory. I haven't still read the slide and I can't say anything about. But for sure it is a theory that many people will discuses sooner or later, possibly when the actual paper on the subject would be out. I'll possibly read more slides later, but I am not sure if I will post about them. But everybody can try to rad the linked slides by themselves. There are good choices that anyone with a decent basic on high energy physics could get some amount of info from them. UPDATE: In a thread in physicis forums someone, seemengly well informed, said that actually Horava recognized the problems recently found in his theory in his talk as strings 2009. Also the same physic forums poster explained that the actual problems where that one couldn't decouplee the gosths from the theory. Curiosulsly that has lead to a posible reinterpretation of that gosths as dark matter. I have not read the relevant papers but at first sight that looks very bizaree. Gosths are negative norm states tht usually appear in the quantizationo of gauge theories as intermediate states that can be shown not to appear in external legs, i.e., are no observabbles. Toclaims thatusually unwanted negative normed states can go in external lines and actually represent viable particles (in the form of dark matter) seems like one could try to do the same thing for any theory and one wouldn't need gauge theories. I suppose that there will be something special in that gosths that make them diferent from the usual ones and permits people doing such conjectures, but, as I said, looks an a priory contravied claim. P.S. I am looking for an easier way to use LaTeX in this blog that the one I am using (writing the latex code in the url of an image generated by an external LaTeX server). If I don't find a good solution I would seriously consider the option to migrate this blog to wordpress where writing LaTeX is "natively" supported (that's the reason I make an extensive use of it in my other blog). ## Thursday, July 09, 2009 ### Vixra, the arxiv mirror symmetric In Kea Monad/Marni Dee Sheppeard blog there has been recently a few entries about the freedom to publish scientific results. As a result Tomasso Dorigo suggested her a bizarre idea. s a result of comments exchange it resulted into another idea, the birth of a new archive for scientific publication. In a really fast movement a new domain was registered and the site is already available. The name for the new site is arxiv written in the opposite direction, that is vixra, which, with some minor licences can be considered as a mirror symmetric of arxiv. The actual link for the website is: vixra.org/. Note that at the date of writing this It is in a very beta status. I leave here the manifest that justifies it's creation and it's purpose, as declared by the creator: Why viXra? In 1991 the electronic e-print archive, now known as arXiv.org, was founded at Los Alamos National Laboritories. 
In the early days of the World Wide Web it was open to submissions from all scientific researchers, but gradually a policy of moderation was employed to block articles that the administrators considered unsuitable. In 2004 this was replaced by a system of endorsements to reduce the workload and place responsibility of moderation on the endorsers. The stated intention was to permit anybody from the scientific community to continue contributing. However many of us who had successfully submitted e-prints before then found that we were no longer able to. Even those with doctorates in physics and long histories of publication in scientific journals can no longer contribute to the arXiv unless they can find an endorser in a suitable research institution. The policies of Cornell University who now control the arXiv are so strict that even when someone succeeds in finding an endorser their e-print may still be rejected or moved to the "physics" category of the arXiv where it is likely to get less attention. Those who endorse articles that Cornell find unsuitable are under threat of losing their right to endorse or even their own ability to submit e-prints. Given the harm this might cause to their careers it is no surprise that endorsers are very conservative when considering articles from people they do not know. These policies are defended on the arXiv's endorsement help page A few of the cases where people have been blocked from submitting to the arXiv have been detailed on the Archive Freedom website, but as time has gone by it has become clear that Cornell have no plans to bow to pressure and change their policies. Some of us now feel that the time has come to start an alternative archive which will be open to the whole scientific community. That is why viXra has been created. viXra will be open to anybody for both reading and submitting articles. We will not prevent anybody from submitting and will only reject articles in extreme cases of abuse, e.g. where the work may be vulgar, libellous, plagiarius or dangerously misleading. It is inevitable that viXra will therefore contain e-prints that many scientists will consider clearly wrong and unscientific. However, it will also be a repository for new ideas that the scientific establishment is not currently willing to consider. Other perfectly conventional e-prints will be found here simply because the authors were not able to find a suitable endorser for the arXiv or because they prefer a more open system. It is our belief that anybody who considers themselves to have done scientific work should have the right to place it in an archive in order to communicate the idea to a wide public. They should also be allowed to stake their claim of priority in case the idea is recognised as important in the future. Many scientists argue that if arXiv.org had such an open policy then it would be filled with unscientific papers that waste peoples time. There are problems with that argument. Firstly there are already a high number of submissions that do get into the archive which many people consider to be rubbish, but they don't agree on which ones they are. If you removed them all, the arXiv would be left with only safe papers of very limited interest. Instead of complaining about the papers they don't like, researchers need to find other ways of selecting the papers of interest to them. arXiv.org could help by providing technology to help people filter the article lists they browse. 
It is also often said that the arXiv.org exclusion policies dont matter because if an amateur scientist were to make a great discovery, it would certainly be noticed and recognised. There are two reasons why this argument is wrong and unhelpful. Firstly, many amateur scientists are just trying to do ordinary science. They do not have to make the next great paradigm shift in science before their work can be useful. Secondly, the best new ideas do not follow from conventional research and it may take several years before their importance can be appreciated. If such a discovery cannot be put in a permanent archive it will be overlooked to the detriment of both the author and the scientific community. Another argument is that anybody can submit their work to a journal where it will get an impartial review. The truth is that most journals are now more concerned with the commericial value their impact factor than with the advance of science. Papers submitted by anyone without a good affiliation to a reasearch institution find it very difficult to publish. Their work is often returned with an unhelpful note saying that it will not be passed on for review because it does not meet the criteria of the journal. In part viXra.org is a parody of arXiv.org to highlight Cornell University's unacceptable censorship policy. It is also an experiment to see what kind of scientific work is being excluded by the arXiv. But most of all it is a serious and permanent e-print archive for scientific work. Unlike arXiv.org tt is truly open to scientists from all walks of life. You can support this project by submitting your articles now. What do I think of this. Well, there is a famous phrase of Richard Feynman about physics (valid for science in general), and it's role as a practical discipline: "Physics is like sex. It can have practical consequences sometimes but that is not the reason we do it". Well, that's the idea. And publishing would be part of the fun. But seemengly to publish (as well as otherparts of a scientifi carrer) have become a game where many factors ouor of the pure siceintifc content play a role as least as important as the quality of papers. Still worst, it is not very clear what the rules of that game are. That converts publishing in a very risky busines and an error can bban one from arxiv (the papers that people actually read, peer to peer reviews have become invisible). In fact I personally think that I could find some endorser for froseable future papers. But in the actual state of the subject it is too much presure bor bboth, me and the endorser. For that reason an alternative as arxiv is a goo option. One can publish ideas and exchange them with other people. It is important the concept of exchange. There are some kind of papers when one can have, or almost have,the secutiry that they are right. But there are other that are subjecto to many uncertainties. And, possibly, one can save only a limited amount of the difficuties he face. Possibly if one has round him people working on that field he could discusse that ideas privately. but it is not always possible (even if you are in a academic position). In that sense to publish ideas in a preliminar state of development that you are not sure you can pursue further, that maybe they could be usefull. That's the idea of scientific exchange as fr as I see. And if one is wrong, well, that's always a possibilitie. Of course one would do the usual homework to try to search as much as possible similar ideas beofre publishing rubish results. 
Definitively is good to publish that kind of papers in a site where if somone is wrong doesn't he (and his family, friends and cat) become banned for the rest of his life I wuold say . Still better, as far as I see both archives wouldn't be mutually exclusives. One could publish "serious" papers in arxiv and more risked ones in vixra. Well, at least in theory. Surely someone will find good reasons to find incompatibilities among them ;-). ## Wednesday, July 08, 2009 ### F-theory GUT for non experts I have found a few papers that do a good job explaining the basics of F-theory in a relatively easy way. I could have posted them as un update of the previous post on the subject but I think it deserves an small separated post. One paper is : F-theory, GUTs and Chiral Matter. Another one, written by Hackman and Vafa is: From F-theory GUTs to the LHC Also I think that the interested reader would try to understand more basic settings, previous to the F-theory revolution. I am talking about the intersecting branes scenarios. A short and good review is: Progress in D-brane model building. The reason to investigate the last paper is that I find it is interesting to understand how one calculate family numbers, how chiral fermions arise and so that in more conventional D-brane models. In fact the firs paper I cite makes a good job explaining some of that aspects, but still. Also I recommend, once again, the original paper of Ibañez, Quevedo et all in local models D-Branes at Singularities : A Bottom-Up Approach to the String Embedding of the Standard Model. I have finished to read it and I find it very clear. As a plus it also has a brief chapter about F-theory. Certainly the last papers about D-brane model building are not required to understand the F-theory ones, but It is good to understand what existed previously to better understand the goodness of the new. In that sense the papers recommended in my entry about the prehistory of F-theory GUTS are also valuables and focuses in diferent aspects than the ones cited here. Anyway, if someone only wants a quick, but accurate, idea of the subject the two papers cited at the start of the post make a wonderful work ## Friday, June 26, 2009 ### Trántorian physics Trantor is a ficticial planet presented in the Isaac Asimov series of books about the foundation. It is the centrer of a galactic imperia. In that universe the king of sciences is psicohistory. By that name is referred a mathemathical model of human societies with detailed qualitative predictive power. Physics has become an obsolete discipline that had dead of success long time ago. Supposedly it had answered all the basic questions and no new important discovery had been made for hundreds of years. But still there were some physicists. The problem with them is that the lack of new experimental results had resulted in a vicious system where the quality of one particular physicist depended on the knowledge of the achievements and, maybe, his ability to reinterpret them in new, basically irrelevant, ways that didn't lead to new discovering. Well, that is fictional. But sometimes actual physics somewhat resemblances that trantorian physicists. Lot of people like to culprit string theory for that, but I don't agree at all, it is a problem common to all the alternatives. I mean, what actual observable predictions make alternative theories? LQG great achievement was the frequency depending speed of light. 
In fact Liouville strings also predicted that Well, FERMI/GLAST has almost ruled that possibility (although there is some discrepancy on the interpretation of results depending of who writes about it, for example Lubos Motl and Sabine Hossenfander disagree, as always). Horava's gravity, being as a classical theory slightly different from Einstein's gravity makes predictions not too hard to measure. But after the initial explosion of papers it is somewhat stopped now due to some papers that posed serious doubt about is goodness as a quantum theory despite being power counting renormalizable. It would have been nice to see how it was received in the actually developing strings 2009 conference, but this year there is no live broadcast nor, at least until now, none bloging about it. Nonconmutative theories are also almost dead, despite they had some time of glory (although today, afther many months, there is a paper in arxiv in the subject http://arxiv.org/abs/0906.4727). There are two types of NCT theories, field theoretic ones, and geometric ones. The fist are inspired in string theory. The last ones re mainly geometric and were promoted by the mathemathician Alain Connes. They mde a firm prediction, a value for the Higgs mass, that was ruled out (at lest in the original way, I am not sure whether some modifications have been suggested) last year by measures of the tevatron. So, basically, we have that despite many theoretical efforts in many different approaches to basic physic (i.e., particle physics) we have no new experimentally confirmed since the formulation of the standard model, in the last sixties and former seventies of the past century. The only new result was the confirmation that neutrinos have an small mass. The other experimental news come from cosmology, and, as I said in previous posts, are not so firm as laboratory experiments. Is this a problem of theoretical physicists. I hardly think so. String theory is a very rich framework. Different aspects of them actually are promising candidates for phenomenology. For example the mesoscopic extra dimensions suggested by Arkani-Hammed et all in the last nineties was a very original idea, that has led to cheap experiments that had put new bounds on the size of that dimensions. LQG, as said did a good prediction (shared by most Lorentz violating theories) and LQC is trying to do observable predictions about cosmology, maybe not rigorous ones, but if the were observed none would care too much about it ;). The big problem I see is not related to theory but to experiments. And, specially, to collider experiments. USA cancelled founds for a new linear accelerator in the nineties. The LCH schedule has seen almost five years of delay (that is, if finally beguines to operate in September, as expected). The tevatron has made it's bests, going beyond the expectations. It has showed that QCD at high temperatures behaves not as a quantum gas (as expected) but as a quantum liquid.That doesn't means new basic physics, but at least it gives clouds about the properties of QCD that are very hard to study mathematically and computationaly. And, hey, it has ruled out NCG ;-). Even there are some possibilities that a careful analysis of the collected data would find the Higss bosson. Not that bad for a recicled collider. If there is no serious money inverted in experiments researchers are going to spend time in increasingly twisted theories. 
Internal coherence is a good guide, but it is not clear that that alleged coherence is so free of doubts as some people try to present it. That goes for LQG and for string theory (and the alternatives). Again that is not a reason to go again string theory (or the alternatives, well, some of the alternatives are theorethically unlikely, but still). The ultimate justification of the theoretical developments is that they re made searching for compatibility with known physics and also guessing new phenomenology. What is seriously need is that experiments would be made. The LHC is, hopefully, next to operate, but there s no serious project for the post LHC. Maybe some people could think that there is no good reason to expend a lot of money in that expensive experiments. Specially not in the current economic crisis. In my opinion that is a narrow minded vision. Certainly other areas of physics are giving interesting results (solid state/condensed mater and the wide area know as nanotechnology) but they are based on very all basic physics. It is necessary to pursue the development of new physics. For example, one very important problem that the society need to face is the energy supply. There are discrepancies about how many fossil combustibles (specially at cheap prices) remain. In fact that depends heavily on the growth of demand. But sooner or later (and most likely sooner) they will extinct. The "ecological" alternatives (solar energy, wind, etc) are mostly propagandistic solutions. Nuclear energy has better chances, but it depends on a limited resource,uranium. Certainly there are proposals for fast breed reactors that could create fissible elements. But they are somewhat experimental. It is an open question where they will operate as expected. The other alternative, nuclear fusion is fine. But again governments are not spending enough money on it (as the fate of ITTER clearly shows). The thing is that when we are looking for energy sources the best thing we can is understand how the universe behaves at high energies. If one looks at the way on how "energy sources" work one sees a common pattern. One has a two energy state system separated by a barrier where the difference of energy between the two states is greater than the energy of the barrier. If one supply the system with energy enough to go over the barrier when the system goes to the lower energy state it returns more energy than the employed one. That is the way chemical combustibles work. And also the way nuclear fission and fusion works. Nuclear process involve higher energies and so they return more energy also (well, in fact it could be otherwise, but it would be very unnatural). Well, if we go to higher energies one expects that, somewhere, there will be some systems that share that property (a good name for it would be metastability).For example in some supersymmetric models there is, if R-symmetry is present, a lightest supersymmetric partner, LSP, which is stable, and a candidate for dark matter. And also there is the possibility of a NLSP (next to light supersymetric partner) that would be metastable. Well, that is the kind of thing we like. One would expect that there is a big energy difference among them. If they are found and it is discovered a way to force the decay of the NLSP into the LSP we would have an energy source. Moreover, dark matter represent the, 75%, 90%? of the mass of the universe. That could men that there is a lot of it out there. 
One could argue that if we are not able to do nuclear fusion, using known elements we badly could develop a technology to extract energy from something that is still hypothetical. But the truth is that we don't know. Maybe it is a lot easier to extract energy form dark matter (let it be (N)LSP, WIMPS or whatever) that from known sources. Still there are other possibilities. There is an small possibility that if the LHC creates black holes it could also create wormholes. Wormholes (lorentzian ones) have received a lot of attention in SF as a tool for interstellar travel or even as time machines. But there are other interesting uses for them if they would actually exist. If one mouth of the wormhole is posed in a very energetic environment it could drive that energy onto the other mouth by a direct way. For example one could put one mouth deep inside the earth and the other in the surface. That would be a good way to extract geotermic energy. Of course one could think that is a lot more likely to use more conventional ways to get that energy, but still it could be not. Other very energetic environment would be the sun. It is not totally clear how much energy requires to create a wormhole, but one would expect that if the outer distance between the mounts growth the same applies to the required energy. But it could again not to be so. Still there is a problem in using the sun, the gravitational interaction. The gravitational field of the sun would be transferred together with light and it could alter the earth orbit. There is a more interesting possibility for wormholes (or maybe we would call them warmholes, or not, depending on how one would worry about double meanings of words xD). If they are created at the LHC that would probably mean that the reason behind it is that mesoscopic extra dimensions exist. In string theory there are various ways to realize that sceneries. A common feature of many of them is that the would mean that we leave in a three dimensional (or effectively three dimensional) brane. But it is possible the existence of additional branes. It could be that some of them would have a high background energy. And it also could be that they would bee not too far away into that additional dimensions. Actually they could be so near that it wouldn't be improbable that a wormhole could be created with one mouth inside that hot brane and the other in ours. Still better, the sceneries with mesoscopic extra dimensions offer good possibilities for wormholes becoming stable. That would rise the possibilit to use that wormholes to extract energy from that hot branes. Depending on the details they could be a mean to solve all the energy requirements of the human kind at a level that exceeds all the actual xpectations. All the hipothethical energy sources that I have presented are related to string theory likely situations. Alternative theories maybe also would offer options. For example black holes in alternative theories could not evaporate completely and one could use the remanents to extract energy from them in Penrose lie process. A serious problem with it is that without mesoscopic dimensions there is no way to create black holes in the LHC so we woudn't have remanents either. By the way, black hole physics is a very good example of trantorian physisic. Specially the black holes inners. The gravity/LQG community has a, widely accepted, viewpoint of them where the radius behaves as a time coordinate. 
Well, in string theory there are very different proposals, none of them too friendly with that LQG viewpoint. Also the string theory strongly supports the complementary principle. Well, some people in LQG don'even know of it's existence (or at least not until they published a paper that was incompatible with that principle). My problem with this is that we don't have a near black hole to do experimental tests. In fact even if we would create them into the LHC it is not clear that we could make experimental tests about black hole inners. Neither is too clear how that black hole inners have any consequence into the behaviour of the event horizon. Well, if naked singularities are allowed the thing would improve, but then they wouldn't be black holes ;-). Well, certainly in this post, apart of some sociological consideratins, I have presented very speculative ideas with two few details about them. Maybe that is what top notch physicist do in trantor. Not being there I hope to present more earth based physic in next entries ;-). By the way, if one is absolutely serious about it many proposal for alternative "ecological" energy sources are actually less unlikely to be good alternatives to oil that the ones I have proposed here. They look otherwise because they are based in things that laymen think that they understand, but if one goes into the details of the implied physics one really hope that wormholes actually exists xD. ## Thursday, June 04, 2009 ### String theory is good for...phenomenology of particle physics Yesterday the number of visits to this blog had a major increase. Most of the traffic came from this post in Miguis web/blog. The post was a translation to Spanish of an article in new scientist about the good points of string theory. I had seen a discussion of that article in Lubos blog, concretely here. Well, that article comes to say that string theory is nowadays a good theory because it´s math structure, through the AdS/CFT correspondence is useful in QCD and condensed matter physics. Well, I don't know too much about that applications but if the experts in that subjects say so is a good sign. But, actually, I don't think that that image is quite right nowadays. Readers of this blog know that I have played attention to many alternative theories. Some of the proponents of that theories make claims against string theory. Others, who don't actually offer any theory, that is, Peter Woit, claims that string theory "makes no predictions". In his blog he usually bring attention mostly to the most speculative articles written by string theorists. Well, I am following, as much as I can, the actual F-theory minirevolution. Doing so I have become very surprised, and impressed, by how close string theory has become of actual physics. Before going into it I must say that I somewhat understand the sceptics in string theory. If one reads the books on the subject one certainly gets the impression that actual predictions are far away. For example the 1999 (that is, not too old) book of Michio Kaku Introduction to superstrings and M-theory in its chapters about phenonenology show the results of heterotic compactifications. In that results the best one could get were the standard model plus some additional U(1) factors. Also it was stated that to achieve the right number of generations , given by n=1/2X(Cy), that is, one half of the Euler characteristic of the Calabi-Yau mainfold, was difficoult (if not almost impossible). 
Other books, such as Polchinski's two-volume book and Clifford Johnson's "D-Branes", don't say too much about realistic compactifications. There are good reasons for that. Those books are mostly concerned with the D-brane revolutions and their consequences, the black hole entropy calculation and the AdS/CFT conjecture. The most recent book, by Becker-Becker-Schwarz, gives a more in-depth coverage of compactifications. But, with good criteria, it cares somewhat more about technical issues such as the moduli space of the compactification, mirror symmetry between type IIA and type IIB, and flux compactifications, which are relevant for the very important issue of moduli stabilization and the KKLT-like models (related to the landscape). And, of course, they all give introductions to dualities, M-theory and, to a lesser extent, F-theory. In fact all of those are important technical aspects, and it takes time to learn them (one must read some articles if one really wants to understand some aspects properly). But one gets the impression that everything is still too far from LHC physics and testable cosmology predictions. In fact there is a very recent book by Michael Dine which goes into phenomenology, titled "Supersymmetry and String Theory". I must say that I find that book somewhat of a failure. It is too brief in covering subjects that, even with some previous knowledge, are hard to appreciate properly. Well, in short: a lot of textbooks and no clear signal of actual testable physics. Certainly discouraging. Popular-science books are not too different. That, certainly, can explain why some people have the impression that string theory is far from its objectives. Blogs from string theorists try to say, to whoever listens to them, that string theory is "the only game in town". In fact there are not many blogs on string theory with a decent publication rate. However, I also had the idea that string theory was far from phenomenology, and I had not pursued that topic too much. The F-theory minirevolution has changed that. I have at last read the two big pioneering papers of Vafa (arXiv:0802.3391v1 and arXiv:0806.0102v1), and almost completed the reading of the F-theory GUTs cosmology paper (arXiv:0812.3155v1). Also I have made partial readings of some subsequent papers, and of a few previous papers needed to understand the formalism developed. They are certainly hard papers to understand. But once one gets familiar with them one sees what kind of physics is discussed. The first thing to say is that one needs to know the details of GUTs and symmetry (and supersymmetry) breaking. F-theory local models, with the right decoupling from gravity, can give an SU(5) model without any exotics. They offer their own way to break SU(5) down to the MSSM, through a U(1) hypercharge flux. That mechanism avoids some of the problems present in purely field-theoretic models. In particular, they can avoid problems with the observed lifetime of the proton. Later papers get values of the CKM matrix that are good for obtaining the observed baryon asymmetry of the universe. They offer ways to avoid the doublet-triplet splitting problem of GUTs (that is, requiring the existence of Higgs doublets (1, 2)±1/2 necessarily also leads to colour triplets; however, there exist rather uncomfortable lower bounds on the mass of these triplets). They offer a natural way to get small neutrino masses.
In cosmology, through a late decay of the saxion (whose lifetime is predicted, that is, properly bounded, by the theory), they can avoid some of the problems that symmetry breaking brings to cosmology (the gravitino problem), and they give a right way to obtain reheating after an inflationary phase, plus some extra things that I haven't finished reading. As you can see, these models are quite near cutting-edge phenomenology. They offer solutions to problems not available through other approaches. And F-theory is not alone. Seemingly M-theory is also going into the local models + gravity decoupling business; see for example the paper Hitchin's Equations and M-Theory Phenomenology by Tony Pantev and Martijn Wijnholt. As I said, I hadn't previously followed phenomenology with too much attention. But, in fact, more traditional approaches have also made some advances. For example, this 2008 short review article on heterotic compactifications, From Strings to the MSSM, also cares about some of the previously mentioned aspects. Another very recent paper, Towards Realistic String Vacua From Branes At Singularities, by Joseph P. Conlon, Anshuman Maharana and Fernando Quevedo, uses the D-brane approach to phenomenology, not related to the gravity decoupling approach. They offer the bonus of moduli stabilization (something more habitual in cosmological models). In the abstract they conclude by saying: "We propose that such a gauge boson could be responsible for the ghost muon anomaly recently found at the Tevatron's CDF detector". Well, there are some serious doubts about the real existence of those anomalies (see Tommaso Dorigo's blog, linked on this web site, and search for discussion of that topic). Well, certainly there is a fair bunch of models inspired by string theory, and not all of them (if any) can be true at once. Also, not all models make firm predictions. But the point is that they are actually reproducing the MSSM, supersymmetric GUT models, and mechanisms that enhance the purely particle physics models. Also, in cosmology there are many different points where string theory is enhancing purely field-theoretic models. But, as I see it, string theory is actually dictating the construction of (at least some of) the models that are going to be checked in the near future. Also one must not forget about the RS models, inspired by string theory, in which one could get black holes at the LHC (those models are possibly not compatible with F-theory GUTs). With all of this I think that string theory is doing exactly what one would traditionally expect from a fundamental theory of physics. Certainly I am talking about very, very recent developments, most of them from this year and the previous one. But, anyway, it looks as if string theory is definitively "landing" in experimental physics, which is what was expected from it. And, still, it is making progress in clarifying its theoretical aspects, and the description of black holes (a topic not too easy to study in the laboratory, except if the LHC produces black holes, that is). I am not at all a radical and I understand if some people want to keep working on alternative approaches. The point of this post is to say that, as far as I see, the "not even wrong" criticism of string theory doesn't make too much sense nowadays. And please remember that I am not in a faculty position getting money for doing research in string theory. I have no economic, doctrinal or political reason to favour one theory or another.
It is just that, according to what I know right now, string theory seems a perfectly good theory for doing high energy physics, and I have tried to explain why.

## Monday, June 01, 2009

### Quick ideas to become a cosmology atheist

As I said in the other post, and not for the first time, I don't take cosmology too seriously. I find that there are many uncertainties in the observed data and also in the interpretations. Because of that I hadn't bothered to think too much about those questions. In the last post Kea suggested that I read Louise Riofrio's theory, which turned out to be a version of the VSL (variable speed of light) cosmologies. I have partially read some of her statements, and also done the usual googling about the topic. The first thing one finds is a mention of "Von Riemann space". Well, I have no idea what that is supposed to be. Of course that could be because I am not a specialist in the field, so I googled for it and reached a physics forum's thread where other people also agreed that they didn't know it. Well, there are some other points in her papers whose motivation I don't see clearly. That being so, I can't say too much else about the general theory. Another aspect where she seems to see a point, independent of the general model, favouring her VSL theory is the following argument. In some epoch the sun, according to the standard model of solar evolution, radiated 75% of the energy that it radiates now. OK; according to that she claims that the Earth should have been an ice ball, contradicting the fact that there was life on it. The VSL solves the problem because somehow the VSL implies that the sun's luminosity should be corrected by the right factor. Without going into the details I must say that I find it very unlikely that conventional astronomy wouldn't have considered that possibility before. Also there is another consideration. The Earth is hot by itself. The friction energy that led to its formation is accumulated inside it. In the 19th century there was a controversy between the geologists and a prominent physicist (I don't remember for sure, but I think it was Kelvin). The geological observations dated the antiquity of the Earth at a number of years that was incompatible with its temperature. Using the heat equation and the conventional data for the Earth's materials, one could see that the Earth would have frozen in a time much shorter than the age estimated by the geologists. Later Sommerfeld said that the reconciliation of the two viewpoints was the presence of radioactive materials inside the Earth. Sommerfeld being such a well-qualified physicist, the argument was accepted as valid without criticism. Well, in fact, if one does the actual calculations it can be shown that the radioactive materials are not enough to account for the heating of the Earth. The reason the Earth is still hot (in its outer part) is that the heat equation used by Kelvin was not right. One also needs to consider transport phenomena, that is, convection. Doing so, it can be shown that the Earth is hot because of the inner heat acquired during its formation. That being so, I am not sure how much sense the Riofrio argument makes. Also I find that story interesting because it shows explicitly how cautious one must be with arguments not based on observations made under laboratory-controlled conditions. Simply, there are too many uncertainties. Well, it happens that this month the Spanish edition of Scientific American has an article where the cosmological arguments leading to the cosmological constant were reviewed.
Being a popular-science article, that is, easy to read, I did so (it didn't take too much time). The idea is that the observational reason why we believe the universe is expanding at an accelerating rate is that we see that the light of distant supernovae arrives to us with less intensity than what would be expected from their redshift if the universe were undergoing a decelerating FRW (Friedmann-Robertson-Walker) expansion. In the article they offer an alternative explanation. They say that if we were in a particularly empty region of space-time, the local deceleration of the universe would be slower here than at distant points (for example the points near the observed supernovae). It contradicts the Copernican principle, which says that we are not in a special place in space-time. But that can be circumvented in a natural way. If in the early universe there were a random distribution of density inhomogeneities that respected that principle, the evolution would make the less dense parts increase in size by a factor greater than the more dense ones. In that way it would be more probable that we would be in a relatively empty region of the universe. The last part of the argument is very similar to the nucleation mechanism that Susskind used to explain the cosmological constant (but there are also differences, of course). Well, after reading all that I wondered if I myself could devise a mechanism to go against the conventional big bang + inflation scenario. Well, indeed I could.
A look back at some famous discoveries in biology highlights the competitive nature of science, the trophy being priority by publication. Like it or not, the honor of being considered the discoverer of a new scientific idea is awarded to the scientist who gets the idea published first. The scientist makes a discovery, writes up the findings in a logical format, and submits this article to a scientific journal. The editors at the journal forward the article to several reviewers who are knowledgeable in the field, and these peer reviewers determine whether the methodology, statistical analysis, originality, and importance of the article warrant publication and whether revisions are necessary. If judged worthy, the date of publication marks the “discovery” and the author or authors are awarded priority, which is the honor of being considered the first to have made the discovery. Priority by publication date makes science a competitive field. Olympic gold medals are awarded to athletes who adhere to the motto of “Citius, Altius, Fortius,” and judges can easily compare speed, height, and strength with measurements. Gold and silver can be separated by a hundredth of a second, but only one competitor gets the gold. In the 15th century, colonies were marked with the flags of imperialist nations. The simple flag marked priority and huge territorial claims. In technology, patents recognize priority of invention, and with patents come exclusivity, financial reward, and sometimes fame. Glory in scientific research is gained through discovery, and priority to that discovery is determined by publication date. Such publication has two beneficiaries: progress in the field of science, and the author. For the author, priority is accompanied by enhanced reputation, public validation of original ideas, promotion, tenure, raises, and prizes, including the Nobel Prize, worth $1.2 million per full share in 2012 (Nobel Foundation, 2012). Many of our students, even at the high school level, feel pressure to get involved with research to fill a box on their resumes, and having their names appended to the list of authors on a paper is the tangible recognition of their efforts. There can be, however, a very human and dark side to all these papers, prizes, and progress, and I will set out a few examples. In 1827, Charles Darwin felt the thrill of discovery and priority at the age of 18. He had observed motile eggs of the bryozoan Flustra under the microscope, an observation that had not been made before (Nichols, 2003). He discussed this with his then mentor, Robert Grant, who chastised him for encroaching on his field of study, and Grant presented the new findings to the Wernerian Society without acknowledging Darwin’s contribution. Three days later, Darwin presented the findings to the less august Plinian Society, but he never forgot the sting, as his daughter Henrietta recounts here (Litchfield, 1871), with her shorthand left as is: Feb 1871 Just before publication of Man, my Father told me “I have just heard that a German book has come out apparently the very same as mine, “Sittlichkeit & Darwinismus”; whereupon I said “Well, at any rate nobody can say you’ve plagiarized.” “Yes, that is the only bother, that is very disagreeable. Otherwise I never have cared abt the paltry feeling of priority & it doesn’t signify a bit its coming out first. It is sure to be not exactly the same.” It is a good thing it is coming out when two men hit upon the same idea it is more likely to be true. 
I then made him repeat what he had told me before, namely his first introduction to the jealousy of scientific men. When he was at Edinburgh he found out that the spermatozoa of Flustra move. He rushed instantly to Grant afterwards Professor at University Coll who was working on the subject to tell him, thinking he wd be delighted with so curious a fact. But was confounded on being told that it was very unfair of him to work at Prof G’s subject & in fact that he shd take it ill if my Father published it. This made a deep impression on my Father & he has always expressed the strongest contempt for all such little feelings – so unworthy of searchers after truth.

The plagiarism mentioned in the first paragraph probably refers to Darwin’s doubts about whether he acted correctly in dealing with Alfred Russel Wallace in 1858 (Figure 1). In 1842, Darwin had written a short synopsis of his idea that natural selection caused the appearance of design in nature and accounted for descent with modification. Whether out of fear or meticulousness, he did not publish this mechanism at that time. In 1858, he received a letter from Wallace in which Wallace clearly described Darwin’s own theory, including the concepts of variation, inheritance, selection, and adaptation. Wallace had sent Darwin the essay from the Malay archipelago, in hopes that Darwin would forward it on to Charles Lyell for publication. Darwin feared that his claim to priority would be lost. Lyell and Joseph Hooker arranged to have Wallace’s essay and an abstract from Darwin, together with a letter to Asa Gray dated 5 September 1857, which outlined Darwin’s mechanism, read at a meeting on 1 July 1858 of the Linnean Society. The last letter was what would establish priority for Darwin. Nevertheless, these papers generated no excitement, and Darwin later concluded, in his autobiography, that “This shows how necessary it is that any new view should be explained at considerable length in order to arouse public attention” (Barlow, 1958). His On the Origin of Species, published in November 1859, did just that, but it is clear that Darwin, despite Henrietta’s recollection, had felt defensive about his claim. So much so that in 1861, he inserted an appendix into the third edition of Origin of Species in which he discussed all possible claims to priority, and why they came up short. Lamarck had descent with modification correct, but not the mechanism. Dr. W. C. Wells in 1813 had natural selection correct, but only applied it narrowly to some human traits. In 1831, Patrick Matthew clearly saw natural selection as the force behind change over time, but he published his idea in an appendix to the book Naval Timber and Arboriculture, which was not read widely by naturalists. Darwin also cites the publications of 20 others that preceded his 1859 opus, but gives reasons for each not having caught the attention or enthusiasm of those pursuing the topic (Costa, 2009).

Figure 1. Alfred Russel Wallace. (Painting by Thomas Sims, 1863; © National Portrait Gallery, London.)

Gregor Mendel in his work with peas showed that inheritance was particulate and not a blending of traits (Figure 2). This was a crucial discovery in genetics. Unfortunately, he published his historic paper in the obscure journal Proceedings of the Natural History Society of Brünn, in 1866. In 1900, three different scientists, C. E. Correns, E. v. Tschermak, and H.
de Vries, did experiments that “rediscovered” Mendel’s ideas. De Vries published first that year, without mentioning Mendel’s work, which he apparently knew about. Correns alluded repeatedly to Mendel in his paper (Correns, 1900), which starts out (italics original to Correns):

Figure 2. Gregor Mendel (1822–1884).

THE LATEST PUBLICATION OF HUGO DE VRIES: Sur la loi de disjonction des hybrides which through the courtesy of the author reached me yesterday, prompts me to make the following statement: In my hybridization experiments with varieties of maize and peas, I have come to the same results as de Vries, who experimented with varieties of many different kinds of plants, among them two varieties of maize. When I discovered the regularity of the phenomena, and the explanation thereof – to which I shall return presently – the same thing happened to me which now seems to be happening to de Vries: I thought that I had found something new. But then I convinced myself that the Abbot Gregor Mendel in Brünn, had, during the sixties, not only obtained the same result through extensive experiments with peas, which lasted for many years, as did de Vries and I, but had also given exactly the same explanation, as far as that was possible in 1866.

In a postscript to his paper, Correns seems to take de Vries to task by adding the following, referring to another paper that de Vries published shortly afterward (italics original to Correns):

In the meantime de Vries has published in these proceedings (No. 3 of this year) some more details concerning his experiments. There he refers to Mendel’s investigations, which were not even mentioned in the “Comptes rendus.”

Tschermak also fails to mention Mendel in his paper, but also is induced by Correns to add a postscript:

Correns has just published experiments which also deal with artificial hybridization of different varieties of Pisum sativum and observations of the hybrids left to self-fertilization through several generations. They confirm, just as my own, Mendel’s teachings. The simultaneous “discovery” of Mendel by Correns, de Vries, and myself appears to me especially gratifying. Even in the second year of experimentation, I too still believed that I had found something new.

The consensus is that de Vries and Correns undertook and understood experiments that confirmed Mendel’s laws, but that Tschermak did not understand the significance of his data (Monaghan & Corcos, 1986, 1987).

Failure to receive priority due to publication in a lesser-known journal has a familiar example. In 1923, Georgios Papanikolaou gave a presentation on his ability to detect cancerous cells on examination of cells from a vaginal scraping. He was met with skepticism by physicians, and he did not publish until 1941, in the American Journal of Obstetrics and Gynecology, the most cited journal in that field. He had actually been scooped in this discovery, at least as far as publication date. A Romanian doctor, Aurel Babeş, had discovered the same usefulness for vaginal smears, and he had published in 1927 in the Proceedings of the Bucharest Gynecological Society. Like Mendel, Babeş had published in a lesser-known journal. If not for that fact, we might be talking about “bab” smears instead of “pap” smears. Aurel Babeş was busy with other projects and never made an issue of priority. As an aside, his uncle, Victor Babeş, demonstrates another benefit of priority.
He discovered the protozoan etiology of, and is therefore the eponym for, the disease babesiosis. James Watson and Francis Crick are the names linked with the structure of DNA because of their publication in Nature on 25 April 1953 (Watson & Crick, 1953), but both estimated that Linus Pauling would have the structure of DNA within 6 weeks of his erroneous published structure in February 1953 (Watson et al., 2012). Crick estimated that Rosalind Franklin would have the structure in somewhere between 3 weeks and 3 months from that same time (Sayre, 1975; Figure 3). Watson and Crick, however, published first. The 1962 Nobel Prize in Physiology or Medicine was split among Watson, Crick, and Maurice Wilkins, each receiving a share worth $17,000. Franklin, the source of crucial information and images for Watson and Crick, had died in 1958 and was not eligible for the award, because the rules exclude posthumous awards. Would honest collaboration have brought quicker results with fewer hard feelings? From April 1953 on, there were allegations of misappropriation of the data of Wilkins and Franklin. Many of the scientists involved did not go public with their allegations until after the publication of Watson’s book The Double Helix in 1968. But an interesting exchange of letters between Watson in the United States and Crick in England later in 1953 demonstrates that Watson knew from the very beginning that there were suspicions about his use of Franklin’s data. The following exchange of letters also might indicate the competitiveness even between equal partners in priority, and this exchange incidentally shows the financial predicament of young researchers with families (Ridley, 2009). In 1953, Crick sent a letter to Watson asking if he would mind if Crick went on the BBC to give an interview about their discovery, for as Crick writes:

Figure 3. Rosalind Elsie Franklin. (Photograph by Elliott and Fry, 1946; © National Portrait Gallery, London.)

It would bring in $50 to $100 which at the moment I could do with.

Watson opposes the interview and writes:

There are still those who think we pirated data…. If you need the money that bad, go ahead. Needless to say, I should not think any higher of you and shall have good reason to avoid further collaboration with you.

Crick responded in kind:

As you were so set against it I did not allow BBC to broadcast…although your name is mud in the Crick household because of this…. We live very quietly here mainly because we are so broke.

Crick had some payback when he vehemently opposed publication of Watson’s book The Double Helix. He wrote to Watson in April 1967 (Crick, 1967):

My objection, in short, is to the widespread dissemination of a book which grossly invades my privacy, and I have yet to hear an argument which adequately excuses such a violation of friendship. If you publish your book now, in the teeth of my opposition, history will condemn you, for the reasons set out in this letter.

Crick gradually came around to accepting the bestseller as an interesting interpretation of the events for a lay audience, and their friendship endured. Sometimes, peer-reviewed articles never get published because of the jealousy or the politics of the reviewers or because the article’s conclusions threaten an orthodoxy.
An example of the latter in biology is that of Lynn Margulis, who presented evidence that mitochondria were former free-living bacteria that developed a symbiotic relationship with another single-celled organism. Her paper on the theory of endosymbiosis was rejected by 15 different journals, and finally accepted by The Journal of Theoretical Biology in 1967 (Lane, 2006). Endosymbiosis is now considered a central tenet of biology. The lesson here is not to give up on a good idea. Often, when an idea is truly radical, reviewers will be quite skeptical and, wanting to preserve the integrity of the journal or the consensus orthodoxy, reject it. Awarding of priority and prestige has been made more difficult over the past 100 years by the number of authors on papers (Greene, 2007). First authorship is more prestigious than being the eighth author on a paper. The assumption is that author listings are ranked by amount of work done. Watson and Crick flipped a coin to determine first author listing in 1953. Science teachers love to see collaboration on projects, and they return for modification student papers that have no references. The best paper, however, would be a sole-authorship paper with absolutely no references. Sole authorship clearly establishes contributions, and legitimately having no references marks the paper as original thinking. Few such papers exist, but one is notable. In 1905, at age 26, Albert Einstein published several remarkable papers, one of which, on special relativity, had him as sole author and no references (Einstein, 1905). The era of one or two authors on a paper may be over because “Big Science” like the ENCODE project and the Large Hadron Collider requires multinational cooperation and resources, with appropriate distribution of the claims of discovery (Maher, 2012). Science is collaborative and competitive, done by human beings who can be gracious or jealous, noble or puerile. Should publication date be like the flags of territorial claim planted on foreign soils? There are unfortunate consequences to this. How do students think Scott felt when he arrived at the South Pole and found Amundsen’s Norwegian flag already set there? Darwin himself wrote to Lyell in 1856 about this: “I rather hate the idea of writing for priority yet I should be vexed if anyone were to publish my doctrines before me” (Darwin, 1887). Besides the loss of public prestige, there is also the loss of self-esteem about one’s originality of thinking when someone else comes up with the same idea. Darwin could not have been happy to read Wallace’s letter from Ternate in 1858, in which Wallace said to Darwin “that I hoped the idea would be as new to him [Darwin] as it was to me, and that it would supply the missing factor to explain the origin of species” (Wallace, 1905). Darwin’s ego and fame survived this mild challenge. There have been many hotter feuds in science: the personal one of Newton versus Leibnitz about priority for calculus; the destructive one of Marsh versus Cope over dinosaur bones; and the devious AC/DC one of Tesla versus Edison. Other potential disagreements never materialized, either because of a self-effacing personality as in Wallace’s case, or because one party did not feel there was a race at all, as in Franklin’s case (Pauling, 1973). Competition for priority can lead to student discussion. Is scientific rivalry stimulating or stunting for the progress of science? Do secrecy and use of materials without credit hinder the collaborative process of science? 
What would be the danger of a scientist putting a discovery on a webpage for public review and critique, in order to make his or her journal article even better than peer reviewers could make it? Would ideas be stolen? Publication as priority may be here to stay, and it surely will generate interesting conversations into the future.

## References

Barlow, N., Ed. (1958). The Autobiography of Charles Darwin 1809–1882. London, UK: Collins.
Correns, C. (1900). G. Mendels Regel Über das Verhalten der Nachkommenschaft der Rassenbastarde. Berichte der Deutschen Botanischen Gesellschaft, 18, 158–168. [First published in English as Correns, C. (1950). G. Mendel’s law concerning the behavior of progeny of varietal hybrids. Genetics, 35(5, pt 2), 33–41. Available online at http://www.esp.org/foundations/genetics/classical/holdings/c/cc-00.pdf.]
Costa, J.T. (2009). The Annotated Origin: A Facsimile of the First Edition of On the Origin of Species. Cambridge, MA: Belknap/Harvard.
Crick, F. (1967). The Francis Crick Papers. Available online at http://profiles.nlm.nih.gov/ps/retrieve/ResourceMetadata/SCBBKN.
Darwin, F., Ed. (1887). The Life and Letters of Charles Darwin, Vol. 2 (p. 68). London, UK: John Murray. Available online at http://darwin-online.org.uk/.
de Vries, H. (1900). Sur la loi de disjonction des hybrides. Comptes Rendus de l’Academie des Sciences (Paris), 130, 845–847. [First published in English as de Vries, H. (1950). Concerning the law of segregation of hybrids. Genetics, 35(5, pt 2), 30–32. Available online at http://www.esp.org/foundations/genetics/classical/holdings/v/hdv-00.pdf.]
Einstein, A. (1905). Zur Elektrodynamik bewegter Körper. Annalen der Physik, 17, 891. [Annotated English translation, On the Electrodynamics of Moving Bodies, available online at http://www.fourmilab.ch/etexts/einstein/specrel/www/.]
Greene, M. (2007). The demise of the lone author. Nature, 450, 1165.
Lane, N. (2006). Power, Sex, Suicide: Mitochondria and the Meaning of Life. Oxford, UK: Oxford University Press.
Litchfield, H. (1871). [On plagiarism and scientific jealousy]. The Complete Work of Charles Darwin Online. Available at http://darwin-online.org.uk/.
Maher, B. (2012). ENCODE: the human encyclopaedia. Nature, 489, 46–48.
Monaghan, F. & Corcos, A. (1986). Tschermak: a non-discover of Mendelism. I. An historical note. Journal of Heredity, 77, 468–469.
Monaghan, F. & Corcos, A. (1987). Tschermak: a non-discover of Mendelism. II. A critique. Journal of Heredity, 78, 208–210.
Naylor, B., Tasca, L., Bartziota, E. & Schneider, V. (2002). In Romania it’s the Méthode Babeş-Papanicolaou. Acta Cytologica, 46, 1–12. Available online at http://www.acta-cytol.com/historicalperspective/BabesPapanicolaou.pdf.
Nichols, P. (2003). Evolution’s Captain: The Story of the Kidnapping That Led to Charles Darwin’s Voyage aboard the Beagle. New York, NY: Harper Perennial.
Papanicolaou, G.N. & Traut, H.F. (1941). The diagnostic value of vaginal smears in carcinoma of the uterus. American Journal of Obstetrics and Gynecology, 42, 2.
Pauling, P. (1973). DNA – the race that never was? New Scientist, 58, 560.
Ridley, M. (2009). Francis Crick: Discoverer of the Genetic Code (p. 81). [Reprint edition.] New York, NY: Harper Perennial.
Sayre, A. (1975). Rosalind Franklin and DNA. Toronto, Canada: George J. McLeod.
Wallace, A.R. (1905). My Life, Vol. 1 (p. 363). London, UK: Chapman & Hall.
Watson, J. & Crick, F. (1953). Molecular structure of nucleic acids: a structure for deoxyribose nucleic acid. Nature, 171, 737–738.
Watson, J., Gann, A. & Witkowski, J. (2012). The Annotated and Illustrated Double Helix. New York, NY: Simon & Schuster.
# Math Help - hey

1. ## hey

Please show me how to do this! In 1900 blah blah set a world record of 1.65 meters and captured the gold. If there are 3.281 feet in one meter, how high was blah blah's world record breaking jump in feet? Put the answer as a decimal rounded to the nearest hundredth.

2. If there are 3.281 ft in one meter, then there is: 3.281 ft/1 m x 1.65 m = 5.41365

Think of this as a ratio.

$\frac{3.281\ \text{feet}}{1\ \text{meter}} = \frac{x\ \text{feet}}{1.65\ \text{meters}}$

$x\ \text{feet} = \frac{3.281\ \text{feet}}{1\ \rlap{\text{meter}}{---}} \cdot 1.65\ \rlap{\text{meters}}{---}$

$x\ \text{feet} = 5.41365 \approx 5.41\ \text{feet}$

3. Originally Posted by tika

Please show me how to do this! In 1900 blah blah set a world record of 1.65 meters and captured the gold. If there are 3.281 feet in one meter, how high was blah blah's world record breaking jump in feet? Put the answer as a decimal rounded to the nearest hundredth.

$1.65 \, m \times \frac{3.281 \, ft.}{1 \, m} = ... $
# Limit of Wiener processes

Let $W_t$ be a Wiener process. I am trying to evaluate the following limit $$\lim\limits_{n \to \infty}~{\sum\limits_{i=1}^{n}W_{\frac{i-1}{n}+\frac{1}{2n}}\left( W_{\frac{i}{n}} - W_{\frac{i-1}{n}} \right)}$$ I've expanded the parentheses and got $$\lim\limits_{n \to \infty}~ \left[ -W_0W_\frac{1}{2n} - W_\frac{1}{n}\left( W_\frac{3}{2n} - W_\frac{1}{2n} \right) - \cdots - W_\frac{n-1}{n}\left( W_\frac{2n-1}{2n} - W_\frac{2n-3}{2n} \right) + W_\frac{2n-1}{2n}W_1 \right]$$ I think I should apply the central limit theorem here, but I don't understand how.

- The central limit theorem won't help; the central limit theorem is for a different type of convergence. – Artiom Fiodorov May 21 '12 at 20:34

HINT: Let $t_i = \frac{i}{n}$ be a partition of the unit interval. Then the sum can be rewritten as: $$\sum_{i=1}^n W\left( \frac{t_{i-1}+t_i}{2}\right) \left( W(t_i) - W(t_{i-1}) \right)$$ Compare this with the definition of the Stratonovich integral in terms of Riemann sums. $\lim_{n\to\infty} \sum\limits_{i=1}^n W\left(\frac{i-1}{n} + \frac{1}{2n}\right) \left( W\left(\frac{i}{n}\right) - W\left(\frac{i-1}{n}\right)\right) = \int_0^1 W(s) \circ\! \mathrm{d} W(s) = \frac{1}{2} W(1)^2$

My notes say that Stratonovich would be $$\displaystyle \lim_{n\to\infty}\sum_{k=0}^{[2^n t]-1}\big(\frac{W_{(k+1)2^{-n}}+W_{k2^{-n}}}{2}\big)(W_{(k+1)2^{-n}}-W_{k2^{-n}}),$$ what am I missing? – Artiom Fiodorov May 21 '12 at 21:45

There is a sign typo in your notes, it should be $W_{(k+1) 2^{-n}} - W_{k 2^{-n}}$ – Sasha May 21 '12 at 21:48

Thank you, I just copy-pasted the latex. – Artiom Fiodorov May 21 '12 at 21:49
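For completeness, here is one direct way to see the limit (a sketch using $L^2$ convergence, with $t_i = i/n$ and midpoints $s_i = \frac{t_{i-1}+t_i}{2}$): split off the Itô sum,

$$\sum_{i=1}^n W(s_i)\bigl(W(t_i)-W(t_{i-1})\bigr) = \sum_{i=1}^n W(t_{i-1})\bigl(W(t_i)-W(t_{i-1})\bigr) + \sum_{i=1}^n \bigl(W(s_i)-W(t_{i-1})\bigr)\bigl(W(t_i)-W(t_{i-1})\bigr).$$

The first sum converges in $L^2$ to the Itô integral $\int_0^1 W\,\mathrm{d}W = \frac{1}{2}W(1)^2 - \frac{1}{2}$. In the second sum, write $W(t_i)-W(t_{i-1}) = \bigl(W(t_i)-W(s_i)\bigr) + \bigl(W(s_i)-W(t_{i-1})\bigr)$: the cross terms are products of independent increments, so they have mean $0$ and total variance $n\cdot\frac{1}{4n^2}\to 0$, while $\sum_i \bigl(W(s_i)-W(t_{i-1})\bigr)^2$ has mean $\frac{1}{2}$ and vanishing variance, hence tends to $\frac{1}{2}$ in $L^2$. Adding the pieces gives $\frac{1}{2}W(1)^2 - \frac{1}{2} + \frac{1}{2} = \frac{1}{2}W(1)^2$, in agreement with the Stratonovich answer above.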
# Crystal Structure

Atoms, ions or molecules in crystalline substances form a stable and orderly array known as a lattice. Crystalline solids have definite, rigid shapes with clearly defined faces. Small, 3-dimensional, repeating units called unit cells are responsible for the order found in them. The unit cell can be thought of as a box which, when stacked together in 3 dimensions, produces the crystal lattice. There are a few possible arrangements of atoms in a unit cell of a crystal:

• Simple Cubic
• Body-centred Cubic (bcc)
• Face-centred Cubic (fcc)
• Hexagonal Close-packed (hcp)

### Simple Cubic

The simple cubic unit cell is the simplest structure; it consists of 8 corner atoms which form a unit cell.

Useful Information:

• Coordination number is 6. (Coordination number is the number of nearest neighbouring atoms)
• Packing efficiency = volume of spheres within unit cell / volume of cell
\begin{aligned} &= \left[ \left( \frac{1}{8} \times 8 \right) \left(\frac{4}{3} \pi r^{3} \right) \right] \frac{1}{ \left(2r \right)^{3}}\\ &= \frac{\pi }{6} \\ &= 52.4 \% \end{aligned}

### Body-centred Cubic

With the exception of polonium, atoms do not arrange themselves on a simple cubic pattern because they tend to pack together more closely. By examining the simple cubic arrangement, it is easy to see how it might accommodate more atoms. The most obvious empty space is that at the centre of the cube. Many metals such as iron have a bcc structure. Certain simple binary compounds have what is essentially the bcc arrangement, in which the centre of the cube is occupied by one type of atom and the corner sites are occupied by the other.

Useful Information:

• Coordination number is 8
• Packing efficiency = 68%
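For reference, here is where the 68% figure comes from (a standard check, assuming hard spheres that touch along the body diagonal, so that $4r = \sqrt{3}\,a$, with 2 atoms per unit cell: $8 \times \frac{1}{8}$ from the corners plus 1 at the body centre):

\begin{aligned} \text{Packing efficiency} &= \frac{2 \times \frac{4}{3} \pi r^{3}}{a^{3}}, \qquad a = \frac{4r}{\sqrt{3}} \\ &= \frac{\frac{8}{3}\pi r^{3}}{\left( \frac{4r}{\sqrt{3}} \right)^{3}} \\ &= \frac{\sqrt{3}\,\pi}{8} \\ &\approx 68 \% \end{aligned}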
# Exploring the IGM Experts Forum Data: Confidence and Consensus

This is a follow-up exploring the data that I collected in an earlier post. All of the data is from the IGM Experts Forum, which surveys a group of 51 leading economists on a variety of policy questions. A csv of all the data is available here, as are separate datasets of the questions and responses. I'm especially interested in how confidence changes with the scale of a claim, so I use a few different techniques to look at that relationship. First, I look at confidence by vote type and find that economists seem to be more confident when they 'strongly agree' or 'strongly disagree'. Second, I find that confidence actually increases the further a view is from the median, although this relationship is mainly driven by 25 votes out of a 7024 vote sample. An earlier paper [1, PDF] on a portion of this data by Gordon and Dahl found that males and economists who were educated at the University of Chicago and MIT seemed to be more confident. I find less evidence of this in the newer data, although I lack the knowledge of statistics to say whether any of these differences are significant. The main takeaway from this analysis is the amazing amount of consensus among leading economists. The mean and median distance away from the consensus responses are 0.63 and 0.45 points on a five point scale. Roughly 90 percent of the responses are within 1.5 units of the consensus for all questions. These results are consistent with Gordon and Dahl's earlier findings [2, URL].

## Descriptive Statistics

First, let's look at some descriptive statistics for the responses:

In [430]:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
#pd.set_option('max_colwidth', 30)
pd.set_option('max_colwidth', 400)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
# df_responses is assumed to have been loaded from the linked csv in an earlier cell,
# e.g. df_responses = pd.read_csv('responses.csv')  # filename hypothetical
cols = ['name', 'institution', 'qtitle', 'subquestion', 'qtext', 'vote', 'comments', 'median_vote']
# Summary of the string columns
df_responses.describe(include = ['O'])[cols]

Out[430]:
name institution qtitle subquestion qtext vote comments median_vote
count 8402 8402 8402 8402 8402 8402 2921 8402
unique 51 7 134 3 199 9 2816 5
top José Scheinkman Chicago European Debt Question A Federal mandates that government purchases should be “buy American” unless there are exceptional circumstances, such as in the American Recovery and Reinvestment Act of 2009, have a significant positive impact on U.S. manufacturing employment. Agree -see background information here Agree
freq 199 1466 162 5686 51 2902 92 3877

## Confidence Grouped by Vote Type

Next, I use the Pandas dataframe grouping function to look at confidence by vote type. Note that the Disagree and Strongly Disagree votes are much less common. Agree is the most common vote, followed by Uncertain and Strongly Agree. Uncertain has the lowest mean confidence at 4.3, possibly because it's weird to say that you're 'confidently uncertain'. Overall, Strongly Agree and Strongly Disagree have higher mean and median confidences. One possible explanation is that economists are unwilling to step into the Strongly categories unless they feel that they have very good evidence. Another possibility is that this is an issue with the survey -- it's weird to say that you 'unconfidently strongly agree'.

In [431]:
# Initial grouping, just by vote.
r_list = ['Strongly Disagree', 'Disagree', 'Uncertain', 'Agree', 'Strongly Agree']
filtered_vote = df_responses[df_responses['vote'].isin(r_list)]
filtered_vote.boxplot(column='confidence', by='vote', whis=[5.0,95.0])
df_responses[df_responses['vote'].isin(r_list)].groupby('vote').agg(
    {'confidence': {'mean': 'mean', 'std': 'std', 'count': 'count', 'median': 'median'}})

Out[431]:
confidence std count median mean
vote
Agree 2.044886 2902 6 5.560992
Disagree 2.006320 992 6 5.530242
Strongly Agree 1.764728 1347 8 8.152190
Strongly Disagree 1.886012 314 8 7.843949
Uncertain 2.405343 1469 5 4.304969

## Vote Distance from the Median

The above results are interesting, but what I'm more interested in is confidence as a claim becomes more controversial. Below, I construct a measure of vote distance from the median vote, then look at confidence grouped by distance from that median. I use the Pandas apply function below to assign a value ranging from 0 (Strongly Disagree) to 4 (Strongly Agree) to both the vote and median_vote columns. I then take the absolute value of the difference between each vote and the median_vote for each question to calculate the distance. Confidence does increase the further the vote is from the median view, but this relationship is driven by 385 votes two points away, and 25 votes three points away, out of a 7024 vote sample. It's possible that these confident yet controversial votes are from subject matter experts who have more information about a topic than the rest.

In [432]:
# Additional analyses, applying indicator column to quantify votes
# and sum/mean/count the confidence, grouped by vote_distance
def indicator(x):
    if x in r_list:
        return r_list.index(x)
    else:
        return None
df_responses['vote_num'] = df_responses['vote'].apply(indicator)
df_responses['median_num'] = df_responses['median_vote'].apply(indicator)
df_responses['vote_distance'] = abs(df_responses['median_num'] - df_responses['vote_num'])
grouped = df_responses.groupby('vote_distance').agg({'confidence':{'mean': 'mean', 'std': 'std', 'count':'count'}})
# Note that there are only 25 votes at a vote distance of 3, so this could be driven
# by a few experts that know something others don't
df_responses.boxplot(column='confidence', by='vote_distance', whis=[5.0,95.0])
#bootstrap=1000
# Mean and Standard deviation
temp = df_responses.groupby('vote_distance').agg({'confidence': {'mean':'mean'}})
temp.plot.bar(yerr=grouped.loc[:, ('confidence', 'std')], color='#7eb2fc')
grouped

Out[432]:
confidence std count mean
vote_distance
0 2.375436 3539 5.663747
1 2.507784 3075 6.103089
2 2.429312 385 6.202597
3 2.406934 25 7.720000

## Making a Continuous Vote Column

To add some granularity to the above data, I combine the vote number column and the confidence column into one incremental column called incr_votenum. So a vote of Agree (vote_num = 3) at a confidence of 5 leads to an incr_votenum of 3.454 (3 + 5/11). The assumption I am making here is that confidence is a continuous measure between two votes, with an Agree vote of confidence 10 measuring less than a Strongly Agree vote at confidence 0. I'm not sure if this is a safe assumption to make, but I'm going to run with it. I then calculate the median incr_votenum for each question, and the distance away from the median for each vote. A few example results are shown in the table below.
In [434]:
# Construct a continuous column, incorporating confidence into vote_num
# Divide by 11 so 10 confidence of agree > 0 confidence of strongly agree
df_responses['incr_votenum'] = df_responses['vote_num'] + df_responses['confidence'] / 11.0
# Median incr_votenum for each question:
df_responses['median_incrvotenum'] = df_responses.groupby(
    ['qtitle','subquestion'])['incr_votenum'].transform('median')
# Calculate distance from median for each econ vote, less biased by outliers.
df_responses['distance_median'] = abs(df_responses['median_incrvotenum'] -
                                      df_responses['incr_votenum'])
df_responses[df_responses['qtitle'] == 'Brexit II'][
    ['qtitle','subquestion','vote_num','confidence',
     'incr_votenum','median_incrvotenum','distance_median']]

Out[434]:
qtitle subquestion vote_num confidence incr_votenum median_incrvotenum distance_median
0 Brexit II Question A 2 4 2.363636 3.363636 1.000000
1 Brexit II Question B 3 5 3.454545 3.272727 0.181818
198 Brexit II Question A 3 5 3.454545 3.363636 0.090909
199 Brexit II Question B 3 5 3.454545 3.272727 0.181818
397 Brexit II Question A 3 3 3.272727 3.363636 0.090909

## Visualizing the Spread of the Votes

The following boxplot shows the distance from the median for all votes, using the new incremental vote measure. It's pretty amazing that the median and mean distance away from the consensus are only 0.454 and 0.628 respectively. That's an impressive amount of consensus. The whiskers on the boxplot cover the 90 percent of the data that fall within roughly 1.6 points of the consensus vote on a scale from 0 to 5. The histogram also shows a surprising amount of consensus, although it also shows a second peak around a difference of 1.0, which is the difference between two bordering answers, e.g. the distance from Uncertain to Agree.

In [457]:
# Boxplot, showing all vote distances from median
df_responses.boxplot(column='distance_median', whis=[5.0,95.0], return_type='dict')
df_responses.hist(column='distance_median', bins=40)
print 'Median: ' + str(df_responses['distance_median'].median())
print 'Mean: ' + str(df_responses['distance_median'].mean())
print 'Stdev: ' + str(df_responses['distance_median'].std())

Median: 0.454545454545
Mean: 0.628856906192
Stdev: 0.556692286667

## Which answers are furthest from the median response?

In [451]:
# Answers that are furthest from median:
df_responses[df_responses['distance_median'] >= 2.75][['name', 'qtitle','subquestion', 'qtext',
    'vote', 'confidence', 'median_vote', 'distance_median']].sort_values(
    by='distance_median', ascending=False)

Out[451]:
name qtitle subquestion qtext vote confidence median_vote distance_median
6497 Alberto Alesina Bureau of Labor Statistics Question A By providing important measures of US economic performance — including employment, consumer prices, wages, job openings, time allocation in households, and productivity — the Bureau of Labor Statistics creates social benefits that exceed its annual cost of roughly $610 million. Disagree 2 Strongly Agree 3.454545
1914 Angus Deaton Oil Price Speculation Question A Large movements in monthly oil prices, either up or down, are driven primarily by speculators, as opposed to changes in the current (and planned) supply or demand for oil. Strongly Agree 10 Disagree 3.363636
6498 Alberto Alesina Bureau of Labor Statistics Question B Cuts in BLS spending would likely involve net social costs because potential declines in the quality of data, and thus their usefulness to researchers and decision makers, would exceed any budget savings.
Disagree 2 Strongly Agree 3.318182 2062 Angus Deaton Price Gouging Question A Connecticut should pass its Senate Bill 60,which statesthat during a “severe weather event emergency, no person within the chain of distribution of consumer goods and services shall sell or offer to sell consumer goods or services for a price that is unconscionably excessive.” Strongly Agree 9 Disagree 3.272727 4792 Caroline Hoxby Economic Stimulus Question A Because of the American Recovery and Reinvestment Act of 2009, the U.S. unemployment rate was lower at the end of 2010 than it would have been without the stimulus bill. Strongly Disagree 5 Agree 3.227273 7478 Liran Einav Diversification Question A In general, absent any inside information, an equity investor can expect to do better by choosing a well-diversified, low-cost index fund than by picking a few stocks. Disagree 7 Strongly Agree 3.090909 8371 Hilary Hoynes Bah, Humbug Question A Giving specific presents as holiday gifts is inefficient, because recipients could satisfy their preferences much better with cash. Strongly Agree 10 Disagree 3.090909 5287 Edward Lazear Carbon Tax Question A A tax on the carbon content of fuels would be a less expensive way to reduce carbon-dioxide emissions than would a collection of policies such as “corporate average fuel economy” requirements for automobiles. Disagree 5 Strongly Agree 3.045455 2064 Angus Deaton Ticket Resale Question A Laws that limit the resale of tickets for entertainment and sports events make potential audience members for those events worse off on average. Strongly Disagree 7 Agree 2.909091 6665 Alberto Alesina Economic Stimulus Question B Taking into account all of the ARRA’s economic consequences — including the economic costs of raising taxes to pay for the spending, its effects on future spending, and any other likely future effects — the benefits of the stimulus will end up exceeding its costs. Strongly Disagree 4 Uncertain 2.863636 1982 Angus Deaton Surge Pricing Question A Using surge pricing to allocate transportation services — such asUberdoes with its cars — raises consumer welfare through various potential channels, such as increasing the supply of those services, allocating them to people who desire them the most, and reducing search and queuing costs. Strongly Disagree 10 Agree 2.818182 6570 Alberto Alesina Surge Pricing Question A Using surge pricing to allocate transportation services — such asUberdoes with its cars — raises consumer welfare through various potential channels, such as increasing the supply of those services, allocating them to people who desire them the most, and reducing search and queuing costs. Strongly Disagree 10 Agree 2.818182 4793 Caroline Hoxby Economic Stimulus Question B Taking into account all of the ARRA’s economic consequences — including the economic costs of raising taxes to pay for the spending, its effects on future spending, and any other likely future effects — the benefits of the stimulus will end up exceeding its costs. Strongly Disagree 5 Uncertain 2.772727 ## Which questions are most controversial?¶ As a measure of how controversial a question is, I take the standard deviation of the incremental vote number (incr_votenum). I include both a table of the questions, and a boxplot below. In [442]: # Which questions are most controversial? 
# Calculating standard deviation, grouped by each question:
grouped_incrvotenum = df_responses.groupby(['qtitle', 'subquestion','qtext'], as_index = False) \
    .agg({'incr_votenum': {'std': 'std'}})

# Visualize the spread of responses using a boxplot
qs = grouped_incrvotenum[grouped_incrvotenum.loc[:, ('incr_votenum', 'std')] > 1.05][['qtitle','subquestion']]
qs_most = pd.merge(qs, df_responses, on=['qtitle', 'subquestion'], how='inner')
qs_most.boxplot(column='incr_votenum', by=['qtitle','subquestion'], whis=[5.0,95.0], rot=90)

# Show table of questions with a stdev of incr_votenum > 1.05
grouped_incrvotenum[grouped_incrvotenum.loc[:, ('incr_votenum', 'std')] > 1.05].sort_values(
    by=('incr_votenum','std'), ascending=False)

Out[442]:

| | qtitle | subquestion | qtext | incr_votenum (std) |
|---|---|---|---|---|
| 134 | Poverty and Measurement | Question A | The association between health and economic growth in poor countries primarily involves faster growth generating better health, rather than the other way around. | 1.149829 |
| 128 | Oil Price Speculation | Question A | Large movements in monthly oil prices, either up or down, are driven primarily by speculators, as opposed to changes in the current (and planned) supply or demand for oil. | 1.135301 |
| 135 | Poverty and Measurement | Question B | The decline in the fraction of people with incomes under, say, $1 per day is a good measure of whether well-being is improving among low-income populations. | 1.123439 |
| 162 | Student Credit Risk | Question A | Conventional economic reasoning suggests that it would be a good policy to enact the recent Senate bill that would let undergraduate students borrow through the government Stafford program at interest rates equivalent to the primary credit rates offered to banks through the Federal Reserve's discount window. | 1.117773 |
| 132 | Patents | Question B | Within the software industry, the US patent system makes consumers better off than they would be in the absence of patents. | 1.114383 |
| 50 | Economic Stimulus | Question B | Taking into account all of the ARRA’s economic consequences — including the economic costs of raising taxes to pay for the spending, its effects on future spending, and any other likely future effects — the benefits of the stimulus will end up exceeding its costs. | 1.111368 |
| 165 | Supplying Kidneys | Question A | A market that allows payment for human kidneys should be established on a trial basis to help extend the lives of patients with kidney disease. | 1.073536 |
| 33 | Christmas Spending | Question A | An annual December spending surge on parties, gift-giving and personal travel delivers net social benefits. | 1.064723 |
| 76 | Fracking (revisited) | Question A | New technology for fracking natural gas, by lowering energy costs in the United States, will make US industrial firms more cost competitive and thus significantly stimulate the growth of US merchandise exports. (The experts panel previously voted on this question on May 23, 2012. Those earlier results can be found here.) | 1.061360 |
| 48 | Early Education | Question A | Using government funds to guarantee preschool education for four-year olds would yield a much lower social return than the ones achieved by the most highly touted targeted preschool initiatives. | 1.060312 |
| 75 | Fracking | Question A | New technology for fracking natural gas, by lowering energy costs in the United States, will make US industrial firms more cost competitive and thus significantly stimulate the growth of US merchandise exports. | 1.056384 |
| 175 | Ten-year Budgets | Question A | Because federal spending on Medicare and Medicaid will continue to grow under current policy beyond the 10-year window of most political budget debates, it is easy for a politician to devise a budget plan that would reduce federal deficits over the next decade without really making the U.S. fiscally sustainable. | 1.056000 |

## Which questions are least controversial?

In [444]:

# Which questions are least controversial?
# Select all data for questions with qtitle and subquestion by merging
qs_least = grouped_incrvotenum[grouped_incrvotenum.loc[:, ('incr_votenum', 'std')] < 0.6][['qtitle','subquestion']]
qs_least_df = pd.merge(qs_least, df_responses, on=['qtitle', 'subquestion'], how='inner')

# Visualize boxplot and table
qs_least_df.boxplot(column='incr_votenum', by=['qtitle','subquestion'], rot=90, whis=[5.0,95.0])
grouped_incrvotenum[grouped_incrvotenum.loc[:, ('incr_votenum', 'std')] < 0.6].sort_values(
    by=('incr_votenum','std'), ascending=True)

Out[444]:

| | qtitle | subquestion | qtext | incr_votenum (std) |
|---|---|---|---|---|
| 197 | Vaccines | Question A | Declining to be vaccinated against contagious diseases such as measles imposes costs on other people, which is a negative externality. | 0.356966 |
| 84 | Gold Standard | Question A | If the US replaced its discretionary monetary policy regime with a gold standard, defining a "dollar" as a specific number of ounces of gold, the price-stability and employment outcomes would be better for the average American. | 0.417698 |
| 91 | Healthcare | Question A | There are no consequential distortions created by the tax preference that favors obtaining health insurance through employers. | 0.517269 |
| 85 | Gold Standard | Question B | There are many factors besides US inflation risk that influence the current dollar price of gold. | 0.525888 |
| 146 | Rent Control | Question A | Local ordinances that limit rent increases for some rental housing units, such as in New York and San Francisco, have had a positive impact over the past three decades on the amount and quality of broadly affordable rental housing in cities that have used them. | 0.551146 |
| 37 | Congress and Monetary Policy | Question A | Legislation introduced in Congress would require the Federal Reserve to "submit to the appropriate congressional committees…a Directive Policy Rule", which shall "describe the strategy or rule of the Federal Open Market Committee for the systematic quantitative adjustment of the Policy Instrument Target to respond to a change in the Intermediate Policy Inputs." Should the Fed deviate from the ru... | 0.554602 |
| 108 | Laffer Curve | Question B | A cut in federal income tax rates in the US right now would raise taxable income enough so that the annual total tax revenue would be higher within five years than without the tax cut. | 0.556231 |
| 45 | Dynamic Scoring | Question A | Changing federal income tax rates, or the income bases to which those rates apply, can affect federal tax revenues partly by altering people’s behavior, and thus their actual or reported incomes. | 0.560967 |
| 120 | Monetary Policy | Question A | All else equal, the Fed's new plan to increase the maturity of its Treasury holdings will boost expected real GDP growth for calendar year 2012 by at least one percentage point. | 0.573926 |
| 174 | Taxing Capital and Labor | Question C | Although they do not always agree about the precise likely effects of different tax policies, another reason why economists often give disparate advice on tax policy is because they hold differing views about choices between raising average prosperity and redistributing income. | 0.581531 |
| 171 | Taxi Competition | Question A | Letting car services such as Uber or Lyft compete with taxi firms on equal footing regarding genuine safety and insurance requirements, but without restrictions on prices or routes, raises consumer welfare. | 0.584899 |
| 124 | Nash Equilibrium | Question A | Behavior in many complex and seemingly intractable strategic settings can be understood more clearly by working out what each party in the game will choose to do if they realize that the other parties will be solving the same problem. This insight has helped us understand behavior as diverse as military conflicts, price setting by competing firms and penalty kicking in soccer. | 0.585807 |
| 196 | Universal Basic Income | Question A | Granting every American citizen over 21-years old a universal basic income of $13,000 a year — financed by eliminating all transfer programs (including Social Security, Medicare, Medicaid, housing subsidies, household welfare payments, and farm and corporate subsidies) — would be a better policy than the status quo. | 0.586828 |

## Which economists give more controversial responses?

In [445]:

# Group by economist, calculate mean distance from median
grouped_econstd = df_responses.groupby(['name', 'institution']).agg({'distance_median': {'mean': 'mean'}})
grouped_econstd[grouped_econstd.loc[:, ('distance_median', 'mean')] > 0.75].sort_values(
    by=('distance_median','mean'), ascending=False)

Out[445]:

| name | institution | distance_median (mean) |
|---|---|---|
| Alberto Alesina | Harvard | 0.904429 |
| Angus Deaton | Princeton | 0.891251 |
| Caroline Hoxby | Stanford | 0.889181 |
| Austan Goolsbee | Chicago | 0.791797 |
| Luigi Zingales | Chicago | 0.770085 |

## Which economists give the least controversial responses?

In [446]:

# Which economists give the least controversial responses?
grouped_econstd[grouped_econstd.loc[:, ('distance_median', 'mean')] < 0.50].sort_values(
    by=('distance_median','mean'), ascending=True)

Out[446]:

| name | institution | distance_median (mean) |
|---|---|---|
| James Stock | Harvard | 0.431818 |
| Amy Finkelstein | MIT | 0.462121 |
| Eric Maskin | Harvard | 0.471361 |
| Raj Chetty | Harvard | 0.474530 |

## Do any institutions give controversial responses?

Here's what Gordon and Dahl had to say about differences between institutions using the 2012 question sample [1, PDF]:

Respondents are dramatically more confident when the academic literature on the topic is large. Not surprisingly, experts on a subject are much more confident about their answers. The middle-aged cohort (the one closest to the current literature) is the most confident, while the oldest (and wisest) cohort is the least confident. Men and those who have worked in Washington do show some tendency to be more confident. Respondents who got their degrees at Chicago are far more confident than the other respondents, with almost as strong an effect for respondents with PhDs from MIT and to a lesser extent from Harvard. Respondents now employed at Yale and to a lesser degree Princeton, MIT, and Stanford seem to be more confident.

It doesn't seem like any institution sticks out based on this newer data, but maybe with more advanced statistical techniques it might be possible to find something significant.

In [447]:

# Group by institution, calculate mean and stdev of the distance from median response
grouped_inststd = df_responses.groupby(['institution']).agg({'distance_median': {'mean': 'mean', 'std':'std'}}).sort_values(
    by=('distance_median','mean'), ascending=False)
df_responses.boxplot(column='distance_median', by='institution', whis=[5.0,95.0])
grouped_inststd

Out[447]:

| institution | distance_median (std) | distance_median (mean) |
|---|---|---|
| Stanford | 0.572456 | 0.673072 |
| Chicago | 0.561737 | 0.666043 |
| Princeton | 0.586784 | 0.662991 |
| MIT | 0.554587 | 0.621699 |
| Harvard | 0.558052 | 0.596441 |
| Yale | 0.537219 | 0.593209 |
| Berkeley | 0.526592 | 0.584685 |

## Are any institutions more confident than others?

Again, although there are differences in the mean distance from the consensus view, all the standard deviations overlap, so I don't think there is anything significant here. Note that Gordon and Dahl also looked at where economists were educated, rather than just where they were employed, and found differences in confidence based on that metric.

In [448]:

# Are any institutions more confident than others?
df_responses.boxplot(column='confidence', by='institution', whis=[5.0,95.0])
grouped_conf = df_responses.groupby(['institution']).agg(
    {'confidence': {'mean': 'mean', 'median':'median', 'std':'std'}}).sort_values(by=('confidence','mean'), ascending=False)
grouped_conf

Out[448]:

| institution | confidence (std) | confidence (median) | confidence (mean) |
|---|---|---|---|
| Princeton | 2.168625 | 7 | 6.363905 |
| Berkeley | 2.360717 | 6 | 6.019286 |
| Yale | 2.457948 | 6 | 6.009033 |
| MIT | 2.094831 | 6 | 5.909300 |
| Stanford | 2.555858 | 6 | 5.859665 |
| Harvard | 2.330881 | 6 | 5.725877 |
| Chicago | 2.789926 | 6 | 5.580745 |

## Are male economists more confident?

Gordon and Dahl (2012) noted more confidence among male economists:

The only statistically significant deviation from homogeneous views, therefore, is less caution among men in expressing an opinion, perhaps due to a greater “expert bias.” Personality differences rather than different readings of the existing evidence would then explain these gender effects.

This relationship seems to be less obvious with this expanded dataset. I'm not re-creating their analysis, though, so the difference might still be there if I were to use the controls that they do.

In [449]:

women = ['Amy Finkelstein', 'Hilary Hoynes', 'Pinelopi Goldberg', 'Judith Chevalier', 'Caroline Hoxby',
         'Nancy Stokey', 'Marianne Bertrand', 'Cecilia Rouse', 'Janet Currie', 'Claudia Goldin', 'Katherine Baicker']

# Set true/false column based on sex
df_responses['female'] = df_responses['name'].isin(women)

# Boxplot grouped by sex
df_responses.boxplot(column='confidence', by='female', whis=[5.0,95.0])

# Table, stats grouped by sex
df_responses.groupby(['female']).agg({'confidence': {'mean': 'mean', 'std':'std', 'median':'median'}})

Out[449]:

| female | confidence (std) | confidence (median) | confidence (mean) |
|---|---|---|---|
| False | 2.398254 | 6 | 5.929568 |
| True | 2.661413 | 6 | 5.729814 |
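One note on the code above: the nested-dictionary form of `.agg` used throughout (e.g. `{'confidence': {'mean': 'mean', 'std': 'std'}}`) was deprecated in pandas 0.20 and removed in later releases, so these cells would fail on a current pandas. An equivalent on a modern version would use named aggregation; a sketch, reusing the same column names as above:

```python
# Modern-pandas (>= 0.25) equivalent of the nested-dict aggregations above
grouped_conf = (
    df_responses
    .groupby('institution')
    .agg(mean=('confidence', 'mean'),
         median=('confidence', 'median'),
         std=('confidence', 'std'))
    .sort_values(by='mean', ascending=False)
)

grouped_incrvotenum = (
    df_responses
    .groupby(['qtitle', 'subquestion', 'qtext'], as_index=False)
    .agg(std=('incr_votenum', 'std'))
)
# Flat column names make the later filtering/sorting simpler:
most_controversial = grouped_incrvotenum[grouped_incrvotenum['std'] > 1.05].sort_values('std', ascending=False)
```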
R not reading numbers from file

nkinney06, 5.7 years ago:

I am trying to read the following file into R variables:

5060803636482931868 83.3366666 0.0 0.0 0.0
15695800775901642752 0.0 81.0061726043 38.1837661841 0.0
12047011437325700351 0.0 38.1837661841 22.2036033112 7.07106781187
2610937148294873212 0.0 0.0 7.07106781187 30.1330383466

The first column are unique keys and the rest is a 4x4 matrix; I try reading with the following:

fileContents <- as.matrix(read.table('./distanceMatrix.txt', header=FALSE, sep = "\t", strip.white=TRUE))
nameKey <- fileContents[,1]
distMatrix <- fileContents[,-1]

I get this result:

> nameKey
[1]  5060803636482931712 15695800775901642752 12047011437325701120  2610937148294873088
> distMatrix
           V2       V3        V4        V5
[1,] 83.33667  0.00000  0.000000  0.000000
[2,]  0.00000 81.00617 38.183766  0.000000
[3,]  0.00000 38.18377 22.203603  7.071068
[4,]  0.00000  0.00000  7.071068 30.133038

Notice how the keys don't match the file. I need to be sure everything gets read in properly and make sure I can write it out properly. What am I doing wrong?

Comment: why not:

fileContents <- as.matrix( read.table( './distanceMatrix.txt', header=FALSE, sep = "\t", strip.white=TRUE, row.names = 1 ) )

Comment: How is this a bioinformatics question?

Answer (h.mon, 5.7 years ago):

A couple of suggestions:

1) read everything as character and later convert to number

fileContents <- as.matrix(read.table('distanceMatrix.txt', header=FALSE, row.names = 1, sep = "\t",
                                     strip.white=TRUE, colClasses = "character"))
nameKey <- rownames(fileContents)
distMatrix <- as.numeric(fileContents)
dim(distMatrix) <- dim(fileContents)

> I'll try, but in reality my matrix will be rather large and the number of NAs would have to be dynamically assigned

I don't know how the file is being created, but maybe:

2) you can prepend a character to the first element of every row before reading the file into R - this can be accomplished in place with sed, without creating a copy of the file.

3) split the file into one file with row names, and another with the matrix numbers.

Answer (5.7 years ago):

To expand a bit on what h.mon correctly wrote, your issue is that you're not treating row names as row names, but rather converting them to numbers. Since they're HUGE numbers, they're presumably getting stored as floats or doubles, which means you're not going to get the exact value back. Of course, you don't need that as a value, just a name, so treat them accordingly (i.e., do what h.mon showed).

Answer (nkinney06, 5.7 years ago):

That makes sense, but I appear to have the same problem when I run:

similarityMatrix <- as.matrix( read.table( './testMatrix.txt', header=FALSE, sep = "\t", strip.white=TRUE, row.names = 1 ) )

I get:

> similarityMatrix
                           V2       V3        V4        V5
5060803636482931712  83.33667  0.00000  0.000000  0.000000
15695800775901642752  0.00000 81.00617 38.183766  0.000000
12047011437325701120  0.00000 38.18377 22.203603  7.071068
2610937148294873088   0.00000  0.00000  7.071068 30.133038

and the matrix (in particular the row names) should be

5060803636482931868 83.3366666 0.0 0.0 0.0
15695800775901642752 0.0 81.0061726043 38.1837661841 0.0
12047011437325700351 0.0 38.1837661841 22.2036033112 7.07106781187
2610937148294873212 0.0 0.0 7.0710678118 30.1330383466

Is it possible to read the file twice, first as alphanumeric for column one only?

Comment: add colClasses=c("factor", NA, NA, NA, NA) to the options.
Comment: I'll try, but in reality my matrix will be rather large and the number of NAs would have to be dynamically assigned.

Comment: It's likely that the readr package will help; it's better at not changing column names by default.

Comment: I guess you are running the R code only after creating the file, so you can probably get the number of columns beforehand, so you could do:

colClasses = c("character", rep("numeric", 4))

or

colClasses = c("factor", rep(NA, 4))

Or you could use scan or readLines to read just one line, get the number of columns, and then use that to set rep(NA, columns).
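The "wrong" keys themselves show exactly what is going on: they are the nearest IEEE-754 double-precision values to the original 19-digit integers, which is what you get when the first column is parsed as numeric. A quick check (shown here in Python, but the floating-point arithmetic is the same in R) reproduces the values printed above:

```python
# Round-tripping the original keys through a 64-bit float gives exactly
# the corrupted values shown in the R output above.
keys = [5060803636482931868, 15695800775901642752,
        12047011437325700351, 2610937148294873212]
for k in keys:
    print(k, "->", int(float(k)))
# 5060803636482931868  -> 5060803636482931712
# 15695800775901642752 -> 15695800775901642752   (this one happens to be exactly representable)
# 12047011437325700351 -> 12047011437325701120
# 2610937148294873212  -> 2610937148294873088
```

Reading the first column as character (or directly as row names), as suggested in the thread, avoids the numeric conversion entirely, which is why the colClasses and row.names fixes work.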
# The polygon width parallel to the x axis as a function of the y ordinate?

Consider a polygon with n vertices as input. I need to calculate an integral of the form $\int_A p(y)\, dA$, where $p(y)$ is a piecewise polynomial function of $y$. Maybe if I could find an expression for the width $b(y)$ of the polygon at height $y$, then the integral could be calculated as: $$\int_{0}^{y_\max} p(y)\, b(y)\, dy$$ The question is: is there a straightforward method to find $b(y)$ given the coordinates of the polygon vertices? Or is there a simpler way to calculate the integral above?

• Welcome Hafid! This looks like an application of Cavalieri's principle. You can calculate the (piecewise linear) function $b$ from the polygon's vertices after sorting them with respect to the $y$ coordinate. This should be the easiest way to evaluate the integral. (By the way, since this is a very mathematical question, it might be better suited for math stackexchange.) – user9485 Jul 6 at 19:49
• @Chris the Cavalieri's principle! I am not aware of this. I think I have to do some research with google. Thank you very much. – Hafid Boukhoulda Jul 6 at 19:53
• See this: computergraphics.stackexchange.com/questions/9943/… It's regarding degree 2 polynomials, but the procedure is similar minus the quadrature rule which won't work anymore. – lightxbulb Jul 6 at 21:34
• Yes the approach I linked is more general. If you want something specific to your problem then you have to split your integration into slabs divided by horizontal lines passing through the vertices. Let at some $y$ the left edge be between $l_1, l_2$ and the right between $r_1, r_2$. Any point on $l_1l_2$ is given by $p(\lambda) = (1-\lambda)l_1 + \lambda l_2$. You need $x$ expressed in terms of $y$ though, so $\lambda = \frac{y-l_{1,y}}{l_{2,y}- l_{1,y}}$, and you can substitute that in the expression for $x$ to get $x_l(y)$. Do the same for the right edge, then $b(y) = \|x_l(y) - x_r(y)\|$. – lightxbulb Jul 7 at 16:39
• Assuming that your polygon is convex, sort all of the vertices by $y$. Draw a line between the vertex with minimum $y$ and maximum $y$. All vertices to the left of this line will be considered on the left and all vertices to the right will be considered on the right. Start from the min $y$ vertex and walk along the left and right edges to get respectively the left and right edges. After you're done with 1 slab, move 1 vertex up (whichever's next in the ordering regardless if it is left or right). – lightxbulb Jul 7 at 17:27
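Following the slab construction described in the comments, here is a minimal sketch in Python. It assumes the polygon is convex (or at least that every horizontal line meets its boundary in a single interval), so that $b(y)$ is simply the max minus the min of the crossing x-coordinates; the function names are illustrative, and within each slab the integral could be done exactly for polynomial $p$ since $b$ is affine there, though a trapezoid rule is used here for simplicity:

```python
import numpy as np

def width_at_y(verts, y):
    """b(y): horizontal extent of the polygon at height y.
    verts is an (n, 2) array of vertices in order (convexity assumed)."""
    xs = []
    n = len(verts)
    for i in range(n):
        (x1, y1), (x2, y2) = verts[i], verts[(i + 1) % n]
        if y1 != y2 and (y1 - y) * (y2 - y) <= 0:   # non-horizontal edge crossing the line
            t = (y - y1) / (y2 - y1)                # the lambda from the comments above
            xs.append(x1 + t * (x2 - x1))
    return max(xs) - min(xs) if len(xs) >= 2 else 0.0

def integral_p_dA(verts, p, samples_per_slab=32):
    """Approximate ∫_A p(y) dA = ∫ p(y) b(y) dy, slab by slab between vertex heights."""
    verts = np.asarray(verts, dtype=float)
    ys = np.unique(verts[:, 1])
    total = 0.0
    for y0, y1 in zip(ys[:-1], ys[1:]):
        t = np.linspace(y0, y1, samples_per_slab)
        vals = np.array([p(y) * width_at_y(verts, y) for y in t])
        total += float(np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(t)))  # trapezoid rule
    return total

# Unit square with p(y) = y: the exact value of ∫_A y dA is 1/2.
print(integral_p_dA([(0, 0), (1, 0), (1, 1), (0, 1)], lambda y: y))   # ~0.5
```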
# num_arrays_in_memory

ivy.num_arrays_in_memory()

Returns the number of arrays which are currently alive.
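A hypothetical usage sketch (mine, not from the Ivy documentation above; the backend-selection call is an assumption, since that part of the API has changed across Ivy releases):

```python
import ivy

ivy.set_backend("numpy")               # assumption: backend-selection call; the name varies by Ivy version

n_before = ivy.num_arrays_in_memory()  # arrays currently alive
x = ivy.array([1.0, 2.0, 3.0])
n_after = ivy.num_arrays_in_memory()   # expected to be at least n_before + 1

print(n_before, n_after)
del x                                  # once garbage-collected, x no longer counts as "alive"
```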
# Answer on Algebra Question for tracey

Question #17483

An investor sells a stock short for $36 a share. A year later, the investor covers the position at $30 a share. If the margin requirement is 60 percent, what is the percentage return earned on the investment? Redo the calculations, assuming the price of the stock is $42 when the investor closes the position.
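The worked answer is not shown on the page; under the usual textbook convention (the investor's equity is the required margin deposit of 60% of the short-sale proceeds, and interest and dividends are ignored), the computation would be:

$$\text{margin deposit} = 0.60 \times \$36 = \$21.60 \text{ per share}$$

$$\text{cover at } \$30:\quad \text{gain} = 36 - 30 = \$6, \qquad \text{return} = \frac{6}{21.60} \approx 27.8\%$$

$$\text{cover at } \$42:\quad \text{gain} = 36 - 42 = -\$6, \qquad \text{return} = \frac{-6}{21.60} \approx -27.8\%$$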
# Simple question on disproving a group isomorphism

1. Jul 27, 2008

### jeffreydk

I am trying to prove that the additive groups $\mathbb{Z}$ and $\mathbb{Q}$ are not isomorphic. I know it is not enough to exhibit maps such as $f:\mathbb{Q}\rightarrow \mathbb{Z}$ and point out that some value $f(x=\frac{a}{b})$ does not behave the way an integer should, since its argument is obviously coming from the rationals. I just don't know how to rigorously prove this, because the fact that one particular map fails to be an isomorphism doesn't mean that the two groups are not isomorphic. Thanks for any help, it is much appreciated.

2. Jul 27, 2008

### morphism

Suppose we managed to find an isomorphism $f:\mathbb{Q}\rightarrow \mathbb{Z}$. Then f(1/2) would be an integer; what happens when you look at f(1/2) + f(1/2)?

3. Jul 27, 2008

### jeffreydk

It would still be in the integers, right? That's why I'm confused, because I can't seem to show that that property disallows the isomorphism.

4. Jul 27, 2008

### n_bourbaki

Morphism is sort of on the right lines. The property that you're looking for is called divisibility. Q has it and Z doesn't. The same idea also shows that Q\{0} under multiplication is not isomorphic to R\{0} under multiplication. (One can provide an elementary counter argument based purely on set theory, of course.)

Alternatively, Z is cyclic. Can you prove that Q isn't? Actually it isn't that alternative, really.

Try thinking about a map g from Z to Q. Any group hom is determined completely by where it sends 1 in Z. Can g(1)/2 be in the image?

Last edited: Jul 27, 2008

5. Jul 27, 2008

### jeffreydk

Oh ok, I didn't think of showing Q isn't cyclic; that's probably the simplest way to do it now that I think of it. Thanks a bunch.

6. Jul 27, 2008

### morphism

Since the "cat is out of the bag," I might as well point out that I wanted jeffreydk to notice that f(1)=nf(1/n) for all n (and why is this bad?).

7. Jul 27, 2008

### n_bourbaki

So your proof is based on the idea that if there is an isomorphism f:Q-->Z, then it must follow that f(1) is divisible by every integer? Hmm, not thought of that one before. It struck me as more obvious that Q isn't cyclic, i.e. look at maps from Z to Q. But it's always good to be able to do one thing in two ways, rather than two things in one way.

8. Jul 27, 2008

### jeffreydk

I definitely agree, it's good to be able to prove it in a number of ways. Thanks you guys for both suggestions.
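For completeness, the divisibility argument morphism is hinting at can be written out in a few lines (this write-up is mine, not taken from the thread):

$$\text{Suppose } f:(\mathbb{Q},+)\to(\mathbb{Z},+) \text{ is an isomorphism. For every integer } n\ge 1,$$

$$f(1) = f\!\left(n\cdot\tfrac{1}{n}\right) = n\,f\!\left(\tfrac{1}{n}\right), \qquad\text{so } n \mid f(1) \text{ for every } n\ge 1.$$

$$\text{The only integer divisible by every } n \text{ is } 0,\text{ so } f(1)=0=f(0), \text{ contradicting injectivity. Hence } (\mathbb{Q},+)\not\cong(\mathbb{Z},+).$$

(The cyclic-group route works too: $\mathbb{Z}=\langle 1\rangle$, while no single rational generates all of $\mathbb{Q}$, since for $a/b\neq 0$ one has $a/(2b)\notin\langle a/b\rangle$.)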
## The Annals of Statistics

### Parameter priors for directed acyclic graphical models and the characterization of several probability distributions

#### Abstract

We develop simple methods for constructing parameter priors for model choice among directed acyclic graphical (DAG) models. In particular, we introduce several assumptions that permit the construction of parameter priors for a large number of DAG models from a small set of assessments. We then present a method for directly computing the marginal likelihood of every DAG model given a random sample with no missing observations. We apply this methodology to Gaussian DAG models which consist of a recursive set of linear regression models. We show that the only parameter prior for complete Gaussian DAG models that satisfies our assumptions is the normal-Wishart distribution. Our analysis is based on the following new characterization of the Wishart distribution: let $W$ be an $n \times n$, $n \ge 3$, positive definite symmetric matrix of random variables and $f(W)$ be a pdf of $W$. Then, $f(W)$ is a Wishart distribution if and only if $W_{11} - W_{12} W_{22}^{-1} W'_{12}$ is independent of $\{W_{12},W_{22}\}$ for every block partitioning $W_{11},W_{12}, W'_{12}, W_{22}$ of $W$. Similar characterizations of the normal and normal-Wishart distributions are provided as well.

#### Article information

Source: Ann. Statist. Volume 30, Number 5 (2002), 1412-1440.

Dates: First available in Project Euclid: 28 October 2002

http://projecteuclid.org/euclid.aos/1035844981

Digital Object Identifier: doi:10.1214/aos/1035844981

Mathematical Reviews number (MathSciNet): MR1936324

Zentralblatt MATH identifier: 1016.62064

#### Citation

Geiger, Dan; Heckerman, David. Parameter priors for directed acyclic graphical models and the characterization of several probability distributions. Ann. Statist. 30 (2002), no. 5, 1412--1440. doi:10.1214/aos/1035844981. http://projecteuclid.org/euclid.aos/1035844981.
# Mean Motion (revs/day)

The Mean Motion (revs/day) calculator computes the mean motion based on the number of revolutions (e.g. orbits) in a period of time.

INSTRUCTIONS: Choose units and enter the following:

• (R) Amount of rotation
• (P) Period of time associated with the amount of rotation.

Mean Motion (rpd): The calculator returns the revolutions per day. However, this angular velocity can be automatically converted to compatible units via the pull-down menu.

#### The Math / Science

This vCalc equation computes the angular speed required for a body to complete one orbit, which is called the mean motion. Mean motion is expressed as a number of revolutions or orbits per unit time. A unit revolution may be a revolution, 360 degrees, or 2π radians. Period can be expressed in any units of time but more typically is in minutes, hours or days.

Note: the two-line element set used to represent the orbital trajectory of an Earth-orbiting object expresses mean motion in units of revs/day.

## Orbit Classifications by Mean Motion or Period

In many Earth orbital analysis systems, classifications of orbits are defined by Near Earth Orbits (NEOs -- sometimes called Low Earth Orbits or LEOs), Medium Earth Orbits (MEOs), Geosynchronous orbits (GEOs), or Deep Space Orbits. These classes of orbits have arbitrary boundaries defined by their orbital period or mean motion. Often, for instance, a NEO is defined as any orbit with a period less than 225 minutes (a mean motion of 6.4 revs/day).

## Also See

Astrodynamics Collection
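Returning to the formula itself, the computation is a single ratio; a small sketch (the 225-minute boundary below is just the arbitrary example threshold mentioned above):

```python
def mean_motion_revs_per_day(revolutions, period_days):
    """Mean motion: amount of rotation divided by the elapsed time, here in revs/day."""
    return revolutions / period_days

# One full orbit every 90 minutes (a typical low Earth orbit):
n = mean_motion_revs_per_day(1.0, 90.0 / (24 * 60))
print(n)            # 16.0 revs/day

# Classify with the 225-minute convention quoted above (225 min -> 6.4 revs/day):
print(n > 6.4)      # True: this orbit would count as a NEO/LEO under that convention
```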
# Answer: Which is NOT one of the four main instrument groups in an orchestra?

The Question: Which is NOT one of the four main instrument groups in an orchestra?

Woodwind, Strings, Percussion, Keyboard

The Answer: Keyboard. The four main instrument groups of an orchestra are strings, woodwind, brass and percussion.
# Get help with algebra 2 equations and inequalities

Recent questions in Equations
# MathJax Syntax List

## List of MathJax constructs

### Trigonometric and log-type operators

$\arccos$ $\cos$ $\csc$ $\exp$ $\ker$ $\limsup$ $\min$ $\sinh$ $\arcsin$ $\cosh$ $\deg$ $\gcd$ $\lg$ $\ln$ $\Pr$ $\sup$ $\arctan$ $\cot$ $\det$ $\hom$ $\lim$ $\log$ $\sec$ $\tan$ $\arg$ $\coth$ $\dim$ $\inf$ $\liminf$ $\max$ $\sin$ $\tanh$
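As a usage illustration (my example, not part of the original list), these macros typeset upright operator names inside ordinary formulas:

$$\lim_{x \to 0} \frac{\sin x}{x} = 1, \qquad \log_{10} 1000 = 3, \qquad \max(a,b) = \tfrac{1}{2}\bigl(a + b + |a - b|\bigr)$$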
### Electronic Research Archive

2021, Issue 6: 4199-4213. doi: 10.3934/era.2021079

# Path-connectedness in global bifurcation theory

• Received: 01 April 2021
• Revised: 01 August 2021
• Published: 08 October 2021
• Primary: 47J15, 58E07, 35B32; Secondary: 54F15

A celebrated result in bifurcation theory is that, when the operators involved are compact, global connected sets of non-trivial solutions bifurcate from trivial solutions at non-zero eigenvalues of odd algebraic multiplicity of the linearized problem. This paper presents a simple example in which the hypotheses of the global bifurcation theorem are satisfied, yet all the path-connected components of the connected sets that bifurcate are singletons. Another example shows that even when the operators are everywhere infinitely differentiable and classical bifurcation occurs locally at a simple eigenvalue, the global continua may not be path-connected away from the bifurcation point. A third example shows that the non-trivial solutions which bifurcate at non-zero eigenvalues, irrespective of multiplicity when the problem has gradient structure, may not be connected and may not contain any paths except singletons.

Citation: J. F. Toland. Path-connectedness in global bifurcation theory[J]. Electronic Research Archive, 2021, 29(6): 4199-4213. doi: 10.3934/era.2021079
# Pseudotime domain joint diving-reflected FWI using graph-space optimal transport

EDP - Equations aux Dérivées Partielles
LJK - Laboratoire Jean Kuntzmann

Abstract: Reflection waveform inversion (RWI) updates the P-wave velocity ($V_p$) macromodel beyond the depths sampled by diving waves, by exploiting wide scattering angle wavepaths in a reflective subsurface. Joint diving and reflection waveform inversion (JFWI) combines RWI and early-arrival waveform inversion (EWI), thereby constraining the shallow subsurface whilst enriching the low-wavenumber content of the deep $V_p$ model with reflections. In depth-domain $V_p$ inversion, ensuring consistency between reflector positions and model kinematics comes at the cost of repeated least-squares migrations, combined with carefully designed offset weighting. In order to efficiently address such co-dependency between reflective and kinematic parameters, we propose to cast JFWI in the pseudotime domain. As the velocity is updated, the reflectors are passively repositioned consistently with $V_p$, honoring the zero-offset two-way-time seismic invariant, and keeping the short-spread reflections in phase. By combining a pseudotime approach with a graph-space optimal transport (GSOT) objective function, we show that it is possible to reconstruct a complex velocity macromodel from short-offset 2D reflection data containing surface-related multiples and ghosts, starting from a 1D initial guess. Compared to a depth-domain inversion, the computing cost is reduced by one order of magnitude, with a significant saving in man-time, thanks to a simpler design of data weighting and inversion strategy.

Document type: Conference papers

https://hal.archives-ouvertes.fr/hal-03404578

Contributor: Ludovic Métivier
Submitted on: Tuesday, October 26, 2021 - 5:06:28 PM
Last modification on: Friday, January 14, 2022 - 3:41:20 AM
Long-term archiving on: Thursday, January 27, 2022 - 8:10:58 PM

### File

SEG2021_pt_fs.pdf
Files produced by the author(s)

### Citation

Giuseppe Provenzano, Wei Zhou, Romain Brossier, Ludovic Métivier. Pseudotime domain joint diving-reflected FWI using graph-space optimal transport. First International Meeting for Applied Geoscience & Energy, Sep 2021, Denver, United States. pp.797-801, ⟨10.1190/segam2021-3583318.1⟩. ⟨hal-03404578⟩
10.4: The Strengths of Acids and Bases

Learning Objectives
• Describe the difference between strong and weak acids and bases.
• Describe how a chemical reaction reaches chemical equilibrium.
• Define the pH scale and use it to describe acids and bases.

Acids and bases do not all demonstrate the same degree of chemical activity in solution. Different acids and bases have different strengths.

Strong and Weak Acids

Let us consider the strengths of acids first. A small number of acids ionize completely in aqueous solution. For example, when HCl dissolves in water, every molecule of HCl separates into a hydronium ion and a chloride ion:

$\ce{HCl(g) + H2O(l) ->[\sim 100\%] H_3O^{+}(aq) + Cl^{-} (aq)} \label{Eq1}$

HCl(aq) is one example of a strong acid, which is a compound that is essentially 100% ionized in aqueous solution. There are very few strong acids. The important ones are listed in Table $$\PageIndex{1}$$.

Table $$\PageIndex{1}$$: Strong Acids and Bases (All in Aqueous Solution)

Acids     Bases
HCl       LiOH
HBr       NaOH
HI        KOH
HNO3      Mg(OH)2
H2SO4     Ca(OH)2
HClO4

By analogy, a strong base is a compound that is essentially 100% ionized in aqueous solution. As with acids, there are only a few strong bases, which are also listed in Table $$\PageIndex{1}$$.

If an acid is not listed in Table $$\PageIndex{1}$$, it is likely a weak acid, which is a compound that is not 100% ionized in aqueous solution. Similarly, a weak base is a compound that is not 100% ionized in aqueous solution. For example, acetic acid (HC2H3O2) is a weak acid. The ionization reaction for acetic acid is as follows:

$HC_2H_3O_{2(aq)} + H_2O_{(ℓ)} \rightarrow H_3O^+_{(aq)} + C_2H_3O^−_{2(aq)} \label{Eq2}$

Depending on the concentration of HC2H3O2, the ionization reaction may occur only for 1%–5% of the acetic acid molecules.

Looking Closer: Household Acids and Bases

Many household products are acids or bases. For example, the owner of a swimming pool may use muriatic acid to clean the pool. Muriatic acid is another name for hydrochloric acid [HCl(aq)]. Vinegar has already been mentioned as a dilute solution of acetic acid [HC2H3O2(aq)]. In a medicine chest, one may find a bottle of vitamin C tablets; the chemical name of vitamin C is ascorbic acid (HC6H7O6).

One of the more familiar household bases is ammonia (NH3), which is found in numerous cleaning products. As we mentioned previously, ammonia is a base because it increases the hydroxide ion concentration by reacting with water:

$NH_{3(aq)} + H_2O_{(ℓ)} \rightarrow NH^+_{4(aq)} + OH^−_{(aq)} \label{Eq3}$

Many soaps are also slightly basic because they contain compounds that act as Brønsted-Lowry bases, accepting protons from water and forming excess hydroxide ions. This is one reason that soap solutions are slippery.

Chemical Equilibrium

The behavior of weak acids and bases illustrates a key concept in chemistry. Does the chemical reaction describing the ionization of a weak acid or base just stop when the acid or base is done ionizing? Actually, no. Rather, the reverse process—the reformation of the molecular form of the acid or base—occurs, ultimately at the same rate as the ionization process. For example, the ionization of the weak acid HC2H3O2 (aq) is as follows:

$HC_2H_3O_{2(aq)} + H_2O_{(ℓ)} \rightarrow H_3O^+_{(aq)} + C_2H_3O^−_{2(aq)} \label{Eq4}$

The reverse process also begins to occur:

$H_3O^+_{(aq)} + C_2H_3O^−_{2(aq)} \rightarrow HC_2H_3O_{2(aq)} + H_2O_{(ℓ)} \label{Eq5}$

Eventually, there is a balance between the two opposing processes, and no additional change occurs.
The chemical reaction is better represented at this point with a double arrow:

$HC_2H_3O_{2(aq)} + H_2O_{(ℓ)} \rightleftharpoons H_3O^+_{(aq)} + C_2H_3O^−_{2(aq)} \label{Eq6}$

The $$\rightleftharpoons$$ implies that both the forward and reverse reactions are occurring, and their effects cancel each other out. A process at this point is considered to be at chemical equilibrium (or equilibrium). It is important to note that the processes do not stop. They balance out each other so that there is no further net change; that is, chemical equilibrium is a dynamic equilibrium.

Example $$\PageIndex{1}$$: Partial Ionization

Write the equilibrium chemical equation for the partial ionization of each weak acid or base.

1. HNO2(aq)
2. C5H5N(aq)

SOLUTION

1. HNO2(aq) + H2O(ℓ) ⇆ NO2−(aq) + H3O+(aq)
2. C5H5N(aq) + H2O(ℓ) ⇆ C5H5NH+(aq) + OH−(aq)

Exercise $$\PageIndex{1}$$

Write the equilibrium chemical equation for the partial ionization of each weak acid or base.

1. $$HF_{(aq)}$$
2. $$AgOH_{(aq)}$$

Hydrofluoric acid $$HF_{(aq)}$$ reacts directly with glass (very few chemicals react with glass). Hydrofluoric acid is used in glass etching.

Finally, you may realize that the autoionization of water is actually an equilibrium process, so it is more properly written with the double arrow:

$H_2O_{(ℓ)} + H_2O_{(ℓ)} \rightleftharpoons H_3O^+_{(aq)} + OH^−_{(aq)} \label{Eq7}$

The pH Scale

One qualitative measure of the strength of an acid or a base solution is the pH scale, which is based on the concentration of the hydronium (or hydrogen) ion in aqueous solution.

$pH = -\log[H^+]$

or

$pH = -\log[H_3O^+]$

A neutral (neither acidic nor basic) solution, one that has the same concentration of hydrogen and hydroxide ions, has a pH of 7. A pH below 7 means that a solution is acidic, with lower values of pH corresponding to increasingly acidic solutions. A pH greater than 7 indicates a basic solution, with higher values of pH corresponding to increasingly basic solutions. Thus, given the pH of several solutions, you can state which ones are acidic, which ones are basic, and which are more acidic or basic than others. Table $$\PageIndex{2}$$ lists the pH of several common solutions. Notice that some biological fluids are nowhere near neutral.

Table $$\PageIndex{2}$$: The pH Values of Some Common Solutions

Solution                       pH
battery acid                   0.3
stomach acid                   1–2
lemon or lime juice            2.1
vinegar                        2.8–3.0
Coca-Cola                      3
wine                           2.8–3.8
beer                           4–5
coffee                         5
milk                           6
urine                          6
pure H2O                       7
(human) blood                  7.3–7.5
sea water                      8
antacid (milk of magnesia)     10.5
NH3 (1 M)                      11.6
bleach                         12.6
NaOH (1 M)                     14.0

Weak acids and bases are relatively common. You may notice from Table $$\PageIndex{2}$$ that many food products are slightly acidic. They are acidic because they contain solutions of weak acids. If the acid components of these foods were strong acids, the food would likely be inedible.

Key Takeaways

• Acids and bases can be strong or weak depending on the extent of ionization in solution.
• Most chemical reactions reach equilibrium at which point there is no net change.
• The pH scale is used to succinctly communicate the acidity or basicity of a solution.
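Since the pH scale is just a base-10 logarithm of the hydronium-ion concentration, the conversion is a one-liner; a small sketch (the example concentrations are chosen only to land near rows of Table 10.4.2, they are not measured values):

```python
import math

def pH(hydronium_molarity):
    """pH = -log10 of the hydronium-ion concentration [H3O+] in mol/L."""
    return -math.log10(hydronium_molarity)

print(pH(1.0e-7))    # 7.0  -> neutral, like pure water
print(pH(1.0e-3))    # 3.0  -> acidic, roughly the cola entry in the table
print(pH(1.0e-11))   # 11.0 -> basic, between milk of magnesia and household ammonia
```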
# Access Statistics for Omar Rifki

Author contact details at EconPapers.

The robustest clusters in the input–output networks: global CO2 emission clusters
0 0 0 1 0 0 3 14
## Why Persistent Identifiers are the Wrong Idea I think I will rewrite this. It seems to me it’s only half the solution…more coming shortly. This afternoon I was reading Elizabeth Eisenstein’s “The Printing Revolution in Early Modern Europe” when I came across this passage: To consult different books, it was no longer so essential to be a wandering scholar...The era of the glossator and the commentator came to an end, and a new "era of intense cross-referencing between one book and another" began. The point here is that after the printing press came about, there were more books available. An obvious point. As a result, cross-referencing became a feature, since it was possible to access and, consequently, reference other literature more readily. This made me think about the current discussions around persistent identifiers for scholarly content. It seems the current solution is to offer a layer of indirection: this enables a stable identifier to persist, and should the ‘actual location’ of the content be changed, then we can re-configure the redirect to point to the new location. Martin Fenner and Geoff Bilder point to this solution in their very good postings on this topic. However, this method does not overcome the real problem. What if either: • no one updates the redirection after the location has changed • the content really goes offline (through loss of domain, for example) It appears to me that we have the wrong solution. There is really no way to solve this issue with URIs. We can only minimise it. So, how to go about resolving this (so to speak). One way to get some insight into the issues is to wind back the clock and look at the way content was located in the age of the newly-born printing press. In this age, scholars were liberated because they didn’t have to wander the world looking for a particular book. Instead, identical printed copies proliferated, and it was just a matter of finding a copy of the work you were pursuing. Preferably you found a copy in a library or shop nearby. To find that work, one merely needed the cross-reference information to track it down. That “persistent identifier,” comprised of author’s name, book’s title, page of reference for the quoted material, publisher’s name and location, publication date, was commonly referred to as a citation (we still use that term). The citation helped the reader or researcher find a copy of the work cited. Not a particular printed book, but a copy of that book. So, how is it that in an age of digital media we have gone backwards? Where copying something is even easier than in the printed age, why are we still pointing to ‘one’ authoritative copy? In essence, we are still referencing a book by stating the exact book that sits in a specific institution, on a particular shelf, with the blue (not green) cover. It feels a little like the great leap backwards to me. A way to get around this problem would be simply to allow and encourage content to be copied. Let digital media do what it does best – copy and distribute itself. The ‘unique identifier’ would then not be an URL (with a layer of indirection) but would take the form of a checksum or hash. Finding the right work would then be a matter of searching for a copy of the material with the right checksum. I don’t know. I’m probably missing something. But it seems we have no problem tracking down YouTube videos when they spawn into the ether. We can also tell one version of software from another, no matter where it is and how it is labelled. 
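In code terms the identifier I have in mind is tiny; a rough sketch (SHA-256 is just one example of a suitable hash, and the filenames are made up):

```python
import hashlib

def content_id(path):
    """A location-independent identifier: the SHA-256 digest of the work's bytes.
    Every faithful copy, wherever it is hosted, yields the same identifier."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Two copies of the same file produce the same id; a revised version produces a new one,
# so versions are distinguishable by construction (hypothetical filenames):
print(content_id("article-v1.html"))
print(content_id("article-v2.html"))
```

The obvious trade-off is that any change to the bytes, even a cosmetic one, produces a new identifier.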
Why not just let the content go and provide mechanisms to find a specific version of the content via hash search (also solving the issues of versioning URIs)? Colophon: written on a lovely Sunday afternoon in the Mission. Adam got up Monday morning with that nagging feeling… rethought it. Talked to Raewyn. Rewrite coming. Written using Ghost (free) software. ## Its not US and THEM, its a TEAM, stupid hi y’all I have just been reading some posts on Scholarly Kitchen about content creation and the next wave of authoring systems. It seems the STM sector has long been in need of developing a solution to get their publishing processes out of various traps. The most obvious trap is MS Word. A horrible format, to be sure, but it has long been the default file format for manuscript production, with a small tip of the hat to LaTeX for the technologically gifted. Their recent discussions have mainly been about online ‘authoring systems,’ going beyond MS Word to anticipate documents that are fully transparent to whatever combination of machine and human interactions play a part in understanding and processing the information. This ‘get-out-of-MS-Word-free’ card is a very attractive proposition. MS Word is basically a binary blob to most publishing systems (even though in actual fact the format of .docx is XML -thereby also abruptly ending the false argument that XML inherently brings structure). As a ‘binary’ (go with me on this for now) MS Word is not transparent to the publication system, there is no record of when the author has worked on it; finding out what they have done since version xxx.xxx and version xxxx.xxxxxxxx is very difficult; nobody else can work on it when the author is also working on it, and there is no control over structure etc etc etc So getting away from reliance on MS Word is the aim. But getting into (what I might rename as) an ‘authoring only‘ platform – is not the solution. What is interesting about the SK forum, is that there seems to be a very clear distinction in the minds of publishers between the worlds of the author and the publisher. Most of the comments make this split, and there is much talk of ‘authoring systems’. It seems a little bizarre to me, as I don’t think it’s wise to think about the author and the publisher as being distinct entities. It’s not a matter of author and publisher working on separate processes to shepherd a manuscript through to publication: it is very much a team effort. Authors and publishers work together in a way that should not be dichotomised: they are a team. If we don’t acknowledge that, then we will not be able to design good publication systems. There is a lot of unclear thinking around this topic at the moment. The “authoring system” model assumes that content is made in an authoring system by a writer, and then migrates to the publisher’s submission, processing and publishing system, where the publisher does some stuff, and then at various times pings the author back to make changes to metadata, submission information, the manuscript and attendant assets (eg figures)… In this model, next the author takes the manuscript out of the publisher’s system, ingests to the old authoring system, works on it, exports it, and re-ingests it into the publisher’s system… Hmmm…this cycle is exactly one of the pain points we were trying to avoid by getting away from Microsoft Word. It seems to me that the current trend to build better authoring systems is a mistake. 
It is based on the false assumption that ‘MS Word’ is the problem, without realising that there is more to it. Word has been seen as the problem only because it has been the only problem in town. We don’t need better ‘authoring systems’ that repeat the separation between writing and publishing that is inherent in reliance on MS Word. We shouldn’t invest in new authoring systems and believe in them purely because they are ‘not Microsoft Word’. Rather, we need documents to be contained within submission and processing systems for the entire duration of their life, and they need to be completely operational and transparent within that system to all parties that must work on them. Without understanding that need, we are merely mitigating the problem by small steps whilst fooling ourselves that we have solved the larger problem. We don’t want the author-publisher response/change cycle (a collaborative effort by the team which includes author and publisher) to be in separate systems. We want them working together in the same system. We need teams to work together in the most efficient way possible, and that is in the same (real world- or cyber-) space. Teams work best when they work in the same *space. Though I see the current efforts towards authoring system development to be interesting, unless they are integrated with processing and workflow features, they will sooner or later be made redundant. Colophon: written by Adam in 30 mins in a tizz. Tinkered with by Raewyn for another 30 mins. Written using Ghost software (free software!) ## Single Voice version 1.0 ‘not as raw’ During a Book Sprint, or when talking about Book Sprints, the question very quickly arises – ‘what about the author’s single voice?’ The fear is that collaboratively produced books will lose that personal, individual voice that we know so well from all the books we have read and loved. Wouldn’t Frankenstein be a little lumpy if it was written by a collective? Same goes for any Tom Clancy book (he famously said that “Collaboration on a book is the ultimate unnatural act”). Clancy’s books are not high art, but they do seem to contain a particular ‘Clancy’ style. What about good contemporary literature? Could, for example, the wonderful The Art of Fielding be as wonderful if written by anyone other than Chad Harbach? And what about poetry by the father of English literature – Chaucer? It’s unimaginable that his works could be produced by anyone other than Chaucer. We believe that both high and low literature would suffer if the works weren’t produced by a single author. There is only one Chaucer, one Clancy (thankfully), one Harbach, one Mary Shelley. We can tell their works apart because each contains a distinctive authorial voice. We know these writers. We know those voices. We can only imagine what a mess would be created if books were written by more than one person. They would lose the single point of view. That special perspective. That special voice. Well… first of all, it might be worth knowing that each of these examples actually had more than one contributing author, and each in its own interesting way. From Erick Kelemen’s work in the forensic field of textual criticism, there is good evidence that both Byron and Percy Shelly had a hand in at least some of Frankenstein. According to Kelemen, the extent of the collaboration is not exactly known, and we need to be aware that the discussion is also tainted by a good ole sexist lens. 
However, there is good evidence of collaboration, not just in the Preface (which some say is written entirely by Percy Shelly), but also in the content of the rest of the story. Tom Clancy, in his own mind the enemy of collaborative book production, actually collaborated with others on many of his books. Some of the books he has credit for were actually written mostly by others, a common practice amongst authors of best-selling thriller and mystery series for at least the past twenty years. And in fact, manuscripts produced at the time Chaucer was writing were shared documents, and it is extremely likely the exact words that we now consider to be Chaucer’s were not his at all. As Lawrence Liang has noted, in his discussion of the process of Chaucer’s canonisation, the process was essentially a gathering of manuscripts after Chaucer’s death by experts who decided which words were, and which were not, Chaucer’s, for all time. In the disclaimer before the Miller’s Tale for instance, Chaucer states that he is merely repeating tales told by others, and that the Tales are designed to be the written record of a lively exchange of stories between multiple tellers, each with different, sometimes opposing, intents. Interestingly, Chaucer seems not only to recognize the importance of retelling stories, but also a mode of reading that incorporates the ability to edit and write. If you want to understand the role of collaboration in single-author-culture right now, there is no better story to read than The Book on Publishing which provides a great tale about the publishing of Harbach’s The Art of Fielding and acknowledges the huge value an editor can play in re-writing and restructuring a book. There are two points here to keep in mind. Firstly, we don’t know much about how books are written, nor how models of the writing process have changed over time. Paper is not a good medium for preserving versioning, and we lack an on-paper-process mechanism like git blame that can backtrack to show how the text was created. A great pity. The lack of this kind of tool for the vast majority of publishing history means publishing has been able to propagate the very marketable myth of the single author. Collaboration has been obscured and de-valued. Worse, the extent and value of collaboration is not understood. We don’t even have a good language for talking about it. Secondly, we are left believing claims such as “books have a single voice because they are written by a single author” when this is demonstrably false. Almost every published book has had at least two authorial contributors – the author and the editor; and most books will have been improved during the drafting process by the contributions of test readers. Collaboration exists to improve works. It is why there are editors in publishing. Editors give feedback and shape the work to, amongst other things, strengthen the impression of the single authorial voice. It is very probably true that an effective single voice can only be achieved by 2 or more people collaborating. So next time you find yourself asking “how can an authoritative singular voice be preserved in collaborative book production?” it might be better to take a deep breath and ask yourself “how could a single voice ever be effectively realised without collaborating?” That is the real question at play. Colophon: version 1.0 Written in an hour by Adam Hyde. Raewyn Whyte then improved it (‘made it stronger’). 
Also, some references still need to be checked as the needed books are in storage in NZ somewhere! Written with Ghost Blog free software (MIT) https://github.com/tryghost/Ghost. ## Fantasies of the Library Fantasies of the Library is a book released last week by Berlin publisher k-verlag. There is an interview in it with me about the future of book publishing beyond the proprietary model. I also talk about my current work for the Public Library of Science and the relationship between Open Access and Open Source. The full interview is also online and can be read here. My favourite passage is this: Charles Stankievech: “But why should one value open source and open access? What are the political ramifications of such a philosophy and practice?” Adam Hyde: “Because both provide more value to humanity. Political ramifications are vast and complex. I like to think about the personal aspects of this choice, however. Living a life of open source and open access forces you to peel away layer by layer the proprietary way of thinking, doing, and being that we have all grown up with. It can be a very painful process, but it’s also extremely liberating and healthy. Largely, it actually means learning to live without fear and paranoia of people ‘stealing your ideas’. That’s quite a freedom in itself. ## Books are Evil, Really Evil pt1 Right now books are something of an ironic artefact for me. I am involved in the rapid production of books through a process known as a Book Sprint. We create books. We throw a bunch of people in a room for a week, and carefully facilitate them through a process, progressing them step by step, from zero to finished book, in 5 days or less.Write a book in a week?! An astonishing proposal. Most people who attend a Book Sprint for the first time think it is impossible. Create a book in a week?! Most think that maybe they can get the table of contents done in that time. Maybe even some structure. But a book? 5 days later they have a finished book and they are amazed. There are many essential ingredients to a Book Sprint. An experienced Book Sprint facilitator is a must. A venue set up just so… Lightweight and easy-to-use book production software. A toolchain that supports rapid rendering of PDF and EPUB from HTML. Good food… A writing team… and a lot more. One of the contributing factors to success is the terror caused by the seemingly impossible idea that the group will create a book. It is a huge motivator. Such is the enormity of the task in the participants’ minds that they follow the facilitator and dedicate themselves to extremely long hours, working on minute details even when exhausted. There is a lot of chemistry in there. Camaraderie and peer pressure are pushed to maximum effect as a motivational factor, as is fear of failure, especially fear of failure before your peers, both inside and outside the Sprint room. The pleasure of helping your peers is a strong motivator, as is the idea that together we will do this! But the number one motivator is the idea that we are going to produce a book. We all know that books these days, paper books, are published from a PDF. You send a PDF to the printer, and the final output is a perfect bound book. This happens for most Book Sprints – we send the final PDF to a printer for them to produce the printed book. So what we are creating is actually a PDF (along with an EPUB) …but imagine if we were to call the event “PDF Sprint”. 
At the beginning of the PDF Sprint we could announce that we have gathered everyone together…so that…at the end of the week…they will have… (gasp!)… a PDF! Nope. Doesn’t work. Doesn’t even nearly work. A book is the seemingly impossible outcome that Book Sprint participants have come to conquer. Even though the definition of ‘what a book is’ is completely up for grabs, it is a book they are determined to produce. A book is the pinnacle of knowledge products, and writing a book is about equal in cerebral achievement to climbing Everest. A PDF is merely getting to base camp, or perhaps the equivalent of planning the trip from your armchair. So, what’s the problem? Books are good then! A great motivator for Book Sprints. Where exactly is the irony? How can I complain? Book Sprints are extraordinary events. The people are not just put into a room and left to write. They are led through a process where notions of single authorship and ownership of content just no longer make sense. Such ideas are unsustainable and nonsensical in this environment, and participants slowly deconstruct ideas of authorship over the 5 days. The participants actively collaborate during the event. Really collaborate. Book Sprints are a kind of collaborative therapy. Each participant learns to let go of their own voice so they can contribute to constructing a new shared voice with the rest of the team. They learn new ways to contribute to group processes, to communicate, to improve each other’s contributions, to synthesize, to empower and encourage others to improve the work without having to ask permission. The resulting book has no perceivable author. It has been delivered by what is now a community. And as a result, most of the books, about 99% I would say, end up being freely licensed. A book born by sharing is more easily shared. More easily shared than a book created with the notions of author-ownership. The idea of sharing is embedded in the DNA of the Book Sprint, part of the genesis of the product, and sharing more often than not becomes part of the life of the book after the Book Sprint is completed. ### But books are evil So, how is it possible I can take the position that books are evil? Where exactly is the irony? It is a lovely story I just painted. Lots of flowers and warm fuzzy feelings. Wow. Sharing, sharing, sharing… it’s a book love-in! Well… with some regret, I have to admit that most books do not come into the world this way. They are produced and delivered through legacy processes. Cultural norms shape the production and reception of books, and the ideas contained within them are not born into freedom. These books are, normatively, created by ‘single author geniuses’, born into All Rights Reserved knowledge incarceration, and you cannot recycle them. Try as we may, we are a little group of people. A small band of Book Sprinters, and it is unlikely that we can sway the mainstream to our way of doing things. We have many victories – Cisco released one of its Book-Sprinted books freely online! Whoot! That’s massive! But… as big as Cisco is, one Cisco book in the sea of publishing is merely a grain of salt in the Pacific. By adding our special grain of salt to this ocean we are by no means making our point more salient. Books are doomed to be the gatekeepers of knowledge. If you make a book, you are, more than likely, sentencing the words in it to life + 50 years (depending on where you live). Books are in fact the very artefacts that maintain proprietary knowledge culture. 
It comes down to these three issues for me: 1. books gave birth to copyright 2. books gave birth to industrialised knowledge production 3. books gave birth to the notion of the author genius These three things together are the mainstays of proprietary knowledge culture, and proprietary knowledge culture has been firmly encased and sealed, with loving kisses, between the covers of the book. Ironically these three things, through the process of the Book Sprint, are what we are trying to deconstruct. many thanks to Raewyn Whyte for improving this post ## Building Book Production Platforms p4 ### The renderer Note: this is an early version. It has been cleaned up some, but still needs links and screenshots… Apologies if the rawness offends you 🙂 This series is skipping around the toolchain, depending on what’s most in my mind at the moment. Today it’s file conversion, otherwise known as ‘rendering’. This is the process of converting one file type to another, for example, HTML-to-EPUB or Word-to-HTML, and so on. It’s important to have file conversion in the book production world because we often want to convert the HTML to a book format – like book-formatted PDF, or EPUB, mobi and so on – or to import into a new document existing content contained in a file like MS Word. ### Manual conversions It is, of course, quite possible to do all your file conversion manually. Should you wish to convert HTML into a nice book-formatted PDF, one possible strategy is to go out to InDesign or Scribus and lay it all out like our ancestors did as recently as 2014. Or, if you want to convert MS Word, for example, to HTML, you can just save it as HTML in Word… Yes, Word copies across a lot of formatting junk, but you can clean it up using purpose-built freely available software (such as HTMLTidy and CleanUp HTML), online services (like DirtyMarkup), or a handy app (such as Word HTML Cleaner)… Manual conversion is not too bad a strategy, as long as it doesn’t take you too long, and it is often more efficient and faster than those convoluted hand-holding technical systems which promise to do it for you in one step. Despite the utopian promises made by automation… you often get better results doing the conversion manually. I sometimes hear people in Book Sprints, for example, complain something to the tune of “why can’t I just click a button and import part of this paragraph from Wikipedia into the chapter, and then if the entry is updated in Wikipedia, I can just click the button again and it will be updated here”… I try not to sigh too loudly when I hear this ‘I have all the solutions!’ kind of ‘question’. Some day that may be feasible, but in the meantime, all the knowledge production platforms I have built have an OS-independent trans-format import mechanism which allows those handy keyboard shortcuts ‘control c’ and ‘control v’… sigh. Don’t knock copy and paste! It can get you a long way. You can also build an EPUB by hand… But, who really wants to do any of this? Isn’t it better to just push a button and taaadaaa! out pops the format of choice! (I have all the solutions! haha). I think we can agree it is better if you are able to use a smart tool to convert your files, and the good news is that within certain parameters and for loads of use cases, this is possible. But don’t underestimate the amount of tweaking for individual docs that might, at times (not always), be required. ### Import and export are the same thing The process of ‘importing’ a document is also sometimes known as ingestion. 
Before delving down into this, the first gotcha with file transformation is to avoid thinking about import and export as separate technical systems. That can, and has, caused a lot of extra work when building file conversion into a toolchain. Both import and export are, actually, file conversion. The formats might differ – import might solely be Word-to-HTML in your system and export HTML-to-EPUB – however, the process of file conversion has many needs that can be abstracted and applied to both of these cases. A quick example – file conversion is often processor and memory intensive. So effective management of these processes is quite important, and in addition, fallbacks for errors or fails need to be managed nicely. These two measures are required independent of the filetypes you are converting from or to. So don’t think about pipelining specific formats; try to identify as many requirements as possible for building just one file conversion system, not an import system plus an export system. ### Ingestion In importing documents to an HTML system, the big use case is MS Word. Converting from MS Word is a road full of potholes and gotchas. The first problem is that there is no single ‘MS Word’ file format, rather there are many many different file formats that all call themselves MS Word. So to initiate a transformation, you need to know what variety of MS Word you are dealing with. Your life is made much easier if you can stipulate that your system requires one variety – .docx. If you do have to deal with other forms of Word, then it is possible to do transformations on the backend from miscellaneous Word file type X to .docx and then from .docx to HTML. Libreoffice, for example, offers binaries that do this in a ‘headless’ state (it can be executed from the command line without the need to fire up the GUI). However, the more transformations you undertake, the more errors in the conversion you are likely to introduce. Obviously, this then causes QA issues and will increase your workload per transform required. Another real problem is with MS Word versions before .docx: .docx is transparent – it is actually just XML – so you can view what you are dealing with, whereas versions before it were horrible binaries, a big clump of ones and zeros, in other words a bunch of gunk. That same problem also exists when you use binaries like soffice (the Libreoffice binary for headless conversions) as it is also a big bucket of numbers. You can’t easily get your head into improving transformations with soffice unless you want to learn to etch code into your CPU with a protractor. If you have to deal with MS Word at all, I recommend stipulating .docx as the accepted MS Word format. I am not a file type expert, far from it, but from people who do know a lot about file formats I know that .docx looks like it has been designed by a committee… and possibly, a committee whose members never spoke to each other. Additionally, Microsoft, being Microsoft, likes to bully people into doing things their way. .docx is a notable move away from that strategy, and does make it substantially easier to interoperate with other formats, however, there are some horrible gotchas like .docx having its own non-standard version of MathML. Yikes. So, life in the .docx lane is easier, but not necessarily as easy as it should be if we were all playing in the same sandbox like grownups. I have tried many strategies for Word to HTML conversion. 
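To make the ‘normalise everything to .docx first’ strategy concrete, here is a minimal sketch (the file names are placeholders, and it assumes LibreOffice’s soffice binary is on your PATH). It converts a legacy Word file to .docx from the command line and then lists the XML parts inside the resulting zip, which is all a .docx really is:

```python
import subprocess
import zipfile

def normalise_to_docx(path, outdir="converted"):
    """Convert a legacy Word file (.doc, .rtf, ...) to .docx using headless LibreOffice."""
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "docx", "--outdir", outdir, path],
        check=True,
    )

def peek_inside_docx(path):
    """A .docx is just a zip of XML parts - list them so you can see what you are dealing with."""
    with zipfile.ZipFile(path) as docx:
        for name in docx.namelist():
            print(name)  # e.g. word/document.xml, word/styles.xml ...

if __name__ == "__main__":
    normalise_to_docx("manuscript.doc")
    peek_inside_docx("converted/manuscript.docx")
```

The more of your pipeline you can push onto the .docx/XML side of that listing, the easier your transformations are to inspect and debug, which is exactly the point about transparency made above.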
There are many open source solutions out there, but oddly, not as many good ones as you would hope. Recently I looked at these three rather closely: • Calibre’s Python-based ebook converter script • OxGarage • soffice (Libreoffice) There are others… I can’t even remember which ones I have looked at in detail over the years. I have trawled Sourceforge and Github and Gitorious and other places. But the web is enormous these days and maybe there is just the oh-so-perfect solution that I have missed. If you know it then please email it to me, I’ll be ever so grateful (only Open Source solutions please!). These three are all good solutions, but at the end of the day, I like OxGarage. I won’t go into too much detail about all of them but a quick top-of-mind list of whys and why-nots would include: • Calibre’s scripts are awesome and extendable if you know Python, however they don’t support MS MathML to ‘real’ MathML conversions. That’s a showstopper for me. • On the good side, though, Calibre’s developer community is awesome, and they are heroes in this field and deserve support, so if you are a Python coder or dev shop then, by all means, please pitch in and help them improve their .docx to HTML transforms. The world will be a better place for it. • soffice does an ok job but it’s a black box, who knows what magic is inside? It tends to make really complex HTML and it is also really heavy on your poor hardware. I have used it a lot but I’m not that big a fan. • OxGarage…well…I love OxGarage, so I really recommend this option… OxGarage was developed by a European Commission-funded project and then, as is common for these kinds of projects, it dried up and was left on a shelf. Along came Sebastian Rahtz, a guru of file transformation, big Open Source guy, and also a force behind the Text Encoding Initiative. Sebastian is also the head of Academic IT Services at Oxford University. The guy has credentials! Also, he’s a terribly nice and helpful guy. He has so much experience in this area that I feel the trivialness of my questions about our .docx to HTML woes at PLOS… I’m afraid he might absentmindedly swipe me out of the way as if I were an inconsequential little midge… but he’s such a nice chap that instead he invites midges out to lunch. So, Sebastian picked up the Java code and added some better conversions. OxGarage is essentially a Java framework that manages multiple different types of conversions. You feed it and are fed from it by a simple web API. It doesn’t have the best error handling, but it does do a good job. The .docx to HTML conversion is multi-step. First, the .docx is converted to TEI – a very rich, complex markup – and then from TEI via XSL to HTML. That means that all you really need to worry about is tweaking the XSL to improve the transformation, and that’s not too tricky. It could be argued that the TEI conversion is a redundant step. I think it is. But OxGarage works out of the box and does a pretty good job so we have adopted it for the project I am working on for PLOS, and we are happy with it. We have added some special (Open) Sauce but I’ll get to that later. We are using it and will shoot for more elegant solutions later (and we have designed a framework to make this an easy future path). If you are looking for Word-to-HTML conversion tools, I recommend OxGarage. I’m not saying it’s the optimal way to do things, but it will save you having to build another file conversion system from scratch, and from what I can tell from Sebastian, that would take considerable effort. 
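Since OxGarage is driven over a plain web API, wiring it into a platform is mostly a matter of POSTing the source file and saving what comes back. The sketch below is illustrative only: the conversion URL and the form field name are placeholders (the exact paths and conversion identifiers depend on the OxGarage deployment you are talking to), so treat it as an assumption to be checked against your instance’s API documentation rather than a drop-in client.

```python
import requests

# Hypothetical endpoint - the real path and conversion IDs depend on your OxGarage deployment.
OXGARAGE_URL = "https://oxgarage.example.org/ege-webservice/Conversions/docx-to-xhtml"

def docx_to_html(docx_path, out_path="converted.html"):
    """POST a .docx to an OxGarage-style conversion service and save the returned HTML."""
    with open(docx_path, "rb") as fh:
        # "fileToConvert" is a placeholder field name, not a documented constant.
        response = requests.post(OXGARAGE_URL, files={"fileToConvert": fh}, timeout=300)
    response.raise_for_status()  # conversions are heavy; fail loudly and retry upstream
    with open(out_path, "wb") as out:
        out.write(response.content)
    return out_path

if __name__ == "__main__":
    print(docx_to_html("manuscript.docx"))
```

The useful point is architectural rather than the specific URL: because the service sits behind HTTP, the same tiny client shape works for import and export alike, which is the ‘one conversion system, not two’ argument made earlier.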
## HTML to books The other side of the tracks is the conversion of the HTML you have into a book file format. We live in a rather tangled semantic world when it comes to this part of the toolchain. Firstly, it’s hard to know what a book file format actually is these days… on a normal day, I would say a book file format is a file format that can display a human-readable structured narrative. Yikes. That’s not particularly helpful… Let’s just say for now that a book file format is – EPUB, book-formatted PDF, HTML, or Mobi. So, transforming from HTML to HTML sounds pretty easy. It is! The question is really how do you want your book to appear on the web? Make that decision first, and then build it. Since you are starting with HTML this should be rather easy and could be done in any programming language. The next easiest is EPUB. EPUB contains the content in HTML files stored in a zip file with the .epub suffix. That is also easy to create and, depending on your programming language, there are plenty of libraries to help you do this. So moving on… Mobi. Ok… mobi is a proprietary format and rather horrible. It contains some HTML, some DB stuff… I don’t know… a bit of bad magic, frogs’ legs… that kind of thing. My recommendation is to first create your EPUB and then use Calibre’s awesome ebook converter script to create the mobi on the backend. Actually, if you use this strategy, you get all the other Calibre output formats for free, including (groan) .docx if you need it. Honestly, go give those Calibre guys all your love, some dev time, and a bit of cash. They are making our world a whole lot easier. Ok… the holy grail… people still like paper books, and paper books are printed from PDF. Paper these days is a post-digital artefact. So first you need that awkward-sounding book-formatted PDF. Here there is an array of options, and then there is this very exciting world that can open to you if you are willing to live a little on the bleeding edge… I’m referring to CSS Regions… but let’s come back to that. First, I want to say I am disappointed that some ‘Open Source’ projects use proprietary code for HTML-to-PDF conversion. That includes Press Books and Wikipedia. Wikipedia is re-tooling their entire book-formatted-PDF conversion process to be based on LaTeX and that is an awesome decision. However, right now they use the proprietary PrinceML as does Press Books. I like both projects, but I get a little disheartened when projects with a shared need don’t put some effort into an Open Source solution for their toolchain. All book production platforms that produce paper books need an HTML-to-PDF renderer to do the job. If it is closed source then I think it needs to be stated that the project is partially Open Source. I’m a stickler for this kind of stuff but also, I am saddened that adoption of proprietary components stops the effort to develop the Open Source solutions we need, while simultaneously enabling proprietary solutions to gain market dominance – which, if you follow the logic through, traps the effort to develop a competitive Open Source solution in a vicious circle. I wish that more people would try, like the Wikimedia Foundation is trying, to break that cycle. ### The browser as renderer There is one huge Open Source hero in this game. Jacob Truelson. He created WKHTMLTOPDF when he was a university tutor because he wanted his students to be able to write in HTML and give him nicely formatted PDF for evaluation. 
So he grabbed a headless WebKit, added some Qt magic, some tweaks, and made a command line application that converts HTML to book-formatted PDF. We used it in the early days of FLOSS Manuals and it is still one of the renderer choices in the Booktype file conversion suite (Objavi). It was particularly helpful when we needed to produce books in Farsi, which contain right-to-left text. No HTML to PDF renderer supported this at the time except WKHTMLTOPDF, because it was based on a browser engine that had RTL support built in. Some years later WKHTMLTOPDF was floundering, mainly because Jacob was too busy, and I tried to help create a consortium around the project to find developers and finance. However I didn’t have the skills, and there was little interest. Thankfully the problem solved itself over time, and WKHTMLTOPDF is now a thriving project and very much in demand. WKHTMLTOPDF really does a lot of cool stuff, but more than this, I firmly believe the approach is the right approach. The application uses a browser to render the PDF… that is a HUGE innovation and Jacob should be recognised for it. What this means is – if you are making your book in HTML in the browser, you have at your fingertips lots of really nice tools like CSS and JavaScript. So, for example, you can style your book with CSS, or add JavaScript to support the rendering of math, or use typography JavaScripts to do cool stuff… When you render your book to PDF with a browser, you get all that stuff for free. So your HTML authoring environment and your rendering environment are essentially the same thing… I can’t tell you how much that idea excites me. It is just crazy! This means that all those nice JavaScripts you used, and all that nice CSS which gave you really good-looking content in the browser, will give you the same results when rendered to PDF. This is the right way to do it and there is even more goodness to pile on, as this also means that your rendering environment is standards-based and open source… Awesome. This is the future. And the future is actually even brighter for this approach than I have stated. If you are looking to create dynamic content – let’s say cool little interactive widgets based on the incredible Tangle library – for ebooks (including web-based HTML) … if you use a browser to render the PDF you can actually render the first display state of the dynamic content in your PDF. So, if you make an interactive widget, in the paper book you will see the ‘frozen’ version, and in the ebook/HTML version you get the dynamic version – without having to change anything. I tested this a long time ago and I am itching to get my teeth into designing content production tools to do this. So many things to do. You can get an idea how it works by visiting that Tangle link above… try the interactive widgets in the browser, and then just try printing to PDF using the browser… you can see the same interactive widgets you played with also print nicely in a ‘static’ state. That gets the principle across nicely. So a browser-based renderer is the right approach, and Prince, which is, it must be pointed out, partly owned by Håkon Wium Lie, is trying to be a browser by any other name. It started with HTML and CSS to PDF conversion and now…oo!… they added JavaScript… so… are they a browser? No? I think they are actually building a proprietary browser to be used solely as a rendering engine. It just sounds like a really bad idea to me. Why not drop that idea and contribute to an actual open source browser and use that? 
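To make the browser-as-renderer idea concrete, here is a minimal sketch of driving wkhtmltopdf from a small script (the file names are placeholders; it assumes the wkhtmltopdf binary is installed). The HTML you hand it can carry the same CSS and JavaScript you used while authoring, which is the whole point:

```python
import subprocess

def html_to_pdf(html_path, pdf_path):
    """Render an HTML file to PDF with wkhtmltopdf, a headless browser engine."""
    subprocess.run(
        # --enable-local-file-access is needed on recent wkhtmltopdf releases
        # when the HTML pulls in local CSS/JS; older versions allow it by default.
        ["wkhtmltopdf", "--enable-local-file-access", html_path, pdf_path],
        check=True,
    )

if __name__ == "__main__":
    html_to_pdf("book.html", "book.pdf")
```

Because the renderer is a browser engine, the PDF inherits whatever the CSS and scripts produced on screen – which is exactly the ‘authoring environment equals rendering environment’ point above.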
And those projects that use Prince, why not contribute to an effort to create browser-based renderers for the book world? It’s actually easier than you think. If you don’t want to put your hands into the innards of WebKit, then do some JavaScript and work with CSS Regions (see below). This brings us to another part of the browser-as-renderer story, but first I think two other projects need calling out for thanks. ReportLab for a long time was one of the only command line book-formatted-PDF rendering solutions. It was proprietary but had a community license. That’s not all good news, but at least they had one foot in the Open Source camp. However, what really made ReportLab useful was Dirk Holtwick’s Pisa project that provided a layer on top of ReportLab so you could convert HTML to book-formatted-PDF. ### The bleeding edge So, to the bleeding edge. CSS Regions is the future for browser-based PDF rendering of all kinds. Interestingly Håkon Wium Lie has said, in a very emphatic way, that CSS Regions is bad for the web… perhaps he means bad for the PrinceML business model? I’m not sure, I can only say he seemed to protest a little too much. As a result, Google pulled CSS Regions out of Chrome. Argh. However CSS Regions are supported in Safari, and in some older versions of Chrome and Chromium (which you can still find online if you snoop around). Additionally, Adobe has done some awesome work in this area (they were behind the original implementation of CSS Regions in WebKit – the browser engine that used to be behind Chrome and which is still used by Safari). Adobe built the CSS Regions polyfill – a JavaScript that plays the same role as built-in CSS Regions. When CSS Regions came online in early 2012, Remko Siemerink and I experimented with CSS Regions at an event at the Sandberg (Amsterdam) for producing book-formatted PDF. I’m really happy to see that one of these experiments is still online (NB this needs to be viewed in a browser supporting CSS Regions). It was obviously the solution for pagination on the web, and once you can paginate in the browser, you can convert those web pages to PDF pages for printing. This was the step needed for a really flexible browser-based book-formatted-PDF rendering solution. It must be pointed out, however, that it’s not just a good solution for books… at BookSprints.net we use CSS Regions to create a nicely formatted and paginated form in the browser to fill out client details. Then we print it out to PDF and send it… Adobe is on to this stuff. They seem to believe that the browser is the ‘design surface’ of the future. Which seems to be why they are putting so much effort into CSS Regions. I’m not a terribly big fan of InDesign and proprietary Adobe strategies and products, but credit where credit is due. Without Adobe, CSS Regions would just be an idea, and they have done it all under open source licenses (according to Alan Stearns from Adobe, the Microsoft and IE teams also contributed to this quite substantially). At the time CSS Regions were inaugurated, I was in charge of a small team building Booktype in Berlin, and we followed on from Remko’s work, grabbed CSS Regions, and experimented with a JavaScript book renderer. 
In late 2012, book.js was born (it was a small team but I was lucky enough to be able to dedicate one of my team, Johannes Wilm, to the task) and it’s a JavaScript that leverages CSS Regions to create paginated content in the browser, complete with a table of contents, headers, footers, left-right margin control, front matter, title pages… etc… we have also experimented with adding contenteditable to the mix so you can create paginated content, tweak it by editing it directly in the browser, and output to PDF. It works pretty well and I have used it to produce 40 or 50 books, maybe more. The Fiduswriter team has since forked the code to pagination.js, which I haven’t looked at too closely yet as I’m quite happy with the job book.js does. CSS Regions is the way to go. It means you can see the book in the browser and then print to PDF and get the exact same results. It needs some CSS wizardry to get it right, but when you get it right, it just works. Additionally, you can run a browser in a headless state from the command line if you want to render the book on the backend. ### Wrapping it all up There is one part of this story left to be told. If you are going to go down this path, I thoroughly recommend you create an architecture that will manage all these conversion processes and which is relatively agnostic to what is coming in and going out. For Booktype, Douglas Bagnall and Luka Frelih built the original Objavi, which is a Python-based standalone system that accepts a specially formatted zip file (booki.zip) and outputs whatever format you need. It manages this by an API, and it serves Booktype pretty well. Sourcefabric still maintains it and it has evolved to Objavi 2. However, I don’t think it’s the optimal approach. There are many things to improve with Objavi, possibly the most important being that EPUB should be the file format accepted, and then after the conversion process takes place EPUB should be returned to the book production platform with the assets wrapped up inside. If you can do this, you have a standards-based format for conversion transactions, and then any project that wants to can use it. More on this in another post. Enough to say that the team at PLOS are building exactly this and adding on some other very interesting things to make ‘configurable pipelines’ that might take format X through an initial conversion, through a clean-up process, and then a text mining process, stash all the metadata in the EPUB and return it to the platform. But that’s a story for another day… ## Building Book Production Platforms p3 ### The editor This series is based on HTML as a source file format for book production platforms. I have looked at many HTML editors over the years and can remember when the first in-browser editors appeared… it was a shock. Prior to that, all HTML creation was done by writing directly in HTML code, then came fully-featured environments like FrontPage and Dreamweaver which allowed you to create HTML in a desktop app, then came wiki markup to liberate us all from the tedium of writing HTML, and then finally… the browser-based WYSIWYG editor… It’s worth noting that the wiki markup and WYSIWYG solutions were in a different category to the previous solutions in that they weren’t designed for creating web pages, rather they were designed to enable the production of wikis and content management systems. 
What-You-See-Is-What-You-Get at that time was a refreshing and liberating idea, a newcomer to this scene (although WYSIWYG as a concept and approach to document creation predates the web, with the first true WYSIWYG editor being a word processing program called Bravo, invented by Charles Simonyi at the Xerox Palo Alto Research Center in the 1970s, the basis for MS Word and Excel). Many WYSIWYG strategies have been explored, and many weaknesses unearthed, including the very important critique that What-You-See-Isn’t-What-You-Get, because the HTML created by these editors is unreliable, but more on this later… As far as I can tell, the first HTML-based WYSIWYG editor was Amaya, first released in 1996. I don’t know which WYSIWYG editor was the first to be embedded in a browser (if you know, please email me). However, I remember TinyMCE like it was a revelation. According to the Sourceforge page, they started building it around 2004 to solve the need to produce HTML in content management systems. It was, and is, a great product. The strategy at the time was pretty much to emulate rendered HTML within an HTML text field. TinyMCE (and the others that followed until contenteditable came along) used a heap of JavaScript to turn a simple editable text field into a window onto the browser’s layout engine. ### alt.typesetting From this point, a number of plugins were developed for use with WYSIWYG editors like TinyMCE to extend the functionality. Some of these plugins ventured into the ever-important area of typesetting. TinyMCE even tried at times to make up for the lack of browser functionality in this area – for example, there were some early and workable attempts to bring equation editing into TinyMCE. I can’t remember exactly when it was, but it was surely around 2006/2007 that IMathAS had an experimental jab at this. I thought it was pure genius at the time as there was no other solution (I searched! a lot!). As far as I can remember, they used a very clever round-tripping to achieve the result… essentially, since browsers didn’t then support math, IMathAS supported inline equation writing using ASCIIMath syntax. When the user clicked out of the field, the editor sent the equation markup to the server, and the server returned the rendered equation as either a bitmap (PNG, JPG, etc.) or as vector graphics (SVG). It was genius and I built it into the workflow for FLOSS Manuals around 2010 because we wanted to write books with equations for software like Csound (produced in 2010/11). It worked great – the equations always looked a bit ‘bit-mappy’ but we could write and print books with equations using in-browser editors and HTML as source (the HTML produced included equations as images so we could render PDF direct from the HTML). Awesome. It’s also worth noting that these days math typesetting has largely moved to the client side with the evolution of fantastic libraries like MathJax and KaTeX. These are JavaScript typesetting libraries designed to be included in web pages and render math from markup on the client side. There are one or two tools that still use server-side rendering, notably Mathoid, and this is often used to reduce the burden on the client’s browser; however, they have possible additional bandwidth costs as the client and server must remain in communication with each other, otherwise nothing will be rendered or displayed. 
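The server half of that round trip can be illustrated in a few lines. This is not how IMathAS or Mathoid are actually implemented – it is just a hedged sketch of the idea, using matplotlib’s built-in renderer for a TeX subset (not ASCIIMath) to turn equation markup into a bitmap that an editor could drop back into the page as an image:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, suitable for a server-side process
import matplotlib.pyplot as plt

def equation_to_png(tex_markup, out_path="equation.png", dpi=200):
    """Render a TeX-subset equation string to a PNG, the way an editor round trip might."""
    fig = plt.figure(figsize=(0.01, 0.01))  # real size comes from bbox_inches="tight" below
    fig.text(0, 0, f"${tex_markup}$", fontsize=14)
    fig.savefig(out_path, dpi=dpi, bbox_inches="tight", pad_inches=0.05, transparent=True)
    plt.close(fig)
    return out_path

if __name__ == "__main__":
    equation_to_png(r"x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}")
```

Wrap a function like that behind a small HTTP endpoint and you have the shape of the 2006-era trick: markup goes up, an image comes back, and the book renders fine even in browsers with no native math support.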
Mature solutions for math and other typesetting issues are only just starting to come online – no surprise to historians, who inform us that notations such as math and music were the last to come online for the printing press as well. The first book to contain music notation post-Gutenberg, the Mainz Psalter, was printed with moveable type, and the music notation was added manually by a scribe. It seems the first thing to get right is the printing of text; all other notations come later in print systems. These solutions are slowly evolving – even music notation has its champions. However, what is really surprising is that Google, a company priding itself on being built from the ground up by math-heads, seems to struggle to bring native math typesetting to its own browser. I would say that is embarrassing. ### Contenteditable Moving on from typesetting… The initial WYSIWYG editors proved an admirable solution for many content management systems. The name persisted but the background technology fundamentally changed when the first implementations of the W3C contenteditable specification for HTML5 were brought to the browser. Contenteditable is an attribute that you can add to a number of HTML container elements (like ‘P’ or ‘div’) that makes their contents editable. So, in essence, you are directly editing the content in the browser rather than through some JavaScript text field trickery. This strategy might be called WYSI (What-you-see-IS). This strategy also spawned a whole new generation of editors leveraging this new native browser functionality. Aloha Editor was one of the first to grab the spotlight but there were many, many others to follow. Additionally, the big legacy WYSIWYG editors such as TinyMCE and CKEditor added support for contenteditable, although they were a little slow to the party. Contenteditable at first promised a lot… native editing in the browser… phew… that certainly lowers the technology burden and opens the door to innovation and experimentation. Additionally, the idea that this is a read-write web suddenly comes more keenly into focus when you can just edit the web page right there and inherit all the same JavaScript and CSS that operates on the element you are editing. It’s good stuff. Inevitably, though, some problems soon emerged. First, some wobbly things, like not being able to place a caret (the text cursor) between block elements (eg between two divs), were a real problem, but later a more serious issue was identified – contenteditable does not produce stable results across different browsers, such that if you edit one page in browser A, the resulting HTML could look different if you edited the same page in browser B. That might not affect many people – if you just want some text with bold and italics and simple things, then it doesn’t really matter… the HTML created will render results that will look pretty much the same across any browser. However there are use cases where this is a problem. In the world I work in at the moment – scholarly publishing – we don’t want a manuscript that contains inconsistent HTML depending on the browser it was edited in … it hurts us down the road when we want to translate that HTML into different formats (eg JATS) or if we want to render that HTML directly to PDF and get consistent results. 
So, unfortunately, editors like CKEditor (used by many book production platforms including Atlas), TinyMCE (used by Press Books AKA WordPress), or Aloha (used by Booktype 2) have to use a lot of JS magic to produce consistent HTML and overcome the problems with contenteditable, and this doesn’t always succeed. I would recommend reading this article from the Guardian tech team about these issues. You also may wish to look at this video from the Wikimedia Foundation Visual Editor core devs for the comments on contenteditable (audio is lousy, jump to 1.14.00) (readable subtitles can be found here). ### A better way So… what can you do? The answer is kind of threefold. First choice: decide not to care – an entirely legitimate approach. You can still do huge amounts with these editors, and if you need to tweak the HTML now and then, so what? I can clean up the HTML by hand for a 300-page book in an hour, not too tough really, and it enables me to cash in on all the other enormous gains to be had from a single-source HTML environment. Second choice: provide client-side and server-side cleanup tools. Most editors have these built in, but it’s also good to implement backend clean-up tools to ‘consistify’ the HTML at save-time (or at least at pre-render time). Third choice: find an editor that is designed to produce consistent HTML. In my opinion, the third choice is the best long-term option and the ‘right way’ to do things. Being able to produce reliable results with ease, and without having to do things twice, will make everyone’s life easier. Thankfully there is a new editor on the scene that is designed to do just this – the Wikimedia Foundation’s Visual Editor. This editor was developed to help the Wikimedia Foundation solve an uptake problem… essentially there are not enough people these days prepared to sit around learning Wiki markup (which is by now pretty much a complicated scripting language). The need to lower the threshold of the foundation’s contribution environment resulted in the development of the Visual Editor (VE). New contributors can use an easy WYSIWYG-like environment instead of having to learn markup. Obviously, the entire Wikimedia universe is already stored as wiki markup, so the editor needs to be able to translate between HTML and wiki markup on-the-fly (interestingly, it is actually part of a much larger plan to store all Wikimedia Foundation content in HTML). To do this there is a back end called Parsoid that converts markup to HTML and vice versa. Also, the HTML produced by the editor obviously needs to be tightly controlled, otherwise the results are going to be a mess when converting back to wiki markup. VE does this by replicating the content in its own internal (JSON) model and displaying the results in a contenteditable region. When the content is edited, the edits are strictly controlled by the VE internal rules, and then rendered to display. The result… consistent HTML is produced across any edit session regardless of the browser used… That’s pretty good news. This is one reason amongst many that the platform I am working on for the Public Library of Science has adopted VE software (we were the first to use it outside of the Wikimedia Foundation) and we are extending it considerably and contributing the results upstream to the VE repos. So far we have added table, equation, and citation plugins – all of which are in an early alpha state. If you want a peek, you can see some of the work here. 
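For the ‘second choice’ above, the backend half of the net can be very small. A minimal sketch, assuming BeautifulSoup is available: it re-parses whatever the browser saved with a single parser, strips inline style junk, and normalises a couple of the classic browser-dependent tags so that every save produces the same markup regardless of which browser did the editing. The tag list and attribute policy here are illustrative, not a complete cleanup ruleset.

```python
from bs4 import BeautifulSoup

# Illustrative normalisation tables - extend these to match your own HTML house rules.
TAG_SYNONYMS = {"b": "strong", "i": "em"}
DROP_ATTRIBUTES = {"style", "data-mce-style"}

def consistify(html):
    """Re-serialise editor output so every save produces the same markup."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all(True):
        if tag.name in TAG_SYNONYMS:
            tag.name = TAG_SYNONYMS[tag.name]
        for attribute in list(tag.attrs):
            if attribute in DROP_ATTRIBUTES:
                del tag.attrs[attribute]
    return str(soup)

if __name__ == "__main__":
    print(consistify('<p style="margin:0"><b>Chapter one</b></p>'))
    # -> <p><strong>Chapter one</strong></p>
```

Even with a net like this in place, though, an editor that produces consistent HTML in the first place is the better long-term bet – which is where the Visual Editor comes in.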
I highly recommend to anyone building a book platform, or any other kind of knowledge production platform, that you examine VE more closely. It is sophisticated software and has been carefully thought through. It is still relatively immature, and development is happening at an incredible pace, which can make testing new plugins against an unstable API a little arduous… still, it is a great solution. VE also approaches content editing in a way that will open the door to concurrent editing via operational transformations in HTML, which is a hard problem and currently only solved by Google and Wikidocs (recently acquired by Atlassian). If you are in the process of choosing an editor, choose VE and contribute to the effort to make it not just the best Open Source solution to editing in the browser, but the best solution, full stop. ## PLOS I’m working on a platform with the Public Library of Science (PLOS) in San Francisco. I’m Designer and Product Owner, and working with a talented team of approximately 15 full-time people. We are creating a platform for the production, processing, and publishing of science. It is a very versatile platform and could easily be utilised for many other purposes. Over the next few months I’ll be blogging a little about some of the approaches we have adopted and highlighting some interesting technical solutions. The platform will be Open Source. The platform is an HTML-first environment and includes ingestion of MS Word (and other formats) and conversion to HTML. I first presented some information about these strategies at Books in Browsers V last week in San Francisco. The video of my presentation can be found around the 26th minute here: http://www.ustream.tv/recorded/54426830 ## Staticness as a Symptom of an Unwell Book In the past few years, I’ve been constructing a set of practices around knowledge production. It’s been a Lego-like process. I add one brick, move it a bit, choose one of another colour and try and work out where it fits… It’s not so much a process of deconstruction of publishing as the construction of something else. Mainly because I don’t know enough about publishing to deconstruct it, so I have to start with what I know. Sometimes, however, I realise just how odd that construction is. Usually, this occurs when I see an articulation of ‘how things are’ in ‘the real world’ and I realise… oops! I don’t at all relate to that or see the sense in it. That occurred recently with a discussion on the Read 2.0 list. Someone made a throwaway comment about how books might be changing and one day we might not think of them as static objects. A few comments followed about what the future of the book might be. I was left feeling very much on the outside in my Lego-constructed world. Anything I could have added to this conversation would have had to pull apart the founding assumptions of those future ponderings – and I just didn’t know where to begin. Books are mostly static objects in this world. You make them, ship them, consume them. Next. However, my experience with FLOSS Manuals is that this is exactly what we are trying to avoid. Since 2006, we have been avoiding staticness – rather the aim was, and is, to keep books alive. To a certain extent, manuals about software present an obvious case where the value of ‘live’ books is evident. However, I don’t think that advantage is restricted to books about software. Books should be living entities and grow with time, expanding or contracting with input from many people. 
So, staticness, through the lens of FLOSS Manuals and a ‘living book’ practice is actually a symptom of an ‘unwell’ book. A book that is not growing is a neglected work. It is left alone on the shelf to gather dust and die, where, by comparison, healthy books are attended to. They have growth spurts, or sometimes slower, prolonged periods of affection. They may fork, or become a central discussion, they might transit into other contexts entirely, or traverse languages. They are alive and more useful to us, vibrant and engaging. They also reveal the fundamental humanity behind the text… the living book as a conversation between living beings. A book, at its best, is a thriving community. So, I have learned to look for staticness, and when I find it I literally get sad. I see this as a failed work, something that we were not able to diagnose, or failed to get to in time. At the same time, each failed work is a study and we have much to learn about how and why books die. I think it’s important to learn to look for staticness as an early symptom of a failed book.
## Our Schooling System is Broken It has been a while since my last blog – it was never my intention to go so long between posts but you know … sometimes life hands you other things. In any case I plan to start writing again more frequently, starting with a subject that has been on my mind for a while now: Education. See, I am thinking our system might be broken. Scratch that – I know it’s broken, and in many ways. But I am talking about a fundamental issue, which is the assumption that performance in a school system with a standardized curriculum is a key measure of personal value. I will try to explain, starting with some background for context. #### I Am a Teacher It’s true. I am a teacher. A very happy one at that. I love my job. I teach high school math in Ontario, Canada, and have been doing that for about 15 years. Prior to that I worked as a software developer for about 10 years. When people ask me what I do for a living (something other adults seem to have a deep need to know upon meeting each other), the conversation always goes roughly the same way: Other Adult: “So Rich, what do you do for a living?” Rich: “I’m a teacher.” OA: “Oh? Nice. What do you teach?” (Rich’s note: There may or may not be a “joke” here by OA along the lines of “Oh yeah? You know what they say: ‘Those who can, do, and those who can’t, teach’ hahahahaha”) Rich (being honest, even though it’s not what they meant): “Kids.” OA: “Oh, haha. But seriously, what subject?” Rich (with an inward eye-roll – here it comes): “I teach high school math.” OA: “Oh god. I hate math. I remember I used to be so good at it until grade 8 when I had Ms. Heffernan. She hated me! And she was so terrible to the kids. She made me hate math. I never understood anything in math after that. I am so bad at math! The other day I tried to help my 7 year-old with her homework and I couldn’t even understand what they were doing. Do you tutor? I think I may need to hire you to help little Kelly with her math. Math is so confusing. I keep telling her she doesn’t need it anyway. I mean, I run a multi-million dollar business and I never use any of the math they tried to teach me in high school. Why don’t you guys start teaching useful stuff like understanding financial statements and investing? I had to learn all that stuff on my own. I don’t see why math is so important. I am really successful and I was never any good at it thanks to Ms. Heffernan …..” Rich: “I apologize on behalf of all my brethren. Please carry on with your successful math-free life. Yes, I’d be happy to help Kelly, but honestly she probably doesn’t need any help. She’s 7. She will conquer.” You get the point. But as I said, I seriously love my job. For many reasons. But perhaps the main one is that it keeps me connected to the fluidity of humanity. I keep getting older, but my students do not. They are the same age every year. And because I spend a large number of hours each day immersed in their culture, it forces me to keep my thinking current, so that I can continue to effectively communicate. In this way I feel less like some sort of flotsam floating on the river of time and more like a beaver dam of sorts, constantly filtering new water on it’s way downstream, being shored up with new materials as each generation passes through. I am like a connection between the past and the future, and the older I get, the larger the gap I am privileged to span. And in any case, young people are perpetually awesome. 
It brings a faith in humanity I’m not sure I could get in other ways to see firsthand the caretakers of the future. #### School Is Not Really About Education I can hear your thoughts. Um, what? Isn’t school by definition about education? Well, yeah. Granted it’s supposed to be. But it isn’t. All you have to do is talk to almost any adult existing in Western society today about it and they will tell you (often with great relish, as though they are solving world hunger), they never use a single thing they learned in school in their day-to-day lives. Which is of course false. But mostly true. If you went to school in Canada you probably at some point had to know things like what year Champlain landed in Quebec (1608, in case you’re stumped – I just Googled it). I can say with a fair amount of confidence that whether or not you have that tidbit of info available in your memory banks is not affecting your life in any measurable way. You probably also had to know the quadratic formula at some point. I don’t have to Google that one. It’s $x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$. Nifty, isn’t it? Sorry if I scared you. Here. Check out these definitions of “education”. I Googled it. Side note: Google is so cool. I’m 48. I have existed both with and without Google. I had to assimilate the use of Google as a verb into a lexicon that did not previously have it established that way. Your kids didn’t. They have to establish the word googol as a noun referring to the number $10^{100}$ into a lexicon that likely does not have it established that way. See? The very first definition says I am wrong. School is about education, literally by definition. Oh, but check out definition number 2! Now that is a good one. Education is an enlightening experience! The thing is, in many cases, and for many people, school isn’t enlightening. In many cases, school is an exercise in conformity and alignment. It’s a system which simultaneously (and somewhat arbitrarily) defines success (um, I think I mean worth) and then provides measures for individuals to evaluate their success (yeah, I definitely mean worth). #### Performance of Curriculum is a False Measure of Worth Ok. So when you were a kid, around the age of 5, your parents sent you off to school. And almost immediately, you started to get report cards. They aren’t cards anymore. In many cases they aren’t even paper. But still called report cards. A figurative card summarizing your score on a number of predetermined criteria for success. And make no mistake – from that very first report card on, kids are sent the message that they must perform according to standards so that they get good report cards. I’m going to stay away from the early years, since that’s not my area of expertise, and fast forward to high school, which is. Let’s look at an example. This is based on a real person, whose name I’ve changed. Yes it’s anecdotal. It’s not meant as proof, only to demonstrate my point. Sally is a young lady in grade 9 who has never particularly had an interest in math – at least in how it’s presented at school. She really doesn’t care about direct vs. partial variation, or the sum of the exterior angles of a polygon, or about how doubling the radius affects the volume of a sphere. But Sally goes to school, in a system which has not only determined that these things are important, they are also mandatory. So she has no choice but to engage in attempts to learn about these things, despite the fact that she is actually incapable of being interested in them. 
Sally is awesome, by the way. A genuinely caring human with deep empathy, intense loyalty, and a great sense of humour. Sally is also depressed, because she feels worthless. No matter how hard she tries, she can’t figure out how substituting $2r$ for $r$ into the formula $V=\frac{4}{3}\pi r^3$ causes the volume to increase by a factor of 8. In a test designed to determine if Sally can do these things, her performance was dismal. Her mark on that test was a 58%. The class average was 85%. When the teacher returned the test, she made comments like “Well, I know that I taught about exterior angles, but it is clear that some of you did not learn it.” When Sally brought the test home to show her parents (who, incidentally, knew that Sally had the test coming up, hired a tutor to help her prepare, and then asked if she had gotten it back each day after school starting the day after it had been written until the day a week later when it was returned), her parents were frustrated and disappointed. Sally has been conditioned to believe that her ability to understand direct and partial variation is critical. Because it is mandatory, her inability to care or comprehend is forcibly highlighted. And subsequently, her performance is recorded for posterity on her report card. Sally’s final grade in math ends up being 61%. Her parents are disappointed. Sally is devastated. She believes there is something fundamentally wrong with her, because she legitimately can not measure up to the standards set in a system she has no choice but to be engaged in. #### Breaking Down the Process Let’s have a look at this situation of Sally’s, and the number of places where the system went wrong. First off, the concepts Sally has to learn have honestly been arbitrarily determined. See, someone, somewhere, decided that Calculus is an important prerequisite subject for many university and college programs. And to learn Calculus (a grade 12 subject in Ontario), you arguably have to start with concepts like direct and partial variation in earlier grades. And because grade 9 is truly too early to know if programs that require Calculus are in your future, this pre-calculus material is incorporated into the curriculum for pretty much everybody. Second, math is mandatory. In Ontario high school is a 4-year program, and you must have a minimum of 3 math credits to earn a high school diploma. All of this has nothing – and everything – to do with Sally. Next comes the process. Sally’s teacher created quite possibly an amazing series of lessons on these topics. Sally just doesn’t have the wiring for them, so even though the teacher may be phenomenal, it will have a marginal impact on Sally’s ability to synthesize the material. Sally used to ask questions in class. Now she doesn’t, because she learned that even when she asked, it didn’t help. Sally is deeply empathic – she has seen firsthand the good intentions and effort of her teachers in the past. She can read facial expressions and body language. She has seen her teachers answer her questions and try to help her and seen how much they believe the answers and help are working, so she has pretended that it was, since that was easier than admitting that it wasn’t, because it meant burdening herself with her inadequacy instead of her teachers. Third, Sally has been conditioned to believe that it is all being done so that she can get high grades. Because ultimately her success is defined by her report card. 
And because pretty much everyone is telling her that you don’t need this stuff in life anyway. You just need to learn it to get high grades so you can be successful. Which means the only purpose of learning this stuff is the test you will eventually write on it. This is exacerbated by the well-intentioned parents who put so much emphasis on the test – both before and after. You’ve heard the question “When am I ever going to use this?” Well, the answer for most kids is “Next week, when you are tested on it.” See? It’s all artificial. Sally is a high-worth kid, forced into a situation she isn’t wired for, and told that her worth is defined by her performance in that situation. It’s incredibly sad to watch. #### The Present and the Future So here’s where we are now. Many kids and parents these days believe that high marks are critical. The perception is that material presented in school is not inherently valuable, but instead the value is that it is a vehicle to high marks. This means that kids will often do anything it takes to get the high marks. This includes cheating, but that’s not really what I am driving at. What I mean is that they develop strategies that focus on getting high marks, as opposed to learning. Cramming for tests, paying for courses in small private schools that guarantee (implicitly or explicitly) very high grades, or negotiating with teachers (even to the extent of aggressively bullying) after the fact are all standard operating procedure. What is so terrible is that Sally may be able to get high grades using any of these tactics. But Sally will also always know that she didn’t earn them. She will always know that she is not good at something that she should be good at if she is to be valuable. And if and when she manages to get the high grades anyway, that sense of fraudulence will haunt her. I’ve seen it. It’s tragic. Sally has a great future if she can discover her true worth. The worth she was born with and the worth that her friends probably value more than anything she can do in a math class. But she may or may not discover it. #### What I Do About It I love people. And that includes the kids I teach. And I teach high-level math, which is honestly not for everyone. I get a lot of kids coming through my classroom doors who are not there because they want to be, and who will not be able to draw the joy from studying math that I really, really do. I have constraints. I have to teach the material as it’s laid out by governmental process. I have to assign grades that reflect students’ performance against some pretty specific criteria. But within this I make sure that each and every kid I teach knows that I value them as a person. That I care about their story. That I see them not as a 2-dimensional projection of my consciousness, but as a multi-dimensional consciousness of their own, with a narrative as rich and intricate as mine. I make sure that they understand that any evaluation I do of their abilities in math is a tiny, tiny cog in the complex machinery of their existence, and has zero impact on my impressions of them as people, or on my estimation of their worth. I show them that it is totally acceptable to love and be passionate about math, without tying that love and passion to an evaluation. It’s not about the math. It’s about giving them permission to take joy out of abstractions, and to pursue the things that they were wired to do. I’m not always successful. Some kids are too preconditioned. But I will never stop trying. 
Rich ## Thoughts From a Dance Dad Lately it seems as though the amount of time I have to write is inversely proportional to the amount of ideas I have to write about. But today’s entry is about something I’ve been thinking about for years: Dance. Early Years My daughter is a competitive dancer. She’s 15, and has been dancing since she was around 3. She’s gone to countless dance camps, workshops, and of course classes. So I have been a dance spectator for about 12 years. Prior to that I knew next to nothing about dance, save for the fact that I have never been any good at it. At first, dance was pretty much entirely about how cute the kids all looked executing choreography. They got to wear these elaborate costumes and perform for friends and family at recitals. The teachers and teaching assistants are always on stage at the same time as the kids, and the kids essentially never take their eyes off them, mimicking the movements they’ve all spent months in class learning. It’s exceedingly adorable, and naturally every person who comes to watch immediately rushes to them afterwards to tell them how wonderfully they danced. In short, it’s a typical exercise in getting kids involved in an activity that provides some structure around working toward a goal, and then the kids get congratulated on essentially existing for the duration. And it’s awesome. As the years progressed, we saw less and less boys involved. I won’t attempt to analyze that or comment on why it might be, but political minefield notwithstanding, it is true. This meant that as the dancers grew, it became – for my daughter’s group at least – a girls-only activity. Emerging Talents Starting around the age of 8 or 9, and lasting for 3-4 years, it starts to become obvious which of the girls are well suited to dance and which are not. This obviousness is not lost on the girls. Dance becomes a micro-society where “Haves” and “Have-Nots” start to identify, and the behaviours that result are what you would expect. In a way it mirrors what is happening at that age in school, but from where I sat it was definitely magnified at dance. These can be pretty difficult years for the girls, and perhaps more so for the parents. As I watched from the sidelines, I always told myself that whether a Have or a Have-Not, there are very valuable lessons to be learned from these dramas, and whether my daughter was receiving or giving grief (it certainly seemed she was receiving a lot more than giving, but nobody ever accused a dad of being impartial), my wife and I always did our best to ground her in reality and look for the long-term life lessons that could be taken. I do think, subjectivity aside, that I can safely say my daughter began to show real talent for dance during this time. I can also say, objectively this time, that she emerged from this phase with an inner-strength and confidence that is astounding. As I watch her navigate the social quagmire of the tenth grade, I am exceedingly proud and awed at how well she manages to stay true to herself and her friends, while gliding above the drama that can consume most kids of that age. She never judges others, and always stays honest in helping her friends deal with whatever the current issue is. In and out of the dance world I have watched her handle victories with honest grace and compassion, and failures with resolute determination. She’s my hero, and I firmly believe we have the “emerging talent” years of the competitive dance program to thank for that. 
#### From Girls to Women

As the girls mature into women, things change at dance in a way that I could never have understood if I were not so immersed in it. This phase is not something I came to understand only as my daughter entered it – the nice thing about being a dance dad is that every recital and dance competition you attend features dancers from all the age groups. So long before my daughter was in high school I had been observing this stage of a dancer’s development. I also have the added advantage of being a high school teacher, and so for my entire career have had the pleasure of seeing how dancers take the lessons from dance into seemingly unrelated arenas, like a math classroom, which really is my domain. Having a daughter in dance always made me pay attention to how older dancers behaved, kind of as a way of glimpsing my daughter’s future. Here are some observations I’ve made over the years, and observations I have now had the pleasure of seeing manifest in my own daughter.

#### Dance is a Language

This is not a metaphor. Dance actually is a language. It took me some time to fully appreciate that. Because of my daughter’s involvement in dance, our family has been watching So You Think You Can Dance since season 2. It’s a great show to be sure, but I admit at first I was too absorbed in marveling at the physicality of it to understand what it communicates, despite the fact that the judges on the show really do a great job emphasizing this (I always assumed they were saying it metaphorically). But like a child that learns to speak simply from hearing the spoken word and contextually absorbing meaning from the sound, I began to absorb meaning from the movement. The first thing I realized was that unlike languages that use words, dance doesn’t translate to any other language, and communicates things which can’t be communicated any other way, with the possible exceptions of fine art, or poetry. Really good fine art will enthrall and speak to the viewer through infinite contemplation of something static. Really good poetry succeeds at using words which individually can be quite linear, by combining them in a way to create depth and consequently say something the language the poem is written in was not necessarily designed to say. Really good dance? A different thing entirely. It speaks to our humanity on multiple levels, and the fluidity of it allows the choreographer/dancer to tell us stories no written word could approach. Words are discrete, and a picture is static. But motion is a continuous medium, and the very continuity of it results in an infinity of expression within a finite frame of time and space. It has been said that dance is poetry in motion, but I honestly have come to see it in the reverse. Poetry is dance stood still. I can’t find words to describe this any better, because words will fail here. If you want to know what I mean, watch dancers. And in the same way that second and third languages improve thought processes and imagination, so does dance – but it does so in a way that is magnified a thousandfold because of its unique method of delivery, and because of the world of thought and emotion it opens up for communication. It also is unique in that you don’t have to be able to speak it to understand it. You only have to watch.

#### Dancers Make the Best Actors

Because of my passion for theatre, I have had the immense pleasure of being both actor and director in various musicals.
And here is what I’ve noticed – not all great actors are dancers, but all dancers are definitely great actors. To me there is no mystery as to why this is. Many actors focus on the words they’re saying or singing, trying to pour all of the character they’re portraying into the delivery of the lines or lyrics. Physicality is often an afterthought, or a simple by-product of the emotion they are feeling about the performance. For dancers it’s entirely different. Because of their fluency in dance, they are simultaneously vocalizing and dancing the performance. By dancing I don’t mean the choreography that often accompanies musical numbers, although naturally a dancer excels there. Rather I mean that they are speaking to us in two languages simultaneously. And even those of us not able to communicate with dance can still understand it. So I have often found myself thinking of a dancer, “It’s not a je ne sais quoi she has. It’s a je sais qu’elle est une danseuse” (yes, you have to speak some French for that one 😉).

#### Dance is Empowering

Rich

## The Arts – Polish For the Soul

First off, a quick apology to anyone who follows me for my lack of blog posts. I have been writing them – but they are all sitting in my draft folder. However this one is special. So this past weekend I went to New York City with my son for a quick trip. We had tickets to see the Sunday matinee show of Hamilton, and at the last minute when we were there we also decided to get tickets to see Fiddler on the Roof. I could probably write a small novel on how awesome it was to spend a weekend in NYC with my almost-19-year-old son, but that’s not what this is about. This is about Art. In my 47 years on this planet I have learned one thing about humans – we tarnish. Or more specifically, our souls tarnish. It’s not a bad thing – in sterling silver, tarnish is just a natural result of exposure to air. It does nothing to diminish the silver underneath, nor does it change the essence of the silver in any way. What it does is make the shining core progressively less visible to the world. With tarnished silver there are two ways to reveal the shine – you can score the surface where the tarnish is and reveal shining silver underneath, or you can gently polish the tarnish for the same result. Scoring the surface leaves scars, but does not affect the shine. Polishing leaves no scars. When it comes to humans, we are all born shiny. Like new silver, our souls gleam and light the world around us. You don’t have to be a philosopher to know this – just watch the faces of all the adults the next time you see a little girl on the subway singing made-up lyrics about the ads on the walls. Her soul is bright and shiny and we love it. But as we get older our exposure to life adds layers of tarnish. I get that this sounds negative but it really is not. It’s natural. Our light does not dim – it just becomes more hidden. Personally, I’ve seen three things that can bring it out again. The first is grief. Live long enough and you will get scored by grief – it’s inevitable. It hurts like hell. But something miraculous also occurs. Grief cuts through the tarnish. In the terrible grasp of grief, people return to that vulnerable state of openness and childlike trust. It doesn’t make it hurt less, but it does remind us how beautiful our soul is. It leaves us scarred, but not less wonderful. It also leaves a memory of that vulnerability that was our souls shining where the tarnish was removed. It’s not a scary vulnerability but a precious one.
However the tarnish returns, and nobody should ever be subjected to grief as a means of therapy. The second is celebration. Weddings in particular are where I have seen peoples’ souls shine. Listen to wedding speeches from people who are truly in love – and even the speeches from their families and friends, and you’ll know what I mean. The third, and to me the most significant in that it can be called upon at will, is art. I really do mean art in all forms (and as an aside, check out my other website where I feature my own drawings: Studio Dlin), but my focus here will be on theatre, and specifically on the shows my son and I saw this past weekend. Saturday night was Fiddler on the Roof. This is a show I know very, very well. I actually have had the pleasure of performing the role of Tevye in it, and I love the show dearly. Anyone familiar with the show will know that Act 1 is loaded with warmth and humour, right up until the final scene. Act 2 is heavy, with not nearly as much laughter and with a lot of emotional, even painful moments. As you’d expect from Broadway, this cast and the production were outstanding. Because I know the play so well, and because I played Tevye, I was actually simultaneously performing the show in my head as it unfolded. I found myself in the story. Tevye loves his daughters deeply and tenderly. I loved them too. Tevye loves his people and his town. I loved them too. Tevye suffers poverty with a smile and an honesty that is undeniably human, and I did too. In Matchmaker, his daughters discover how terrified they are of being committed for life to a marriage someone else chooses. I was terrified too. The townspeople suffer at the hands of an oppressive Tzar, and I suffered too. Tevye and his daughter Hodel say goodbye forever when she decides she must go live in Siberia when Perchik is arrested, and I was both father and daughter in that moment. Tevye then must say a much harsher goodbye to his daughter Chava when she decides to marry out of the faith, and his traditions force him (and to a slightly lesser extent his wife Golde) to treat Chava as dead. In that moment I was father, daughter, wife and husband. When all the Jews are forced to leave Anatevka at the end of Act 2, I was every one of them – even the Russian constable who had to inform them of the edict. I laughed, cried, danced in my seat and sang along (in my head!). Sunday afternoon came and it was time for Hamilton. My son and I have both listened to the soundtrack many, many times. Being younger and possessed of both a greater quantity and quality of brain cells, my son knows the lyrics practically by heart. I also know them very well. Not by rote, like with Fiddler, but well enough to sing along and certainly well enough that I know the whole story as told in the play. From the moment the lights went down to the moment it was time to leave I was once again living the story. Just as it was with Fiddler, every scene placed me firmly in the hearts of the characters. When Hamilton’s mother died holding him I died with her, and I survived with him. When Hamilton, Laurens, Mulligan and Lafayette are planning their glory, so was I. When Eliza was anxiously watching Alexander as he is trying to win over her father, I was all three sisters, I was Hamilton and I was Philip Schuyler. When Angelica told the story of falling for Alexander right before introducing him to her sister Eliza, I was all three of them. 
When Burr presented himself to Washington just before Hamilton arrived in the office I was Burr doing what he needed to do to get ahead, Washington carrying the burden of leadership and Hamilton with his burning desire for glory, not recognizing the real power that set him apart. I was Burr dismissed by Washington and Hamilton not knowing what Washington really wanted him for, and I was Washington seeing it all from the lens of maturity and wisdom and also knowing there’s no way to explain it to either Hamilton or Burr, and knowing that only life would teach them. I could go on. And I will. I was Samuel Seabury trying to defend a way of life I didn’t understand was an illusion, getting bullied by someone with more clarity and intelligence but not understanding what I was wrong about. I was King George, unable to see or comprehend a world outside the carefully constructed and preserved cocoon of royal privilege. I was an American soldier fighting for independence. I was Hercules Mulligan and I got knocked down and got the fuck back up again. I was a redcoat in a war decreed by my king, fighting across the sea away from my home. Fighting against people who were fighting for their home. I was Charles Lee, in over his head and not comprehending the stakes – only the glory of my title. I was the British soldier finally given permission by a superior officer to wave the white flag, and doing so with a weariness that permeated to my core. I was Philip Hamilton showing off nervously for his imposing father, while honouring the lessons of his caring mother, and at the same time I was the father and the mother. I was Jefferson coming home, and Madison celebrating the return and the support of his like-minded friend. I agreed with Jefferson AND Hamilton, and felt both their passion. I was Washington knowing I had to step down, even if I knew that what was coming was not what I would have done. I was Maria Reynolds, so beaten down by cruelty that my principles were skewed to a place where any momentary relief from the reality of my life justified any means to get it. I was the asshole James Reynolds, and it sucked. I was Eliza realizing she’d been betrayed, and that sucked more. I was George Eacker, cocky and arrogant, and Philip Hamilton, the child-man. I was the shooter and the victim. And then I was the mother and the father, when my heart was thrown into a wood chipper as we watched Philip die. I somehow continued to live, as they did. I was Burr campaigning, I was Hamilton supporting an enemy with principles over a friend without. I was Burr driven by frustration and rage, and I was Hamilton ultimately admitting defeat to the price his family had paid for his drive. I was Eliza for 50 years after that. I was all of this and more, and all in two doses of 2 hours and 45 minutes (Hamilton and Fiddler have the same running time). In those moments my soul was shining thanks to the gentle polish of the performances, and it still is. And as I looked around the theatre after Hamilton it struck me. Hundreds of people had experienced the same thing. The same tarnished souls that had entered the theatre were all shining brightly as they left. The building glowed with it. Now of course, just as sterling silver does, we will all tarnish again. But here is the beauty of art, and the point of this blog – the polish is always there. You just need to use it. Celebrate the arts. Partake. They are the real expression of our souls.
Rich ## On the One Year Anniversary of my Heart Attack So August 26th of this year marked the one year anniversary of my heart attack. I actually haven’t written a blog since the one I wrote about that day. That’s not as significant as it might sound – I have rough drafts for 3 different ones that I started but I’ve been busy with other things and haven’t devoted as much time to writing as I’d like (something I regret quite a bit). But for the anniversary of H-Day I thought it would be good to write an update on what has happened this past year. I don’t think a chronological narrative would make much sense, and besides, I don’t have that great a memory. So I’ll go with more of a stream of consciousness approach. As you may know I am a high school math teacher. The heart attack happened exactly one week before the start of the 2014-2015 school year. Although many people couldn’t understand how or why I did it, I actually worked right from the first day of school. I certainly could have taken some more time to recover, but I didn’t feel I needed it that badly and the doctor said that if my job didn’t require heavy lifting and I felt okay there was no reason not to work. My reasoning for starting was that it was easier for everyone involved – students wouldn’t have to adjust to a new teacher twice, the school wouldn’t have to scramble to find someone to cover my classes and my colleagues wouldn’t have to worry about teaching more than their own course load. Now, almost a year later, I can say that the decision to start right away was neither good nor bad. If I had waited things would have been just as fine as if I had not. It’s funny how so many decisions in life seem important when they really are trivial. I took things easy at first and let my body tell me when I could ramp up, always erring on the side of caution. For example I took the elevator instead of the stairs for a couple of weeks, and kept my boardwork lower on the board (so as not to raise my right arm too high after the angio) for about a week. One thing that I learned from my first follow-up with the cardiologist was that I have no “modifiable risk factors” for heart attack. Basically it’s good old genetics. I don’t drink or smoke. I have low cholesterol and low blood pressure. At the time of the heart attack I was overweight but by no means obese. I was keeping fit with heavy weights and regular though limited cardio. This was disturbing news – I mean it would be nice if I could just stop something I was doing and know I was preventing another heart attack, but as the cardiologist said, at least now I know. I had three partially and one fully blocked arteries, and except for one of the partials they were all stented. The one that was not stented is very small and doesn’t supply a large area, so that other arteries nearby can cover what it doesn’t manage. I am on a cholesterol medication that has been shown to prevent plaque buildup in arteries and even to slightly reduce existing plaque. I am hyper-aware of my heart so if anything does deteriorate I will be on it right away. In the meantime I decided to do everything I could do. As soon as I got the green light to resume exercising I began a cardio regimen of 45 minutes, 5 times per week. As of this moment, I have averaged exactly that. I say averaged because there were three weeks where I didn’t manage to get all 5 sessions in, but always compensated in succeeding weeks by adding sessions. Three different vacations didn’t keep me from my cardio. 
Some people tell me “Hey, you’re on vacation, give yourself a break.” My response is that my heart doesn’t know I’m on vacation – there is no such thing as a break. I also cleaned up my diet. Not that it was that terribly unclean to begin with. But I did eat a lot of red meat (3-4 times per week, sometimes more), and 2-3 times per week allowed myself cheat meals like KFC or Burger King, or just really decadent meals at restaurants. Now I eat only lean red meat, and only 1-2 times every month. I’d say over the past year I’ve probably had red meat about 15 times. My protein mainly comes from white meat chicken, fish, and some vegetarian sources like beans, quinoa, and nuts or nut butters. I eat very little fat, and almost no saturated fat. What fats I do eat come from the fish or chicken, or light salad dressing, which I use extremely sparingly. I don’t measure my food, but I never eat until I am stuffed. That’s also a change from before. For this entire year I have not felt stuffed even once. And I still eat a lot – probably 7-8 times each day. A lot of fruits, berries, vegetables and nuts fill out my diet. So what the diet and cardio have done is resulted in fat loss. I spent my entire adult life struggling with fat loss – often successfully but not always. Each time the goal was fat loss. Now the goal is not that at all. The cardio and diet are to keep my heart healthy. The fat loss is a side effect, albeit a pleasant one. When I had the heart attack I weighed 225 lbs (down from an all-time high of 245). Because I am a hobby bodybuilder that’s not as heavy as it sounds, but I was certainly carrying too much fat by an obvious margin. My weight this morning was 189. I won’t lie and say that I’m ambivalent about that – I am overjoyed. But it wasn’t and isn’t the goal. Speaking of exercise I also resumed lifting weights about 6 weeks after H-Day. This was with the doctor’s blessing. At first I kept things very light and let my body tell me when it was ok to go heavier, again always erring on the side of caution. I don’t remember the exact timeline but I’d say after about 3 months I was more or less back to pre-heart attack form. The weights and the fat loss are visually pleasing to me. Here are a few vanity photos of the impact this has had on my look. I’d actually like to include a photo I took when I was 245 lbs but my computer is currently deciding I’m not allowed to look through old photos – thanks Windows 10. The great news is that after the heart attack the cardiologist who saw me at the hospital said my heart was damaged (on a scale of 1 – 4 where 1 is the best, mine was a 2), but on my six-month follow up visit I had managed to return it to a level 1. The words of the cardiologist were “Except for the presence of the stents you have the heart of a healthy, athletic adult male with no sign of trauma.” And that, ladies and gentlemen, mattered profoundly more to me than how I look, although that is what people see. Emotionally/psychologically it would be a lie to say I have not been affected. The day I had the heart attack one of the reasons I didn’t call an ambulance as soon as I should have is because I didn’t want to scare the kids. There’s no way around the fact that when your dad has a heart attack it’s scary. Same goes for my wife. The very last thing I want to do is scare them or have them worry. That said I am now hyper-aware of what is going on in my body, and especially my chest. And guess what? Chest pain happens, and it’s not generally a heart attack.
Gas happens (especially because it is a side effect of some of the medication I am on). The pain can cause anxiety. Anxiety can cause chest pain. It’s a hilarity-filled ride. I can’t specifically recall how the heart attack itself felt – I just know it hurt but was not as intense as you’d think. I feel as though if it were happening again I would be sure. But I’m not sure if that’s true. So there are days when I find myself worrying. However with the cardio regimen I’m on I can always reassure myself that I wouldn’t be able to do 45 minutes of intense cardio without accompanying intense pain if I was actually having another heart attack. On that note, when I started the cardio after the heart attack I was keeping my pulse rate in the 120’s, although my doctor did say I would be able to push it higher as I healed. As of today I usually use my elliptical machine (I have a gym in my basement although it has evolved since that blog about it), and the heart rate monitor I bought shows I’m keeping my heart rate in the 140-150 zone, which I made sure was ok with the cardiologist. Speaking of heart rate, I also take my blood pressure daily, and it stays in the 115/75 zone, with a resting heart rate of around 60 bpm. One thing I have found recently (as in, the last 5 weeks or so) is that drawing is great therapy. It is extremely calming and does a great job of centering my thoughts. I highly recommend it. Another thing I’d have said if you’d asked me a year ago was that I can’t draw for beans. I never really believed I had any talent in that regard. But I have watched hours of YouTube tutorials and have been drawing every day. The therapeutic aspect can’t be overstated. It turns out when you practice something you also improve. Here is one of my earliest attempts at a portrait and one of my most recent ones. I’m no pro and may never be one, but the improvement is real and that’s only about a month. Therapeutic, fun, and inexpensive – I highly recommend it. Wow. Ok this really has been stream of consciousness style. My writing is usually more organized than that. Ah well. This one wasn’t about writing, more about an anniversary summary. I admit I didn’t proofread that carefully either – forgive the errors. I am always happy to answer questions or offer assistance if I can. Leave me a comment and I will respond. Rich ## So This Guy Has a Heart Attack On Tuesday, August 26, 2014 I had a heart attack. I’m not the type of guy you would expect that to happen to. Shortly after it happened I wrote a long description of the events to share with friends. Some have told me that it may actually prove helpful for other people, so I am reposting it here on my blog. First, as soon as it happened and as people found out, I received so many emails, texts, phone calls, and visits that I can’t even begin to count them. It’s like the support system of family, friends and colleagues is a big inflatable cushion that kind of hovers underneath as I move through life, inflating like an airbag in a crash when they are needed to carry me through. It’s overwhelming. Thank you. I’ve gotten a lot of the same reaction from people when they find out. “WTF? You’re so young and you exercise and eat right. How could this happen?” is a basic summary. Trust me, it summarizes my reaction as well. People have also asked more specific questions like how did I know it was happening and what did it feel like. I will answer with a narrative of events. On Saturday August 23 I was doing cardio on the elliptical. 
Lately I’d been doing 45 minute sessions, 3 times a week. Towards the end of that session, I felt some pain in my chest which I thought was odd, but I chalked it up to asthma aggravated by the dust in the basement because we’ve had contractors in to finish the other side of the basement (my gym is the left side) for a couple of weeks. Once I got off the elliptical the pain went away, confirming my suspicion. It was not intense pain. Sunday, August 24 we went to Gravenhurst to spend the day at the cottage with my dad. It was a relaxing day where I didn’t do much, although my dad did buy a new barbecue and I carried it in from the car by myself, with no discomfort or trouble at all. The drive home took a lot longer than we thought because of traffic and I went straight to a rehearsal for the show I was supposed to be in. Rehearsal was fine and I went home and went to sleep. Monday, August 25 was the day I planned to start getting ready for the new school year. I had some boxes of vases from old parties and we are donating them to the school where I teach so I loaded them into the trunk of the car, along with a plastic bin full of empty binders. Got to school and brought the bin in and upstairs to the staff room. I was struggling quite a bit with it but attributed it to the heat. On my way up the stairs with the bin I had chest pains again and was sweating quite a bit, which I found very odd because the bin was not that heavy. At that point I was a little scared, but I sat down and recovered quickly. I went home, told my wife Marla what happened, and had a nap. Apparently Marla checked on me a few times during that nap to make sure I was breathing. I thought she was overreacting. Monday night we had rehearsal again and I went, and felt fine. Went home and went to sleep around 11, with slight pain in my chest but it felt a lot like heartburn, which I do suffer from regularly. Tuesday morning I woke up at 4:00 and the chest pain was still there, only more intense. It still felt like heartburn and I wasn’t sure what to do. I was up for an hour, and Marla woke up around 4:45 or so asking if I was ok. I really wasn’t sure. I googled symptoms of heart attack and it’s a pretty wide range of things it can feel like actually, although chest pain that lasts for more than 5 minutes is not to be ignored. I also read that driving (or being driven) to the ER is not recommended, since if you call an ambulance you will be treated sooner (by the EMT’s) and also because the EMT’s can assess your condition and potentially take you to a different facility. At 5:00 am I decided to call 911. I was very upset at the thought of the kids waking up and finding emergency vehicles and people in the house. If the kids were not home I probably would have called earlier. I desperately didn’t want to scare them. It was bad enough I had already scared Marla. When you call an ambulance for chest pain they will also dispatch firefighters, because they can respond more quickly and are trained for first aid. While waiting I got dressed, brushed my teeth and sat down on the couch. The pain didn’t go away. Firefighters arrived and asked me a lot of questions about where the pain was and how intense. Shortly after that the EMT’s arrived and the firefighters filled them in as they hooked me up to an EKG monitor. The first readout showed something that concerned them a little, but two more readouts showed as normal. 
They asked me to rate the pain on a scale of 1-10, which I’ve always found odd since if I say 6 what does that mean to them? For all they know I would call a papercut a 10 (and now that I think about it I’ve had some pretty painful papercuts – ever get one from cardboard? The worst). I said it was around a 3. They decided based on the first concerning readout not to take me to the closest hospital, which is Mackenzie Health, but to go a bit farther to Southlake, which is in Newmarket, because they have a cath lab there which is needed for angiograms and angioplasty. Score one for calling an ambulance instead of driving to the ER. We stopped at some point between my house and the hospital to meet another ambulance and a different EMT came in to attend to me. He’s the one who put the IV in and they gave me morphine for the pain (which was fluctuating between a 2 and a 7) and baby aspirin to thin my blood. At this point I still didn’t know if I was having a heart attack or they were just taking precautions. They called ahead to Southlake to have a team ready at the cath lab. We arrived and they wheeled me straight to the lab – do not pass Go, do not collect $200 – but there was no team there. Turns out they had accidentally called Sunnybrook, where there was certainly a team waiting for me, so they wheeled me to the cardiac care unit (CCU). This was around 6:30 am or so. The nurse in the CCU at Southlake called the team, which is always 20 minutes out. Meanwhile the cardiologist on duty came to see me. He looked at the EKG readout and was the first person to tell me with certainty that I was having a heart attack. In the meantime Marla had woken the kids and followed the ambulance up to the hospital, so they were already there. The kids were a little freaked out for sure, but I think the calm way everyone was dealing with it helped them a lot. So anyway the cardiologist decided not to have the team come in since the morning shift was starting at 7 and they could do the procedure. He explained it to me and I had to sign some forms, and they wheeled me back in to the cath lab. They said the procedure would take about an hour. Nurses had shaved and washed my wrist and groin since those are the sites where they may insert the catheter. Once in the cath lab they must have put some good stuff into my IV because although I was conscious throughout the procedure it seemed to me to last about 15 minutes. It was actually an hour. The doctor decided to go through the wrist, and he explained everything as he did it. There was some pain from the freezing, and I could feel the catheter going up my arm. That sounds worse than it is – it’s really just a kind of pushing feeling. At one point my whole chest got warm. I said “My whole chest just got warm – is that you guys?” He said it was the dye they use for the angiogram. There’s a huge bank of screens that he watches as he does the procedure, and an x-ray device that moves back and forth over your chest as he works – it’s very cool. Kinda robotic. Anyway I heard him asking the nurse for stents and I swear I could tell the moment he put them in because the chest pain went from about 5 to zero in an instant. Once the procedure was done they wheeled me back to my room in CCU where Marla and the kids were waiting. There was a blue clamp bracelet on my wrist that was pretty damn tight (still bruised a month later) but otherwise I felt fine. My brother came and stayed for a while, then left and took the kids home. Marla stayed with me every second.
The nurse came in often and was slowly releasing the clamp until he felt he could apply a pressure bandage instead, which he did. At that point I was overcome with nausea from the anesthetic and I vomited, which turned out to be bad for my wrist, which immediately swelled up and started bleeding (which it turns out is the reason I am still bruised). After applying pressure with his fingers for a while the nurse reapplied the blue clamp, leaving it on for much longer this time until he felt he could replace it with a pressure bandage. In the meantime I was visited by the cardiologist who performed the procedure. He told me I had one fully blocked artery and 3 partially blocked. He had stented the big one and two smaller ones but left one very small artery partially blocked and unstented, because it is very small and because it is not fully blocked and because too many stents at once is not the best thing for the body in any case. Additionally with a small artery like that one the body will create new arteries to replace it. I have before and after pictures of my arteries from the angiogram. They are spooky. That cardiologist also said that “most guys take at least a month off work” which shocked me as I felt ready to rock right then! I was also informed that I couldn’t stay at Southlake because I live closer to Mackenzie, and that as soon as there was a bed at Mackenzie I would be “repatriated”. Yes that is the word they used. I would have preferred extradited but it turns out there is no extradition treaty between Newmarket and Vaughan. A few hours later and they did have a bed at Mackenzie, so they called an ambulance to transport me there. I was sad to leave Southlake. It’s beautiful there, the nurses were superb, and I had a private room, but alas I am not a citizen of Newmarket. At Mackenzie they wheeled me into a quad ward staffed with two dedicated nurses. The nurses there weren’t quite as attentive as the ones at Southlake, but then again I was somewhat out of the woods. They were very knowledgeable and answered all my questions patiently and thoroughly. Marla had followed the ambulance from Southlake to Mackenzie so she came right into the ward with me, and immediately got kicked out so that the nurses could apply about 763 new electrode pads to me (in addition to the 451 I already had), hook me up to a blood pressure cuff that automatically inflated every hour on the hour, and in general affix me to my bed with wires. Once that was done Marla came back in. She didn’t leave until long after visiting hours were over, and then only reluctantly. At Mackenzie I had a lot of visitors including my dad and his girlfriend, my two sisters, and my brother who came back. Once everyone left and the night nurses settled in I tried to sleep but I had slept so much during the day that I could only drift in and out. Instead I answered texts and emails throughout the night. In the morning I saw the cardiologist at Mackenzie, who explained more specifically what had happened. I asked him what caused it. He said genetics – would have happened no matter what. He ordered some more tests, and left. The nurses told me that once he saw the results from the ECG I could most likely go home if I wanted to. ECG is an ultrasound on steroids, and shows the extent of damage caused by the heart attack. They wheeled me down to “nucular medicine” (I always laugh when people pronounce nuclear that way) and the lady there did the ECG. Marla was there too and got to watch. Said I might be pregnant.
After that it was a short time before the cardiologist came back. He said that they rate hearts on a scale of 1-4, where 1 is a perfectly healthy heart and 4 is, well, not. Apparently there was damage done to the underside of my heart and I am at a 2. That’s good. He said that with a good rehab program getting to a 1 is possible, either by improving the damaged section or, if that’s not possible, by improving the parts that are not damaged to compensate. He put me on about 19 different drugs, gave me the prescription and sent me home. I didn’t have any clothes so Marla’s sister, who had driven in from out of town and was with the kids at our house, raided my wardrobe and brought some stuff for me. I was also visited by a good friend of ours who is a doctor – not my doctor, but one who does rounds at Mackenzie – and who was kind enough to stop by and answer my stream of questions. Again the support from the community was overwhelming. I can’t even wrap my head around it. When I saw the cardiologist again I asked him (I also asked our friend) if there was anything I did to bring this on. He said no. My blood cholesterol is normal, I have low blood pressure, my heart rate is around 60 bpm, I am not diabetic, I exercise regularly, I don’t drink and I don’t smoke. He said it was hereditary. The good news to me is that arteries don’t block overnight so I imagine I’ll be feeling better than I have in a long time pretty soon. I am also on a drug regimen now that is designed to keep this from happening again. I certainly hope so! In retrospect it wasn’t that much fun. If there’s a moral to this story, it is this: DON’T IGNORE CHEST PAIN. Follow up: I have since been to a cardiac rehab orientation session that was chock full of information I already knew, have seen my family doctor and been to the cardiologist for follow-up. I’ve asked a lot of questions about why this happened. The answer is fully genetic. My arteries are bent a little too much in places. The bends cause turbulence as the blood flows awkwardly around them. The turbulence causes cholesterol to gather, which blocks the arteries. The stents prevent this from happening again, as does the regimen of drugs I am on now. The cardiologist did a cardiac stress test, which is basically a session on a treadmill where you are wired to machines that monitor your heart, and they slowly make the exercise more difficult. Since then I got the go-ahead to resume exercise, so I have been lifting weights and doing cardio on the elliptical. I wasn’t significantly overweight before, but now that I am much more conscious of eating only heart-healthy meals I have lost about 12 lbs and am still losing fat. I am naturally concerned about a repeat episode, but the doctors assure me that with healthy living and the drugs, there is no reason to walk around worrying I might have another heart attack. So I do not. Thanks for reading, Rich

## Customer Service Goodness

These days any time I have an encounter where I don’t receive crappy customer service I celebrate. Anyone I talk to generally agrees with this. Before entering teaching I worked in a software company, often supporting clients with questions or issues with our product. They would sometimes call feeling angry, frustrated and looking for someone’s head to rip off. Even though the problem was generally something they did, I always made sure to treat them with respect, absorb the negativity, and channel it into a solution.
I never engaged in arguments or accusations, and I always made sure that if I said I was going to get something done for them I did, and right away. After all it wasn’t my reputation on the line but that of the company I worked for. But this attitude in customer service seems to be almost extinct. Take Rogers for example (if you’re not from Eastern Canada, that’s one of the big telecommunications companies we have little choice but to deal with in these parts). Any time there is any issue we have to call about, my wife and I end up putting it off because we know, from experience, that it will be at least an hour on the phone, after which whatever we thought we had settled on would be incorrectly implemented and billed, so that upon receiving the next statement we would have another minimum one hour phone call to make. So in the face of this dearth of good customer service, I decided to write today about a few extremely positive experiences I have had. People should definitely know about these, and hopefully bring their business to these companies. I also encourage you to share your own examples of excellent customer service in the comments. I’d love to read about them and reward them with my business when possible. Example 1: Ontario Gas BBQ (http://www.bbqs.com) A few years ago I had a Weber gas barbecue that I used all year round and I never covered it in the winter (so I wouldn’t have to clean ice and snow off a cover to use it). Because of this, the burners had quite a bit of rust on them, something for which I blamed nobody but myself. It got to the point where the flame was so uneven I couldn’t use the barbecue properly. So I figured I needed to buy new burners. To make sure I got the right ones, I took out the existing burners and brought them to Ontario Gas BBQ to buy replacements. The owner happened to serve me. He took my burners, went to the back and got the replacements and brought them to the cash. I had my credit card out ready to pay – it was about $120. Then he looked at the burners I had brought in and asked me why I was replacing them. I told him they were blocked and unusable from rust. He said “Nonsense. They just need to be cleaned.” and then proceeded to clean them for me. Took him about 20 minutes. He charged me nothing. The man could have easily made a $120 sale and I would have been happy. He would not have lost my business because I love that place. I never would have known I’d wasted my money, but he would have. So that’s what he did. What I did was turn around and buy a $100 barbecue cover that I didn’t really intend to buy, because I wanted him to make some money from me that day. (Epilogue: I used that cover but one day it blew off in a windstorm and I never found it. I suspect it is now being used as a tent in some Costa Rican honeymoon resort) Example 2: Longo’s Longo’s is a chain of grocery stores in my area. I don’t know how far out of the Greater Toronto Area they have stores, but if you have one near you, shop there. Longo’s has one of those loyalty programs that everyone seems to have these days. At the beginning, you could redeem earned points for merchandise from their website. My wife and I needed a new cookware set and they had one on their site that we really liked. Lagostina set, retails for about $320. So we were saving our points for that. Then one day when I was cashing out at Longo’s the cashier told me that they were phasing out their merchandise rewards in favour of cash rewards in the store.
I was sad about that, because we were still about 3000 points short for the set and we really wanted it. To earn 3000 points we’d need to spend another $1500 in groceries in a few weeks, which was obviously not going to happen. I emailed Longo’s and asked if there was any way to pay the difference between the points we had earned thus far and what we needed for the cookware set. If they had given me a dollar amount I needed to pay I would have been very happy with the service. Instead they immediately credited my points account with the 3000 extra points I needed (at no charge), and I ordered the set (it’s awesome, by the way). This was far beyond anything I had expected them to do, even in the best case. Example 3: Mophie (http://www.mophie.com) I have an iPhone 5 that I use intensely. I find that the battery life for me is only good for about 2/3 of a day. I decided I wanted a battery case for the phone and Mophie cases are a great (but expensive) choice. I had a case that I liked, the juice pack helium, but it only added about 80% more battery life to the phone and after about a year I decided to upgrade to the case that adds 120%. It’s the juice pack plus. That case also comes in red (part of the (Red) campaign), but it’s a little extra. There you have it. These are three examples of beyond excellent customer service I have received. The sad part is that it’s the only three examples I can think of, but I know you have more. Please share your good ones in the comments section, so that more people can know about them, and please give your business to the three I’ve listed! Rich

## Grief vs. Misery

During a conversation with a friend today I had occasion to think about the grief and misery I felt when my mother passed away almost five years ago, and also when the very young son of a close friend of mine passed away about two years before that. Grief. Misery. Two highly emotional words. I never really thought about them separately before, but they are quite different. In both experiences the grief of the loss was immediate and profound. And in both cases the misery was painfully intense. In my conversation today I realized how separate these two emotions are when it comes to loss. Grief is a natural emotion stemming from losing someone you love. It’s that feeling of having something critical to your existence removed, violently and without your permission. It’s a feeling that combines powerlessness, loss and anger. It’s natural and even essential for continued survival. It paves the way for acceptance and growth. When I think of my mother these days, I think of her beautiful soul, her love, and all that she gave me that makes me who I am now, and who I am now is someone I like. I owe her that, and my grief over her loss provided an intensification of my understanding of that. When I think of my friend’s son, I remember how happy he was, how much joy he brought with him into a room, and the way he played with my kids when he visited from out of town, as if they’d been friends forever. I remember the way his passing brought so many people together – people who unquestioningly put aside any issues they may have had with each other so that they could be there to support the family and show that in times of extreme despair there is a community whose arms you can fall into when tragedy buckles your knees. To him I owe my ability to see past the petty sheen of casual interaction through to the deeper beauty of humanity. My grief over his loss brought me there. Both losses still make me sad.
That does not make me angry. I accept the sadness as part of my understanding of myself and others around me. The sadness is completely intertwined with my gratitude for having known them. When it surfaces, I feel the gratitude and joy right there with the sadness and I smile. The emotions coexist, as they should.
Real and Complex Analysis

In General > s.a. Calculus; functional analysis; operator theory; integration; series; vector calculus.
* Idea: Real/complex analysis is the mathematical theory of functions of a real/complex variable.
@ General books: Choquet 69; Pólya & Szegő 72; Gleason 66/91; Wong 10 [applied].
@ Real analysis, II: Pons 14 [II]; Laczkovich & Sós 15; Jacob & Evans 15.
@ Real analysis, advanced: Bourbaki 58; Royden 63; Knapp 05 [2 vol, basic + advanced]; Trench 03 (updated 12).
@ Complex analysis: Bochner & Martin 48; Ahlfors 53; Pólya & Latta 74; Priestley 03; Sasane & Sasane 13 [friendly]; Chakraborty et al 16.
@ Non-linear analysis: Rassias 86 [fixed point and bifurcation theory, non-linear operators].
@ Related topics: Rockafellar 68 [convex]; Sirovich 71, de Bruijn 81 [asymptotic]; Klebaner 12 [stochastic calculus]; > s.a. Cauchy Theorem; Cauchy-Riemann; Convex Functions.
> Related topics: see connection; Covariant, Fréchet, and Weak Derivative; differential equations; integral equations.

"Less Than Continuous" Functions > s.a. distributions; path integrals ["jaggedness" of paths]; Semicontinuity; Derivative [subdifferential].
* Types: The worst case is when a function does not have a limit along some or all directions at a point p.
* Direction-dependent limit: The limit of a function f along any curve γ passing through p exists and depends only on the tangent vector v to γ at p; we call this limit $\mathcal F(v)$.
* Regular direction-dependent limit: The direction-dependent limit $\mathcal F(v)$ of the function f admits derivatives to all orders with respect to v, and the operation of taking the limit of f along γ commutes with taking these derivatives.
* Itô calculus: A generalized form of calculus that can be applied to non-differentiable functions, and is one of the branches of stochastic calculus; Applications: It can be used to derive the general form of the Fokker-Planck equation; > s.a. Wikipedia page.

Continuity Classes of Functions > s.a. Hölder and Lipschitz condition.
* Types: A map $f : X \to Y$ between two differentiable manifolds can be
- $C^0$: f is continuous.
- $C^{>0}$: f is $C^0$ and its derivatives have regular direction-dependent limits.
- $C^{1/2}$: $\Delta f/(\Delta x)^{1/2}$ approaches a finite limit as $\Delta x \to 0$.
- $C^{1-}$: f satisfies the Lipschitz condition.
* Conditions involving derivatives:
- $C^r$, for some integer r: f is continuously differentiable up to the r-th order derivatives.
- $C^r_0$: f is $C^r$ and has compact support.
- $C^{r-}$: f is $C^{r-1}$ and its (r–1)-th derivatives are locally Lipschitz functions.
- $C^{>r}$: f is $C^r$ and its (r+1)-th derivatives have regular direction-dependent limits.
- $C^\infty$: f is infinitely differentiable.
- $C^\omega$: f is analytic.
* Remark: An example of a function which is $C^\infty$ but not $C^\omega$ at x = 0 is $f(x) = e^{-1/x}$; $C^\infty$ submanifolds of a manifold can merge, $C^\omega$ ones can't.

Special Types and Generalizations > s.a. functions; Expansion of a Function; Special Functions; Takagi Function; Weierstraß Functions.
@ Examples: Gelbaum & Olmsted 64 [counterexamples]; Ramsamujh CJM(89) [nowhere differentiable $C^0$]; Oldham et al 08 [atlas of functions].
@ Generalizations: Shale JFA(74) [over discrete spaces]; Heinonen BAMS(07) [non-smooth calculus]; Smirnov a1009-proc [possible discretizations]; > s.a distribution; fractional calculus; non-standard analysis.
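Returning to the remark under Continuity Classes above, here is a minimal worked version of that example; the piecewise extension by zero for $x \le 0$ is my assumption, since the entry only quotes the formula $e^{-1/x}$. Define

$$ f(x) = \begin{cases} e^{-1/x}, & x > 0, \\ 0, & x \le 0. \end{cases} $$

For $x > 0$ every derivative has the form $f^{(n)}(x) = p_n(1/x)\, e^{-1/x}$ with $p_n$ a polynomial, and since $e^{-1/x}$ decays faster than any power of $1/x$ grows,

$$ \lim_{x \to 0^+} f^{(n)}(x) = 0 \quad \text{for every } n. $$

Hence $f$ is $C^\infty$ with $f^{(n)}(0) = 0$ for all $n$, so its Taylor series at $0$ is identically zero; but $f(x) > 0$ for $x > 0$, so $f$ does not agree with its Taylor series on any neighbourhood of $0$, i.e. it is $C^\infty$ but not $C^\omega$ there.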
Graph Algorithms Solved MCQs With Answers

1. How many edges are there in a complete graph with n vertices?
a) (n*(n-1))/2 b) (n*(n+1))/2 c) n+1 d) none of these
Answer: a) (n*(n-1))/2, since every pair of distinct vertices is joined by exactly one edge. Note that every regular graph need not be a complete graph.

2. If u and v belong to different components of G, then the edge uv ∉ E(G). In other words, a graph in which there does not exist a path between at least one pair of vertices is called a disconnected graph. For example, if vertices 1, 2 and 3 are linked to each other and vertices 4 and 5 are linked only to each other, then the first connected component is 1 -> 2 -> 3 and the second connected component is 4 -> 5. Throughout, V is the set of vertices and E is the set of edges connecting the vertices; the concepts of graph theory are used extensively in designing circuit connections.

BFS for a disconnected graph. The problem "BFS for Disconnected Graph" states that you are given a disconnected (possibly directed) graph and must print the BFS traversal of the whole graph, not just of the component containing the start vertex. A simple connectivity test follows from the same idea: traverse the graph from any one vertex and count the nodes you reach; once the graph has been entirely traversed, if the number of nodes counted is equal to the number of nodes of G, the graph is connected, otherwise it is disconnected. Equivalently, determine the set A of all the nodes which can be reached from a start vertex x; if A is equal to the set of nodes of G, the graph is connected. The time complexity of the program is O(V + E), the same as the complexity of BFS itself. Performing this quick test can avoid accidentally running algorithms on only one component of a graph and getting incorrect results, since disconnected components might skew the results of other graph algorithms; it also gives easy partitioning logic for running searches on the components in parallel. Biconnected components can be determined with a slight modification of the same DFS machinery (bridges are discussed further below).

For context, a few related algorithms that appear in these questions: Prim's algorithm searches for a minimum spanning tree of a connected weighted graph; Kruskal's algorithm will run on a disconnected graph without any problem (see the next section); the Floyd–Warshall algorithm is used to find the shortest distances between every pair of vertices in a given weighted graph; and the related vertex separator problem asks for the minimal number of vertices whose removal disconnects two specific vertices.
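A minimal C++ sketch of the BFS-based traversal and connectivity test described above: BFS is restarted from every unvisited vertex so a disconnected graph is fully traversed, and the visited count is compared to V. The adjacency-list representation, the function name and the small example graph (components {0,1,2} and {3,4}) are my own illustrative choices, not code from the quoted sources.

```cpp
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

// BFS over an undirected graph given as an adjacency list.
// Restarting from every unvisited vertex covers all components,
// so the traversal works even when the graph is disconnected.
int bfsAllComponents(const vector<vector<int>>& adj) {
    int n = adj.size();
    vector<bool> visited(n, false);
    int components = 0, reached = 0;

    for (int start = 0; start < n; ++start) {
        if (visited[start]) continue;      // already covered by an earlier BFS
        ++components;
        queue<int> q;
        q.push(start);
        visited[start] = true;
        cout << "Component " << components << ":";
        while (!q.empty()) {
            int u = q.front(); q.pop();
            cout << " " << u;
            ++reached;
            for (int v : adj[u])
                if (!visited[v]) { visited[v] = true; q.push(v); }
        }
        cout << "\n";
    }
    // Connectivity test: the graph is connected exactly when a single
    // BFS reaches every vertex, i.e. there is exactly one component.
    cout << (components == 1 && reached == n ? "connected" : "disconnected") << "\n";
    return components;
}

int main() {
    // Example: vertices 0-1-2 form one component, 3-4 another.
    vector<vector<int>> adj = {{1, 2}, {0, 2}, {0, 1}, {4}, {3}};
    bfsAllComponents(adj);   // prints both components, then "disconnected"
    return 0;
}
```

The same loop doubles as a component counter, which is why the function returns the number of components rather than a bare yes/no answer.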
A few more definitions that come up in these questions. A planar graph is a graph that we can draw in a plane such that no two edges of it cross each other. A graph not containing any cycle is called an acyclic graph. A graph having no self loops and no parallel edges is called a simple graph, and a simple graph of n vertices (n >= 3) and n edges forming a cycle of length n is called a cycle graph. A graph in which there does not exist a path between at least one pair of vertices is called a disconnected graph, and every graph can be partitioned into disjoint connected components. Applications are everywhere: the parsing tree of a language and the grammar of a language use graphs, and peripheral sparse-matrix algorithms often need a starting vertex with a high eccentricity, which is why there are algorithms for finding pseudo-peripheral vertices. One referenced text covers walks, trails, paths, cycles, and connected or disconnected graphs, and its Chapter 3 contains a detailed discussion of Euler and Hamiltonian graphs. For traversal, the BFS program above applies unchanged to a directed graph, visiting all vertices whether or not they are reachable from the start vertex: maintain the visited array, iterate through each node from 0 to V-1, and restart the search from the first unvisited node; with an adjacency-list representation this stays at O(V + E). (By contrast, the output of Dijkstra's algorithm is a set of distances to each node rather than a traversal order.)

Kruskal's algorithm with a disconnected graph. Given a connected and undirected graph, a spanning tree of that graph is a subgraph that is a tree and connects all the vertices together; a single graph can have many different spanning trees, and counting the total number of MSTs is a separate problem. A common exercise ("Suppose a disconnected graph is input to Kruskal's algorithm... Explain how to modify both Kruskal's algorithm and Prim's algorithm to do this") asks what happens when the input is not connected. Kruskal's algorithm needs no modification: because it is based on the edges of the graph, with its main loop simply iterating over the sorted edge list, it runs on a disconnected graph without any problem; the resulting minimum spanning tree is just built for each connected portion of the graph separately. Prim's algorithm, which grows a single tree from a start vertex, has to be restarted in each component instead. Kruskal's algorithm is also preferred when the graph is sparse, i.e. it consists of a small number of edges, since its running time is dominated by sorting those edges.
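The text above says Kruskal's loop over the sorted edges works as-is on a disconnected input, yielding one tree per component (a minimum spanning forest). Here is a hedged C++ sketch of that behaviour; the union-find helper, the Edge struct, the function name and the example edge list are my own scaffolding, not code from the quoted sources.

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>
using namespace std;

struct Edge { int u, v, w; };

// Minimal union-find (disjoint set union) with path compression.
struct DSU {
    vector<int> parent;
    explicit DSU(int n) : parent(n) { iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        parent[a] = b;
        return true;
    }
};

// Kruskal's algorithm, unchanged for disconnected graphs: the loop just
// walks the sorted edge list, so the output is a minimum spanning FOREST,
// one tree per connected component.
vector<Edge> kruskalForest(int n, vector<Edge> edges) {
    sort(edges.begin(), edges.end(),
         [](const Edge& a, const Edge& b) { return a.w < b.w; });
    DSU dsu(n);
    vector<Edge> forest;
    for (const Edge& e : edges)
        if (dsu.unite(e.u, e.v))   // keep the edge only if it joins two trees
            forest.push_back(e);
    return forest;                 // fewer than n-1 edges whenever the graph is disconnected
}

int main() {
    // Two components: {0,1,2} and {3,4}. No edge connects them.
    vector<Edge> edges = {{0,1,4}, {1,2,2}, {0,2,5}, {3,4,1}};
    for (const Edge& e : kruskalForest(5, edges))
        cout << e.u << " - " << e.v << " (w=" << e.w << ")\n";
    return 0;
}
```

On a connected input the same code returns exactly n-1 edges, i.e. an ordinary minimum spanning tree, which is why no special case is needed for the disconnected situation.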
A graph is a collection of vertices connected to each other through a set of edges; routes between cities, for example, are represented using graphs, and network topologies such as star, bridge, series and parallel topologies are further examples. In a connected graph at least one path exists between every pair of vertices; if a graph G is disconnected, then every maximal connected subgraph of G is called a connected component of the graph G, and it is not possible to visit from the vertices of one component to the vertices of another. A path is called simple if no vertex appears in it twice; a walk that revisits a vertex contains a cycle and is therefore not simple. A graph such that for every pair of vertices there is a unique shortest path connecting them is called a geodetic graph, and for a given graph a Biconnected Component is one of its subgraphs which is itself biconnected. If there exists a closed walk in a connected graph that visits every vertex exactly once (except the starting vertex) without repeating edges, then such a graph is called a Hamiltonian graph. Relatedly, the Havel–Hakimi algorithm decides whether a given degree sequence can be realized by a simple graph, and centrality measures are used to calculate the importance of a particular node, with each type of centrality applying to different situations depending on the context. Because disconnected components skew many of these measures, the Weakly Connected Components (WCC) algorithm is often used early in graph analysis; another thing to keep in mind is the direction of relationships. An exercise version of the same check: write and implement an algorithm (in Java, say) that modifies the standard DFS to decide whether a graph is connected or disconnected; it takes linear time as well. If Prim's algorithm is given a disconnected graph it only spans the component of its start vertex, whereas Kruskal covers every component; when the generated "minimum spanning tree" comes out disconnected, it is known as a minimum spanning forest.

Finding bridges. Informally, the problem is formulated as follows: given a map of cities connected with roads, find all "important" roads, i.e. roads whose removal disconnects some pair of cities. Formally, the task is to find all bridges in the given graph: edges whose removal increases the number of connected components. The concept of detecting bridges is also useful in solving the Euler path or tour problem. The brute-force idea is to remove each edge, one by one, and see if the graph is still connected using DFS; in the figure referenced by the original text (not reproduced here), edges 1-0 and 1-5 are the bridges, because removing either one leaves vertices 1 and 5 disconnected from the main graph. A DFS lowpoint argument does the same job in a single traversal: an edge (v, to) is a bridge exactly when there is no other way from v to to except for the edge (v, to), i.e. no back edge from the subtree of to reaches v or an ancestor of v.
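A sketch of the single-DFS bridge finder that the (v, to) condition above describes, in C++. This is the standard lowpoint technique rather than code from any of the quoted pages; the class name and the example graph (a triangle 0-1-2 with a pendant edge 2-3) are my own, and with that input only edge 2-3 should be reported.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

// Bridges via DFS lowpoints: edge (v,to) is a bridge iff no vertex in the
// subtree rooted at 'to' has a back edge to v or to an ancestor of v,
// i.e. low[to] > tin[v].
// Note: skipping the parent edge by vertex id assumes no parallel edges.
struct BridgeFinder {
    int n, timer = 0;
    vector<vector<int>> adj;
    vector<int> tin, low;
    vector<bool> visited;

    explicit BridgeFinder(int n) : n(n), adj(n), tin(n), low(n), visited(n, false) {}

    void addEdge(int u, int v) { adj[u].push_back(v); adj[v].push_back(u); }

    void dfs(int v, int parent) {
        visited[v] = true;
        tin[v] = low[v] = timer++;
        for (int to : adj[v]) {
            if (to == parent) continue;              // don't reuse the tree edge
            if (visited[to]) {
                low[v] = min(low[v], tin[to]);       // back edge
            } else {
                dfs(to, v);                          // tree edge
                low[v] = min(low[v], low[to]);
                if (low[to] > tin[v])                // nothing in to's subtree reaches above v
                    cout << "bridge: " << v << " - " << to << "\n";
            }
        }
    }

    void findBridges() {
        for (int v = 0; v < n; ++v)                  // works on disconnected graphs too
            if (!visited[v]) dfs(v, -1);
    }
};

int main() {
    BridgeFinder g(4);
    g.addEdge(0, 1); g.addEdge(1, 2); g.addEdge(2, 0);  // a cycle: no bridges here
    g.addEdge(2, 3);                                    // pendant edge: a bridge
    g.findBridges();                                    // prints "bridge: 2 - 3"
    return 0;
}
```

The outer loop in findBridges restarts the DFS in every unvisited component, so the same O(V + E) pass handles disconnected graphs, unlike the brute-force remove-and-recheck approach, which costs a full traversal per edge.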
What will be the output? I am not sure how to implement Kruskal's algorithm when the graph has multiple connected components. A graph having only one vertex in it is called as a trivial graph. ... Algorithm. A forest of m number of trees is created. If all the vertices in a graph are of degree ‘k’, then it is called as a “. More information here. Objective: Given a Graph in which one or more vertices are disconnected, do the depth first traversal. Algorithm for finding pseudo-peripheral vertices. a) (n*(n-1))/2. It’s also possible for a Graph to consist of multiple isolated sub-graphs but if a path exists between every pair of vertices then that would be called a connected graph. The Time complexity of the program is (V + E) same as the complexity of the BFS. A forest is a combination of trees. /* Finding the number of non-connected components in the graph */ A graph in which we can visit from any one vertex to any other vertex is called as a connected graph. All the vertices are visited without repeating the edges. By: Prof. Fazal Rehman Shamil Last modified on September 12th, 2020 Graph Algorithms Solved MCQs With Answers . This is done to remove the cases when there will be no path (i.e., if you pick two vertices and they sit in two different connected components, at least if we’re assuming undirected edges). A graph in which degree of all the vertices is same is called as a regular graph. BFS Algorithm for Disconnected Graph Write a C Program to implement BFS Algorithm for Disconnected Graph. This graph consists of four vertices and four directed edges. A graph whose edge set is empty is called as a null graph. December 2018. You should always include the Weakly Connected Components algorithm in your graph analytics workflow to learn how the graph is connected. A connected graph is a graph without disconnected parts that can't be reached from other parts of the graph. Use the Queue. If we add one edge in a spanning tree, then it will create a cycle. Breadth-First Search in Disconnected Graph June 14, 2020 October 20, 2019 by Sumit Jain Objective: Given a disconnected graph, Write a program to do the BFS, Breadth-First Search or traversal. Hi everybody, I have a graph with approx. Determine the set A of all the nodes which can be reached from x. Now let's move on to Biconnected Components. It is not possible to visit from the vertices of one component to the vertices of other component. Within this context, the paper examines the structural relevance between five different types of time-series and their associated graphs generated by the proposed algorithm and the visibility graph, which is currently the most established algorithm in the literature. The algorithm keeps track of the currently known shortest distance from each node to the source node and it updates these values if it finds a shorter path. Find all Bridges in the given graph tour problem the disconnected vertices will not be complete... A is equal to the algorithm ’ s say the edge set is empty is called as a graph! Test can avoid accidentally running algorithms on only one vertex and there are no self loops and to sure. With the most growing always remains connected going in loops and to make sure all the remaining vertices exactly... Of vertices and four edges out of which one edge in between those nodes Now that the output or. Section, we can visit from any one disconnected graph algorithm to any other vertex the minimal edge a. Of all the vertices in a plane such that for every pair of a will! 
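As an illustration only, here is a sketch of mine in Python (not code taken from any of the sources excerpted above, with made-up function names and example data) of Kruskal's algorithm run on a possibly disconnected weighted graph. Because the union-find structure only ever merges two distinct components along the cheapest remaining edge, a disconnected input automatically yields a minimum spanning forest, one tree per component.

```python
# Kruskal's algorithm on a possibly disconnected, undirected, weighted graph.
# Input: number of vertices n and a list of edges (weight, u, v).
# Output: list of edges forming a minimum spanning forest (one tree per component).

def find(parent, x):
    # Path-compressing find for the union-find structure.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def kruskal_forest(n, edges):
    parent = list(range(n))
    forest = []
    for w, u, v in sorted(edges):          # iterate over edges in order of weight
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                       # edge joins two different components
            parent[ru] = rv
            forest.append((u, v, w))
    return forest

if __name__ == "__main__":
    # Two components: {0, 1, 2} and {3, 4}; no edge connects them.
    edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (1, 3, 4)]
    print(kruskal_forest(5, edges))        # [(0, 1, 1), (3, 4, 1), (1, 2, 2)]
```

A quick check on the output: for a graph with V vertices and c connected components the returned forest contains V - c edges, so the five-vertex, two-component example above yields exactly three edges.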
Traversing a disconnected graph. In a connected graph a single breadth first search (BFS) or depth first search (DFS) started from any vertex reaches every other vertex, but in a disconnected graph this is no longer true, so the usual traversal has to be modified slightly. Consider a graph with V nodes, stored as an adjacency list. Create a boolean visited array with one entry per vertex and mark a vertex true once it has been visited; then iterate through each node from 0 to V-1 and, whenever you meet the first not-yet-visited node, start a new BFS (using a queue) or DFS from it. The visited array is also what stops the traversal from going round in loops when the graph contains cycles, self loops or parallel edges. Each restart explores exactly one connected component, so this loop goes through all the connected components of the graph, visits every vertex without repeating any edges, and can be used to count the components or simply to check whether the graph is connected or disconnected in the first place; without the restarts, the disconnected vertices would not be included in the output at all. The time complexity of the program is O(V + E), the same as the complexity of ordinary BFS or DFS. The same idea answers the question of how we compute the components of G: pick an unvisited vertex x, determine the set A of all the nodes which can be reached from x (that set is one component), and repeat with the next unvisited vertex; isolated vertices each form their own single-vertex component. Alternatively, the path (reachability) matrix, computed either by Warshall's algorithm or from powers of the adjacency matrix, shows directly whether every pair of vertices is joined by a path. Typical exercises ask for a C, C++ or Java program that performs this traversal, usually with the graph stored as an adjacency list; a sketch in Python follows.
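The following Python sketch is my own illustration of the restart-based traversal just described, offered in place of the C, C++ or Java programs the exercises call for; the adjacency list at the bottom is an invented two-component example. It runs BFS from every still-unvisited vertex and returns the list of connected components.

```python
from collections import deque

# BFS over a possibly disconnected graph given as an adjacency list.
# Returns the list of connected components; len(result) is the number of components.
# Runs in O(V + E), the same as ordinary BFS.

def connected_components(adj):
    n = len(adj)
    visited = [False] * n                  # boolean visited array, one entry per vertex
    components = []
    for start in range(n):                 # iterate through each node from 0 to n-1
        if visited[start]:
            continue                       # already reached from an earlier BFS
        visited[start] = True
        queue = deque([start])             # use a queue for BFS
        component = []
        while queue:
            u = queue.popleft()
            component.append(u)
            for v in adj[u]:
                if not visited[v]:
                    visited[v] = True
                    queue.append(v)
        components.append(component)       # one BFS restart per component
    return components

if __name__ == "__main__":
    # Example: vertices 0-2 form one component, vertices 3-4 another.
    adj = [[1, 2], [0, 2], [0, 1], [4], [3]]
    print(connected_components(adj))       # [[0, 1, 2], [3, 4]]
```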
Bridges and biconnected components. An edge (v, to) of a connected graph is called a bridge if removing it disconnects the graph; equivalently, this condition means that there is no other way from v to to except for the edge (v, to) itself. The task of finding all bridges in a given graph can be solved with the same traversal idea used above: one by one remove each edge and see if the graph is still connected using DFS. For instance, if removing the edges 1-0 and 1-5 of an example graph leaves vertices 1 and 5 disconnected from the main graph, then those two edges are the bridges of that graph. Informally, the bridge-finding problem is often stated as follows: given a map of cities connected with roads, find all "important" roads, that is, roads whose removal disconnects some pair of cities. The concept of detecting bridges is useful, among other things, in solving the Euler path or tour problem. A closely related notion is the biconnected component: a maximal subgraph that is biconnected, meaning it stays connected after the removal of any single vertex.

Connectivity as a first step in graph analytics. Differentiating between directed and undirected networks, and between weighted and unweighted graphs, is of great importance, as it has a significant influence on an algorithm's behaviour and on how its results should be interpreted; another thing to keep in mind is the direction of relationships. It is just as important to understand how well your graph is connected before running anything else, which is why the Weakly Connected Components (WCC) algorithm, which finds the connected components of a graph while ignoring edge directions, is often used early in graph analysis. You should always include the WCC algorithm in your graph analytics workflow to learn how the graph is connected: running a WCC test first avoids accidentally running other algorithms on only one disconnected component of the graph and drawing conclusions from partial results. Centrality measures, of which the most popular include degree, betweenness and closeness centrality, quantify the importance of a vertex; each applies to different situations depending on the context, and on a disconnected graph they have to be interpreted per component. Finally, peripheral sparse matrix algorithms often need a starting vertex with a high eccentricity, which is exactly what algorithms for finding pseudo-peripheral vertices provide.
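To make the remove-an-edge-and-recheck idea concrete, here is a small Python sketch of my own (not from the quoted sources, and deliberately the naive O(E·(V+E)) method rather than the faster depth-first low-link technique used in practice). An edge is reported as a bridge when a DFS that is forbidden from using that edge can no longer reach one endpoint from the other; the example graph is invented.

```python
# Naive bridge finding: an edge (u, v) is a bridge if, with that edge removed,
# there is no other path from u to v. Complexity O(E * (V + E)); the standard
# low-link DFS does the same job in O(V + E).

def reachable(adj, src, dst, skip):
    # Iterative DFS from src that is not allowed to use the edge `skip`.
    banned = {skip, (skip[1], skip[0])}    # the graph is undirected
    visited = [False] * len(adj)
    stack = [src]
    visited[src] = True
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        for v in adj[u]:
            if (u, v) not in banned and not visited[v]:
                visited[v] = True
                stack.append(v)
    return False

def bridges(adj):
    edge_list = [(u, v) for u in range(len(adj)) for v in adj[u] if u < v]
    return [e for e in edge_list if not reachable(adj, e[0], e[1], e)]

if __name__ == "__main__":
    # 0-1-2-0 is a triangle (no bridges inside it); the edge 1-3 hangs off it.
    adj = [[1, 2], [0, 2, 3], [0, 1], [1]]
    print(bridges(adj))                    # [(1, 3)]
```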
## Domain and Range of a Radical Function
(A new question of the week) We’ve looked at domain and range problems before, but some have more interesting details than others. Here is a superficially basic radical function (and the answer is extremely easy when you just use a graphing tool), which raised some interesting issues while solving it algebraically.

## Fibonacci Word Problems II: Challenging
Last week we looked at several basic word problems for which the Fibonacci sequence is part of the solution. Now we’ll look at two problems that take longer to explain: a variation on the rabbit story, and an amazing reverse puzzle.

## The Case of the Disappearing Derivative
(A new question of the week) An interesting question we received in mid-January concerned two implicit derivative problems with an unusual feature: the derivative we are seeking disappears! How do you track down such elusive quarry? Each case is a little different.

## Fibonacci Word Problems I: Basic
Here and next week, we’ll look at a collection of word problems we have seen that involve the Fibonacci sequence or its relatives, sometimes on the surface, other times only deep down. The first set (here) are direct representations of Fibonacci, while the second set will be considerably deeper.
A 20-foot walkway
The first, from …

## Interpreting Probability Questions
(A new question of the week) A couple recent questions centered around how to interpret probability problems, whose wording can often be subtle, and whose solutions require care.

## Generalizing and Summing the Fibonacci Sequence
Continuing our look at the Fibonacci sequence, we’ll extend the idea to “generalized Fibonacci sequences” (with different starting numbers), and see that the ratio of consecutive terms is the same in general as in the usual special case. Then we’ll look at the sum of terms of both the special and general sequence, turning it …

## A Few Inductive Fibonacci Proofs
Having studied proof by induction and met the Fibonacci sequence, it’s time to do a few proofs of facts about the sequence. We’ll see three quite different kinds of facts, and five different proofs, most of them by induction. We’ll also see repeatedly that the statement of the problem may need correction or clarification, so …

## Graph Coloring: Working Through a Proof
(A new question of the week) The Math Doctors have different levels of knowledge in various fields; I myself tend to focus on topics through calculus, which I know best, and leave the higher-level questions to others who are more recently familiar with them. But sometimes, both here and in my tutoring at a community …

## The Golden Ratio and Fibonacci
We’re looking at the Fibonacci sequence, and have seen connections to a number called phi (φ or $$\phi$$), commonly called the Golden Ratio. I want to look at some geometrical connections and other interesting facts about this number before we get back to the Fibonacci numbers themselves and some inductive proofs involving them.

## Introducing the Fibonacci Sequence
We’ve been examining inductive proof in preparation for the Fibonacci sequence, which is a playground for induction. Here we’ll introduce the sequence, and then prove the formula for the nth term using two different methods, using induction in a way we haven’t seen before.
# EGMO 2016 Paper I
We’ve just finished our annual selection and training camp for the UK IMO team in Cambridge, and I hope it was enjoyed by all. I allotted myself the ‘graveyard slot’ at 5pm on the final afternoon (incidentally, right in the middle of this, but what England fan could have seen that coming in advance?) and talked about random walks on graphs and the (discrete) heat equation. More on that soon perhaps. The UK has a team competing in the 5th European Girls Mathematical Olympiad (hereafter EGMO 2016) right now in Busteni, Romania. The first paper was sat yesterday, and the second paper is being sat as I write this. Although we’ve already sent a team to Romania this year (where they did rather well indeed! I blame the fact that I wasn’t there.), this feels like the start of the olympiad ‘season’. It also coincides well with Oxford holidays, when, though thesis deadlines loom, I have a bit more free time for thinking about these problems. Anyway, last year I wrote a summary of my thoughts and motivations when trying the EGMO problems, and this seemed to go down well, so I’m doing the same this year. My aim is not to offer official solutions, or even outlines of good solutions, but rather to talk about ideas, and how and why I decided whether they did or didn’t work. I hope some of it is interesting. You can find the paper in many languages on the EGMO 2016 website. I have several things to say about the geometry Q2, but I have neither enough time nor geometric diagram software this morning, so will only talk about questions 1 and 3. If you are reading this with the intention of trying the problems yourself at some point, you probably shouldn’t keep reading, in the nicest possible way. Question 1 [Slightly paraphrased] Let n be an odd positive integer and $x_1,\ldots,x_n\ge 0$. Show that $\min_{i\in[n]} \left( x_i^2+x_{i+1}^2\right) \le \max_{j\in[n]} 2x_jx_{j+1},$ where we define $x_{n+1}=x_1$ cyclically in the natural way. Thought 1: this is a very nice statement. Obviously when i and j are equal, the inequality holds the other way round, and so it’s interesting and surprising that constructing a set of pairs of inequalities in the way suggested gives a situation where the ‘maximum minimum’ is at least the ‘minimum maximum’. Thought 2: what happens if n is actually even? Well, you can kill the right-hand-side by taking at least every other term to be zero. And if n is even, you really can take every other term to be zero, while leaving the remaining terms positive. So then the RHS is zero and the LHS is positive. The extension to this thought is that the statement is in danger of not holding if there’s a lot of alternating behaviour. Maybe we’ll use that later. Idea 1: We can write $2(x_i^2+x_{i+1}^2)=(x_i+x_{i+1})^2 + |x_i-x_{i+1}|^2, \quad 4x_ix_{i+1}=(x_i+x_{i+1})^2 - |x_i-x_{i+1}|^2,$ which gives insight into ‘the problem multiplied by 2’. This was an ‘olympiad experience’ idea. These transformations between various expressions involving sums of squares turn out to be useful all the time. Cf BMO2 2016 question 4, and probably about a million other examples. As soon as you see these expressions, your antennae start twitching. Like when you notice a non-trivial parallelogram in a geometry problem, but I digress. I’m not sure why I stuck in the absolute value signs. This was definitely a good idea, but I couldn’t find a way to make useful deductions from it especially easily.
I tried converting the RHS expression for i (where LHS attains minimum) into the RHS expression for any j by adding on terms, but I couldn’t think of a good way to get any control over these terms, so I moved on. Idea 2: An equality case is when they are all equal. I didn’t investigate very carefully at this point whether this might be the only equality case. I started thinking about what happens if you start with an ‘equal-ish’ sequence where the inequality holds, then fiddle with one of the values. If you adjust exactly one value, then both sides might stay constant. It seemed quite unlikely that both would vary, but I didn’t really follow this up. In any case, I didn’t feel like I had very good control over the behaviour of the two sides if I started from equality and built up to the general case by adjusting individual values. Or at least, I didn’t have a good idea for a natural ordering to do this adjustment so that I would have good control. Idea 3: Now I thought about focusing on where the LHS attains this minimum. Somewhere, there are values (x,y) next to each other such that $x^2+y^2$ is minimal. Let’s say $x\le y$. Therefore we know that the element before x is at least y, and vice versa, ie we have $\ldots, \ge y, x, y, \ge x,\ldots.$ and this wasn’t helpful, because I couldn’t take this deduction one step further on the right. However, once you have declared the minimum of the LHS, you are free to make all the other values of $x_i$ smaller, so long as they don’t break this minimum. Why? Because the LHS stays the same, and the RHS gets smaller. So if you can prove the statement after doing this, then the statement was also true before doing this. So after thinking briefly, this means that you can say that for every i, either $x_{i-1}^2+x_i^2$ or $x_i^2+x_{i+1}^2$ attains this minimum. Suddenly this feels great, because once we know at least one of the pairs corresponding to i attains the minimum, this is related to parity of n, which is in the statement. At this point, I was pretty confident I was done. Because you can’t partition odd [n] into pairs, there must be some i which achieves a minimum on both sides. So focus on that. Let’s say the values are (x,y,x) with $x\le y$. Now when we try to extend in both directions, we actually can do this, because the values alternate with bounds in the right way. The key is to use the fact that the minimum $x^2+y^2$ must be attained at least at every other pair. (*) So we get $\ldots, \le x,\ge y,x,y,x,\ge y,\le x,\ldots.$ But it’s cyclic, so the ‘ends’ of this sequence join up. If $n\equiv 1$ modulo 4, we get $\ge y,\ge y$ next to each other, which means the RHS of the statement is indeed at least the LHS. If $n\equiv 3$ modulo 4, then we get $\le x,\le x$ next to each other, which contradicts minimality of $x^2+y^2$ unless x=y. Then we chase equality cases through the argument (*) and find that they must all be equal. So (after checking that the case $x\ge y$ really is the same), we are done. Thought 3: This really is the alternating thought 2 in action. I should have probably stayed with the idea a bit longer, but this plan of reducing values so that equality was achieved often came naturally out of the other ideas. Thought 4: If I had to do this as an official solution, I imagine one can convert this into a proof by contradiction and it might be slightly easier, or at least easier to follow.
If you go for contradiction, you are forcing local alternating behaviour, and should be able to derive a contradiction when your terms match up without having to start by adjusting them to achieve equality everywhere. Question 3 Let m be a positive integer. Consider a 4m x 4m grid, where two cells are related to each other if they are different but share a row or a column. Some cells are coloured blue, such that every cell is related to at least two blue cells. Determine the minimum number of blue cells. Thought 1: I spent the majority of my time on this problem working with the idea that the answer was 8m. Achieved by taking two in each row or column in pretty much any fashion, eg both diagonals. This made me uneasy because the construction didn’t take advantage of the fact that the grid size was divisible by 4. I also couldn’t prove it. Thought 2: bipartite graphs are sometimes useful to describe grid problems. Edges correspond to cells and each vertex set to row labels or column labels. Idea 1: As part of an attempt to find a proof, I was thinking about convexity, and why having exactly two in every row was best, so I wrote down the following: Claim A: No point having three in a row. Claim B: Suppose a row has only one in it + previous claim => contradiction. In Cambridge, as usual I organised a fairly comprehensive discussion of how to write up solutions to olympiad problems. The leading-order piece of advice is to separate your argument into small pieces, which you might choose to describe as lemmas or claims, or just separate implicitly by spacing. This is useful if you have to do an uninteresting calculation in the middle of a proof and don’t want anyone to get distracted, but mostly it’s useful for the reader because it gives an outline of your argument. My attempt at this problem illustrates an example of the benefit of doing this even in rough. If your claim is a precise statement, then that’s a prompt to go back and separately decide whether it is actually true or not. I couldn’t prove it, so started thinking about whether it was true. Idea 2: Claim A is probably false. This was based on my previous intuition, and the fact that I couldn’t prove it or get any handle on why it might be true. I’d already tried the case m=1, but I decided I must have done it wrong so tried it again. I had got it wrong, because 6 is possible, and it wasn’t hard from here (now being quite familiar with the problem) to turn this into a construction for 6m in the general case. Idea 3: This will be proved by some sort of double-counting argument. Sometimes these arguments turn on a convexity approach, but when the idea is that a few rows have three blue cells, and the rest have one, this now seemed unlikely. Subthought: Does it make sense for a row to have more than three blue cells? No. Why not? Note that as soon as we have three in a row, all the cells in that row are fine, irrespective of the rest of the grid. If we do the problem the other way round, and have some blues, and want to fill out legally the largest possible board, why would we put six in one row, when we could add an extra row, have three in each (maintaining column structure) and be better off than we were before. A meta-subthought is that this will be impossible to turn into an argument, but we should try to use it to inform our setup. 
Ages and ages ago, I’d noticed that you could permute the rows and columns without really affecting anything, so now seemed a good time to put all the rows with exactly one blue cell at the top (having previously established that rows with no blue cell were a disaster for achieving 6m), and all the columns with one blue cell at the left. I said there were $r_1,c_1$ such rows and columns. Then, I put all the columns which had a blue cell in common with the $r_1$ rows next to the $c_1$ columns I already had. Any such column has at least three blues in it, so I said there were $c_3$ of these, and similarly $r_3$ rows. The remaining columns and rows might as well be $r_0,c_0$ and hopefully won’t matter too much. From here, I felt I had all the ingredients, and in fact I did, though some of the book-keeping got a bit fiddly. Knowing what you are aiming for and what you have means there’s only one way to proceed: first, find expressions in terms of these which are upper bounds for the number of columns (or twice the number of columns = rows if you want to keep symmetry), and lower bounds in terms of these for the number of blue cells. I found a noticeable case-distinction depending on whether $r_1\le 3c_3$ and $c_1\le 3r_3$. If both held or neither held, it was quite straightforward, and if exactly one held, it got messy, probably because I hadn’t set things up optimally. Overall, fiddling about with these expressions occupied slightly more time than actually working out that the answer was 6m, so I don’t necessarily have a huge number of lessons to learn, except be more organised. Afterthought 2: Thought 2 said to consider bipartite graphs. I thought about this later while cycling home, because one can’t (or at least, I can’t) manipulate linear inequalities in my head while negotiating Oxford traffic and potholes. I should have thought about it earlier. The equality case is key. If you add in the edges corresponding to blue cells, you get a series of copies of $K_{1,3}$, that is, one vertex with three neighbours. Thus you have three edges for every four vertices, and everything’s a tree. This is a massively useful observation for coming up with a very short proof. You just need to show that there can’t be components of size smaller than 4. Also, I bet this is how the problem-setter came up with it…
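None of what follows is from the original account, but since the m=1 case was miscounted by hand at first, a tiny brute-force script is a cheap way to confirm that the minimum for a 4×4 grid really is 6. The sketch below is mine, in Python with invented names, and simply checks the condition that every cell is related to at least two blue cells.

```python
from itertools import product

# Brute-force check of EGMO 2016 Q3 for the 4x4 case (m = 1).
# A colouring is valid if every cell is "related" to at least two blue cells,
# i.e. its row and column (excluding the cell itself) contain >= 2 blue cells.

def valid(blue, n):
    for i in range(n):
        for j in range(n):
            related = (sum(blue[i][k] for k in range(n) if k != j)
                       + sum(blue[k][j] for k in range(n) if k != i))
            if related < 2:
                return False
    return True

def min_blue(n):
    best = n * n
    for cells in product([0, 1], repeat=n * n):   # 2^(n*n) colourings; fine for n = 4
        if sum(cells) >= best:
            continue
        blue = [list(cells[r * n:(r + 1) * n]) for r in range(n)]
        if valid(blue, n):
            best = sum(cells)
    return best

if __name__ == "__main__":
    print(min_blue(4))   # prints 6, matching the answer 6m with m = 1
```

For larger m the search space grows far too quickly for brute force, so this only serves as a sanity check on the smallest case.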
## Multivariate Time Series Data Formats

The first step in multivariate time series analysis is to obtain, inspect, and preprocess data. This topic describes the following:

• How to load economic data into MATLAB®
• Appropriate data types and structures for multivariate time series analysis functions
• Common characteristics of time series data that can warrant transforming the set before proceeding with an analysis
• How to partition your data into presample, estimation, and forecast samples.

### Multivariate Time Series Data

Two main types of multivariate time series data are:

• Response data – Observations from the n-D multivariate time series of responses yt (see Types of Stationary Multivariate Time Series Models).
• Exogenous data – Observations from the m-D multivariate time series of predictors xt. Each variable in the exogenous data appears in all response equations by default.

Before specifying any data set as an input to Econometrics Toolbox™ functions, format the data appropriately. Use standard MATLAB commands, or preprocess the data with a spreadsheet program, database program, PERL, or other tool.

You can obtain historical time series data from several freely available sources, such as the St. Louis Federal Reserve Economics Database (known as FRED®): https://research.stlouisfed.org/fred2/. If you have a Datafeed Toolbox™ license, you can use the toolbox functions to access data from various sources.

The file Data_USEconModel ships with Econometrics Toolbox. It contains time series from FRED. Load the data into the MATLAB Workspace. Variables in the workspace include:

• Data, a 249-by-14 matrix containing 14 macroeconomic time series observations.
• DataTable, a 249-by-14 MATLAB table containing the same time series observations.
• DataTimeTable, a 249-by-14 MATLAB timetable containing the same time series observations, but the observations are timestamped.
• Description, a character array containing a description of the data series and the key to the labels for each series.
• series, a 1-by-14 cell array of labels for the time series.

Data, DataTable, and DataTimeTable contain the same data. However, the tables enable you to use dot notation to access a variable. For example, DataTimeTable.UNRATE specifies the unemployment rate series. All timetables contain the variable Time, which is a datetime vector of observation timestamps. For more details, see Create Timetables and Represent Dates and Times in MATLAB.

Display the first and last sampling times and the names of the variables by using DataTimeTable.

firstperiod = DataTimeTable.Time(1)
lastperiod = DataTimeTable.Time(end)
seriesnames = DataTimeTable.Properties.VariableNames

firstperiod = datetime
   Q1-47
lastperiod = datetime
   Q1-09
seriesnames = 1×14 cell array
  Columns 1 through 8
    {'COE'}    {'CPIAUCSL'}    {'FEDFUNDS'}    {'GCE'}    {'GDP'}    {'GDPDEF'}    {'GPDI'}    {'GS10'}
  Columns 9 through 14
    {'HOANBS'}    {'M1SL'}    {'M2SL'}    {'PCEC'}    {'TB3MS'}    {'UNRATE'}

This table describes the variables in DataTimeTable.
FRED Variable – Description
• COE – Paid compensation of employees in $ billions
• CPIAUCSL – Consumer price index (CPI)
• FEDFUNDS – Effective federal funds rate
• GCE – Government consumption expenditures and investment in $ billions
• GDP – Gross domestic product (GDP) in $ billions
• GDPDEF – Gross domestic product price deflator
• GPDI – Gross private domestic investment in $ billions
• GS10 – Ten-year treasury bond yield
• HOANBS – Nonfarm business sector index of hours worked
• M1SL – M1 money supply (narrow money)
• M2SL – M2 money supply (broad money)
• PCEC – Personal consumption expenditures in $ billions
• TB3MS – Three-month treasury bill yield
• UNRATE – Unemployment rate

Consider studying the dynamics of the GDP, CPI, and unemployment rate, and suppose government consumption expenditures is an exogenous variable. Create arrays for the response and predictor data. Display the latest observation in each array.

Y = DataTimeTable{:,["CPIAUCSL" "UNRATE" "GDP"]};
x = DataTimeTable.GCE;
lastobsresponse = Y(end,:)
lastobspredictor = x(end)

lastobsresponse =
   1.0e+04 *
    0.0213    0.0008    1.4090
lastobspredictor =
   2.8833e+03

Y and x represent one path of observations, and are appropriately formatted for passing to multivariate model object functions. The timestamp information does not apply to the arrays because analyses assume sampling times are evenly spaced.

### Multivariate Data Format

You can load response and optional data sets, such as predictor data, into the MATLAB Workspace as numeric arrays, MATLAB tables, or MATLAB timetables. However, you must specify the same data type for each function call.

For numeric arrays, you specify each data set that functions use for a particular purpose as a separate input. Specifically, presample response data is a separate input from in-sample estimation data, and response and predictor data sets are separate array inputs.

For tables and timetables, you specify all contemporaneous data (simultaneous measurements) functions use for a particular purpose in the same table or timetable. Specifically, presample and estimation sample data are separate tables or timetables, but all in-sample response and predictor variables are in the same input table or timetable. You specify the variables that contain the response and predictor data by using the appropriate variable selection name-value arguments, for example, ResponseVariables.

The type of variable and problem context determine the format of the data that you supply. For any array containing multivariate time series data:

• Row t of the input contains the observations of all variables at time t.
• For an array input, column j of the array contains all observations of variable j. MATLAB treats each variable in an array as distinct.

For numeric array data inputs, a matrix of data indicates one sample path. To create a variable representing one path of length T of response data, put the data into a T-by-n matrix Y:

$\left[\begin{array}{cccc}{y}_{1,1}& {y}_{2,1}& \cdots & {y}_{n,1}\\ {y}_{1,2}& {y}_{2,2}& \cdots & {y}_{n,2}\\ \vdots & \vdots & \ddots & \vdots \\ {y}_{1,T}& {y}_{2,T}& \cdots & {y}_{n,T}\end{array}\right].$

Y(t,j) = yj,t, which is observation t of response variable j. A single path of data created for predictor variables, or other variables, has a similar form.

For table or timetable data inputs, you specify a single path of data for variable j by using a numeric vector. Therefore, Tbl{t,j} is observation t of variable j in the table. The variable selection name-value arguments determine the type of variable, either response, predictor, or otherwise.
You can specify one path of observations as an input to all multivariate model object functions that accept data. Examples of situations in which you supply one path include:

• Fit response and predictor data to a VARX model. You supply both a path of response data and a path of predictor data, see estimate.
• Initialize a VEC model with a path of presample data for forecasting or simulating paths (see forecast or simulate).
• Obtain a single response path from filtering a path of innovations through a VAR model (see filter).
• Generate conditional forecasts from a VAR model given a path of future response data (see forecast).

For numeric array data inputs, a 3-D array indicates multiple independent sample paths of data. You can create a T-by-n-by-p array Y, representing p sample paths of response data, by stacking single paths of responses (matrices) along the third dimension. Y(t,j,k) = yj,t,k, which is observation t of response variable j from path k, k = 1,…,p. All paths must have the same sample times, and variables among paths must correspond. For more details, see Multidimensional Arrays.

For table or timetable data inputs, matrix-valued variables indicate multiple independent sample paths of data. Column k of the matrix of each variable represents path k, and all paths k of all variables correspond.

You can specify multiple paths of responses or innovations as an input to several multivariate model object functions that accept data. Examples of situations in which you supply multiple paths include:

• Initialize a VEC model with multiple paths of presample data for forecasting or simulating multiple paths. Each specified path can represent different initial conditions, from which the functions generate forecasts or simulations.
• Obtain multiple response paths from filtering multiple paths of innovations through a VAR model. This process is an alternative way to simulate multiple response paths.
• Generate multiple conditional forecast paths from a VAR model given multiple paths of future response data.

estimate does not support the specification of multiple paths of response data.

#### Exogenous Data Format

All multivariate model object functions that take exogenous data as an input accept a matrix or variables in an input table or timetable X representing one path of observations in the in-sample period. MATLAB includes all exogenous variables in the regression component of each response equation. For a VAR(p) model, the response equations are:

$\left[\begin{array}{c}{y}_{1,t}\\ {y}_{2,t}\\ \vdots \\ {y}_{n,t}\end{array}\right]=c+\delta t+\left[\begin{array}{c}{x}_{1,t}\beta \left(1,1\right)+\cdots +{x}_{m,t}\beta \left(1,m\right)\\ {x}_{1,t}\beta \left(2,1\right)+\cdots +{x}_{m,t}\beta \left(2,m\right)\\ \vdots \\ {x}_{1,t}\beta \left(n,1\right)+\cdots +{x}_{m,t}\beta \left(n,m\right)\end{array}\right]+\sum _{j=1}^{p}{\Phi }_{j}{y}_{t-j}+{\epsilon }_{t}.$

To configure the regression components of the response equations, work with the regression coefficient matrix (stored in the Beta property of the model object) rather than the data. For more details, see Create VAR Model and Select Exogenous Variables for Response Equations.

Multivariate model object functions do not support multiple paths of predictor data. However, if you specify a path of predictor data and multiple paths of response or innovations data, the function associates the same predictor data to all paths.
For example, if you simulate paths of responses from a VARX model and specify multiple paths of presample values, simulate applies the same exogenous data to each generated response path.

### Preprocess Data

Your data might have characteristics that violate model assumptions. For example, you can have data with exponential growth, or data from multiple sources at different periodicities. In such cases, preprocess or transform the data to an acceptable form for analysis.

• Inspect the data for missing values, which are indicated by NaNs.
• For numeric array inputs, object functions use list-wise deletion to remove observations containing at least one missing value. If at least one response or predictor variable has a missing value for a time point (row), MATLAB removes all observations for that time (the entire row of the response and predictor data matrices). Such deletion can have implications on the time base and the effective sample size. Therefore, you should investigate and address any missing values before starting an analysis.
• For table or timetable inputs, object functions issue an error when specified data sets contain any missing values.
• For data from multiple sources, you must decide how to synchronize the data. Data synchronization can include data aggregation or disaggregation, and the latter can create patterns of missing values. You can address these types of induced missing values by imputing previous values (that is, a missing value is unchanged from its previous value), or by interpolating them from neighboring values. If the time series are variables in a timetable, then you can synchronize your data by using synchronize.
• For time series exhibiting exponential growth, you can preprocess the data by taking the logarithm of the growing series. In some cases, you must apply the first difference of the result (see price2ret). For more details on stabilizing time series, see Unit Root Nonstationarity. For an example, see VAR Model Case Study.

Note: If you apply the first difference of a series, the resulting series is one observation shorter than the original series. If you apply the first difference of only some time series in a data set, truncate the other series so that all have the same length, or pad the differenced series with initial values.

### Time Base Partitions for Estimation

When you fit a time series model to data, lagged terms in the model require initialization, usually with observations at the beginning of the sample. Also, to measure the quality of forecasts from the model, you must hold out data at the end of your sample from estimation. Therefore, before analyzing the data, partition the time base into three consecutive, disjoint intervals:

Three time base partitions for multivariate vector autoregression (VAR) and vector error-correction (VEC) models are the presample, estimation, and forecast periods.

• Presample period – Contains data used to initialize lagged values in the model. Both VAR(p) and VEC(p–1) models require a presample period containing at least p multivariate observations. For example, if you plan to fit a VAR(4) model, the conditional expected value of yt, given its history, contains yt – 1, yt – 2, yt – 3, and yt – 4. The conditional expected value of y5 is a function of y1, y2, y3, and y4. Therefore, the likelihood contribution of y5 requires y1–y4, which implies that data does not exist for the likelihood contributions of y1–y4. In this case, model estimation requires a presample period of at least four time points.
• Estimation period — Contains the observations to which the model is explicitly fit. The number of observations in the estimation sample is the effective sample size. For parameter identifiability, the effective sample size should be at least the number of parameters being estimated.
• Forecast period — Optional period during which forecasts are generated, known as the forecast horizon. This partition contains holdout data for model predictability validation.

Suppose yt is a 2-D response series and xt is a 1-D exogenous series. Consider fitting a VARX(p) model of yt to the response data in the T-by-2 matrix Y and the exogenous data in the T-by-1 vector x. Also, you want the forecast horizon to have length K (that is, you want to hold out K observations at the end of the sample to compare to the forecasts from the fitted model).

This figure shows the time base partitions for model estimation and the portions of the arrays that correspond to numeric array input arguments of the estimate function.

• Y is the required input for specifying the response data to which the model is fit. Alternatively, you can specify estimation sample response and predictor data in a table or regular timetable Tbl.
• Y0 is an optional name-value argument for specifying the presample response data in a numeric matrix. Y0 must have at least p rows. To initialize the model, estimate uses only the latest p observations Y0((end – p + 1):end,:). Similarly, you can supply optional presample data in a table or timetable by using the Presample name-value argument. If Presample is a timetable, its timestamps must directly precede those of Tbl, and the sampling frequency between the timestamps must match.
• X is an optional name-value argument for specifying exogenous data for the linear regression component. By default, estimate excludes a regression component from the model, regardless of the value of the regression coefficient Beta of the varm model template for estimation. Alternatively, you can select predictor variables from the input table or timetable of estimation sample data Tbl by using the PredictorVariables name-value argument.

If you do not specify presample data (Y0 or Presample), estimate removes observations 1 through p from the estimation sample to initialize the model, and then fits the model to the rest of the data, for example, Y((p + 1):end,:). That is, estimate infers the presample and estimation periods from the input estimation sample data. Although estimate extracts the presample from the estimation sample by default, you can extract the presample from the data and specify it using the name-value argument appropriate for the data type, which ensures that estimate initializes and fits the model to your specifications.

If you specify predictor data (X or PredictorVariables name-value arguments):

• For numeric array input data, estimate synchronizes X and Y with respect to the last observation in the arrays (T – K in the previous figure), and applies only the required number of observations to the regression component. This action implies that X can have more rows than Y. For table or timetable input data, all variables are synchronized because the estimation sample data is one input.
• For numeric array input data, if you also specify presample data, estimate uses only the latest exogenous data observations required to fit the model (observations J + 1 through T – K in the previous figure). estimate ignores presample exogenous data. This fact doesn't apply to table or timetable input data.
If you plan to validate the predictive power of the fitted model, you must extract the forecast sample from your data set before estimation.

### Partition Multivariate Time Series Data for Estimation

Consider fitting a VAR(4) model to the data and variables in Load Multivariate Economic Data, and holding out the last 2 years of data to validate the predictive power of the fitted model.

Load the data. Create a timetable containing the predictor and response variables.

responsenames = ["CPIAUCSL" "UNRATE" "GDP"];
predictorname = "GCE";
TT = DataTimeTable(:,[responsenames predictorname]);

Identify all rows in the timetable containing at least one missing observation (NaN).

whichmissing = ismissing(TT);
idxvar = sum(whichmissing) > 0;
hasmissing = TT.Properties.VariableNames(idxvar)

hasmissing = 1x1 cell array
    {'UNRATE'}

wheremissing = find(whichmissing(:,idxvar) > 0)

wheremissing = 4×1
     1
     2
     3
     4

The unemployment rate is missing the first year of data in the sample. Remove the observations (rows) with the leading missing values from the data.

TT = rmmissing(TT);

rmmissing uses listwise deletion to remove all rows from the input timetable containing at least one missing observation.

A VAR(4) model requires 4 presample responses, and the forecast sample requires 2 years (8 quarters) of data. Partition the response data into presample, estimation, and forecast sample variables. Partition the predictor data into estimation and forecast sample variables (presample predictor data is not considered estimation).

p = 4;            % Num. presample observations
fh = 8;           % Forecast horizon
T = size(TT,1);   % Total sample size
eT = T - p - fh;  % Effective sample size
idxpre = 1:p;
idxest = (p + 1):(T - fh);
idxfor = (T - fh + 1):T;
Y0 = TT{idxpre,responsenames};   % Presample responses
YF = TT{idxfor,responsenames};   % Forecast sample responses
Y = TT{idxest,responsenames};    % Estimation sample responses
xf = TT{idxfor,predictorname};
x = TT{idxest,predictorname};

When estimating the model using estimate, specify a varm model template representing a VAR(4) model and the estimation sample response data Y as inputs. Specify the presample response data Y0 to initialize the model by using the 'Y0' name-value pair argument, and specify the estimation sample predictor data x by using the 'X' name-value pair argument. Y and x are synchronized data sets, while Y0 occurs during the previous four periods before the estimation sample starts.

After estimation, you can forecast the model using forecast by specifying the estimated VARX(4) model object returned by estimate, the forecast horizon fh, and estimation sample response data Y to initialize the model for forecasting. Specify the forecast sample predictor data xf for the model regression component by using the 'X' name-value pair argument. Determine the predictive power of the estimation model by comparing the forecasts to the forecast sample response data YF.
Purity dependent uncertainty relation and possible enhancement of quantum tunneling phenomenon
V. N. Chernega
The position-momentum uncertainty relations containing the dependence of their quantum bounds on the state purity parameter $\mu$ are discussed in the context of the possibility of influencing the potential barrier transparency by means of decoherence processes. The behavior of the barrier transparency $D$ is shown to satisfy the condition $\mu^{-1}\ln D=const$. The particular case of a thermal state with temperature $T$, where the purity parameter is a function of temperature, is considered. For large temperatures the condition on the barrier transparency is shown to be $T\ln D=const$.
View original: http://arxiv.org/abs/1303.5238
Mathematics Department Seminars Updated: 4-27-2012 APRIL 2012 Department Colloquium Topic: Near-optimal mean value estimates for Weyl sums Presenter: Trevor Wooley, University of Bristol Date: Wednesday, April 25, 2012, Time: 4:30 p.m., Location: Fine Hall 314 Abstract: Exponential sums of large degree play a prominent role in the analysis of problems spanning the analytic theory of numbers. In 1935, I. M. Vinogradov devised a method for estimating their mean values very much more efficient than the methods available hitherto due to Weyl and van der Corput, and subsequently applied his new estimates to investigate the zero-free region of the Riemann zeta function, in Diophantine approximation, and in Waring's problem. Recent applications from the 21st century include sum-product estimates in additive combinatorics, and the investigation of the geometry of moduli spaces. Over the past 75 years, estimates for the moments underlying Vinogradov's mean value theorem have failed to achieve those conjectured by a factor of roughly log k in the number of implicit variables required to successfully analyse exponential sums of degree k. In this talk we will sketch out some history, several applications, and the ideas underlying our recent work which comes within a stone's throw of the best possible conclusions. Ergodic Theory and Statistical Mechanics Seminar Topic: Invariant Measures, Conjugations and Renormalizations of Circle Maps with Break points Presenter: Akhtam Dzhalilov, Samarkand State University Date: Thursday, April 26, 2012, Time: 2:00 p.m., Location: Fine Hall 601 Abstract: An important question in circle dynamics is regarding the absolute continuity of an invariant measure. We will consider orientation preserving circle homeomorphisms with break points, that is, maps that are smooth everywhere except for several singular points at which the first derivative has a jump. It is well known that the invariant measures of sufficiently smooth circle diffeomorphisms are absolutely continuous w.r.t. Lebesgue measure. But in the case of homeomorphisms with break points the results are quite different. We will discuss conjugacies between two circle homeomorphisms with break points. Consider the class of circle homeomorphisms with one break point $b$ and satisfying the Katznelson-Ornstein smoothness condition i.e. $Df$ is absolutely continuous on $[b, b + 1]$ and $D^2f \in L^p(S^1, dl)$, $p > 1$. We will formulate some results concerning the renormalization behavior of such circle maps. Discrete Mathematics Seminar Topic: Points, lines, and local correction of codes Presenter: Avi Wigderson, IAS Date: Thursday, April 26, 2012, Time: 2:15 p.m., Location: Fine Hall 224 Abstract: A classical theorem in Euclidean geometry asserts that if a set of points has the property that every line through two of them contains a third point, then they must all be on the same line. We prove several approximate versions of this theorem (and related ones), which are motivated from questions about locally correctable codes and matrix rigidity. The proofs use an interesting combination of combinatorial, algebraic and analytic tools. The talk is self contained.
Joint work with Boaz Barak, Zeev Dvir and Amir Yehudayoff Algebraic Topology Seminar Topic: Loop products and dynamics Presenter: Nancy Hingston, IAS and the College of New Jersey Date: Thursday, April 26, 2012, Time: 3:00 p.m., Location: Fine Hall 214 Abstract: A metric on a compact manifold M gives rise to a length function on the free loop space LM whose critical points are the closed geodesics on M in the given metric. Morse theory gives a link between Hamiltonian dynamics and the topology of loop spaces, between iteration of closed geodesics and the algebraic structure given by the Chas-Sullivan product on the homology of LM. Geometry reveals the existence of a related product on the cohomology of LM. A number of known results on the existence of closed geodesics are naturally expressed in terms of nilpotence of products. We use products to prove a resonance result for the loop homology of spheres. I will not assume any prior knowledge of loop products. Mark Goresky, Hans-Bert Rademacher, and (work in progress) Ralph Cohen and Nathalie Wahl are collaborators. Joint IAS and Princeton University Number Theory Seminar Topic: Deligne-Lusztig theory for unipotent groups and the local Langlands correspondence Presenter: Dmitriy Boyarchenko, University of Michigan Date: Thursday, April 26, 2012, Time: 4:30 p.m., Location: Fine Hall 214 Abstract: (1) A (very) special case of Deligne-Lusztig theory yields a construction of cuspidal irreducible representations of the finite group GL_n(F_q) in the cohomology of an algebraic variety equipped with an action of GL_n(F_q). There is also a well known relationship between cuspidal representations of GL_n(F_q) and depth 0 supercuspidal representations of GL_n(F), where F is a local field with residue field F_q. (2) On the other hand, thanks to the work of Boyer, Carayol, Deligne, Harris, Henniart, Laumon, Rapoport, Stuhler, Taylor..., it is known that the local Langlands correspondence for GL_n(F) is realized in the cohomology of the Lubin-Tate tower of rigid analytic spaces over F. There is a direct geometric link between (1) and (2): the first level of the Lubin-Tate tower contains an open affinoid with good reduction, whose special fiber is isomorphic to a Deligne-Lusztig variety for GL_n(F_q). I will explain a similar picture for certain supercuspidal representations of GL_n(F) of positive depth. In particular, I will describe the construction of an open affinoid (with good reduction) in a higher level of the Lubin-Tate tower, which has the following properties. On the one hand, its cohomology gives an explicit geometric realization of the local Langlands correspondence for a certain class of positive depth supercuspidal representations of GL_n(F). On the other hand, its special fiber is related to a certain unipotent group over F_q in a way that is similar to one of the known approaches to Deligne-Lusztig theory for reductive groups over F_q. Topology Seminar Topic: Nonorientable four-ball genus can be arbitrarily large Presenter: Joshua Batson, MIT Date: Thursday, April 26, 2012, Time: 4:30 p.m., Location: Fine Hall 314 Abstract: A classical problem in low-dimensional topology is to find a surface of minimal genus bounding a given knot K in the 3-sphere. Of course, the minimal genus will depend on the class of surface allowed: must it lie in S3 as well, or can it bend into B4? must the embedding be smooth, or only locally flat? must the surface admit an orientation, or can it be nonorientable? 
Our ability to bound or compute these genera varies dramatically between classes. Orientable surfaces form homology classes, so are amenable to algebraic topology (cf Alexander polynomial), and they admit complex structures, so can be understood using gauge theory (cf Ozsvath-Szabo's \tau). In contrast, the largest lower bound on the genus of a nonorientable surface smoothly embedded in B4 bounding any knot K was, until recently, the integer 3. We will construct a better bound using Heegaard-Floer d-invariants and the Murasugi signature. In particular, we will show that the minimal b_1 of a smoothly embedded, nonorientable surface in B4 bounding the torus knot T(2k,2k-1) is k-1. Differential Geometry and Geometric Analysis Seminar Topic: Gluing for Nonlinear PDEs, and Self-Shrinking Solitons in Mean Curvature Flow Presenter: Niels Martin Møller, MIT Date: Friday, April 27, 2012, Time: 3:00 p.m., Location: Fine Hall 314 Abstract: I will discuss some recent gluing constructions from minimal surface theory that yield complete, embedded, self-shrinking soliton surfaces of large genus g in R^3 (as expected from numerics by Tom Ilmanen and others in the early 90's), by fusing known low-genus examples. The analysis in the case of non-compact ends (joint w/ N. Kapouleas & S. Kleene), is complicated by the unbounded geometry, where Schrödinger operators (of Ornstein-Uhlenbeck type) with fast growth of the coefficients need to be understood well via Liouville-type results, which in turn enable construction of the resolvent of the stability operator and closing the PDE system. Analysis Seminar Topic: The Cauchy problem for the Benjamin-Ono equation in L^2 revisited (Joint work with Luc Molinet) Presenter: Didier Pilod, Universidade Federal do Rio de Janeiro / University of Chicago Date: Monday, April 30, 2012, Time: 3:15 p.m., Location: Fine Hall 314 Abstract: The Benjamin-Ono equation models the unidirectional evolution of weakly nonlinear dispersive internal long waves at the interface of a two-layer system, one being infinitely deep. The Cauchy problem associated to this equation presents interesting mathematical difficulties and has been extensively studied in the recent years. In a recent work (2007), Ionescu and Kenig proved well-posedness for real-valued initial data in L^2(R). In this talk, we will give another proof of Ionescu and Kenig's result, which moreover provides stronger uniqueness results. In particular, we prove unconditional well-posedness in H^s(R), for s > 1/4 . Note that our approach also permits us to simplify the proof of the global well-posedness in L^2(T) by Molinet (2008) and yields unconditional well-posedness in H^{1/2}(T). Finally, it is worthwhile to mention that our technique of proof also applies to a higher-order Benjamin-Ono equation. We prove that the associated Cauchy problem is globally well-posed in the energy space H^1(R). Joint Princeton-Rutgers Seminar on Geometric PDEs Topic: Min-max theory and the Willmore Conjecture Presenter: Andre Neves, Imperial College Date: Monday, April 30, 2012, Time: 4:30 p.m., Location: Fine Hall 110 Abstract: In 1965, T. J. Willmore conjectured that the integral of the square of the mean curvature of any torus immersed in Euclidean three-space is at least 2 \pi^2. I will talk about my recent joint work with Fernando Marques in which we prove this conjecture using the min-max theory of minimal surfaces.
PACM Colloquium Topic: Nonlinear Expectation, Nonlinear PDE and Stochastic Calculus under Knightian Uncertainty Presenter: Shige Peng, Shandong University Date: Monday, April 30, 2012, Time: 4:30 p.m., Location: Fine Hall 214 Abstract: A. N. Kolmogorov's "Foundations of the Theory of Probability", published in 1933, established the modern axiomatic foundations of probability theory. Since then this theory has been profoundly developed and widely applied to situations where uncertainty cannot be neglected. But already in 1921 Frank Knight had clearly classified two types of uncertainties: the first one is that for which the probability is known; the second one, now called Knightian uncertainty, is for cases where the probability itself is also uncertain. The situation with Knightian uncertainty has become one of the main concerns in the domain of data processing, economics, statistics, and especially in measuring and controlling financial risks. A long-standing challenging problem is how to establish a theoretical framework comparable to Kolmogorov's one, to treat these more complicated situations with Knightian uncertainties. The objective of the theory of nonlinear expectation, rapidly developed in recent years, is to solve this problem. This is an important program. Some fundamental results have been established such as the law of large numbers, the central limit theorem, martingales, G-Brownian motions, G-martingales and the corresponding stochastic calculus of Itô's type, nonlinear Markov processes, as well as the calculation of measures of risk in finance. But many deep problems are still to be explored. This new framework of nonlinear expectation is naturally and deeply linked to nonlinear partial differential equations (PDE) of parabolic and elliptic types. These PDEs appear in the law of large numbers, central limit theorem, and nonlinear diffusion processes in the new theory, and inversely, almost all solutions of linear, quasilinear and/or fully nonlinear PDEs can be expressed in terms of the nonlinear expectation of a function of the corresponding (nonlinear) diffusion processes. Moreover, a new type of 'path-dependent partial differential equations' has been introduced which provides a PDE tool to study a stochastic process under a nonlinear expectation. Numerical calculations of these path-dependent PDEs will provide the corresponding backward stochastic calculations. MAY 2012 Algebraic Geometry Seminar Topic: Comparison theorems in p-adic Hodge theory Presenter: Bhargav Bhatt, University of Michigan Date: Tuesday, May 1, 2012, Time: 4:30 p.m., Location: Fine Hall 322 Abstract: A basic theorem in Hodge theory is the isomorphism between de Rham and Betti cohomology for complex manifolds; this follows directly from the Poincare lemma. The p-adic analogue of this comparison lies deeper, and was the subject of a series of extremely influential conjectures made by Fontaine in the early 80s (which have since been established by various mathematicians). In my talk, I will first discuss the geometric motivation behind Fontaine's conjectures, and then explain a simple new proof based on general principles in derived algebraic geometry --- specifically, derived de Rham cohomology --- and some classical geometry with curve fibrations. This work builds on ideas of Beilinson who proved the de Rham comparison conjecture this way. Department Colloquium Topic: Approximate groups and Hilbert's fifth problem Presenter: T. 
Tao, University of California - Los Angeles Date: Wednesday, May 2, 2012, Time: 4:30 p.m., Location: Fine Hall 314 Abstract: Approximate groups are, roughly speaking, finite subsets of groups that are approximately closed under the group operations, such as the discrete interval {-N,...,N} in the integers. Originally studied in arithmetic combinatorics, they also make an appearance in geometric group theory and in the theory of expansion in Cayley graphs. Hilbert's fifth problem asked for a topological description of Lie groups, and in particular whether any topological group that was a continuous (but not necessarily smooth) manifold was automatically a Lie group. This problem was famously solved in the affirmative by Montgomery-Zippin and Gleason in the 1950s. These two mathematical topics initially seem unrelated, but there is a remarkable correspondence principle (first implicitly used by Gromov, and later developed by Hrushovski and Breuillard, Green, and myself) that connects the combinatorics of approximate groups to problems in topological group theory such as Hilbert's fifth problem. This correspondence has led to recent advances both in the understanding of approximate groups and in Hilbert's fifth problem, leading in particular to a classification theorem for approximate groups, which in turn has led to refinements of Gromov's theorem on groups of polynomial growth that have applications to the study of the topology of manifolds. We will survey these interconnected topics in this talk. Ergodic Theory and Statistical Mechanics Seminar Topic: Reducibility for the quasi-periodic linear Schrödinger and wave equations. Presenter: L. H. Eliasson, Université Paris Diderot Date: Thursday, May 3, 2012, Time: 2:00 p.m., Location: Fine Hall 601 Abstract: We shall discuss reducibility of these equations on the torus with a small potential that depends quasi-periodically on time. Reducibility amounts to "reduce" the equation to a time-independent linear equation with pure point spectrum in which case all solutions will be of Floquet type. For the Schrödinger equation, this has been proven in a joint work with S. Kuksin, and for the wave equation we shall report on a work in progress with B. Grebert and S. Kuksin. Discrete Mathematics Seminar Topic: Edge-coloring 8-regular planar graphs Presenter: Maria Chudnovsky, Columbia University Date: Thursday, May 3, 2012, Time: 2:15 p.m., Location: Fine Hall 224 Abstract: In 1974 Seymour made the following conjecture: Let G be a k-regular planar (multi)graph, such that for every odd set X of vertices of G, at least k edges of G have one end in X and the other in V(G) \ X. Then G is k-edge colorable. For k=3 this is equivalent to the four-color theorem. The cases k=4,5 were solved by Guenin, the case k=6 by Dvorak, Kawarabayashi and Kral, and the case k=7 by Edwards and Kawarabayashi. In joint work with Edwards and Seymour, we now have a proof for the case k=8, and that is the topic of this talk. Joint IAS and Princeton University Number Theory Seminar Topic: Eisenstein series on exceptional groups, graviton scattering amplitudes, and the unitary dual Presenter: Steven D. Miller, Rutgers University Date: Thursday, May 3, 2012, Time: 4:30 p.m., Location: IAS - Room S-101 Abstract: I will describe the appearance of special values of Eisenstein series on E6, E7, and E8 that arose in studying the low energy expansion of the 4-graviton scattering amplitude in string theory (see arxiv:1004.0163 and 1111.2983). 
Through methods to handle the combinatorics of Langlands' constant term formulas we were able to exactly identify some correction terms beyond the main term predicted by Einstein's general relativity. In some cases string theory predicts cancellations of terms in these formulas, while in others it derives information from them. Some of the correction terms are proven to be automorphic realizations of small, real unitary representations of split real groups; this is used to limit the instanton contributions to these terms (i.e., verifying their fractional BPS properties). As a consequence of the combinatorial methods we prove a conjecture of Arthur concerning the spherical unitary dual of split real groups. (Joint work with Michael Green and Pierre Vanhove) Differential Geometry and Geometric Analysis Seminar Topic: TBA Presenter: Bo Guan, Ohio State University Date: Friday, May 4, 2012, Time: 3:00 p.m., Location: Fine Hall 314 Analysis Seminar Topic: BBM: a statistical point of view Presenter: Anne-Sophie de Suzzoni, Université de Cergy-Pontoise Date: Monday, May 7, 2012, Time: 3:15 p.m., Location: Fine Hall 314 Abstract: After presenting the BBM equation and some of its properties, we will try to understand which kind of statistics have a chance to remain invariant by its flow and produce one of them. The stability of these statistics will then be studied: to do so, we will sketch the parallelism between properties of equations and the statistics whose laws remain invariant by their flow. Joint PACM & Analysis Seminar Topic: The 2D Boussinesq equations with partial dissipation Presenter: Jiahong Wu, Oklahoma State University Date: Monday, May 7, 2012, Time: 4:30 p.m., Location: Fine Hall 214 Abstract: The Boussinesq equations concerned here model geophysical flows such as atmospheric fronts and ocean circulations. Mathematically the 2D Boussinesq equations serve as a lower-dimensional model of the 3D hydrodynamics equations. In fact, the 2D Boussinesq equations retain some key features of the 3D Euler and the Navier-Stokes equations such as the vortex stretching mechanism. The global regularity problem on the 2D Boussinesq equations with partial dissipation has attracted considerable attention in the last few years. In this talk we will summarize recent results on various cases of partial dissipation, present the work of Cao and Wu on the 2D Boussinesq equations with vertical dissipation and vertical thermal diffusion and explain the work of Chae and Wu on the logarithmically supercritical Boussinesq equations. Ergodic Theory and Statistical Mechanics Seminar Topic: Effective discreteness of the 3-dimensional Markov spectrum Presenter: Han Li, Yale University Date: Thursday, May 10, 2012, Time: 2:00 p.m., Location: Fine Hall 601 Abstract: Let the set O={non-degenerate, indefinite, real quadratic forms in 3-variables with determinant 1}. We define, for every form Q in the set O, the Markov minimum m(Q)=min{|Q(v)|: v is a non-zero integral vector in $R^3$}. The set M={m(Q): Q is in O} is called the 3-dimensional Markov spectrum. An early result of Cassels-Swinnerton-Dyer combined with Margulis' proof of the Oppenheim conjecture asserts that, for every a>0, $M \cap (a, \infty)$ is a finite set. In this lecture we will show that $\#\{M \cap (a, \infty)\} \ll a^{-26}$. This is a joint work with Prof. Margulis, and our method is based on dynamics on homogeneous spaces. 
Joint IAS and Princeton University Number Theory Seminar Topic: TBA Presenter: Jack Thorne, Harvard University Date: Thursday, May 10, 2012, Time: 4:30 p.m., Location: Fine Hall 214 Algebraic Topology Seminar Topic: TBA Presenter: Allison Gilmore, Princeton University Date: Thursday, May 17, 2012, Time: 3:00 p.m., Location: Fine Hall 214
### N000ughty Thoughts

Factorial one hundred (written 100!) has 24 noughts when written in full, and 1000! has 249 noughts. Convince yourself that the above is true. Perhaps your methodology will help you find the number of noughts in 10 000! and 100 000! or even 1 000 000!

### Forgotten Number

I have forgotten the number of the combination of the lock on my briefcase. I did have a method for remembering it...

### Factorial

How many zeros are there at the end of the number which is the product of the first hundred positive integers?

# Trailing Zeros

##### Stage: 3 and 4 Short Challenge Level:

12. Every zero at the end of a number corresponds to one of the factors being 10 (e.g. 23000 has 3 factors of 10, i.e. $10^3$ divides 23000). But 10 itself can be factorised into $5 \times 2$, so we are looking for whichever of 2 and 5 appears to the smaller power in the factorisation, and that will be our answer. $50!$ has at least 25 factors of 2, plenty more than we expect it to have factors of 5. Of the numbers 1 to 50, ten of them (5, 10, ..., 50) have 5 as a factor, and two of those have two factors of 5. So $50!$ has 12 factors of 5 (less than 25, good). Hence $50!$ has 12 zeros at its end.

This problem is taken from the UKMT Mathematical Challenges.
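The same counting argument is easy to automate. Below is a small Python sketch (my own illustration, not part of the original problem page) that sums $\lfloor n/5\rfloor + \lfloor n/25\rfloor + \lfloor n/125\rfloor + \dots$, which is exactly the number of factors of 5 — and hence of trailing noughts — in $n!$:

```python
def trailing_zeros(n):
    """Count trailing noughts of n! by summing n//5 + n//25 + n//125 + ..."""
    count, p = 0, 5
    while p <= n:
        count += n // p   # multiples of p each contribute one more factor of 5
        p *= 5
    return count

print(trailing_zeros(50))       # 12
print(trailing_zeros(100))      # 24
print(trailing_zeros(1000))     # 249
print(trailing_zeros(10_000))   # 2499
```

Running it reproduces 12 for $50!$, 24 for $100!$ and 249 for $1000!$, and answers the follow-up questions as well (2499 for $10\,000!$, 24999 for $100\,000!$, 249998 for $1\,000\,000!$).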
# A triangle has sides A, B, and C. The angle between sides A and B is (pi)/2 and the angle between sides B and C is pi/12. If side B has a length of 17, what is the area of the triangle?

$\angle B = \pi - \left(\frac{\pi}{2} + \frac{\pi}{12}\right) = \frac{5 \pi}{12} \to \frac{a}{\sin A} = \frac{b}{\sin B} \to \frac{a}{\sin \left(\frac{\pi}{12}\right)} = \frac{17}{\sin \left(\frac{5 \pi}{12}\right)} \to a = \frac{17 \sin \left(\frac{\pi}{12}\right)}{\sin \left(\frac{5 \pi}{12}\right)} = 4.555 \ldots$

$\text{Area}\,\triangle = \frac{1}{2} \left(a\right) \left(b\right) \sin C = \frac{1}{2} \left(4.555 \ldots\right) \left(17\right) \sin \left(\frac{\pi}{2}\right) \approx 38.72$

First subtract the given angles from $\pi$ to find the third angle. In order to find the area of the triangle we need to find the measurement of one of the other two sides. So I use the law of sines to find side a and then put it into the area formula and calculate. Note I use angle C since sides a and b are in my calculation.
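For readers who want to double-check the arithmetic, here is a minimal Python sketch of the same law-of-sines computation (the variable names simply mirror the working above):

```python
import math

A = math.pi / 12        # angle opposite side a (between sides B and C)
C = math.pi / 2         # angle opposite side c (between sides A and B)
B = math.pi - A - C     # remaining angle, 5*pi/12
b = 17.0

a = b * math.sin(A) / math.sin(B)     # law of sines
area = 0.5 * a * b * math.sin(C)      # area = 1/2 * a * b * sin(C)
print(round(a, 3), round(area, 2))    # 4.555 38.72
```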
# Pattern - writer identification using texture descriptors

This tutorial is about an approach for writer identification using texture descriptors of handwritten fragments. To begin with, we may define image analysis = image conversion + digit analysis, where image conversion changes the image from a digital matrix into a digital list, and digit analysis uses a machine learning method.

Characterizing an individual's handwriting style plays an important role in handwritten document analysis, and automatic writer identification has attracted a large number of researchers in the pattern recognition field, based on modern handwritten text, musical scores and historical documents. We can learn from Figure 1 that different writers have different handwriting styles, even for the letter 'l'; this gives us a way to identify the writer using texture descriptors of handwritten fragments. We may identify the writer from texture descriptors by following these steps:

1. cut the image into pieces of letters
2. resize the pieces of letters to NxN pixels
3. convert the pieces of letters into digital arrays
4. use a machine learning method (SVM/KNN) to identify the writer

# Image Conversion

## 1. Local Binary Pattern (LBP)

### 3x3 LBP

Local binary patterns (LBP) is a type of visual descriptor used for classification in computer vision. LBP is a particular case of the Texture Spectrum model proposed in 1990. Local binary patterns (wiki): https://en.wikipedia.org/wiki/Local_binary_patterns

Figure 2 shows how the LBP transform algorithm works. Going clockwise, we define positions from 0 to 8; coordinates start at (0, 0), and the center point has coordinate (1, 1). The LBP operator labels the pixels of an image by thresholding the 3x3 neighborhood of each pixel with the center value and summing the thresholded values weighted by powers of 2. The resulting LBP can be expressed in decimal form as follows:

where gi and gc are, respectively, the gray-level values of the i surrounding pixels and of the central pixel in a circular neighborhood with radius R. We compare each point (gi) to the center point value (gc): if its gray level is greater than the center point's, the bit is set to 1, otherwise it is set to 0, and the resulting bit is multiplied by the corresponding power of 2. The function s(x) is:

After Formulas 1/2, we get a code for each 3x3 block, and the texture is represented by the histogram of the labels:

where δ is the Kronecker delta and is given as:

Figure 3 shows an example of the LBP transform; even though the right picture has lost a lot of pixel detail, it still retains its texture descriptors:

### Multiscale LBP

Multiscale LBP is an improvement over 3x3 LBP that allows converting image blocks of 8x8 or 16x16 pixels. The bigger block accelerates the calculation, but also brings a loss of pixel information. See Figure 4 for how it works:

### Rotation LBP

Formally, rotation-invariant LBP can be achieved by defining:

where d is the rotation in degrees, d ∈ (0, 360), and after mapping we choose the minimum value as the result, which is a unique value. 
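As a complement to the hand-rolled transform below, here is a hedged sketch using scikit-image's built-in `local_binary_pattern` (assuming scikit-image and OpenCV are installed; `fragment.png` is a placeholder path). It produces both the plain codes and the rotation-invariant variant, plus the histogram descriptor of Formula 3:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

img = cv2.imread("fragment.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename

P, R = 8, 1  # 8 neighbours on a radius-1 circle, comparable to the 3x3 case
codes = local_binary_pattern(img, P, R, method="default")   # plain LBP codes
rot_codes = local_binary_pattern(img, P, R, method="ror")   # rotation-invariant codes

# Texture descriptor = normalised histogram of the codes (Formula 3)
hist, _ = np.histogram(codes.ravel(), bins=np.arange(2 ** P + 1), density=True)
print(hist.shape)   # (256,)
```

The `method="ror"` option corresponds to the rotation LBP idea: codes that are bit-rotations of each other are mapped to a single representative value.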
### Simple 3x3 LBP coding

Gist source code: https://gist.github.com/grasses/bacbdfae0626353de12cedc4ceaed552

    import numpy as np
    import cv2
    from matplotlib import pyplot as plt

    def thresholded(center, pixels):
        # compare each neighbour to the center value: 1 if >= center, else 0
        out = []
        for a in pixels:
            if a < center:
                out.append(0)
            else:
                out.append(1)
        return out

    def get_pixel(pixel_list, idx, idy, default = 0):
        # return the pixel value, or `default` when the index falls outside the image
        if idx < 0 or idy < 0:
            return default
        try:
            return pixel_list[idx, idy]
        except IndexError:
            return default

    def show(img, lbp_img):
        plt.figure(figsize = (8, 8))
        plt.subplot(221)
        plt.title("original image")
        plt.imshow(img, cmap=plt.cm.Greys_r)
        plt.subplot(222)
        plt.title("LBP transform image")
        plt.imshow(lbp_img, cmap=plt.cm.Greys_r)
        plt.subplot(223)
        (hist, bins) = np.histogram(img.flatten(), 256, [0, 256])
        cdf = hist.cumsum()
        cdf_normalized = cdf * hist.max() / cdf.max()
        plt.plot(cdf_normalized, color = 'b')
        plt.hist(img.flatten(), 256, [0, 256], color = 'r')
        plt.xlim([0, 256])
        plt.legend(('cdf', 'histogram'), loc = 'upper left')
        plt.show()

    def main(fpath):
        # load the image as grayscale and prepare an empty output image for the LBP codes
        img = cv2.imread(fpath, cv2.IMREAD_GRAYSCALE)
        lbp_img = np.zeros_like(img)
        offset = [(-1, -1), (0, -1), (1, -1), (1, 0), (-1, 0), (-1, 1), (1, 1), (0, 1)]
        for x in range(len(img)):
            for y in range(len(img[x])):
                matrix = []
                for z in range(len(offset)):
                    matrix.append(get_pixel(img, x + offset[z][0], y + offset[z][1]))
                center = img[x, y]
                # get thresholded 0/1 values and weight them by powers of 2
                values = thresholded(center, matrix)
                weights = [1, 2, 4, 8, 16, 32, 64, 128]
                res = 0
                for a in range(len(values)):
                    res += weights[a] * values[a]
                lbp_img.itemset((x, y), res)
        show(img, lbp_img)

    if __name__ == '__main__':
        main(r'/path/to/img')

## 2. Local Ternary Patterns (LTP)

Local Ternary Patterns (LTP) is an advanced version of LBP, which introduces three gray-level ranges around the center value for the block transform. In Figure 6, going clockwise, we define positions from 0 to 8; coordinates start at (0, 0), and the center point has coordinate (1, 1). With the 3 ranges (0, 30 - t), (30 - t, 30 + t), (30 + t, 256), we get 3 possible result values: -1, 0, 1. As shown in the picture, when t = 5:

G0=42 > Gc=30 and G0 > 35, then LTP(0, 0) = 1
G1=55 > Gc=30 and G1 > 35, then LTP(0, 1) = 1
...
G4=18 < Gc=30 and G4 < 25, then LTP(2, 2) = -1

We may then define the approach as a mathematical formula:

where gc is the center pixel, gi is the current pixel (ranging from 0 to 8) and t is the threshold. Then st() can be defined as in Formula 7:

# Digit Analysis

K-Nearest Neighbor guide: http://homeway.me/2017/04/21/machine-learning-knn/

In this section, we use K-Nearest Neighbor (KNN) for digit analysis.

#### This article is from 夏日小草 (homeway.me); please credit the source when reposting: http://homeway.me/2017/05/04/pattern-writter-identify/

-by grasses 2017-05-05 03:22:34
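To make the LTP thresholding above concrete, here is a minimal sketch (my own illustration, not the author's gist) that returns the three-valued codes of Formulas 6/7 for every 3x3 neighbourhood, with t = 5 as in the example figure:

```python
import numpy as np

def ltp_codes(img, t=5):
    """Three-valued LTP thresholding for each 3x3 neighbourhood of a 2D grayscale array.

    Returns an (H, W, 8) array of values in {-1, 0, +1}: +1 if the neighbour exceeds
    the center by more than t, -1 if it falls below it by more than t, else 0.
    (Unvectorised on purpose, to stay close to the description above.)
    """
    img = img.astype(np.int32)
    H, W = img.shape
    out = np.zeros((H, W, 8), dtype=np.int8)
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (-1, 0), (-1, 1), (1, 1), (0, 1)]   # same neighbour ordering as the LBP code
    for k, (dx, dy) in enumerate(offsets):
        for x in range(1, H - 1):
            for y in range(1, W - 1):
                diff = img[x + dx, y + dy] - img[x, y]
                if diff > t:
                    out[x, y, k] = 1
                elif diff < -t:
                    out[x, y, k] = -1
                # otherwise the code stays 0
    return out
```

A common next step, not shown here, is to split the ternary codes into an "upper" (+1) and a "lower" (-1) binary pattern, histogram each as in the LBP case, and feed the concatenated histograms to the KNN or SVM classifier mentioned above.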
# How can I calculate the rolling moment of an aileron for a given plane based on its performance? Let's say I have an already-built airplane with known basic characteristics like weight, wing span and wing surface, and I can measure the time of all possible manoeuvres at different speeds. How can I calculate the roll moment of an aileron in its maximal deflection? The precision of hundreds of [kg*m] is sufficient. If you know the rolling speed at a given flight speed, you can calculate the aileron effectiveness and use that to calculate the forces. The final rolling speed is reached when roll damping and the aileron-induced rolling moment reach an equilibrium: $$c_{l\xi} \cdot \frac{\xi_l - \xi_r}{2} = -c_{lp} \cdot \frac{\omega_x \cdot b}{2\cdot v_\infty} = -c_{lp} \cdot p$$ Thus, your aileron effectiveness is $$c_{l\xi} = -c_{lp}\cdot\frac{\omega_x \cdot b}{v_\infty\cdot(\xi_l - \xi_r)}$$ The roll damping term is for unswept wings $$c_{lp} = -\frac{1}{4} \cdot \frac{\pi \cdot AR}{\sqrt{\frac{AR^2}{4}+4}+2}$$ and the moment per aileron now is $$M = c_{l\xi} \cdot \xi \cdot S_{ref} \cdot b \cdot q_\infty$$ Calculate the moment for each aileron separately; normally the left and right deflection angles are not exact opposites, which helps to reduce stick forces. If you only need an approximation, maybe do it like this: You first need to have all dimensions and the deflection angles. I expect you don't have lift polars of the wing section, so you need to approximate the lift increase due to aileron deflection with general formulas. This is $$c_{l\xi} = c_{l\alpha} \cdot \sqrt{\lambda} \cdot \frac{S_{aileron}}{S_{ref}} \cdot \frac{y_{aileron}}{b}$$ and the moment per aileron now is $$M = c_{l\xi} \cdot \xi \cdot S_{ref} \cdot b \cdot q_\infty = c_{l\alpha} \cdot \sqrt{\lambda} \cdot \xi \cdot S_{aileron} \cdot y_{aileron} \cdot q_\infty$$ Nomenclature: $$p \:\:\:\:\:\:\:\:$$ dimensionless rolling speed (= $$\omega_x\cdot\frac{b}{2\cdot v_\infty}$$). $$\omega_x$$ is the roll rate in radians per second. $$b \:\:\:\:\:\:\:\:$$ wing span $$c_{l\xi} \:\:\:\:\:\:$$ aileron lift increase with deflection angles $$\xi$$ $$\xi_{l,r} \:\:\:\:\:$$ left and right aileron deflection angles (in radians) $$c_{lp} \:\:\:\:\:\;$$ roll damping $$c_{l\alpha} \:\:\:\:\:\;$$ the wing's lift coefficient gradient over angle of attack. See this answer on how to calculate it. $$\pi \:\:\:\:\:\:\:\:$$ 3.14159$$\dots$$ $$AR \:\:\:\:\:$$ aspect ratio of the wing $$\lambda \:\:\:\:\:\:\:\:$$ relative aileron chord $$S_{aileron} \:$$ Surface area of the aileron-equipped part of the wing $$S_{ref} \:\:\:\:\:$$ Reference area (normally the wings's area) $$y_{aileron} \:$$ spanwise center of the aileron-equipped part of the wing $$v_\infty \:\:\:\:\:$$ true flight speed $$q_\infty \:\:\:\:\:$$ dynamic pressure Depending on the relative chord length of the aileron, this formula is good for maximum deflections of 20° of a 20% chord aileron or 15° deflection of a 30% chord aileron. Remember: This is a rough estimate for straight wings. • What are the units in the first equation? The right hand side appears to have units of [seconds], assuming p is dimensionless, while the left-hand-side appears to be dimensionless. – supergra May 14 at 19:11 • @supergra: No, it's dimensionless. I just realised that I confused p with $\omega_x$ in the first two equations. Thank you for finding this mistake! – Peter Kämpf May 14 at 21:25
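To see how the pieces of the answer fit together numerically, here is a hedged Python sketch of the roll-rate route (the steady-roll equilibrium and the unswept-wing roll-damping formula above). All numeric inputs are illustrative placeholders, not data for any real aircraft:

```python
import math

def aileron_moments(omega_x, b, v, xi_l, xi_r, AR, S_ref, rho=1.225):
    """Rolling moment of each aileron from a measured steady roll rate.

    omega_x      steady roll rate [rad/s]
    b            wing span [m],  v  true airspeed [m/s]
    xi_l, xi_r   aileron deflections [rad] (opposite signs by convention)
    AR           aspect ratio,  S_ref  wing reference area [m^2]
    rho          air density [kg/m^3]
    """
    c_lp = -0.25 * math.pi * AR / (math.sqrt(AR**2 / 4 + 4) + 2)  # roll damping, unswept wing
    c_lxi = -c_lp * omega_x * b / (v * (xi_l - xi_r))             # aileron effectiveness
    q_inf = 0.5 * rho * v**2                                      # dynamic pressure
    return {"left":  c_lxi * xi_l * S_ref * b * q_inf,            # moment per aileron [N*m]
            "right": c_lxi * xi_r * S_ref * b * q_inf}

# Illustrative, made-up numbers for a small single-engine airplane:
print(aileron_moments(omega_x=math.radians(40), b=10.0, v=50.0,
                      xi_l=math.radians(15), xi_r=math.radians(-12),
                      AR=7.0, S_ref=14.0))
```

Measuring the steady roll rate at a known speed and deflection is enough; everything else in the sketch comes from the wing geometry, so the result can be converted to the hundreds-of-kg·m precision asked for in the question.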
## y a plus de mouchoirs au bureau des pleurs Posted in pictures, University life with tags , , , , , , , , , on January 10, 2019 by xi'an Once the French government started giving up to some requests of the unstructured “gilets jaunes” protesters, it was like a flood or flush gate had opened and every category was soon asking for a rise (in benefits) and a decrease (in taxes) or the abolition of a recent measure (like the new procedure for entering university after high school). As an illustration, I read a rather bemusing tribune in Le Monde from a collective of PhD students against asking non-EU students (including PhD students) to pay fees to study in French universities. This may sound a bit of a surrealistic debate from abroad, but the most curious point in the tribune [besides the seemingly paradoxical title of students against Bienvenue En France!] is to argue that asking these students to pay the intended amount would bring their net stipends below the legal minimum wage, considering that they are regular workers… (Which is not completely untrue when remembering that in France the stipends are taxed for income tax and retirement benefits and unemployment benefits, meaning that a new PhD graduate with no position can apply for these benefits.) It seems to me that the solution adopted in most countries, namely that the registration fees are incorporated within the PhD grants, could apply here as well… The other argument that raising these fees from essentially zero to 3000 euros is going to stop bright foreign students to do their PhD in France is not particularly strong when considering that the proportion of foreign students among PhD students here is slightly inferior to the proportion in the UK and the US, where the fees are anything but negligible, especially for foreign students. ## Le Monde puzzle [#1076] Posted in Books, Kids, R, Travel with tags , , , , , , , , , on December 27, 2018 by xi'an A cheezy Le Monde mathematical puzzle : (which took me much longer to find [in the sense of locating] than to solve, as Warwick U does not get a daily delivery of the newspaper [and this is pre-Brexit!]): Take a round pizza (or a wheel of Gruyère) cut into seven identical slices and turn one slice upside down. If the only possibly moves are to turn three connected slices to their reverse side, how many moves at least are needed to recover the original configuration? What is the starting configuration that requires the largest number of moves? Since there are ony N=2⁷ possible configurations, a brute force exploration is achievable, starting from the perfect configuration requiring zero move and adding all configurations found by one additional move at a time… Until all configurations have been visited and all associated numbers of steps are stable. 
Here is my R implementation

nztr=lengz=rep(-1,N) #length & ancestor
nztr[0+1]=lengz[0+1]=0
fundz=matrix(0,Z,Z) #Z=7
for (i in 1:Z){ #only possible moves
  fundz[i,c(i,(i+1)%%Z+Z*(i==(Z-1)),(i+2)%%Z+Z*(i==(Z-2)))]=1
  lengz[bit2int(fundz[i,])+1]=1
  nztr[bit2int(fundz[i,])+1]=0}
while (min(lengz)==-1){ #second loop omitted
  for (j in (1:N)[lengz>-1])
  for (k in 1:Z){
    m=bit2int((int2bit(j-1)+fundz[k,])%%2)+1
    if ((lengz[m]==-1)|(lengz[m]>lengz[j]+1)){
      lengz[m]=lengz[j]+1;nztr[m]=j}
  }}

Which produces a path of length five returning (1,0,0,0,0,0,0) to the original state:

> nztry(2)
[1] 1 0 0 0 0 0 0
[1] 0 1 1 0 0 0 0
[1] 0 1 0 1 1 0 0
[1] 0 1 0 0 0 1 0
[1] 1 1 0 0 0 0 1
[1] 0 0 0 0 0 0 0

and a path of length seven in the worst case:

> nztry(2^7)
[1] 1 1 1 1 1 1 1
[1] 1 1 1 1 0 0 0
[1] 1 0 0 0 0 0 0
[1] 0 1 1 0 0 0 0
[1] 0 1 0 1 1 0 0
[1] 0 1 0 0 0 1 0
[1] 1 1 0 0 0 0 1
[1] 0 0 0 0 0 0 0

Since the R code was written for an arbitrary number Z of slices, I checked that there is no solution for Z being a multiple of 3.

Posted in Books, Kids, pictures with tags , , , , , , , on December 16, 2018 by xi'an

A first graph in Le Monde about the impact of the recent tax changes on French households as a percentage of net income with negative values at both ends, except for a small spike at about 10% and another one for the upper 1%, presumably linked with the end of the fortune tax (ISF). A second one showing incompressible expenses by income category, with poorest households facing a large constraint on lodging, missing the fraction due to taxes. Unless the percentage is computed after tax. A last and amazing one detailing the median monthly income per socio-professional category, not because of the obvious typo on the blue collar median 1994!, but more fundamentally because retirees have a median income in the upper part of the range. (This may be true in most developed countries, I was just unaware of this imbalance.)

## Le Monde puzzle [#1075]

Posted in Books, Kids, R with tags , , , , on December 12, 2018 by xi'an

A new Le Monde mathematical puzzle in the digit category:

Find the largest number such that each of its internal digits is strictly less than the average of its two neighbours. Same question when all digits differ.

For instance, n=96433469 is such a number. When trying pure brute force (with the usual integer2digits function!)

le=solz=3
while (length(solz)>0){
  solz=NULL
  for (i in (10^(le+1)-1):(9*10^le+9)){
    x=as.numeric(strsplit(as.character(i), "")[[1]])
    if (min(x[-c(1,le+1)]<(x[-c(1,2)]+x[-c(le,le+1)])/2)==1){
      print(i);solz=c(solz,i); break()}}
  le=le+1}

this is actually the largest number returned by the R code. There is no solution with 9 digits. Adding an extra condition

le=solz=3
while (length(solz)>0){
  solz=NULL
  for (i in (10^(le+1)-1):(9*10^le+9)){
    x=as.numeric(strsplit(as.character(i), "")[[1]])
    if ((min(x[-c(1,le+1)]<(x[-c(1,2)]+x[-c(le,le+1)])/2)==1)&
        (length(unique(x))==le+1)){
      print(i);solz=c(solz,i); break()}}
  le=le+1}

produces n=9520148 (seven digits) as the largest possible integer.

## Le Monde puzzle [#1078]

Posted in Books, Kids, R with tags , , , , , , on November 29, 2018 by xi'an

Recalling Le Monde mathematical puzzle first competition problem

Given yay/nay answers to the three following questions about the integer 13≤n≤1300 (i) is the integer n less than 500? (ii) is n a perfect square? (iii) is n a perfect cube? n cannot be determined, but it is certain that any answer to the fourth question (iv) are all digits of n distinct? allows to identify n. 
What is n if the answer provided for (ii) was false.

When looking at perfect squares less than 1300 (33) and perfect cubes less than 1300 (8), there exists one single common integer less than 500 (64) and one single above (729). Hence, it is not possible that answers to (ii) and (iii) are both positive, since the final (iv) would then be unnecessary. If the answer to (ii) is negative and the answer to (iii) is positive, it would mean that the value of n is either 512 or 10³ depending on the answer to (i), excluding numbers below 500 since there is no unicity even after (iv). When switching to a positive answer to (ii), this produces 729 as the puzzle solution. Incidentally, while Amic, Robin, and I finished among the 25 ex-aequos of the competition, none of us reached the subsidiary maximal number of points to become the overall winner. It may be that I will attend the reward ceremony at Musée des Arts et Métiers next Sunday.

## Le Monde puzzle [#1075]

Posted in Books, Kids, R with tags , , , , , , , on November 22, 2018 by xi'an

A Le Monde mathematical puzzle from after the competition:

A sequence of five integers can only be modified by subtracting an integer N from two neighbours of an entry and adding 2N to the entry. Given the configuration below, what is the minimal number of steps to reach non-negative entries everywhere? Is this feasible for any configuration?

As I quickly found a solution by hand in four steps, but missed the mathematical principle behind!, I was not very enthusiastic in trying a simulated annealing version by selecting the place to change inversely proportional to its value, but I eventually tried and also obtained the same solution:

     [,1] [,2] [,3] [,4] [,5]
       -3    1    1    1    1
        1   -1    1    1   -1
        0    1    0    1   -1
       -1    1    0    0    1
        1    0    0    0    0

But (update!) Jean-Louis Fouley came up with one step less!

     [,1] [,2] [,3] [,4] [,5]
       -3    1    1    1    1
        3   -2    1    1   -2
        2    0    0    1   -2
        1    0    0    0    0

The second part of the question is more interesting, but again without a clear mathematical lead, I could only attempt a large number of configurations and check whether all admitted "solutions". So far none failed.

## Le Monde puzzle [#1073]

Posted in Books, Kids, R with tags , , , , , , on November 3, 2018 by xi'an

And here is Le Monde mathematical puzzle last competition problem

Find the number of integers such that their 15 digits are all between 1,2,3,4, and the absolute difference between two consecutive digits is 1. Among these numbers how many have 1 as their three-before-last digit and how many have 2?

Combinatorics!!! While it seems like a Gordian knot because the number of choices depends on whether or not a digit is an endpoint (1 or 4), there is a nice recurrence relation between the numbers of such integers with n digits and leftmost digit i, namely that n⁴=(n-1)³, n³=(n-1)²+(n-1)⁴, n²=(n-1)¹+(n-1)³, n¹=(n-1)² with starting values 1¹=1²=1³=1⁴=1 (and hopefully obvious notations). Hence it is sufficient to iterate the corresponding transition matrix $M= \left(\begin{matrix}0 &1 &0 &0\\1 &0 &1 &0\\0 &1 &0 &1\\0 &0 &1 &0\\\end{matrix} \right)$ on this starting vector 14 times (with R, which does not enjoy a built-in matrix power) to end up with 15¹=610, 15²=987, 15³=987, 15⁴=610 which leads to 3194 different numbers as the solution to the first question. As pointed out later by Amic and Robin in Dauphine, this happens to be twice Fibonacci's sequence. For the second question, the same matrix applies, with a different initial vector. 
Out of the 3+5+5+3=16 different integers with 4 digits, 3 start with 1 and 5 with 2. Then multiplying (3,0,0,0) by M¹¹ leads to 267+165=432 different values for the 15 digit integers and multiplying (0,5,0,0) by M¹¹ to 445+720=1165 integers. (With the reassuring property that 432+1165 is half of 3194!) This is yet another puzzle in the competition that is of no special difficulty and with no further challenge going from the first to the second question…
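For readers who prefer to cross-check the matrix iteration numerically, here is a short sketch in Python/NumPy (the blog's own computations are in R; the matrix and starting vectors are exactly the ones above, with the 4-digit suffixes extended by the 11 remaining digits):

```python
import numpy as np

M = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

# First question: counts of 15-digit strings, starting from the length-1 counts (1,1,1,1).
v = np.linalg.matrix_power(M, 14) @ np.ones(4, dtype=int)
print(v, v.sum())                                              # [610 987 987 610] 3194

# Second question: extend the 4-digit suffixes by 11 more digits on the left.
print(np.linalg.matrix_power(M, 11) @ np.array([3, 0, 0, 0]))  # sums to 432
print(np.linalg.matrix_power(M, 11) @ np.array([0, 5, 0, 0]))  # sums to 1165
```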
Opuscula Math. 35, no. 5 (2015), 739-773 http://dx.doi.org/10.7494/OpMath.2015.35.5.739 Opuscula Mathematica # Analytic continuation of solutions of some nonlinear convolution partial differential equations Hidetoshi Tahara Abstract. The paper considers a problem of analytic continuation of solutions of some nonlinear convolution partial differential equations which naturally appear in the summability theory of formal solutions of nonlinear partial differential equations. Under a suitable assumption it is proved that any local holomorphic solution has an analytic extension to a certain sector and its extension has exponential growth when the variable goes to infinity in the sector. Keywords: convolution equations, partial differential equations, analytic continuation, summability, sector. Mathematics Subject Classification: 45K05, 45G10, 35A20. Full text (pdf) 1. W. Balser, From Divergent Power Series to Analytic Functions - Theory and Application of Multisummable Power Series, Lecture Notes in Mathematics, No. 1582, Springer, 1994. 2. W. Balser, Multisummability of formal power series solutions of partial differential equations with constant coefficients, J. Differential Equations 201 (2004) 1, 63-74. 3. B.L.J. Braaksma, Multisummability of formal power series solutions of nonlinear meromorphic differential equations, Ann. Inst. Fourier 42 (1992) 3, 517-540. 4. R. Gérard, H. Tahara, Singular Nonlinear Partial Differential Equations, Aspects of Mathematics, vol. E 28, Vieweg-Verlag, Wiesbaden, Germany, 1996. 5. L. Hörmander, Linear Partial Differential Operators, Die Grundlehren der mathematischen Wissenschaften, Bd. 116, Academic Press Inc., Publishers, New York, 1963. 6. Z. Luo, H. Chen, C. Zhang, Exponential-type Nagumo norms and summabolity of formal solutions of singular partial differential equations, Ann. Inst. Fourier 62 (2012) 2, 571-618. 7. M. Nagumo, Über das Anfangswertproblem Partieller Differentialgleichungen, Japan. J. Math. 18 (1941), 41-47. 8. S. Ouchi, Multisummability of formal solutions of some linear partial differential equations, J. Differential Equations 185 (2002) 2, 513-549. 9. S. Ouchi, Multisummability of formal power series solutions of nonlinear partial differential equations in complex domains, Asymptot. Anal. 47 (2006) 3-4, 187-225. 10. J.-P. Ramis, Y. Shibuya, A theorem concerning multisummability of formal solutions of non linear meromorphic differential equations, Ann. Inst. Fourier 44 (1994) 3, 811-848. 11. H. Tahara, H. Yamazawa, Multisummability of formal solutions to the Cauchy problem for some linear partial differential equations, J. Differential Equations 255 (2013) 10, 3592-3637. • Hidetoshi Tahara • Sophia University, Department of Information and Communication Sciences, Kioicho, Chiyoda-ku, Tokyo 102-8554, Japan • Communicated by P.A. Cojuhari. • Revised: 2014-12-12. • Accepted: 2015-01-23. • Published online: 2015-04-27.
Circle 12 05 (Lesson4) – We Solve Problems

#### Circle 12 05 (Lesson4)
Circle name: Circle 12 05
Lesson name: Lesson4
Starts at: 13.05.2020 12:00
Problems:

#### Dissections (other) 11-13
Decipher the following rebus. [Rebus image] All the digits indicated by the letter "E" are even (not necessarily equal); all the numbers indicated by the letter O are odd (also not necessarily equal).

#### Processes and operations, Theory of algorithms (other) 11-14
There are 6 locked suitcases and 6 keys for them. It is not known which keys are for which suitcase. What is the smallest number of attempts you need in order to open all the suitcases? How many attempts would you need if there are 10 suitcases and keys instead of 6?

#### Theory of algorithms (other) 11-13
48 blacksmiths must shoe 60 horses. Each blacksmith spends 5 minutes on one horseshoe. What is the shortest time they should spend on the work? (Note that a horse cannot stand on two legs.)

#### Theory of algorithms (other) 11-13
Decipher the following rebus. Despite the fact that only two figures are known here, and all the others are replaced by asterisks, the question can be restored. [Rebus image]

#### Puzzles 11-14
Decode this rebus: replace the asterisks with numbers such that the equalities in each row are true and such that each number in the bottom row is equal to the sum of the numbers in the column above it. [Rebus image]

#### Puzzles 11-14
In the rebus in the diagram below, the arithmetic operations are carried out from left to right (even though the brackets are not written). For example, in the first row "$** \div 5 + * \times 7 = 4*$" is the same as "$((** \div 5) +*) \times 7 = 4*$". Each number in the last row is the sum of the numbers in the column above it. The result of each $n$-th row is equal to the sum of the first four numbers in the $n$-th column. All of the numbers in this rebus are non-zero and do not begin with a zero, however they could end with a zero. That is, 10 is allowed but not 01 or 0. Solve the rebus. [Rebus image]

#### Equations of higher order (other), Integer and fractional parts. Archimedean property 13-15
During the chess tournament, several players played an odd number of games. Prove that the number of such players is even.

#### Processes and operations, Theory of algorithms (other) 12-14
A traveller rents a room in an inn for a week and offers the innkeeper a chain of seven silver links as payment – one link per day, with the condition that they will be paid every day. The innkeeper agrees, with the condition that the traveller can only cut one of the links. How did the traveller manage to pay the innkeeper?
# Opaque-to-transparent gradient lost in embedded PDF figure

I have a figure created with Inkscape as SVG and exported as PDF. When I view the PDF directly (e.g. in OS X Preview), it shows correctly. When embedding it in a LaTeX document, it has problems with gradients that go from opaque white to transparent. They appear matte white on the final PDF output (after pdflatex). I am using the `baposter.cls` by Brian Amberg. The figures appear on colored column boxes. In the attached screenshot the figure is on the left side (showing the gradient), on the right side it is embedded into the poster (showing opaque white instead). The colours don't seem to matter. I can have a red-to-transparent gradient, the same thing happens (renders matte red in the final PDF). Now this comp.tex.pdf thread suggests it might be a problem of Ghostscript. I remember installing Ghostscript 9, I have these on my machine:

/opt/local/share/ghostscript/9.05
/usr/local/share/ghostscript/8.71

But TeXShop I think uses TeX Live (I'm on version 2010), and its manual says: Because of the importance of Perl and Ghostscript, TEX Live includes 'hidden' copies of these programs. Should I point TeX Live to the other Ghostscript location? Should I update TeX Live? Thanks

- I'm afraid that this sounds off-topic. Essentially, it's an InkScape question, not a TeX one, although I do of course understand the wider link here. Perhaps one for SuperUser? – Joseph Wright Jul 5 '12 at 7:49
Well, svg is a common tag, and there are various related questions. I did use Inkscape to create the file, but it displays perfect in Mozilla, so hardly anything with the SVG is wrong. I can delete the Inkscape tag, because it is about getting the SVG into my LaTeX file and showing properly, with whatever technique or program. – Emit Taste Jul 5 '12 at 9:59
Updated to TeX Live 2011 (which says it uses Ghostscript 9.02) – "This is pdfTeX, Version 3.1415926-2.3-1.40.12 (TeX Live 2011)". Still the same issue. – Emit Taste Jul 5 '12 at 11:57
Can you give a link to your SVG file? My own tests show no problem. – Paul Gaborit Jul 5 '12 at 22:28

Here are my own tests (with your SVG file saved as PDF by inkscape). First test:

\documentclass[tikz]{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\node[minimum size=10cm,fill=blue!30]%
{\includegraphics[width=10cm]{event_pushc_transp}};
\end{tikzpicture}
\end{document}

Second test:

\documentclass{standalone}
\usepackage{graphicx}
\usepackage{xcolor}
\pagecolor{blue!30}
\begin{document}
\includegraphics[width=10cm]{event_pushc_transp}
\end{document}

And the result:

- Thanks for testing, PolGab. I ran your small test documents -- I still don't get the gradient here. So which Inkscape version did you use to save the PDF? Mac users seem to be left out with 0.48.3 so far :-( – Emit Taste Jul 6 '12 at 14:48
• @EmitTaste My version is Inkscape 0.48.3.1 r9886 (Mar 29 2012). – Paul Gaborit Jul 6 '12 at 20:49
• This solves my issues with transparent gradients within included PDFs, but it no longer works if I use article instead of `standalone`. Is there a solution for articles? What's different? – Alec Jacobson Aug 23 '13 at 9:21

I can open the figure pdf in Adobe Acrobat Pro 9, use Save-As with option PDF->Optimized, and the re-written file now maintains the transparency within that file, but still not between the file and the outer PDF. So a workaround is to add an opaque rectangle with the desired background colour to the figure PDF, and re-write with Acrobat :-/
# Optimal control via weighted least-squares

[Plot: the output $y$ and the target $y^\mathrm{des}$ against $t$.]

The input signal $x = (x_1,\ldots,x_T)$ is to be chosen. The output $y$ is given by $y_t = \sum_{i=1}^{k} h_i x_{t-i}$. ($x_i$ is taken to be 0 for $i \leq 0$.) The goal is to choose the input signal $x$ such that $y \approx y^\mathrm{des}$ and $x$ is smooth. To measure how well $y$ tracks $y^\mathrm{des}$, we use the cost $J^\mathrm{track} = \sum_{t=1}^T (y_t^\mathrm{des} - y_t)^2$. To measure the roughness of $x$, we use $J^\mathrm{rough} = \sum_{t=2}^T (x_t - x_{t-1})^2$. To find $x$ we minimize $J^\mathrm{track} + \gamma J^\mathrm{rough}$, where $\gamma > 0$ controls the amount of smoothing on the input.

[Interactive plot: the chosen input $x$ against $t$ for a slider value of $\gamma$, together with the resulting roughness and tracking costs.]
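Since $\gamma$ only weights two quadratic costs, the minimizer can be computed as a single stacked least-squares solve. Here is a hedged NumPy sketch (the impulse response and target below are made-up toy values):

```python
import numpy as np

def smooth_tracking_input(h, y_des, gamma):
    """Minimize ||y_des - H x||^2 + gamma ||D x||^2 by stacked least squares.

    h      impulse response (h_1, ..., h_k), so that y_t = sum_i h_i x_{t-i}
    y_des  desired output, length T
    gamma  smoothing weight (> 0)
    """
    T = len(y_des)
    # H is T x T, strictly lower triangular: y_t depends on x_{t-1}, ..., x_{t-k}.
    H = np.zeros((T, T))
    for i, hi in enumerate(h, start=1):
        H += hi * np.eye(T, k=-i)
    # D x returns the first differences x_t - x_{t-1}, t = 2..T.
    D = np.diff(np.eye(T), axis=0)
    A = np.vstack([H, np.sqrt(gamma) * D])
    b = np.concatenate([y_des, np.zeros(T - 1)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Toy example: assumed 3-tap impulse response and a step target.
x = smooth_tracking_input(h=[0.3, 0.5, 0.2],
                          y_des=np.r_[np.zeros(10), np.ones(20)],
                          gamma=1.0)
print(x.shape)   # (30,)
```

Increasing `gamma` trades tracking accuracy for a smoother input, which is exactly the trade-off the slider above explores.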
# Key questions about philanthropy, part 3: Making and evaluating grants This post is third in a series on fundamental (and under-discussed) questions about philanthropy that we’ve grappled with in starting a grantmaking organization (see previous link for the series intro, and this link for the second installment). This post covers the following questions: • When making a grant, should we focus most on evaluating the strategy/intervention, the leadership, or something else? We think both are very important; for a smaller grant we hope to be excited about one or the other, and for a larger grant we hope to thoroughly assess both. A couple of disanalogies between philanthropy and for-profit investing point to a relatively larger role for evaluating strategies/interventions, relative to people. More • For a given budget, is it best to make fewer and larger grants or more numerous and smaller grants? We currently lean toward the former. Most of the grants we’ve made so far are either (a) a major grant that we’ve put major time into or (b) a smaller grant that we’ve put less time into, in the hopes of seeding a project that could raise more money down the line. More • What sort of paperwork should accompany a grant? Funders often require grantees to complete lengthy writeups about their plans, strengths, weaknesses, and alignment with funder goals. So far, we’ve taken a different approach: we create a writeup ourselves and work informally with the grantee to get the information we need. We do have a standard grant agreement that covers topics such as transparency (setting out our intention to write publicly about the grant) and, when appropriate, research practices (e.g. preregistration and data sharing). More • What should the relationship be between different funders? How strongly should we seek collaboration, versus seeking to fund what others won’t? It seems to us that many major funders greatly value collaboration, and often pursue multi-funder partnerships. We don’t fully understand the reasons for this and would like to understand them better. Our instincts tend to run the other way. All else equal, we prefer to fund things that are relatively neglected by other funders. We see a lot of value in informal contact with other funders – in checking in, discussing potential grants, and pitching giving opportunities – but a more formal collaboration with another staffed funder would likely introduce a significant amount of time cost and coordination challenges, and we haven’t yet come across a situation in which that seemed like the best approach. More • How should we evaluate the results of our grants? Of all the questions in this series, this is the one we’ve seen the most written about. Our approach is very much case-by-case: for some grants, we find it appropriate to do metrics-driven evaluation with quantifiable targets, while for others we tend to have a long time horizon and high tolerance for uncertainty along the way. More When making a grant, should we focus most on evaluating the strategy/intervention, the leadership, or something else? We’ve received a fair amount of advice to focus more on supporting the best people than on supporting the best ideas, since execution is extremely important and ideas tend to evolve greatly as a project progresses. 
On the flip side, we believe it is often very difficult to evaluate people with any reliability, especially when working with people who don’t have significant track records (as may be necessary when funding novel projects and nascent fields). I note that I see a couple of disanalogies between philanthropy and for-profit investing that point to a relatively larger role for evaluating strategies/interventions, relative to people: • In the nonprofit world, it is quite possible – and probably common – for a project to have enormous success according to one person and one set of values, while having little success according to another. (By contrast, in the for-profit world, there is widespread agreement on the ultimate goal of profitability.) In my view, funding an outstanding leader in a low-impact cause can easily result in a project that succeeds on its own terms but still doesn’t represent an outstanding grant. In addition – as discussed in a previous post – it is often a funder’s responsibility to try to attract people into particular causes and missions, rather than supporting whatever plans the most capable people have already formed. • I also believe – weakly, and based on informal observations over the years – that it is often possible for a nonprofit to start with weak leadership and improve over time, in a way that would be much less likely in the for-profit sector. In the for-profit sector, if someone likes an organization’s goal but thinks its execution is poor, they’re likely to start a competitor; in the nonprofit world, they may be more likely to try to join (or otherwise help) the organization. In general, for relatively small “seed” grants, we’ve had the attitude that we want to be excited about the idea or the people, but not necessarily both, since it can be very difficult to evaluate either with confidence. For the biggest bets, we believe it is important to do what we can to evaluate both. For a given budget, is it best to make fewer and larger grants or more numerous and smaller grants? We know relatively little about other funders’ views on this question, other than that the Sandler Foundation seems to favor fewer and larger grants compared to most foundations we’ve seen. Our current thinking is based on a couple of conceptual points: • At the moment, our scarcest resource – and the one we have to be most careful about budgeting – is time rather than money. (We believe this will often, though not always, be the case for a philanthropist.) For a given level of due diligence, making fewer and bigger grants means being able to give away more; for a given budget, making fewer and bigger grants means being able to put more attention into each grant. All else equal, we see this as one reason to lean toward making fewer and bigger grants. • When there is very little track record for a project or team, something in the range of $100,000-$250,000 often seems like enough to (a) encourage the grantee to commit more seriously to fundraising and planning for a project; (b) improve the grantee’s odds of securing funds from others; (c) give the grantee enough resources to build up a basic track record – a year or two of full-time work from a small team – which can become the basis for a later, larger grant (either from us or from another funder). 
We’d hesitate to make a larger grant unless we had a reasonable sense for the track record of the organization or team; we’d fear that a larger grant could be inefficient (by missing out on the opportunities for leverage just listed) and could also encourage a grantee to build capacity prematurely. • Reconciling the above two points, the grants we’ve made so far have generally either been (a) major grants that we’ve put major time into or (b) smaller grants that we’ve put less time into, in the hopes of seeding a project that could raise more money (from us or others) down the line. • At the “seed” stage, we’re comfortable accounting for a very large part of a team’s or project’s budget, since our aim is to fund exploratory/preliminary work that may or may not lead to more. For larger grants, we keep in mind that providing too much of an organization’s funding can make it dependent on us; such a situation may be necessary in some cases, but all else equal, organizations with more diverse funding bases are in better positions. What sort of paperwork should accompany a grant? Based on GiveWell’s experience as a grantee, as well as grant proposals we’ve seen from nonprofits we’ve worked with, it appears that funders often require grantees to complete lengthy writeups about their plans, strengths, weaknesses, and alignment with funder goals. Such writeups also often include setting out specific goals that later progress can be measured against. So far, we’ve tried to take a different approach: we’ve tried to write up the case for each grant ourselves, and we’ve tried to get information from grantees in the form of conversations, other informal communications, and already-existing documents rather than requesting that new materials be created. We believe this approach allows us to maintain quality control over our writeups and avoid excessively burdening grantees. Ultimately, the purpose of our writeups is to assess grants according to our own values and strategies, so it seems logical for us to develop the capacity for creating such writeups, rather than asking our grantees to do so. When it comes to setting out specific goals, we generally include a section on this, but often stick to long-term and/or highly general goals, because we think this is appropriate for many sorts of grants (more below). We generally ask grantees to sign a grant agreement covering topics such as confidentiality, our expectations around checking in and sharing public writeups, our preferences that any original research share its data and create a pre-analysis plan, and legal issues. What should the relationship be between different funders? How strongly should we seek collaboration, versus seeking to fund what others won’t? It seems to us that many major funders greatly value collaboration, and often pursue multi-funder partnerships. We don’t fully understand the reasons for this, and would like to understand them better. Our instincts tend to run the other way: we seek to fund the best work that we believe would have trouble getting funded otherwise. All else equal, seeing that other funders are interested in an approach makes us less interested in that approach. Sometimes, when a project fails to get interest from other funders, this can be a warning sign. Talking to funders who could have supported a project, and didn’t, can help us learn about possible shortcomings and reservations we would not have thought of otherwise. 
On the other hand, sometimes a lack of interest from other funders simply means that we’ve found a neglected opportunity, which is exactly what we’re looking for. For this reason, we generally seek to think about the funders who seem like the best fit to support a given project, then try to understand why they aren’t supporting it. Sometimes the reasons turn out to be compelling to us, and sometimes they end up seeming like reasons we are happy to discount (particularly when the reasons are “structural,” i.e., a project is considered to be out of another funder’s focus areas rather than having anything wrong with it). We seek to maintain contact with other funders who work on similar causes to us. One benefit of doing so is that we can learn from them. Another is that we can avoid doing work that is redundant with theirs. Having a practice of trying to understand why they don’t support the things we support, as described in the above paragraph, accomplishes some of each. Another benefit is that we can sometimes pitch promising giving opportunities to them, which can result in the projects we find most exciting getting more support. In some cases, another funder might be excited to commit all the funding that is needed, making our own support for a project unnecessary. We’ve also heard informally that many funders value “getting in on the ground floor,” so involving another funder early on may make a big difference to a project’s or organization’s long-term funding situation. But the kind of informal contact described above – checking in, discussing potential grants, pitching giving opportunities – falls well short of formal, large-scale, or long-term collaborations. Our feeling is that a collaboration of this sort with a staffed foundation – possessing its own focus areas, strategy and style – would likely introduce a significant amount of time cost and coordination challenges, and we haven’t yet come across a situation in which that seemed like the best approach. How should we evaluate the results of our grants? Out of all the questions we’ve listed, this may be the one that has been most extensively discussed in writing: • Money Well Spent, co-authored by Paul Brest (former President of the William and Flora Hewlett Foundation), is a guide to “strategic philanthropy.” It emphasizes setting out clear goals, success indicators, and milestones up front. As a Hewlett grantee, we have generally been asked to set quantifiable goals each year, while reporting on progress against last year’s goals. • Bill Schambra has written counterpoints to this philosophy (example) stressing the importance of local knowledge. He generally encourages funders to give in their local communities, and to “Go down to the neighborhood and check it out, armed with common sense.” • The Elusive Craft of Evaluating Advocacy gives multiple reasons for the difficulty of applying a “strategic philanthropy” approach to policy-oriented philanthropy. (We have previously written about this essay.) 
Its recommendations: The best way to evaluate an organization whose influence is extremely diffuse is for grant officers to be close to the political action and thus able to make informed judgment calls on how it conducts its core activities … Equally important is an organization’s strategic capacity, which can be defined not only as its formal strategic plan, or the wisdom of its senior leadership (two factors that funders tend to focus on), but also the organization’s overall ability to think and act collectively, and adapt to opportunities and challenges … Yet another way to measure an organization’s quality and influence is through “network evaluation”—figuring out its reputation and influence in its policy space. We believe the best approach to evaluation depends very much on the specific case. When evaluating GiveWell top charities – which are defined by relatively linear, quantifiable, evidence-backed uses of money – we set clear expectations and assess progress regularly, along “strategic philanthropy” lines. But we don’t think this approach is appropriate in all cases, particularly when working on very long time horizons and on high-risk goals, such that the points raised in “The Elusive Craft of Evaluating Advocacy” become more relevant. In general, our writeups on each grant include discussion of our plans for follow-up.
# No-Hole L(p,0)-Labelling of Cycles, Grids and Hypercubes

Abstract: In this paper, we address a particular case of the general problem of $\lambda$ labellings, concerning frequency assignment for telecommunication networks. In this model, stations within a given radius $r$ must use frequencies that differ at least by a value $p$, while stations that are within a larger radius $r'>r$ must use frequencies that differ by at least another value $q$. The aim is to minimize the span of frequencies used in the network. This can be modelled by a graph labelling problem, called the $L(p,q)$ labelling, where one wants to label vertices of the graph $G$ modelling the network by integers in the range $[0;M]$, while minimizing the value of $M$. $M$ is then called the $\lambda$ number of $G$, and is denoted by $\lambda_q^p (G)$. Another parameter that sometimes needs to be optimized is the fact that all the possible frequencies (i.e., all the possible values in the span) are used. In this paper, we focus on this problem. More precisely, we want that: (1) all the frequencies are used and (2) condition (1) being satisfied, the span must be minimum. We call this the *no-hole* $L(p,q)$ labelling problem for $G$. Let $[0;M']$ be this new span and call the $\nu$ number of $G$ the value $M'$, and denote it by $\nu^p_q(G)$. In this paper, we study a special case of no-hole $L(p,q)$ labelling, namely where $q=0$. We also focus on some specific topologies: cycles, hypercubes, 2-dimensional grids and 2-dimensional tori. For each of the topologies cited above, we give bounds on the $\nu_0^p$ number and show optimality in some cases. The paper is concluded by giving new results concerning the (general, i.e. not necessarily no-hole) $L(p,q)$ labelling of hypercubes.

Document type: Conference papers
https://hal.archives-ouvertes.fr/hal-00307789
Contributor: Guillaume Fertin
Submitted on: Tuesday, September 15, 2009 - 10:17:40 AM
Last modification on: Tuesday, July 17, 2018 - 5:30:01 PM
Document(s) archived on: Saturday, November 26, 2016 - 1:01:21 AM

### File
Lp0Sirocco04.pdf
Files produced by the author(s)

### Identifiers
• HAL Id: hal-00307789, version 1

### Citation
Guillaume Fertin, André Raspaud, Ondrej Sykora. No-Hole L(p,0)-Labelling of Cycles, Grids and Hypercubes. 11th International Colloquium on Structural Information & Communication Complexity (SIROCCO 2004), Jun 2004, Smolenice, Slovakia. pp.138-148. ⟨hal-00307789⟩
# How do I write a good edit summary?

As a user, I don't often ask questions (in fact, I believe I have only asked two questions in total across the entire network, excluding meta). However, I do a lot of answering, flagging, commenting and of course, editing. Because I like things to be consistent, intuitive, efficient and definite, I've developed a system of structure and keywords for my edit summaries. Other discussions on Big Meta and per-site metas make it clear that the quality of edit summaries can be the difference between acceptance and rejection, or lead to grief when somebody hasn't explained their intent or actions quite so effectively. Finally, if one writes a summary, they can confirm to themselves what they have done, and whether it's (all of) what they were aiming for. How do you write the summary when you make an edit? I will write an answer that describes the system and keywords I use. Other answers that extend this system, or better yet describe alternatives, will help people to understand what others are trying to say when they write an edit summary, as well as how they might write a better summary themselves.

Your edit summary is aimed at three slightly different audiences:

• reviewers, if you can only suggest edits. You want to convince them to approve your edit. If you are adding material that was in the comments, say so - normally just adding material doesn't get approved. If your change is minor but critical, this is where you explain that criticality so the suggestion isn't rejected as too minor.
• the OP, either in their role as reviewer of your suggested edit, or as a notification-clicker who wants to know what happened to their post. A comment that teaches the site norms or explains a seemingly arbitrary change isn't strictly required, but if you're taking the time to fix someone's post, you can probably also take the time to explain to them why this is a fix, not vandalism.
• posterity, when people look over the revision history. By far the smallest case even though it's a long tail. It probably doesn't matter what you say here, but I would encourage focusing on why over what - we can all see what is changed, the comment is a chance to add something more. I typically only care about posterity when I'm editing my own posts, and then I explain my thinking.

Use caution when your comment involves enforcing a norm newcomers are unaware of. Say you remove "Hey everyone, this is my first post, I hope it's ok" at the start of a post and "Thanks in advance, hope you can help, this is really urgent for me" at the end. Your edit comments could be:

• "removed fluff"
• "removed meta content"
• "removed meta content (see [link to site policy])"
• "trimmed the greeting and the thanks so readers can get to your actual question faster"

The first is likely to spark an argument or at least hurt feelings from the OP. The second doesn't help since people who include such content in their posts don't generally recognize the phrase "meta content". The third is backed up by site policy, though that may not reduce hurt feelings and instead encourage "site policy is stupid and rude" first meta postings. The fourth tells the OP what's in it for them and why the edit made their post better. (It's also easier to do than the third since you don't need to go find a link to prove you're right.)

I put the most care into edit summaries on sites where my edits are reviewed. I try to explain a why and a benefit to the OP on my other edits, but I don't always do so. And if I have nothing better to say than "spelling and formatting" on a site where the edit summary is optional, I leave it out. Such summaries add no value, so I spare myself the trouble of typing them.
From my personal experience, be precise. There is no definitive guide for writing the comments; you just need to communicate to the reviewer (including the OP) in case the edit is not so easy to understand. Try to put the things you edited in two or three words and use a comma-separated list, like Fixed broken links, corrected grammar & spelling, re-tagged. Avoid writing overly-long edit messages. If the edit is good, it generally speaks for itself.

I think writing a good edit summary that is concise and self-explanatory is important. Personally, I would usually describe what I changed in short sentences, for example:

• Formatted code
• Formatted error messages
• Corrected spelling/grammar
• Removed noise
• Improved general formatting

I do combine them sometimes:

• Formatted code & corrected spelling

Basically, the summary should summarise what you have edited, so that the OP understands why the post was edited and can keep in mind the mistakes they made. I don't think edit summaries should be very long or very precise, and definitely not in complete sentences, for example:

• I have edited paragraph 2 to display image inline and also edited paragraph 3 to remove blank lines.

Editing without leaving an edit summary can sometimes create a misunderstanding about why the post was edited.

So, I've described how I usually craft my edit summaries and what I think is useful to the general community. Keywords and keyphrases are listed in the order they are most commonly used. This is also the typical order in which they are included in an actual summary. The less common shorthand summaries may also be extended for precision.

• Sp/Gr or Sp/Gr/Synt - spelling and grammar; spelling, grammar and syntax. Used when making corrections to the language used in a question or answer. This includes misspellings, punctuation and general minor typography. Almost all my edits include this.
• Readability - for when the structure of the question or answer creates difficulty for the reader in following its progression of thought, or when superfluous commentary is included which is not necessary to the question and makes identifying the actual question difficult. Often used to counter the possibility of flags and close-votes for "unclear what you're asking".
• Tags - for when inappropriate tags are removed or reasonable tags are added.
• Title - for when the title has been adjusted, either to make it more precise or to make it consistent with the question body, or when correcting issues that would be considered Sp/Gr/Synt or Readability changes.
• Clarity or Clarification - for when a question is initially unclear but comments or discussion have found out what is meant to be asked about, and this explanation is being incorporated into the question.
• Formatting - for when the markdown, including links and images, in a question or answer are being removed or added or fixed.
• MathJax - for when MathJax or mhchem formatting is being altered, in order to differentiate from general markdown changes.
• Terminology - for when the correct phrases or words are missing or the wrong phrases or words have been used. This occurs especially in a technical context or where the commonly understood meanings lead to a question or answer making no sense.
• Removed [brief name of content] - for when text or other content is removed due to policy or best practice.
• [specific comment in context] - for when a specific change has been made, which may not fit well under a previous description, but would still be a significant change to the content of the question or answer.

• Explicitly saying "spelling/grammar" is much better than using abbreviations reviewers are unlikely to be familiar with – Cai Jan 1 '17 at 13:47
I bet you never knew there was a competition for artistic model airplane flying. Well now you do. Now that the Navy's Blue Angels are sequester-grounded, there are probably a few opportunities for this guy---so long as winds are calm. Scripts Partially Allowed, 6/28...er... no thanks. I make it a point not to visit websites where i have to scroll my noscript taskbar. Wow. Cubicle Jockey : Scripts Partially Allowed, 6/28...er... no thanks. I make it a point not to visit websites where i have to scroll my noscript taskbar. THAT What possible legitimate use do you have to pull in scripts from over a dozen other sites. In this day and age of security issues, malware and browser attacks do you really expect people to enable all of that just to look at the small dollop of content you're providing. Cubicle Jockey: Scripts Partially Allowed, 6/28...er... no thanks. I make it a point not to visit websites where i have to scroll my noscript taskbar. I think saw everything on that website but a video. free style indoor RC 3D flying? why yes, it is a thing and people all over the world compete in them. there are a million videos of such competitions.. on you tube no less.. no need to link to that site. here is an example of a competition flight its synchronized swimming / rhythmic gymnastics for the RC crowd. What's worse?  The saw it on Reddit' people, or the people who biatch about enabling scripts.. Reminds me of the Arch Linux joke. How do you know someone is running Arch Linux? Don't worry, they'll tell you.. HairBolus: its synchronized swimming / rhythmic gymnastics for the RC crowd. good way of putting it... Back in the day, the big thing like this was called Pattern Flying. also with its own specialized model aircraft. Pattern flying is more related to the full sized stunt pilot air shows. Lots of rules about aircraft control, line of site. Maneuvers that must be accomplished... etc, etc.. Pattern has given way to the 3D stuff. now that power to weight ratios have gotten so crazy with the brush less electric set ups and micro servos and radios. no more than a decade or so ago, a lot of this stuff wasn't possible. and a lot of advancement in the tech has come in the past five or so. Came to watch an RC plane vid. Never watched it... saw the link to Zehia Dehar first. This is worth finding the original video on youtube, not RC planes. Pretty good, but lets see him do it outside with a 20knot cross wind. Cerebral Knievel: HairBolus: its synchronized swimming / rhythmic gymnastics for the RC crowd. good way of putting it... Back in the day, the big thing like this was called Pattern Flying. also with its own specialized model aircraft. Pattern flying is more related to the full sized stunt pilot air shows. Lots of rules about aircraft control, line of site. Maneuvers that must be accomplished... etc, etc.. Pattern has given way to the 3D stuff. now that power to weight ratios have gotten so crazy with the brush less electric set ups and micro servos and radios.
no more than a decade or so ago, a lot of this stuff wasn't possible. and a lot of advancement in the tech has come in the past five or so. You're probably the only one here who would understand why Champaign Il was like RC Aircraft Nerdvana when I lived there a few years ago. \used to belong to the local RC club \\all the Horizon and Great Planes teams and fliers used our field \\\it was like hanging out at the local golf course down the street from Tiger's house darthdrafter: Came to watch an RC plane vid. Never watched it... saw the link to Zehia Dehar first. [thechive.files.wordpress.com image 500x281] This is worth finding the original video on youtube, not RC planes. The RC planes I can afford, and if I break them, I can rebuild them..  she seems like a very nice young lady though and I hope that she and her father will eventually work out their differences and come to a mutual and resolute understanding. CheapEngineer: Cerebral Knievel: HairBolus: its synchronized swimming / rhythmic gymnastics for the RC crowd. good way of putting it... Back in the day, the big thing like this was called Pattern Flying. also with its own specialized model aircraft. Pattern flying is more related to the full sized stunt pilot air shows. Lots of rules about aircraft control, line of site. Maneuvers that must be accomplished... etc, etc.. Pattern has given way to the 3D stuff. now that power to weight ratios have gotten so crazy with the brush less electric set ups and micro servos and radios. no more than a decade or so ago, a lot of this stuff wasn't possible. and a lot of advancement in the tech has come in the past five or so. You're probably the only one here who would understand why Champaign Il was like RC Aircraft Nerdvana when I lived there a few years ago. \used to belong to the local RC club \\all the Horizon and Great Planes teams and fliers used our field \\\it was like hanging out at the local golf course down the street from Tiger's house Oh yes.. that is the motherland for most all RC hobby stuff. kinda like how Portland OR is the motherland of Craftbeer darthdrafter: Came to watch an RC plane vid. Never watched it... saw the link to Zehia Dehar first. [thechive.files.wordpress.com image 500x281] This is worth finding the original video on youtube, not RC planes. That was actually a damn fine job flying to the music, to be honest. styckx: What's worse? The saw it on Reddit' people, or the people who biatch about enabling scripts.. It's not about that. You really have to have something like that enabled to appreciate just how farked up that page was. In fact, it was easily the worst page I have seen ever. It would be like stumbling into the worst slideshow on the entire internet or something. Champaign IL is he home of Tower Hobbies, the biggest mail-order hobby supplier, and one of the oldest. It was created by a student at U. of Illinois Champaign/Urbana, who started his mail-order business out of the student residence tower where he was living on-campus.  When Tower joined up with Horizon, they became like The Borg of hobby suppliers.  They put most of the bricks and mortar local hobby shops out of business. even BEFORE the internet. Anyhow, there's a biatchin' awesome hobby store in Champaign called Slot and Wing. HUGE place, they sell everything, they also still have the giant scale slot car tracks that used to be so popular in California, the 1:32  scale and  1:24, multi-lane... though their RC car stuff is more popular. They also have an indoor archery range. 
Anyway, Slot & Wing picks up all the mail-order returns that Tower gets from people that broke their models or that returned them due to damage from shipping, etc.  So there's a huge pile of airplane, heli, car, and boat parts and system components in the  "scratch and dent" department... you can assemble a complete plane and radio for pennies on the dollar, if you're a savvy RC hobby guy. Or pick up spare pars for the stuff you already have, cheaper than the catalogs. Well, that was cool. That part near the end, where he flew the plane over the rafters without crashing into them was pretty awesome! :D Any Pie Left: Champaign IL is he home of Tower Hobbies, the biggest mail-order hobby supplier, and one of the oldest. It was created by a student at U. of Illinois Champaign/Urbana, who started his mail-order business out of the student residence tower where he was living on-campus.  When Tower joined up with Horizon, they became like The Borg of hobby suppliers.  They put most of the bricks and mortar local hobby shops out of business. even BEFORE the internet. Anyhow, there's a biatchin' awesome hobby store in Champaign called Slot and Wing. HUGE place, they sell everything, they also still have the giant scale slot car tracks that used to be so popular in California, the 1:32  scale and  1:24, multi-lane... though their RC car stuff is more popular. They also have an indoor archery range. Anyway, Slot & Wing picks up all the mail-order returns that Tower gets from people that broke their models or that returned them due to damage from shipping, etc.  So there's a huge pile of airplane, heli, car, and boat parts and system components in the  "scratch and dent" department... you can assemble a complete plane and radio for pennies on the dollar, if you're a savvy RC hobby guy. Or pick up spare pars for the stuff you already have, cheaper than the catalogs. Spent lots of time @ SlotWing. It is like a garage sale in spots. Champaign was still cool because you got to see the planes they were working on, a year or so before they went on sale. Used to be I could look through the ads in the RC mags and find at least one ad that was shot at our field. And the Quicktime VR's in FS One are all from the Champaign area (whenever I'm homesick for old flying places, I fire that up.) \gotta give props to ABC Hobbycraft in Evansville, Indiana - if they still exist \\my Dad bought his first 4 channel radio from Tower back in the 70's stumpwiz: Now that the Navy's Blue Angels are sequester-grounded, there are probably a few opportunities for this guy---so long as winds are calm. Ah, that's the first I heard of that.  I just checked out the website, and sure enough, it looks like about half of their upcoming shows have been cancelled:   http://www.blueangels.navy.mil/show/ I grew up about an hour from Eglin AFB.  Can't say I would miss the crowds that the shows would pull in, but it was sweet catching the Angels doing impromptu stunts along the beaches in the week before the shows. Guessing the music would be hard to take.... if it was matched up with something that had some heart to it, wouldn't  be bad (like Pantera, new Device single out there would be a crowd pleaser, even Manson's remake of Tainted Love would make for some actual 'action' in the flying to score some real points). But the classical "music"... wouldn't be able to sit through it. /Before these new remote controlled frequencies were available, my best friend's father was into the RC planes. 
He bought, and built, one of the major ones out at the time ( about 30+ years ago) - think it was over $800 just for the plane and accessories if I remember it right; don't think remote came with it. He loads up my buddy and they go out to some huge field where they won't be bothered, clears out a patch with a weed eater and off he goes. Buddy said his dad didn't have the plane up over 2 or 3 minutes and here comes a damned big rig down the dirt road and his dad simply said, 'Shiat!!!!' and turned the plane around... but, too late. He heard the engine go to full speed then slowly go nose down and hit full force.$800, maybe 3 minutes, and about 10 minutes to clean up debris. Apparently the last 'instructions' the plane got was when his dad was wanting to bring it back and land, which was increasing speed and lowering altitude... and it just kept doing it, going faster and continuing to nose over... then CB fouled it up, from what he said. CB's screwed up the controls back then, completely... which I got to see for myself on his Tank (his dad bought him, and I seriously would love to have even today; 30 years later - it was awesome and looked as realistic as possible!! lol).  Using it in my yard and a lost trucker came down our residential street, tearing down tree branches and 'Left Turn Clyde,' the tank took off and hit the iron fence, splitting the tank's cannon....  (found out just how many cuss words Brian knew that afternoon.. and that truckers laugh at kids who cuss them out) //CSB time petered out Did they even have RC flying competitions 30 some years ago, and if so.. how?? Anyone with a high power CB could ruin the show - accidentally or... on purpose!!! (I would think, was never into them.. never had one, of any type, so never knew..) Helluva way to cheat back then if you knew ONE person was your only competition in the show. Or, you were in Vegas!!!!!! styckx: What's worse?  The saw it on Reddit' people, or the people who biatch about enabling scripts. The Redditors. Definitely the Redditors. I bet the little man in that plane was puking his guts out. TheMega: Guessing the music would be hard to take.... if it was matched up with something that had some heart to it, wouldn't  be bad (like Pantera, new Device single out there would be a crowd pleaser, even Manson's remake of Tainted Love would make for some actual 'action' in the flying to score some real points). But the classical "music"... wouldn't be able to sit through it. /Before these new remote controlled frequencies were available, my best friend's father was into the RC planes. He bought, and built, one of the major ones out at the time ( about 30+ years ago) - think it was over $800 just for the plane and accessories if I remember it right; don't think remote came with it. He loads up my buddy and they go out to some huge field where they won't be bothered, clears out a patch with a weed eater and off he goes. Buddy said his dad didn't have the plane up over 2 or 3 minutes and here comes a damned big rig down the dirt road and his dad simply said, 'Shiat!!!!' and turned the plane around... but, too late. He heard the engine go to full speed then slowly go nose down and hit full force.$800, maybe 3 minutes, and about 10 minutes to clean up debris. Apparently the last 'instructions' the plane got was when his dad was wanting to bring it back and land, which was increasing speed and lowering altitude... and it just kept doing it, going faster and continuing to nose over... then CB fouled it up, from what he said. 
CB's screwed up the controls back then, completely... which I got to see for myself on his Tank (his dad bought him, and I seriously would love to have even today; 30 years later - it was awesome and looked as realistic as possible!! lol).  Using it in my yard and a lost trucker came down our residential street, tearing down tree branches and 'Left Turn Clyde,' the tank took off and hit the iron fence, splitting the tank's cannon....  (found out just how many cuss words Brian knew that afternoon.. and that truckers laugh at kids who cuss them out) //C ... That had to be later than 30 years - maybe 40. 30 years ago you could use the 74 mhz frequencies for aircraft. Before that, it was 26 Mhz, way too close to the CB band. KCCO \but with No-Script and ad-block turned on. Well, the earliest RC plane that actually worked was  the Guff, flown by the Good brothers in or about 1940, when the plane had to be huge to carry lead-acid batteries, and instead of servos, you had escapement mechanisms: little rubber-band powered mechanical gadgets to wiggle the rudder when a solenoid was tripped by pulsing the radio. You had to remember to push the button three times for a left, and five times for a right, then two more to straighten out... and the belt onions were HUGE.  Proportional radio control that was stable and reliable became mass-affordable ( a relative terms, it was still rich guys or ham radio guys that owned it then)  in the 60's thanks to transistors and simple IC's spun off from the space race. There were RC contests annually at the AMA Nationals since the 40's but it really boomed in the 60's and 70's.  RC scale and stunt competitions, racing on a closed course, and the afore-mentioned "pattern" contests. RC Pattern was like early figure skating, where the skaters actually had to draw patterns in the ice with their skating.   The RC Pattern guys were the elite then, their planes were complex and fast, using a tuned pipe muffler like motorcycles did, to increase engine power.  Pattern was about drawing the perfect figures in a certain sequence inside an imaginary "box" in the air, and the planes soon mutated to shapes that were optimized for this task, looking oddly proportioned compared to scale craft. I got into the hobby in the 70's, but the Pattern guys in the club were (almost) always a-holes. They were typically wealthy snobs but also aggressive guys that would use up the entire sky at the field practicing incessantly, so as to make it tough for the rest of us to find safe flying room. The Pattern guys would also break the curfews, starting engines too early and disturbing neighbors, overflying their property, and flying after six in the summertime, which pestered neighbors sitting down to dinner.  See, the tuned pipes did little as mufflers, they were just to increase power, and the prop noise was pretty bad too. I would be willing to bet the one in the video is electric. Any Pie Left: They put most of the bricks and mortar local hobby shops out of business. even BEFORE the internet. I remember seeing their ads in Model Aircraft News as a kid. Say, you remember a company called Byron's Originals? Their thing was giant scale planes, and I remember each year they had some big even by their HQ in Iowa. The one in the video  IS electric. Byron Originals still exists, or at least you can still get plans and some parts for their planes. But it's been eclipsed by a number of competitors. Byron's biggest contribution to the sport was in the early ducted fan jets. 
Lando Lincoln: styckx: What's worse?  The saw it on Reddit' people, or the people who biatch about enabling scripts. The Redditors. Definitely the Redditors. When there are more scripts than text, there is a problem.
## [email protected]

> One can get around this by using a pair of active characters. Denote
> these by "<" and ">"; then the above would read
> He said <She said to me <bar>>, and so on
> and \futurelet could be used.

This works for a few more cases, but only a few. What if the quote is inside a font change, or a section head, or any other command? You still have the potential problem of the lookahead coping with {}. Of course you can follow the above to its logical conclusion of implementing _everything_ via active characters and implementing your own parsing routines _in_ TeX rather than using the parser built into tex-the-program. This is certainly possible (some people have done exactly that) but TeX isn't really the best language for implementing a parser, and certainly isn't the fastest. If more involved contextual analysis is required in the input stream then it probably makes more sense to look to an extended system that can provide such extended functionality. (Something like omega's OTP processes.) They may not be quite what you want here, but the principle is the same: to make a controlled extension of the underlying system rather than try to build a tower of macros on the rather fragile sand that is tex-the-program.

David
7:28 AM What do you recommend for writing two columns in more, let's say, friendly way? I'm using paracol with \switchcolumn* to maintain the paragraphs in sync, but it's just not a very good approach writing a paragraph of each column alternately. @Miguel I gave up trying to align the opposing lines, and then I use standard \twocolumn @yo' It's a translation, so it really needs the paragraphs to align. I would like something that would wait until the end of the paragraph in the opposite column and then start a new paragraph. \switchcolumn does this quite well, it's just not very friendly. @Miguel it's not something I ever do but I think reledmac/reledpar packages have something for that as well? 8:15 AM @DavidCarlisle Thanks, David, I will look to it better. Until know didn't find anything, I'm trying the \stanza. 8:46 AM @DavidCarlisle It worked with reledmac and reledpar, thank you. Quack! 9:20 AM @PauloCereda qua qua qua @CarLaTeX ooh another duck @PauloCereda quack @CarLaTeX quack! 9:58 AM @PauloCereda Quacks tend to \infty :) 10:30 AM @CarLaTeX asymptotic analyis is an expertise of Knuth. :) @PauloCereda Wow! Ducks are very clever in asymptotic analysis @CarLaTeX yet we ducks are very naïve. :) @CarLaTeX yet we ducks are very naïve. :) @PauloCereda Of course! :):):) s/naïve/tasty/ @DavidCarlisle you are mean 10:39 AM Jun 29 at 16:15, by Paulo Cereda @DavidCarlisle you are not mean :) @DavidCarlisle oh no Oct 17 at 6:51, by David Carlisle 24 hours ago, by Harald Hanche-Olsen Oct 11 at 16:46, by David Carlisle Sep 10 at 12:03, by David Carlisle @HaraldHanche-Olsen do you ever get a feeling of déjà vu in this chat room? @HaraldHanche-Olsen ooh we need a fixed point @DavidCarlisle Do you know how to hide the numbers from the margins with this reledmac way? 10:52 AM @PauloCereda λ🍍.(λ🦆.🍍(🦆🦆))(λ🦆.🍍(🦆x)) @HaraldHanche-Olsen ooh well done! I love this so much! @Miguel no I never used the package, just "know of its existence" :-) @PauloCereda Oops, there'a a typo in there. Should have been … @PauloCereda λ🍍.(λ🦆.🍍(🦆🦆))(λ🦆.🍍(🦆🦆)) @HaraldHanche-Olsen ah @HaraldHanche-Olsen we star this one as well. :) @PauloCereda Kleene star? 11:00 AM @DavidCarlisle :) @HaraldHanche-Olsen oh no! I can't see to symbol from my smartphone @CarLaTeX Apparently, my smart phone is smarter than yours. @CarLaTeX I just got a new phone and so can compare two: on the older I see only the pineapple and on the newer (with android 7) I see both (but I have no idea what the calculus say and should better not try to find out if I want my work to get done today). @UlrikeFischer it's a fixed point thingy from lambda calculus, do not worry. :) @PauloCereda I didn't worry but I'm always so curious ;-) (and I did recognize lambda, the skak (chess) package uses the lambda package but I never really got what exactly it does apart from some boolean test). 11:13 AM @UlrikeFischer ooh :) @UlrikeFischer my smartphone is like your old one! @HaraldHanche-Olsen :'( 12:04 PM @UlrikeFischer it's a qualifier like "forall" but constructs functions rather than predicates, lambda x . (sin x + cos x) is the anonymous function that returns the sum of sin and cos. 2 hours later… 1:55 PM @DavidCarlisle I have wondered idly, from time to time, whether TeX would have benefited from the ability to write anonymous macros? So that, say, \λ #1 #2\relax{\foo{#2:#1}} this that etc\relax would expand to \foo{that etc:this}. (Silly example, I know.) It might make programming TeX's mouth easier, or would it not? 
2:25 PM @HaraldHanche-Olsen yes this is sort of what the expl3 expansion helpers are doing but there has to be one predefined for each possible argument type due to the fact that you can't define them on the fly with an inline lambda @DavidCarlisle Hmm, seems like another reason to bump expl3 higher up on the list of stuff to learn. 2:48 PM @HaraldHanche-Olsen look who's arrived:-) @DavidCarlisle ;) 3:42 PM @DavidCarlisle You have to upvote my answer for clever use of tabulary: tex.stackexchange.com/a/400331/4427 @egreg I think it would have been better if you hade make \scalebox{0.5}{\hline} convert the outer tabular into a tabulary. 3:57 PM @DavidCarlisle Even better now: I also used xpatch for fixing vertical alignment. ;-) @egreg I think it's better to stick to packages by reputable authors 4 @egreg so not tabulary for a start:-) 4:38 PM @DavidCarlisle That's why I added a carefully and cleverly written package such as xpatch. @egreg @DavidCarlisle I guess this could appear here soon too with newest texlive (I already wrote Will and Javier): \documentclass[]{article} \usepackage[english]{babel} \usepackage{fontspec} \begin{document} bla \end{document} !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ! ! ! Control sequence \latinencoding already defined. ! ! See the LaTeX3 documentation for further information. ! ! For immediate help type H <return>. !............................................... l.121 \tl_new:N \latinencoding Workaround: switch order of the packages. Hi all, I've forgotten how is called the large capital letter which starts a paragraph like the one in tex.stackexchange.com/q/400285/1952. Could someone remind me? I'm sure there are packages to do it but I don't know how to start to search them? @UlrikeFischer Not sure it's a good workaround, because apparently Will wants \latinencoding and \cyrillicencoding to be TU instead of what babel sets. Let me do some tests. @Ignasi Lettrine, drop capital? @UlrikeFischer I retract, they're set correctly. 4:47 PM @Ignasi As @PauloCereda says, or shorter: drop caps (in one word, or two) Nosotros los patos somos muy inteligentes. :) @PauloCereda tasty, too. @HaraldHanche-Olsen you are mean/average @UlrikeFischer But if one doesn't load fontspec, babel sets \cyrillicencoding anyway to TU (with XeLaTeX or LuaLaTeX), resulting in no output. @UlrikeFischer yes I think Will's fixed that already (was reported in the fontspec github) 4:51 PM @PauloCereda, @HaraldHanche-Olsen Thank you. Found it! @Ignasi <3 @DavidCarlisle Ah yes. Sorry hadn't check github. @DavidCarlisle I need to set up the tags for the new (ssh) place for LaTeX2e: did you make the branches at the point of each major release? Trying to track it back ... 5:08 PM @JosephWright yes hopefully the branch is in fact the last patched state of the release of its date, that is I branched just before starting to add code for a new release., but it may be simpler to forget the svn branches just to look at svn blame of changes.txt and see when # 2017/01/01 PL 2 Release got added and git tag those dates that way you will pick up PL releases as well? 
@DavidCarlisle Ah, good plan @DavidCarlisle I'll work back on it tonight: have a thesis to finish @JosephWright all the older ones ### release dates I added after the fact by checking archived versions of latex releases and matching up the text in changes.txt so the svn dates for those will not be informative but the date in the file would still be useful to git tag if the svn history goes back that far eg ###################### # 2001/06/01 Release ###################### 2001-08-26 means the last change for the nominal 2001/06/01 release was 2001-08-26 @DavidCarlisle I'll work out the 'recent' releases ... those in Git (So for the last 9 years or so) @DavidCarlisle Have to get Frank to write his TUGboat! @JosephWright yes I just picked a random line there I couldn't remember how far back the current svn/git history went:-) @DavidCarlisle 2008-03-18 @DavidCarlisle Presumably the other SVN goes back further 5:24 PM @JosephWright yes (although I don't recall if all the original rcs logs got saved when we went to cvs, so might not go all the way back, I don't have it checked out on this machine:-) ------------------------------------------------------------------------ r1 | (no author) | 1993-08-06 13:47:03 +0200 (Fri, 06 Aug 1993) | 1 line New repository initialized by cvs2svn. ------------------------------------------------------------------------ @DavidCarlisle ^^^ @DavidCarlisle I cloned it on comedy and then looked at the log there ;) @JosephWright hmm I'd have thought it was earlier than that (if it was the rcs log) or later (if it was from the start of using cvs) but my memory may be failing @DavidCarlisle There might be some oddities at the start: I didn't check in detail @JosephWright I don't suppose you were using cvs in 1993? @DavidCarlisle No: at school we were still on CP/M in 1993 ;) (Or perhaps by then we had 16-bit PCs ...) 5:48 PM Should we resort to the Four Yorkshiremen sketch? 1 hour later… 7:02 PM How does this make you feel? @DavidCarlisle ^^ @PauloCereda Evolution of the pineapple pizza: the pizza pineapple! @CarLaTeX no, those are two different things. :) Like a bee duck and a duck bee. @PauloCereda LOL 7:52 PM @DavidCarlisle ooh @DavidCarlisle I hope the tags make sense .... @DavidCarlisle What – you're not going to answer questions anymore? @HaraldHanche-Olsen English isn't a context free language:-) @JosephWright oh I'll pull and have a look:-) @DavidCarlisle So I gather. Words mean what the speaker intend, neither more nor less. 8:03 PM @JosephWright doesn't time fly:-) @HaraldHanche-Olsen We might need a semioticist here. :) From github.com:latex3/latex2e d8c2ff65..8a19285e master -> origin/master * [new tag] 2009-09-24 -> 2009-09-24 * [new tag] 2011-06-27 -> 2011-06-27 * [new tag] 2014-05-01 -> 2014-05-01 * [new tag] 2015-01-01 -> 2015-01-01 * [new tag] 2015-01-01-PL1 -> 2015-01-01-PL1 * [new tag] 2015-01-01-PL2 -> 2015-01-01-PL2 * [new tag] 2015-10-01 -> 2015-10-01 * [new tag] 2015-10-01-PL1 -> 2015-10-01-PL1 @DavidCarlisle I blame Joseph @PauloCereda that is standard Team policy so goes without saying @DavidCarlisle ah 8:07 PM Oh no I have detached my head, I hope it's not painful 2 \$ git checkout 2016-03-31-PL1 Note: checking out '2016-03-31-PL1'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. @DavidCarlisle I guess you know what Git means here ... 
@JosephWright yes I was a bit worried I'd done the wrong thing for a while as the checkout took quite a while but I see it created an entire tree with trunk and three dated branches, then when I checked out master again it zapped them all again. @DavidCarlisle Probably easier to use git log --oneline or (perhaps) a GUI to look at tags over time @JosephWright if it had gone wrong I could have reverted to the standard team policy referenced above. @DavidCarlisle Or git tag followed by git show <tag-name> ... 8:16 PM @JosephWright yes sure i just thought I'd try winding back my local checkout just to see how it worked:-) I did install that gitkraken thing, might try it a bit more one day. @DavidCarlisle The fact I've had to allow for the SVN rearrangements makes it a little less slick than it could be: try the LaTeX3 one, where we've only ever had everything in SVN trunk ... @DavidCarlisle It's quite good: can't do some of the more complex stuff (SourceTree can), but for that I'm happiest with the CLI anyway, so I'm picking a nicer looking GUI for reading the log mainly @DavidCarlisle The original Git conversion (where I just did trunk) would have been 'cleaner' but then we'd loose all the pre-mid-2015 history (or at least leave it permanently in SVN only, cf. the non-public one) @JosephWright eek Windows @DavidCarlisle I'm hoping that 'easy' tagging will help track what we are up to: for L3 it's a lot easier than going back through the commit messages @PauloCereda Current laptop is a Dell XPS ... @PauloCereda GitKraken is available for Linux and Mac ;) I may even buy the paid-for version ... @JosephWright Mine is a Dell and it's Linux. :D @JosephWright That looks like a Windows version. :) @JosephWright what does that do extra (I vaguely recall seeing something) @PauloCereda never mind, perhaps you'll be able to upgrade soon 8:22 PM @PauloCereda It is @JosephWright boo @DavidCarlisle Sort merges in the GUI and the like @DavidCarlisle to OS2/Warp? :) @PauloCereda mine's a Dell lattitude which is just like an XPS, I'm sure:-) running cygwin (although I understand there may be another operating system behind that) @DavidCarlisle :) My older laptop is a Lattitude and my more recent is a Vostro. :) 8:24 PM @DavidCarlisle I looked up the full range of XPS systems after the UK-TUG meeting: they do a Ubuntu-specific one (so driver support for Linux), and if you have the cash one with 16Gb of RAM and a 1Tb HD @PauloCereda ah I went the other way my previous two were both vostros (although I suspect they just build the machines and stick a random name on them as they go out the door( @DavidCarlisle They've also upgraded the CPU since I bought mine ... @DavidCarlisle Nothing beats my old laptop, to be honest. :) My desktop is an Inspiron. this lattitude isn't bad though: SSD and an i7 @DavidCarlisle 15"? 8:26 PM @PauloCereda yes @DavidCarlisle Cool, I don't like 13"... @PauloCereda I had a 17in once but that was too bulky to carry around and I have an external screen at work (don't bother here) so 15in is big enough @DavidCarlisle I only saw a 17" from Toshiba. :) @JosephWright I always get a Ubuntu-based one (to ditch the OS later on) because the overall price decreases due to the lack of a license. :) @PauloCereda I thought about it, but I have to have Windows for work, so would have needed a VM anyway. Then there's the fact I needed to replace my laptop at a weekend following a machine failure. @JosephWright Oh I completely understand. 
8:34 PM @PauloCereda If I'd got a 16Gb machine I might have gone for Linux @JosephWright My desktop has 16GB. :) @PauloCereda this is 16gb and 500gb ssd but it's the only machine I have, no desktop machine:-) @DavidCarlisle ooh I want one @PauloCereda and it has windows 10! My Dell became Spanish recently. It lost a L in the logo, so it became "DEL"! @DavidCarlisle ooh it will be a please to get rid of it! 8:40 PM @PauloCereda To DELete it ;) @PauloCereda A question of funds: current machine was £1050, a 16Gb would be at least £1399 @TeXnician ooh :) @JosephWright WOW 9:06 PM @PauloCereda That's the Ubuntu version: the Windows one is a bit more ... @JosephWright I tried to work out what mine would have cost but "dell latitude 5580" seems pretty meaningless, it could be any machine in a range of 800 to 1500 pounds by the look of it (as you may guess I didn't actually pay for this one) @DavidCarlisle Oh, same here: mine's a 9360, but that covers a massive range of machines @DavidCarlisle Like I said, since I got mine they've upgraded the CPU (mine's an i5, the new ones are i7) @DavidCarlisle Tags look OK? @JosephWright seems like it: I decapitated myself to look round a couple of the tagged releases and it all seemed to make sense @DavidCarlisle I'm not really sure if I have the right commits ... tracking it through the history is tricky! @JosephWright yes and also in at least one case I did all the svn tag stuff then Petra made me change something and I am pretty sure I didn't re-branch If the tags get within a day or so of the code at that point in time, I'm sure nothing will go wrong... 9:20 PM @DavidCarlisle Indeed: for future ones we should be OK as I'll tag each time there is a release @JosephWright as I say to get in the ### comments in the change log I ended up downloading every archived latex release going back to the 1980s and compared change log files (seemed safer than relying on the svn log to determine what the releases were) @egreg Sep 5 '16 at 15:54, by Paulo Cereda 8 secs ago, by Paulo Cereda 8 secs ago, by Paulo Cereda 8 secs ago, by Paulo Cereda 49 mins ago, by egreg Aug 2 at 12:55, by egreg Jun 12 at 17:55, by David Carlisle @egreg excuses excuses @DavidCarlisle Even then can be tricky, I suspect, as we could have released before or after things like babel commits, etc. @JosephWright yes although there I was just putting in comments in the change log so they are fairly accurate, as if I downloaded a 2001 release and the top comment in that change log was zzzz then I put a # 2001... comment in the line above zzzzz @DavidCarlisle Sure @JosephWright meanwhile I should get back to my utf8 branch.... 9:29 PM @DavidCarlisle If we need to go back to a tag and need to be 'spot on', we might have to double-check and move them ... @DavidCarlisle Looking forward to it! @DavidCarlisle git checkout -b utf8? @JosephWright utf8-and-filenames but currently it's not on the remote @DavidCarlisle git push --set-upstream origin utf8-and-filenames 9:45 PM @JosephWright oh sure i know how to do it, I just want to get it being a bit more coherent first:-) @JosephWright on a similar subject though If I'd like a .md file so when it does go to github I can highlight some design choices and changes (so temporary notes just for this branch) i could add them to the readme but will that complicate merges later or I could add a separate file or I could I suppose just open an issue and leave notes there or ... 
@DavidCarlisle But I see that also you get inspiration from my comments tex.stackexchange.com/a/400365/4427 @egreg excuses excuses:-) 10:01 PM @DavidCarlisle Just add a separate .md I guess, or note them in the log, or use the GitHub wiki, or ... @DavidCarlisle Can always remove the extra .md before a merge, and if we want it can 'disappear' from the history (by rebasing)
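Expressed as a rough sketch, the tagging approach discussed above (find the commit that introduced each release header in changes.txt and tag it with the corresponding date) might look something like the following. The file name, the header format "# 2017/01/01 PL 2 Release", and the derived tag names are assumptions here, not the team's actual tooling.

```python
import re
import subprocess

# Sketch: tag historical releases from the headers recorded in changes.txt.
# Assumed header format: "# 2017/01/01 Release" or "# 2017/01/01 PL 2 Release".
releases = []
with open("changes.txt", encoding="utf-8", errors="replace") as f:
    for line in f:
        m = re.match(r"#\s*(\d{4})/(\d{2})/(\d{2})\s*(.*?)\s*Release", line)
        if m:
            date = "-".join(m.group(1, 2, 3))
            suffix = m.group(4).replace(" ", "")   # e.g. "PL2", or "" for a main release
            tag = date + ("-" + suffix if suffix else "")
            releases.append((m.group(0).strip(), tag))

for header, tag in releases:
    # Commits that added or removed the header line, newest first;
    # the last one listed is the commit that introduced it.
    commits = subprocess.run(
        ["git", "log", "-S", header, "--format=%H"],
        capture_output=True, text=True, check=True).stdout.split()
    if commits:
        subprocess.run(["git", "tag", tag, commits[-1]], check=False)
```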
# What structure does the alternating group preserve? A common way to define a group is as the group of structure-preserving transformations on some structured set. For example, the symmetric group on a set $X$ preserves no structure: or, in other words, it preserves only the structure of being a set. When $X$ is finite, what structure can the alternating group be said to preserve? As a way of making the question precise, is there a natural definition of a category $C$ equipped with a faithful functor to $\text{FinSet}$ such that the skeleton of the underlying groupoid of $C$ is the groupoid with objects $X_n$ such that $\text{Aut}(X_n) \simeq A_n$? Edit: I've been looking for a purely combinatorial answer, but upon reflection a geometric answer might be more appropriate. If someone can provide a convincing argument why a geometric answer is more natural than a combinatorial answer I will be happy to accept that answer (or Omar's answer). • I have a copout answer involving a total order on X but I would really not like to introduce a total order to solve this problem. Somehow I feel that the essence of the structure necessary is less than that. – Qiaochu Yuan Sep 22 '10 at 4:02 • You can just take X_n to be a Cayley graph of A_n with some natural generating set. You probably don't count this as natural enough though. – Alon Amit Sep 22 '10 at 4:08 • What is the precise meaning of "natural definition"? – Hans-Peter Stricker May 27 '11 at 15:19 • @Hans: in this context, there really isn't one. – Qiaochu Yuan May 27 '11 at 15:33 The alternating group preserves orientation, more or less by definition. I guess you can take $C$ to be the category of simplices together with an orientation. I.e., the objects of $C$ are affinely independent sets of points in some $\mathbb R^n$ together with an orientation and the morphisms are affine transformations taking the vertices of one simplex to the vertices of another. Of course this is cheating since if you actually try to define orientation you'll probably wind up with something like "coset of the alternating group" as the definition. On the other hand, some people find orientations of simplices to be a geometric concept, so this might conceivably be reasonable to you. • I considered this, but I wonder if the definition can be made as set-theoretic as possible, with as little geometry as possible. – Qiaochu Yuan Sep 22 '10 at 4:25 • @Qiaochu, since orientation has to do with determinants, maybe you could whittle the above down to a discrete interpretation of determinants -- the determinants of permutation matrices, perhaps. Or would that be circular? The determinant does have an independent definition unrelated to the signature of a permutation. – Rahul Sep 22 '10 at 4:42 • So the question is: what is an oriented set? Answer: It's a set with an orientation. So what is an orientation on a set? ${}\qquad{}$ – Michael Hardy May 15 '15 at 21:51 $A_n$ is the symmetry group of the chamber of the Tits building of $\mathbb{P}GL_n$. The shape of this chamber is independent of what coefficients you insert into the group scheme $\mathbb{P}GL_n$, just the number and configuration of chambers changes. If you insert the finite fields $\mathbb{F}_p$ then you get finite simplicial complexes as buildings, and the smaller $p$ gets, the fewer chambers you have. You can analyse and even reconstruct the group in terms of its action on this building. The natural limit case would be just having one chamber and the symmetry group of this chamber - the Weyl group - is $A_n$. 
This is how Tits first thought that there is a limit case to the sequence of finite fields - which he called the field with one element. Maybe somewhat more algebraically you can think in terms of Lie algebras - as I said the shape of the chamber does not change with different coefficients. The reason is that it is determined just by the Lie algebra of the group and thus describable by a Dynkin diagram or by a root system (ok, geometry creeps in again). The Wikipedia page about Weyl groups tells you that the Weyl group of the Lie algebra $sl_n$ is $S_n$. I have no experience with Lie algebras, but maybe you can get $A_n$ the same way. If you can get hold of it, you can read Tits' original account, it's nice to read (but geometric); see the reference on this Wikipedia page. Edit: Aha, I found a link now: Lieven Le Bruyn's F_un is back online. You can look there under "papers" and find Tits' article. And, since you are picking up the determinant ideas, you should definitely take a look at Kapranov/Smirnov! • Doesn't GL(n) involve ordering an n-element set (of basis vectors)? The question Qiaochu is raising is something like, can we construct A(X), with X a set or a set with "Alt-structure" weaker than an ordering. The A_n and S_n and GL_n with their standard embeddings into each other assume the standard ordering. One can discuss GL(V) for V a vector space, but constructing it as an algebraic group uses an order, at least in the usual presentation of the theory. – T.. Sep 29 '10 at 17:12 • Does it? Aren't the group operations still there when you forget about the coordinates, which you used to define it via matrix multiplication? The construction of the building then uses just coordinate-free notions like "Borel subgroups", as far as I remember... – Who Sep 29 '10 at 17:25 • I am also starting to find that strange now: Does an embedding of S_n into GL_n give me a choice of basis? Maybe via picking eigenvectors? If not one could define A_n as the intersection of S_n with O_n (which is definable coordinate-free) under any embedding - and then use coordinates to show independence of the embedding. – Who Sep 29 '10 at 17:32 • I should have added that in addition to defining A(X) (which requires no structure except the finite set X), the problem here is to define even bijections from X to Y. This cannot be done without extra data, but does not require the full strength of an ordering. Functorial construction of Tits building from X and Y would, for any bijection, give a map from the X-building to the Y-building and asking whether it is an isomorphism of whatever structure in the building defines A(X) would define even bijections. So I suspect some sort of extra data is hidden in the construction. – T.. Sep 30 '10 at 7:30 • I don't know whether the construction of the Tits building is functorial. I didn't mean to imply this with anything I wrote. It would be nice of course, then an embedding of S_n might correspond to the inclusion of a particular chamber. – Who Sep 30 '10 at 15:33 Here is one idea, although I do not find it very satisfying. An object of $C$ is a finite set $X$ equipped with $\frac{|X|!}{2}$ (or $1$ if $|X| = 1$) total orders, all of which are even with respect to each other (in other words, basically a coset of $A_n$ in $S_n$). A morphism between two objects in $C$ is a map of sets preserving these orders (in other words, take one of the orderings on $X$ and apply a function $f : X \to Y$ to its elements.
The result, after throwing out repeats, must be compatible with an ordering on $Y$.) This is more or less a discretization of Omar's answer. Again, I would like to do better than this, or at least see the data described above packaged in a more satisfying way.

You can do with something very slightly weaker than an ordering: an identification of each $n$-element set with a single "universal" (unordered) $n$-set. This data canonically identifies any two $n$-element sets and thus associates a permutation to any bijection of such sets. Even bijections can then be defined in terms of the cycle structure of the permutations. This is not much of an improvement, but you are in effect asking for a lifting of the alternating group to a groupoid of maps between finite sets. It is hard to see how to determine whether a bijection of finite sets is even (reducing to the usual notion when the sets are the same, and also "transporting structure" along the whole category) without having a coordinatization of the sets.

(This is, essentially, just a «repackaging» of your answer. Still, I find this version somewhat more satisfying — at least, it avoids even mentioning total orders.) For a finite set $X$ consider the projection $\pi\colon X^2\to S^2 X$ (where $S^2 X=X^2/S_2$ is the symmetric square). To a section $s$ of the projection one can associate the polynomial $\prod\limits_{i\neq j,\,(i,j)\in\operatorname{Im}s}(x_i-x_j)$ — and since any two such products coincide up to a sign, this gives a partition of the set $\operatorname{Sec}(\pi)$ into two parts. Now, $A_X$ is the subgroup of $S_X$ preserving both elements of this partition. (I.e. the structure is a choice of one of the two elements of the described partition of $\operatorname{Sec}(\pi)$.)

Oriented sets are considered in the theory of combinatorial species; a definition of them may be found in "Combinatorial Addition Formulas and Applications", page 10, where the species of oriented sets is given by $E_n^{\pm}(X) = X^n/A_n$.

The polynomial $p\in K\left[X_1,\dots,X_n\right]$ given by $$p\left(X_1,\dots,X_n\right)=\prod_{i<j}\left(X_i-X_j\right)$$ is multiplied by the sign of a permutation when the variables are permuted, so $A_n$ is precisely the subgroup of $S_n$ whose action preserves $p$.
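To see the last characterization in action, here is a small illustrative sketch (with indices $0,\dots,n-1$ playing the role of the variables $X_1,\dots,X_n$): the sign of a permutation is read off from the product of differences, and the permutations of sign $+1$ are exactly the elements of $A_n$.

```python
from itertools import permutations, combinations
from math import prod

def sign_via_product(sigma):
    """Sign of a permutation sigma of {0,...,n-1}, computed as the ratio
    prod_{i<j}(sigma[j]-sigma[i]) / prod_{i<j}(j-i)."""
    n = len(sigma)
    num = prod(sigma[j] - sigma[i] for i, j in combinations(range(n), 2))
    den = prod(j - i for i, j in combinations(range(n), 2))
    return num // den  # always exactly +1 or -1

n = 4
alternating = [s for s in permutations(range(n)) if sign_via_product(s) == 1]
print(len(alternating))  # 12 == 4!/2, i.e. the order of A_4
```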
## Telephone Call

This topic has 1 expert reply and 4 member replies.

kanha81 — Mon Jun 01, 2009 11:31 am

#### Telephone Call

The charge for a telephone call between City R and City S is $0.42 for each of the first 3 minutes and $0.18 for each additional minute. A certain call between these two cities lasted for x minutes, where x is an integer. How many minutes long was the call?

(1) The charge for the first 3 minutes of the call was $0.36 less than the charge for the remainder of the call.
(2) The total charge for the call was $2.88.

mike22629 — Mon Jun 01, 2009 11:47 am

IMO D.

Info in passage: $0.42 per minute for the first 3 minutes, so $1.26 for the first 3 minutes.

Statement 1: Since the charge for the first 3 minutes of the call was $0.36 less than the remainder,
Total cost = 1.26 + (1.26 + 0.36) = 2.88
1.62/0.18 + 3 = total minutes
Notice I did not calculate the actual answer because it is not necessary.

kanha81 — Mon Jun 01, 2009 11:58 am

mike22629 wrote: IMO D. Info in passage: $0.42 per minute for the first 3 minutes, so $1.26 for the first 3 minutes. Statement 1: Since the charge for the first 3 minutes of the call was $0.36 less than the remainder, Total cost = 1.26 + (1.26 + 0.36) = 2.88; 1.62/0.18 + 3 = total minutes. Notice I did not calculate the actual answer because it is not necessary.

Mike, to be honest, I quite did not get your deduction of the total cost. Trivial as it may seem, I am not able to understand it. Can you please help me understand this?

mike22629 wrote:
Total cost = 1.26 + (1.26 + 0.36) = 2.88
1.62/0.18 + 3 = total minutes

brick2009 — Mon Jun 01, 2009 9:32 pm

a.) if X is the total minutes, (x-3) x 18 cents = (3 x .42) - .36 ... so you can solve for X
b.)
is straightforward.

aj5105 — Tue Jun 02, 2009 6:29 am

Statement (1): 1.26 + 0.36 = 0.18 * C [C = each additional minute over 3]. Sufficient.
Statement (2): 1.26 + 0.18 * C = 2.88. Sufficient.
[D]

### GMAT/MBA Expert

Scott@TargetTestPrep, GMAT Instructor — Tue Dec 05, 2017 5:28 pm

kanha81 wrote: The charge for a telephone call between City R and City S is $0.42 for each of the first 3 minutes and $0.18 for each additional minute. A certain call between these two cities lasted for x minutes, where x is an integer. How many minutes long was the call? (1) The charge for the first 3 minutes of the call was $0.36 less than the charge for the remainder of the call. (2) The total charge for the call was $2.88.

We are given that the charge for a telephone call between City R and City S is $0.42 for each of the first 3 minutes and $0.18 for each additional minute. We are also given that a call between the two cities lasted for x minutes. We can create the following equation:

3(0.42) + 0.18(x - 3) = total cost of the call
1.26 + 0.18x - 0.54 = total cost of the call
0.18x + 0.72 = total cost of the call

We must determine the value of x.

Statement One Alone

The charge for the first 3 minutes of the call was $0.36 less than the charge for the remainder of the call. Using the information in statement one, and from the given information, we can create the following equation:

1.26 = (0.18x - 0.54) - 0.36
1.26 = 0.18x - 0.90
2.16 = 0.18x
x = 12

Statement one alone is sufficient to answer the question. We can eliminate answer choices B, C, and E.

Statement Two Alone

The total charge for the call was $2.88. Using the information in statement two, and from the given information, we can create the following equation:

0.18x + 0.72 = $2.88
0.18x = $2.16
x = 12

Statement two alone is also sufficient to answer the question.

Scott Woodbury-Stewart, Founder and CEO
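As a quick sanity check of the algebra above, here is a minimal sketch that recovers x = 12 from either statement, assuming only the cost structure stated in the problem:

```python
# Cost of an x-minute call (x >= 3): $0.42 for each of the first 3 minutes,
# $0.18 for each additional minute.
def cost(x):
    return 3 * 0.42 + 0.18 * (x - 3)

# Statement (1): the first 3 minutes cost $0.36 less than the remainder.
x1 = next(x for x in range(3, 100) if abs((cost(x) - 1.26) - (1.26 + 0.36)) < 1e-9)

# Statement (2): the total charge is $2.88.
x2 = next(x for x in range(3, 100) if abs(cost(x) - 2.88) < 1e-9)

print(x1, x2)  # 12 12
```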
Homework Help: Pair of dice probability 1. Oct 20, 2012 CAF123 1. The problem statement, all variables and given/known data A and B alternate rolling a pair of dice, stopping either when A rolls the sum 9 or when B rolls the sum 6. Assuming that A rolls first, find the probability that the final roll is made by A. 3. The attempt at a solution A rolls a sum 9 on each roll with prob 1/9 B rolls a sum 6 on each roll with prob 5/36 Given that A wins, he will win on an odd number of turns. (since A starts) Let E be the event that the game finishes on an odd number of turns Then P(E) = (1/9)(1-5/36)(1/9)(1-5/36).... Where do I go from here? 2. Oct 20, 2012 tiny-tim Hi CAF123! No, that's the probability that A wins on every throw, but generously pretends that he didn't, because he wants to let B win. Try again. 3. Oct 20, 2012 CAF123 Hi tinytim. Can you give me a hint to start? 4. Oct 20, 2012 tiny-tim Try ∑ P(Wn) where Wn is the event of A winning on his nth throw. 5. Oct 20, 2012 CAF123 The event that A wins on his turn is just 1/9. What does this sum represent? 6. Oct 20, 2012 tiny-tim That's P(W1). What's P(W2) ?​ 7. Oct 20, 2012 CAF123 I thought that the probability of A winning on any of his turns is 1/9. Is this not correct? If not, why not? Surely whether A wins is dependent only on what he throws and not B's result. Or did I misunderstand something? 8. Oct 20, 2012 tiny-tim no, that's the probability of A winning on say his 10th turn, given that neither A nor B has already won 1/9 = P(Wn | neither A nor B has already won before the nth turn) you want P(Wn) for example, P(W2) is 1/9 times the probability that neither A nor B won on their first throws … that's obviosuly less than 1/9 ! 9. Oct 20, 2012 CAF123 Ok, I think I understand now. So P(W2) = (1/9)(1-1/9)(1-5/36) P(W3) = (1/9)(1-1/9)^2 (1-5/36)^2.. Can I write this as $$\frac{1}{9} \sum_{i}^{∞} (\frac{8}{9})^i \sum_{i}^{∞} (\frac{31}{36})^i$$ 10. Oct 20, 2012 HallsofIvy In order that P win on the first turn, he must roll a 9. The probability of that is 1/9. In order that P win on the second turn, he must roll anything except a 9 on the first roll, B must roll anything but a 6, and P must roll a 9 on. The probability of that is (8/9)(31/36)(1/9)= (2/9)(31/9)(1/9)= 62/729. On any one turn, the probability that P does NOT roll a 9 and B does NOT roll a 6 is (8/9)(31/36)= 62/81. In order that P win on the nth turn both P and B must NOT roll the correct number the previous n- 1 turns and P must roll a 9 on the last turn- the probability of that is (62/81)n-1(1/9). 11. Oct 20, 2012 tiny-tim (type "\left(" and "\right)", and they come out the correct size ) almost that's not the same as $\frac{1}{9} \sum_{i}^{∞} \left(\frac{8}{9}\frac{31}{36}\right)^i$ is it? (and starting at i = … ?) 12. Oct 20, 2012 CAF123 i would start at 1 here. Many thanks again for your help. Can I ask, in general, what advice you would offer when tackling probability problems. It is by far the hardest course I am doing this semester and I feel I have difficulties starting the problems, what area of probability to apply etc.. Any advice would be appreciated - thanks. 13. Oct 20, 2012 tiny-tim (isn't it i = 0?) Probability problems are usually solved by rewriting the events in English first, so that you know clearly what the events are. Then you can start translating them into maths. 14. Oct 20, 2012 Ray Vickson Often it is best to forget formulas for a while and concentrate on understanding the nature of the "sample space" underlying the problem. 
In this case, it would help to write down the first few instances where A wins: Step 1: A wins---stop Step 1: A does not win; go to step 2 Step 2: B does not win; go to step 3 Step 3: A wins---stop Step 3: A does not win; go to step 4 Step 4: B does not win; go to step 5 Step 5: A wins--stop Step 5: A does not win; go to step 6 etc., etc. For these first few steps it is easy enough to write out the probabilities associated with the outcomes "A wins", and you can use the revealed pattern to develop a formula for the entire game. RGV
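A follow-up that is not part of the thread: the geometric series set up above can be summed in closed form, and a short simulation lands on the same number. The snippet below is only a check of that sum.

```python
# Follow-up check, not part of the thread: the series above in closed form, plus a simulation.
import random
from fractions import Fraction

# P(final roll is A's) = sum over i >= 0 of [(8/9)(31/36)]^i * (1/9), a geometric series.
p_exact = Fraction(1, 9) / (1 - Fraction(8, 9) * Fraction(31, 36))
print(p_exact, float(p_exact))            # 9/19 ~ 0.4737

def final_roll_is_A():
    while True:
        if sum(random.randint(1, 6) for _ in range(2)) == 9:   # A's throw: sum 9 ends the game
            return True
        if sum(random.randint(1, 6) for _ in range(2)) == 6:   # B's throw: sum 6 ends the game
            return False

trials = 100_000
print(sum(final_roll_is_A() for _ in range(trials)) / trials)  # should be close to 0.474
```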
# Changes between Version 3 and Version 4 of Documentation/DevelopmentCenter/CreateNewViews

Timestamp: 06/06/12 10:07:08

The Locked state of a view is set if an operation is in progress, and !ReadOnly is used if the content object itself indicates that it must not be changed. In most cases the Enabled property corresponds to the Locked state, and additionally to !ReadOnly if no separate handling is provided by the control.

== Setting the AutoScaleMode of ContainerControls ==

To guarantee that HeuristicLab is rendered correctly with different font sizes and language settings (see #1688 for more information), the '''AutoScaleMode''' of ContainerControls has to be set to '''Inherit'''. This property has to be configured in the Visual Studio Windows Forms designer in the properties window for all ContainerControls:

[[Image(autoscalemode.png)]]

This covers the main parts that must be kept in mind when developing new views for HeuristicLab. You can find the source code of the StringConvertibleValueView used as an example attached to this page.
# A StackExchange website aimed at graduate students? There has been a lot of discussion about questions that do not get answered on MSE, but are not considered 'research level' on MO and thus get closed. The problem seems to be that certain questions are too easy for MO, but get lost on MSE between the hundreds of easy-to-answer questions. (Even if you stick to a single tag (algebraic geometry in my case), many questions are easy exercises.) One could argue that the target audience of MO is primarily faculty, and that MSE seems to be used mostly by undergraduates (this is certainly the impression one gets by looking at the active questions on the main page). If this analysis is correct, could it be useful to create a new 'in between' SE page aimed mostly at graduate students and postdocs? The advantage of such a website would be that it would constitute a single place where graduate students can post and answer questions; where no question is too hard; but where one can still filter out elementary questions so that the interesting ones stick to the front page. This might also encourage more of the senior MO members to engage with the not-quite-research-level questions. Graduate school is a weird time, and the types of questions one asks and is capable of answering are a big step away from both undergraduate work and research-level mathematics. Remark. The reason I post this question is that I think there is a real problem with intermediate level questions, experienced by many people. This is one suggestion for dealing with it, but really I want to stimulate discussion rather than propose a single solution. • My opinion is that questions by graduate students are fine on MO. I conceive of the difference between MO and MSE as essentially the difference between serious graduate students and undergraduates. – Joel David Hamkins May 28 '16 at 0:01 • @Joel, where should humorous graduate students post? – Gerry Myerson May 28 '16 at 0:25 • @Joel: unfortunately, the community often seems to disagree. This may not be the view of any individual, or even of many individuals, but collectively we have decided that most questions a graduate student has are not appropriate for MO. And even if they are, graduate students often feel discouraged from asking them. – R. van Dobben de Bruyn May 28 '16 at 0:46 • I don't think we've collectively decided any such thing. There are a huge number of very successful questions asked here on MO by graduate students, perhaps thousands of such questions. I encourage graduate students everywhere to ask serious well-thought-out questions here on MO that arise naturally in their graduate studies. We shall all get to consider some interesting mathematics this way. – Joel David Hamkins May 28 '16 at 1:16 • @GerryMyerson I guess they shoud post the snarky second comments... – Joel David Hamkins May 28 '16 at 1:17 • Some related discussion at meta.math.SE: Postgrad Mathematics (which mentions a short-lived area51 proposal), Concern about lesser attention towards relatively advanced questions, Would splitting the site into more elementary and more advanced questions help?. – Martin Sleziak May 28 '16 at 2:07 • @Martin: thanks, those discussions are very relevant indeed. My [selection biased?] impression is that the issue is acknowledged, but perhaps my proposal (which apparently is not a new idea) is not the correct solution. – R. van Dobben de Bruyn May 28 '16 at 3:02 • I feel that the issue has not been addressed appropriately. 
However, this solution has been proposed in the past, often with the same conclusion. This is cause for me to vote to close the question at this time. (Technical note: it is not a duplicate, because all occurrences of this proposal are on meta.MSE) – R. van Dobben de Bruyn May 28 '16 at 3:47 • As far as I have seen, if you are asking questions not to get solutions, and you've shown your efforts and explained where you got stuck, then your question will be well received. If you don't, then you run the risk of having it closed and/or downvoted and not answered. – Asaf Karagila May 28 '16 at 4:54 • I don't see a need for a site in between MO and Math.SE for graduate students. -- Interesting questions by graduate students are definitely on-topic on MO, and for the 'rest' (e.g. textbook exercises, questions regarding the understanding of some standard definitions, etc.) there is still Math.SE. – Stefan Kohl May 28 '16 at 21:57 • I've always thought (and have repeatedly advocated that) the threshold for MO should be "could a strong second-year graduate student ask this question?" This is the most generous interpretation of "research-level" that I think is possible, and I think (not coincidentally) the most beneficial interpretation for the MO community to take. – Steve Huntsman May 30 '16 at 14:14 • Graduate students should use mathoverflow so that they can get to know the larger mathematical community outside of their own university. Furthermore, there are plenty of good graduate students who have asked and answered many questions here on mathoverflow. Not only can graduate students ask interesting questions but sometimes the best person to answer a certain MO question is a graduate student because that graduate student may have some specialized knowledge or may simply be more interested in the question than other mathoverflow users. – Joseph Van Name Jun 2 '16 at 5:44 • there are declared charters/ scope, and then there is voting/ emergent community feedback/ dynamics, and these are not always the same thing, and it seems sometimes a lot of dialog/ discussion on this tends to miss that. there also exists a kind of "SE culture". here is a vaguely similar meta discussion on cstheory that mentions (eg in comments) too much rigor/ intimidation for graduate students where it might be affecting overall community involvement/ engagement. is interesting activity on cstheory declining? – vzn Jun 5 '16 at 17:03 I believe it is impossible in general to distinguish questions asked by a graduate student and those asked by a mathematician in a domain in which they are not specialist. Since the latter are allowed on MathOverflow, it would make little sense to disallow the former or relegate them elsewhither. It is hard to argue that MathOverflow receives too many questions, or that there it is at a serious risk of being overflooded by questions by graduate students. There are also a number of questions being asked (and to avoid pointing the finger, I plead guilty myself), and often well-received, which are obviously motivated not by actual mathematical research, but just "general intellectual curiosity" (as in "I need to know this": I hope we can all agree that this is healthy). Again, it would make little sense to allow those and not questions asked by graduate students. I think the main criteria should be something like: (1) the question's answer is not easy to find in standard textbooks on the subject, and (2) it has mathematical interest (either on its own or in order to solve a problem that does). 
This should exclude most cases of homework. I believe graduate students should be encouraged to take the time to think on their own before asking a question, and be sure to frame it carefully (and make sure it's not homework), but so long as they do so, and show their efforts, they should feel perfectly welcome to post on MathOverflow, and this should be made clear. In a practical sense, apart from posters who will blatantly disregard guidelines and rules because they don't even read them, and a few cranks, I suspect there's more of a tendency to err on the side of self-censorship than in the other direction. I've mentioned MathOverflow to a number of colleagues, and surprisingly many of them are apparently too shy to join, even to discuss their domain of research, because they feel intimidated by the level of the existing discussions, or for fear of seeming foolish (he who never hesitated to raise his hand and ask a question at a seminar, let him be the first to throw a stone ☺). So I would find most welcome a change that could make the site seem just a little less "elitist". • What if I don't raise my hand, but just wait for a lull, then cut in with my question? – Asaf Karagila May 30 '16 at 4:01 • (3) the experts of the domain are happy to share their knowledge with the students – reuns Nov 5 '16 at 6:06 It is doubtful that SE would allow another "mathematical" mathematics site to open. In fact some time ago (2012, so even prior to MO's move to the SE network) there was an Area 51 proposal for a "Postgrad Mathematics" site. The proposal itself has been deleted, but a discussion or two live on. It is highly likely that an SE employee closed it after deeming that it would "tend to drain audience from an existing Stack Exchange site". That math.se has grown immensely since 2012 probably wouldn't sway the SE folk. Frequently proposals for sites centred on specific programming languages/technologies are closed because they would "tend to drain audience from" Stack Overflow. • The split into two (MO and M.se) would itself not be possible nowadays, so a third site is pretty clearly impossible. – Noah Snyder Jun 3 '16 at 5:35 Personal opinion: I think that the current system of having several of separate sites for different math-related topics (apart from MO and MSE, there are also related ones like scientific computing, math educators, history of science...) is not the best one, and adding new sites doesn't help; it just makes the community more fragmented. Maybe we need tags like / on a common site, and a better automated system to filter out questions based on the preferences of the individual users. After all, Stack Overflow is a very high-traffic site that covers all levels of programming questions, from helloworld.c to monads and variadic templates, and no one bats an eye. • A filtering system would only be as good as the tagging, and it is not clear to me that users will tag their own question correctly (or indeed tag other people's questions correctly). – Yemon Choi Jun 2 '16 at 22:46 • Also (as discussed before), SE websites discourage the use of meta-tags. See for example the help centre. – R. van Dobben de Bruyn Jun 3 '16 at 2:01 • @YemonChoi I don't see it as a specific downside of my proposal, but as a more general issue: we'd have the same problem if you replace "tag" with "SE site". Often users don't post their question to the correct site, and we rely on high-rep users and moderators to correct them. 
– Federico Poloni Jun 3 '16 at 7:03 • What possible benefit would this bring to MO? It certainly has downsides (e.g. a much lower signal to noise ratio). There is a reason I don't participate on math.se... – Andy Putman Jun 3 '16 at 14:55 • @AndyPutman "What benefits would this bring to MO" seems a strange point of view to me: I could not care less about MO as an abstract entity. Better question: what benefits would it bring to you and me as users? I'd love an automatic system that aggregates the most interesting questions (based on my favorite tags and activity) that are currently scattered along five separate sites. It would let me find the good stuff more easily. It would eliminate the reason why you don't participate on math.se, for instance. – Federico Poloni Jun 3 '16 at 21:01 • @FedericoPoloni: I very much doubt that I would be interested in questions at math.se or the other sites. Given how poorly people use tags, combining the sites would definitely force me to wade through many more poor questions than I do now (though probably I would just get fed up and leave). What is more, I really do not want eg the math.se people voting/commenting on the questions and answers at MO. There are so many of them that they would quickly overwhelm the knowledgeable users. – Andy Putman Jun 3 '16 at 22:30 • @Andy, given the 101 association bonus for users of multiple SE sites, "the math.se people voting/commenting on the questions and answers at MO" is already possible now. (Certainly, I'm one such user.) – J. M. is not a mathematician Jun 4 '16 at 0:31 • Setting aside a bit unclear phrase math.SE people, the issues pointed out in @AndyPutman's comment are more closely related to some older discussions at meta.MO, such as The Association Bonus and Measures to separate math overflow from the rest of the stack exchange network. – Martin Sleziak Jun 4 '16 at 6:37 • "I'd love an automatic system that aggregates the most interesting questions (based on my favorite tags and activity) that are currently scattered along five separate sites." There is a cross-site filtering system. Here is one I set up for myself focussing on geometry: link. Here is some explanation of Tag Sets. – Joseph O'Rourke Jun 4 '16 at 12:52 • @AndyPutman I don't find most MSE questions too interesting either, and rarely use it (not that I'm a heavy MO user either), but occasionally there are questions I like. So I wouldn't say the site is completely useless to "senior" mathematicians, though I agree there is too much "noise" to make me want to browse MSE. – Kimball Jun 4 '16 at 19:58 • @Kimball There were some suggestions on how to find interesting questions on Math.SE. This is not exactly related to this discussion, so I have posted some that I am aware of in chat. (We can continue this discussion there, if needed.) – Martin Sleziak Jun 5 '16 at 9:15 After carefully reading all your thoughts, I conclude that MO is a perfectly acceptable place for graduate students to post well-thought through questions. Yet, many graduate students (and even professors, cf. for example Gro-Tsen's comment on this question) are intimidated by how harshly the community votes. What this leads me to conclude is that there is a serious disconnect between the type of question that is theoretically acceptable and the collective demand of the community. As I said in a comment to my original question: this may not result from the opinion of any single individual, or even of many individuals, but should be viewed rather as a collective behaviour. 
(Come to think of it, I probably have been guilty of downvoting perfectly reasonable questions myself.) As for a solution, I think one thing we can do is try to be more understanding of each other's background (and potential lack thereof), and vote more with the community guidelines in mind. • I think the "level" of a question is not the right parameter here. -- The required minimum "level" of a question on MO is not that high, and at times there are also well-received soft questions with not much concrete mathematical content at all. What is probably rather the issue here is how a question is formulated, and I think it is not without reason that the community usually doesn't like questions which are written down like on a personal piece of scrap paper (regardless of "level"). – Stefan Kohl Jun 13 '16 at 20:27 • I second Stefan. I personally haven't seen any technical question above undergraduate level which is well written and shows at least some effort on the part of the author to find an answer that is not well treated. The issue I think many new comers don't know how to write a clear concise question demonstrating at least a bit of personal effort to find an answer. (Soft questions from new comers are treated more harshly and have to satisfy higher expectations, but that I think is intentional.) – Kaveh Jun 28 '16 at 7:49 • @Kaveh I feel like I have seen some in the recent years (though "above undergraduate" is hard to define; in terms of triviality or in terms of it being a part of the standard curriculum). – user138661 May 20 at 7:40
A level Maths part 05 • May 23rd 2007, 06:50 AM A level Maths part 05 thanks for everyone helping me ;) i am really greatful ,and the explations are makiing understand more thank you ...i need help in this homework too ,and i need answers for all please..there is another part continued. regards • May 24th 2007, 05:04 AM CaptainBlack Quote: thanks for everyone helping me ;) i am really greatful ,and the explations are makiing understand more thank you ...i need help in this homework too ,and i need answers for all please..there is another part continued. regards http://www.mathhelpforum.com/math-he...untitled-1.jpg 1. Vector sum of forces: $ \bold{V} = (2\bold{i} + \bold{j}) + (3\bold{i} - 2\bold{j}) + (-\bold{i} -3\bold{j}) = (2+3-1) \bold{i} +(1-2-3)\bold{j} = 4 \bold{i} -4\bold{j} \mbox{ N} $ The the acceleration $\bold{a} = \bold{V}/m$ 2. let the tension be $\bold{T}$ then the total force on the first particel, and hence the acceleration is: $ \bold{T}-m_1\bold{g} = m_1 \bold{a_1} $ and for the second particle: $ \bold{T}-m_2\bold{g} = m_2 \bold{a_2} $ but $\bold{a_2} = -\bold{a_1}$ so: $ \bold{T}/m_2-\bold{g} = -[\bold{T}/m_1-\bold{g}] $ which can then b solved for $\bold{T}$ RonL • May 24th 2007, 05:22 AM topsquark Start by drawing a Free-Body-Diagram for the ball. I have a +x axis to the right and a +y axis upward. There is a weight (w) acting straight downward, a tension (T) acting at 50 degrees above the -x axis, and an applied force (F) of 50 N acting in the +x direction. Since the ball is in equilibrium we know that $\sum F_x = 0$ $\sum F_y = 0$ So: $\sum F_x = -Tcos(50) + F = 0$ which implies $T = \frac{F}{cos(50)} = \frac{50 \, N}{cos(50)} = 77.7862 \, N$ and $\sum F_y = Tsin(50) - w = 0$ which implies $w = Tsin(50) = \frac{F}{cos(50)} \cdot sin(50) = F tan(50)$ $= (50 \, N)tan(50) = 59.5877 \, N$ -Dan • May 24th 2007, 01:41 PM
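A quick numerical check of the worked answers above (not part of the original thread); it just re-evaluates the arithmetic.

```python
# Quick numerical check of the worked answers above (not part of the original thread).
import math

# Question 1: resultant of the three forces, in newtons.
Fx = 2 + 3 - 1
Fy = 1 - 2 - 3
print(Fx, Fy)                       # 4, -4  -> a = (4 i - 4 j) / m for the given mass m

# Ball in equilibrium: 50 N horizontal applied force, string at 50 degrees above the -x axis.
F = 50.0
theta = math.radians(50)
T = F / math.cos(theta)             # tension
w = F * math.tan(theta)             # weight
print(round(T, 4), round(w, 4))     # 77.7862 and 59.5877, matching the post
```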
Comments 1 to 20 out of 7282 in reverse chronological order. On Raffaele Lamagna left comment #7810 on Proposition 15.78.3 in More on Algebra Typo: I think that in the definition of $\pi$ should be $\pi\colon L^\bullet\to (\bigoplus_{\lambda\in\Lambda\setminus\Lambda'}R)[-c]$ On Peng Du left comment #7809 on Lemma 49.11.7 in Discriminants and Differents I think in condition (1), it needs add that U=Spec(A)⊂X is an (affine) open neighbourhood of x. On Peng Du left comment #7808 on Remark 49.11.6 in Discriminants and Differents Change "After a change of coordinates with may assume" to "After a change of coordinates we may assume". On Peng Du left comment #7807 on Lemma 49.11.5 in Discriminants and Differents Needs a period at the end of statement. On David Liu left comment #7806 on Section 29.39 in Morphisms of Schemes Lemma 29.39.8. : The last line : $S \rightarrow ...$, should it be $X \rightarrow ...$? On left comment #7805 on Section 37.16 in More on Morphisms Because French is better? No, it's just that I love saying that phrase. Is that OK? Please feel free to complain! On left comment #7804 on Section 29.8 in Morphisms of Schemes I guess my pedantic reply would be: what is a schematically dense subset? Anyway, if I read it as just a "dense subset" then the inclusion of the generic point of a variety would not be a dominant morphism (in general). So I think that would be very different for morphisms of general schemes. For a morphism between varieties, it would give the same notion. On left comment #7803 on Lemma 65.21.4 in Properties of Algebraic Spaces You use Remark 65.7.6 which in tern uses the preceding Definition 65.7.5 to define what this means. So it makes sense even if the local ring does not make sense. OK? On Nicolás left comment #7802 on Definition 13.41.1 in Derived Categories It seems that (2) is missing an explicit mention of the inductive construction, as it is not specified which morphism $Y_0 \to X_0$ are we working with. Something like "If $n=1$, then it is a choice of a Postnikov system for $X_0$ and a choice of a distinguished [...]." (And probably, in that case (2) and (3) could be unified.) On 羽山籍真 left comment #7801 on Lemma 65.21.4 in Properties of Algebraic Spaces I would like to ask where the notion "local rings" is defined for general algebraic space, since (2) used this (maybe it would be nice if we recall it here). I only know that for decent algebraic space we have Henselian local rings and for geometric points on general algebraic space we have strict Henselian local rings... On Jonas Ehrhard left comment #7800 on Lemma 33.39.5 in Varieties Dieudonne's Topics in Local Algebra contains the statement that $A' \otimes_A A^\wedge$ is the integral closure of $A^\wedge$ in its total ring of quotient for a reduced noetherian G-ring $A$ of arbitrary dimension (Theorem 6.5 ib.). Is there something comparable in the stacks-project? On Xiaolong Liu left comment #7799 on Lemma 10.128.1 in Commutative Algebra One should repalce '$x\in\mathfrak{m},x\notin\mathfrak{m}^2$' by '$x\in\mathfrak{m}_R,x\notin\mathfrak{m}_R^2$'. On Anonymous left comment #7798 on Lemma 13.9.5 in Derived Categories In the lemma statement, should "If $g$ is a split surjection" instead read "If $g$ is a termwise split surjection" ? On Ravi Vakil left comment #7797 on Section 29.8 in Morphisms of Schemes I have a question, from an interesting comment by Hikari Iwasaki. What about "dominant" defined to mean "the preimage of a schematically dense subset is always schematically dense"? 
On Laurent Moret-Bailly left comment #7792 on Proposition 98.8.4 in Quot and Hilbert Spaces In the proof, the "coherent sheaf $\mathcal{Q}$" is only quasi-coherent of finite presentation. On Heiko Braun left comment #7791 on Lemma 13.30.1 in Derived Categories typo: "an s" in the first line of the proof On Jakob Werner left comment #7790 on Lemma 20.11.4 in Cohomology of Sheaves In the displayed equation of the proof, the index $n$ should be a $p$. On left comment #7789 on Equation 10.118.3.2 in Commutative Algebra Yes, but for technical reasons it's currently not a link (nor is it possible to toggle the tag to be a number). On left comment #7788 on Equation 10.118.3.2 in Commutative Algebra On the right hand side, the subscript should likely refer to tag 051U instead of being simply "(051U)" On Bogdan left comment #7787 on Lemma 59.97.2 in Étale Cohomology Is it clear that the isomorphism constructed in the proof coincides with the morphism induced by the cup product?
# CfE: Mixture Properties

### Mixing: One Phase

We know from experience that the volume of a mixture is generally not the sum of the volumes of the components we mixed. In other words, since the interactions between molecules of A-A and B-B differ from the interaction between A-B, we get:

$V = V_{mixture} \ne V_A + V_B$

but we know that

$n_{mixture} = n_A + n_B$

Combining these, we can show that the intensive property $v$ gives

$v = v_{mixture} \ne x_Av_A + x_Bv_B$

##### Note:

This is actually quite general. That is, typically the mixture properties (either extensive or intensive) are not simply the sum (or weighted average) of the individual pure component properties.

Recall that, by the state postulate, we can write (note that we will now drop the subscript "mixture"):

$v = F(P, T)$

We know that this works for a single phase pure component (we have a common example, $Pv=RT$). We can also write

$V = F (P, T, n)$

which we also have experience with ($PV = nRT$). It turns out that for multiple components, the Duhem relation tells us that we can write

$V = F (P, T, n_1, n_2, n_3, ... , n_m)$

Therefore, following the same procedure that we used for fundamental property relations, we can write

$dV = \left (\frac{\partial V}{\partial P} \right )_{T, n_{i}} dP + \left (\frac{\partial V}{\partial T} \right )_{P, n_{i}} dT + \sum_j \left (\frac{\partial V}{\partial n_j} \right )_{P, T, n_{i\ne j}} dn_j$

##### Outcome:

Write exact differentials for extensive properties in terms of m+2 independent variables for mixtures of m species

Since at constant temperature and pressure we have $dP = dT = 0$, we can reduce this to

$dV = \sum_j \left (\frac{\partial V}{\partial n_j} \right )_{P, T, n_{i\ne j}} dn_j$

##### Definition:

We define the partial molar properties as being

$\bar V_j = \left (\frac{\partial V}{\partial n_j} \right )_{P, T, n_{i\ne j}}$

which we can think of as the contribution of species $j$ to the mixture property. We can rewrite our expression for $dV$ then as

$dV = \sum_j \bar V_j dn_j$

Integrating this (and noting that the constant of integration is zero -- we will simply accept this for now):

$V = \sum_j \bar V_j n_j$

Dividing by $n_{tot}$ we can also write:

$v = \sum_j \bar V_j x_j$

Just to be sure that we have our nomenclature right ...

##### Definition:

We define the total solution properties as being the mixture properties, which we will denote $v_{mixture}$ or simply $v$ (with specific volume as an example).

##### Definition:

We define the pure component properties as being the properties of the individual components when they are not in a mixture, and will denote them as $v_i$.

##### Definition:

We define the partial molar properties as being

$\bar V_j = \left (\frac{\partial V}{\partial n_j} \right )_{P, T, n_{i\ne j}}$

which we can think of as the contribution of species $j$ to the mixture property.
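A small illustration of the bookkeeping, not part of the original notes: the mole fractions and partial molar volumes below are made-up numbers, chosen only to show that $v = \sum_j \bar V_j x_j$ generally differs from the mole-fraction average of the pure-component volumes.

```python
# Illustrative only -- the values below are hypothetical, not data from these notes.
# The point is the bookkeeping: the mixture molar volume is the mole-fraction-weighted
# sum of *partial* molar volumes, not of pure-component volumes.

x = {"A": 0.4, "B": 0.6}               # mole fractions (sum to 1)
v_pure = {"A": 18.0, "B": 58.0}        # pure-component molar volumes, cm^3/mol (hypothetical)
v_bar  = {"A": 17.0, "B": 57.3}        # partial molar volumes in the mixture (hypothetical)

v_ideal = sum(x[j] * v_pure[j] for j in x)   # what you'd get if volumes were simply additive
v_mix   = sum(x[j] * v_bar[j]  for j in x)   # actual mixture molar volume, v = sum_j x_j Vbar_j

print(v_ideal, v_mix)   # 42.0 vs 41.18 -- the mixture is non-ideal, so the two differ
```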
# What is the net area between f(x)=ln(x+1) in x in[1,2] and the x-axis? Mar 21, 2017 $3 \ln 3 - 2 \ln 2 - 1$ #### Explanation: ${\int}_{1}^{2} \ln \left(x + 1\right) \mathrm{dx}$ First change variables: Let $\textcolor{red}{w = x + 1}$ $\mathrm{dw} = \mathrm{dx}$ Next, solve $\int \ln \left(\textcolor{red}{w}\right) \mathrm{dw}$ using integration by parts. $\int \left(\ln w\right) \mathrm{dw}$ $\textcolor{b l u e}{u = \ln w \text{ } v = w}$ $\textcolor{b l u e}{\mathrm{du} = \frac{1}{w} \mathrm{dw} \text{ } \mathrm{dv} = 1}$ $\int \left(\ln w\right) \mathrm{dw} = w \ln w - \int \left(w \cdot \frac{1}{w}\right) \mathrm{dw}$ $= w \ln w - \int \left(1\right) \mathrm{dw}$ $= \textcolor{red}{w} \ln \textcolor{red}{w} - \textcolor{red}{w}$ $= \left(x + 1\right) \ln \left(x + 1\right) - \left(x + 1\right)$ $= \left(x + 1\right) \ln \left(x + 1\right) - x - 1$ Now, go back to the definite integral: ${\int}_{1}^{2} \ln \left(x + 1\right) \mathrm{dx}$ $= {\left[\left(x + 1\right) \ln \left(x + 1\right) - x - 1\right]}_{1}^{2}$ $= \left[\left(2 + 1\right) \ln \left(2 + 1\right) - 2 - 1\right] - \left[\left(1 + 1\right) \ln \left(1 + 1\right) - 1 - 1\right]$ $= 3 \ln 3 - 3 - \left(2 \ln 2 - 2\right)$ $= 3 \ln 3 - 3 - 2 \ln 2 + 2$ $= 3 \ln 3 - 2 \ln 2 - 1$
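A numerical cross-check of the result (not part of the original answer); it assumes SciPy is available, though any quadrature or plain Riemann sum would do just as well.

```python
# Numerical cross-check of the antiderivative above (not part of the original answer).
import math
from scipy.integrate import quad   # assumes SciPy; a hand-rolled Riemann sum works too

closed_form = 3 * math.log(3) - 2 * math.log(2) - 1
numeric, _ = quad(lambda x: math.log(x + 1), 1, 2)
print(closed_form, numeric)   # both ~ 0.909543
```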
# Nonparametric changepoint detection for series with variable number of measurements across time I have been looking at a lot of recent changepoint detection algorithms ( *-PELT, NEWMA, ...) but it seems they all work on a single (or multiple) variable(s) that are composed of each a single value for each date. My problem is a bit different as I have a variable amount of values per "date" (could even be represented using CDF or KDE) and I'd like to detect changes in behavior of those values. (For example changes in mean, standard deviation, shape, etc). So instead of having series of single values, for example: x0 = 0.1 x1 = 0.5 x2 = 0.3 x3 = 0.4 x4 = 2.5 x5 = 2.1 x6 = 2.3 I instead have series of multiple values (count per "date" can change), for example: x0 = (0.1,0.11,0.45,0.26,...) x1 = (0.5,0.3,0.4,0.43,...) x2 = (0.3,0.2) x3 = (0.4,0.21,0.32,0.54) x4 = (2.5,2.1,2.65,2.57,...) x5 = (2.1,2.15,2.6,2.33, 2.41) x6 = (2.3, 2.12, 2.39, 2.54, 2.16) I had a few ideas but that I don't like very much: • Computing a descriptive statistic (mean, median, stddev) for each date, and apply changepoint detections to those • This can get quite expensive • This doesn't seem reliable • assign each value of a "date" to multiple fake "dates" • Can and will skew the results • There is a big loss of information Is there some algorithm that could could work with such data? Edit: http://www.jmlr.org/papers/v20/16-155.html Could be answering this question, still have to read it. • What does a "changepoint" look like in this context? In a univariate or regular multivariate problem what is changing is clear. Are the smaller dimensional dates simply missing values in some of the dimensions, i.e. x2=(NA,0.3,NA,NA,0.2,NA,...)? – adunaic Jul 29 '20 at 10:14 • No the values are measures of the same system, I just don't have the same amount of those per date. What I'm trying to detect is if there is a change in distribution of those values. – Lectem Jul 30 '20 at 9:50 • So is it that there are several measurements per day but taken at different times in the day and you have just stacked them by day? If you have the individual times within the day then you should create a univariate time series using that. If you don't have the individual times but know that the order is the same order they were collected then you can analyze them in that order without the dates, without loss of power. The only time this becomes a problem is if you are wanting to fit seasonal or auto-correlation in your changepoint model. – adunaic Jul 31 '20 at 12:42 • Well it's kind of a weird spot I think as my date isn't really a date but a version of the system being measured, and I could have multiple measurements of the same version happening at the same time or overlapping, but each measurement has multiple values. It's not easy to describe but basically my X axis is not the real time but the version/configuration of what is being measured. In theory I could even have a measurement done on day 1 for version A, another on day2 for version B, and then another one on day3 for version A. (each measurement yielding multiple values) – Lectem Jul 31 '20 at 21:31 • Take a look at the approach of Adams & MacKay: arxiv.org/abs/0710.3742 it is easily generalizable to data in any kind of space. And also the approach of Scargle: ui.adsabs.harvard.edu/abs/1998ApJ...504..405S/abstract – pglpm Aug 5 '20 at 8:10 If you are open to using R, here is a solution using mcp. 
mcp can infer the location of changes in means (worked examples), variances (worked example), autocorrelation (worked example), and any combination of these. Set up data: x0 = c(0.1,0.11,0.45,0.26) x1 = c(0.5,0.3,0.4,0.43) x2 = c(0.3,0.2) x3 = c(0.4,0.21,0.32,0.54) x4 = c(2.5,2.1,2.65,2.57) x5 = c(2.1,2.15,2.6,2.33, 2.41) x6 = c(2.3, 2.12, 2.39, 2.54, 2.16) df = data.frame( x = c(rep(0, length(x0)), rep(1, length(x1)), rep(2, length(x2)), rep(3, length(x3)), rep(4, length(x4)), rep(5, length(x5)), rep(6, length(x6))), y = c(x0, x1, x2, x3, x4, x5, x6) ) We model this as a single change in intercept and a change in variance: model = list( y ~ 1 + sigma(1), # intercept and variance ~ 1 + sigma(1) # new intercept and new variance ) fit = mcp(model, df, par_x = "x") Here are some plots of the fitted means (left) and variances (right): plot(fit) + plot(fit, which_y = "sigma") Here are the parameter estimates: > summary(fit) # Population-level parameters: # name mean lower upper Rhat n.eff # cp_1 3.50 3.053 4.00 1 6433 # int_1 0.32 0.239 0.41 1 5015 # int_2 2.35 2.229 2.47 1 5276 # sigma_1 0.15 0.095 0.22 1 3067 # sigma_2 0.22 0.139 0.32 1 3171 cp_1 is the change point, int_k is the intercept in segment $$k$$ and sigma_k is the variance in segment $$k$$. In this case, the change in intercept clearly is very informative while the change in variance is less pronounced. You can also model continuously increasing variances (y ~ sigma(1 + x), etc.
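A possible Python counterpart to the "summary statistic per date" idea from the question - this is only a sketch, it is not from the answer above, and it assumes the ruptures package (its Pelt detector); substitute whatever changepoint routine you prefer.

```python
# Sketch only (not from the answer above): summarise each "date" by a few quantiles and
# run an off-the-shelf multivariate changepoint detector on that summary series.
# Assumes the `ruptures` package; any detector could be substituted.
import numpy as np
import ruptures as rpt

dates = [
    [0.1, 0.11, 0.45, 0.26],
    [0.5, 0.3, 0.4, 0.43],
    [0.3, 0.2],
    [0.4, 0.21, 0.32, 0.54],
    [2.5, 2.1, 2.65, 2.57],
    [2.1, 2.15, 2.6, 2.33, 2.41],
    [2.3, 2.12, 2.39, 2.54, 2.16],
]

# One row per date: a coarse summary of that date's empirical distribution
# (location, spread and some shape, regardless of how many values the date has).
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
signal = np.array([np.quantile(vals, qs) for vals in dates])

algo = rpt.Pelt(model="rbf", min_size=2).fit(signal)
print(algo.predict(pen=1))   # expected to flag the jump before the 5th date (index 4); tune pen
```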
# Topic of The Week [frozen due to lack of community support] This is the second quarter 2014 Topic of The Week thread, where we are gathering suggestions for which of our main site tags should receive more attention and we should focus on providing more contents for them during the week they're featured. Old threads are here: We'll be gathering suggestions for future ToTW roughly each quarter into a new thread to reduce clutter, as new suggestions in old threads tend to not attract as many votes being pushed on the page down low and barely visible. Rules are still the same and simple enough: Each Sunday, one of the mods will pick a highly voted answer to this question for which to select as the topic of the week. This will keep going as long as there is interest, both from the mods and the community as a whole. During that week, we encourage all of the active members to ask at least one question on that specific topic. Do research if you have to. In the end, we will be able to fill out this site, to include things which we don't have enough activity yet. Please give a single tag per question which you feel is underrated, and give an explanation of that tag to the group, so everyone can understand enough to start asking questions. Linking relevant Wikipedia articles can be of help, or other useful starting points for research. Finally, if you have a request for a specific week, then please include it in your answer. For instance, if there is a significant anniversary, a new launch, flyby, etc, then it seems logical we should have a topic of the week built around that subject. Once this thread becomes active, we'll change its title to reflect currently running ToTW and feature it, so it shows in our Community Bulletin (the yellow box to the right hand side of our main site pages). Here are a few last selected ToTW to the list for easier reference (links on dates point to the suggestion), and we'll add new ones as they gather sufficient support by voting on them: Topics of The Week (currently running one in bold): Ideally, we need at least 3 upvotes on suggestions to make it as our ToTW! Please support those you're interested in and haven't yet been selected with your vote, and add your own below, each in a new answer, starting with the tag name (markup is in [tag:tag_name] format and links to main site tag with a given tag_name), followed by short description of it and optionally why you believe it should be selected and/or links to any already existing example questions, so it's clearer what you had in mind. Used on the week of Apr. 20 - Apr. 26, 2014 It seems we could come up with more questions than we have about various near-term targets for sample return and the different challenges that have to be overcome to accomplish a sample return mission. Specifically, we could talk about the Mars 2020 rover and details on how it plans to returns samples, assuming specific plans or ideas are out. Used on the week of Apr. 27 - May 03, 2014 Since space exploration on stack is still in it's beta I'd guess upping the sits hit count would be a good move; space debris is a hot topic in the media and will continue to be a hot topic as the problem persists/agencies spend money trying to fix it. It would be good to see a few question on the modelling of space debris - the environment evolution is a pretty well define (albeit inaccurately predicted) topic in of it's own right. 
My main thinking is a lot of people have questions about space debris, and the space industry seems to be putting some real money behind the problem now! No specific date for when to have this as a topic of the week (unless there's a massive in orbit collision - but if you could predict those then they wouldn't exist!). There are currently 24 questions with the debris tag: But I think we could knock out a cool 100 or so without too much of a hitch. I find spaceflight related terminology simply fascinating. There's so much history and tradition associated with it, that just investigating deeper meaning and historical use of some of the frequently used terms often reveals whole stories. Since they borrow from nearly any field imaginable, scientific or otherwise, I believe this could be interesting for all of us. Some example questions discussing spaceflight terminology from a variety of sources: There's currently only 14 terminology related questions, and even adding to these a few of those asking about the countdown procedures, polling station acronyms, and terminology used with that, I believe we could have many, many more. From mythology, history, various fields of science, to popular culture, spaceflight terminology doesn't seem shy in coining new terms from nearly anywhere, and more often than not, there's an interesting story behind them, too. I was a bit surprised that we only have a single question tagged as space observatory, but we do have a few other questions about , , , , and so on, so these could use a few additional tags so they're easier to find and score better on search engines. That could be one reason for selecting this as one of our ToTW. For orbiters, we don't even have a specific tag yet, which is also surprising, but is doing quite well with roughly 40 questions. This seems a bit too selective and I suspect some of those could use retagging. But my main idea is that we often only discuss orbits, trajectories, station-keeping, technical troubles, life-expectancy, and limits of specific space observatories and orbiters, and less so their main purpose, discoveries made and scientific literature their data was used for and help us better understand our solar system and beyond. And there are some absolutely fantastic observatories already deployed (Hubble, Chandra, Kepler, Herschel, Planck, Messenger, Stereo, SDO,...), some that have just now started doing their science or are about to (Gaia, LADEE, Juno,...), and even more fantastic ones that will relatively soon be joining them (Webb, BepiColombo, LISA Pathfinder, Juice,..). Well, soon in complex and expensive space exploration mission terms. Still, let's discuss these more, create new, more specific tags for them, and retag those that could use that. Drag augmentation is probably a new term for most, so I thought it would be an interesting concept to discuss a bit more on our site. If I had to define it, it would go something like this: Drag augmentation is one of the proposed space debris mitigation methods of de-orbiting Low Earth Orbit (LEO) space debris and defunct satellites that lost means of active propulsion or control, by increasing their atmospheric drag so their orbits decay faster than they would naturally, and artificially increasing their cross-sectional area and with it their air resistance. How could this be done? Some proposed methods are using gossamer (very light fabric) drag augmentation structures and deployable sails, while others suggest spraying debris with adhesive expanding foam (PDF). 
And there might be other methods, each with their own set of challenges. Since we currently don't have any questions tagged as , I've temporarily made it a synonym of , so it's available in the list before the tag gets its own proper questions. I'll make it a standalone tag and add a description for it as soon as we get some questions. Some other suggested tags it could be used together with are , and . Questions regarding mission or spacecraft plans which are not material, but are only concepts. We currently have only 5 questions tagged as conceptual but that doesn't seem to be a good representative number of how many questions asking about space exploration conceptual projects and ideas that are still competing for attention (funding, approval, feasibility studies,...) we actually have. For example, certainly falls in this category, and there are other examples, so if this suggestion is selected, we could take this opportunity to add this tag to questions it applies to. It is also a broad topic, and while I believe that draming big is an important part of development of space technologies and outlining possible missions, I also believe it is important to clearly classify it as such, so they can be considered for what they are - concepts.
# Show that $\operatorname{rank}(A)=\operatorname{rank}(B)$ [duplicate] Let $A$ and $B$ be two $n\times n$ real matrices, satisfying $A^2=A,\,\, B^2=B$. Suppose $A+B-I$ is invertible. Show that $\operatorname{rank}(A)=\operatorname{rank}(B)$. Since $A^2=A,\,\, B^2=B$ we see that $A$, $B$ are either singular matrices or matrices with determinant $1$. Any ideas on how to proceed from here? • These matrices are called idempotent, and with the exception of the identity matrix these matrices are singular – Naweed G. Seldon May 3 '18 at 23:07 • noted.But even if they are singular, that doesnt their ranks are the same. – DRPR May 3 '18 at 23:08 Let $C=I-A$. Then $$C^2 =(I-A)^2=I-2A+A^2=I-A=C\ .$$ So $C$ is a projection onto its image. An element $v$ in the vector space $V=\Bbb R^n$ has then the unique decomposition $v=(I-A)v+Av$ as a sum of an element in the kernel of $A$, $(I-A)v$, and an element in the image of $A$, $Av$. The kernel of $A$ is the image of $C$ and conversely. So we have to show that if $B-C$ is invertible, the $B,C$ have complementary dimensions for the kernels. Or for the images, whatever we prefer. If the kernels have sum of dimensions $>n$, we find a non-zero $v$ in the intersection of these kernels, so $(B-C)v=Bv-Cv=0-0=0$, contradiction. If the kernels have sum of dimension $<n$, then the images exceed, we find a non-zero $w$ with $w=Bw=Cw$, so $(B-C)w=0$. Contradiction again. If $A^2=A$ we have that the minimum polynomial of $A$ has distinct roots and so $A$ is diagonalizable and its only eigenvalues are $1$ and $0$. We conclude the geometric multiplicities of eigenvalues $1$ and $0$ add $n$. So if we suppose $\dim(\ker(A)) < \dim(\ker(B))$ we conclude the eigenspace of $1$ of $A$ and the kernel of $B$ intersect non-trivially. Any such non-zero vector in this intersection is an eigenvector for the eigenvalue $1$ of $A+B$. Or equivalently, is in the kernel of $A+B-I$.
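A numerical illustration, not part of either answer: build random idempotent matrices of known ranks and check that the ranks agree whenever $A+B-I$ is invertible.

```python
# Numerical illustration (not part of either answer): generate random idempotent matrices
# of chosen ranks and check that whenever A + B - I is comfortably invertible, ranks agree.
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_projection(rank):
    """A random n x n idempotent matrix of the given rank: M diag(1,..,1,0,..,0) M^{-1}."""
    M = rng.normal(size=(n, n))
    D = np.diag([1.0] * rank + [0.0] * (n - rank))
    return M @ D @ np.linalg.inv(M)

for _ in range(500):
    rA, rB = rng.integers(0, n + 1, size=2)
    A, B = random_projection(int(rA)), random_projection(int(rB))
    smallest_sv = np.linalg.svd(A + B - np.eye(n), compute_uv=False)[-1]
    if smallest_sv > 1e-6:                       # A + B - I clearly invertible
        assert rA == rB, (rA, rB)                # rank(A) == rank(B), as the proofs show
print("no counterexample found in 500 random trials")
```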
# Calculate Standard Error of Measurement

## True Scores

Every test score can be thought of as the sum of two independent components, the true score and the error score. Every time a student takes a test there is a possibility that the raw score differs from the student's true score; the difference between the observed score and the true score is called the error score. For example, if a student's true score is 107 and his observed score is 105, the error score is -2. For the sake of simplicity, we assume that there is no partial knowledge of any of the answers - for a given question a student either knows the answer or guesses - and that there is no learning over tests, which, of course, is not really true.

## Estimating Errors

Giving the same test to the same person over and over is not a practical way of estimating the amount of error in the test, since one cannot assume that there are no practice effects. Instead, the following formula is used to estimate the standard error of measurement:

SEM = SDo x sqrt(1 - r)

where SDo is the observed standard deviation and r is the reliability. Taking the extremes, if the reliability is 0 then the standard error of measurement is equal to the standard deviation of the test; if the reliability is perfect (1.0) then the standard error of measurement is 0. As the reliability increases, the SEM decreases; as r gets smaller or SDo gets larger, the SEM gets larger. Note that reliability is not a property of a test per se but the reliability of a test in a given population. For example, if a test with 50 items has a reliability of .70, then the reliability of a test that is 1.5 times longer (75 items) is predicted by the Spearman-Brown formula to be 1.5(.70) / (1 + 0.5(.70)) ≈ .78. It also matters whether you want SEM agreement or SEM consistency, since these rest on different reliability estimates.

## Confidence Intervals

The SEM is in standard deviation units and can be related to the normal curve. Relating the SEM to the normal curve, using the observed score as the mean, allows educators to determine the range of scores within which the student's true score is likely to fall. For example, a student with an observed score of 25 on a test whose SEM is 2 can be about 99% (or ±3 SEMs) certain that his true score falls between 19 and 31. It is important to understand the implications of the role the variance of true scores plays in the definition of reliability.

## Reliability and Validity

Measurement of some characteristics, such as height and weight, is relatively straightforward. The smaller the standard deviation, the closer the scores are grouped around the mean and the less variation there is. In most contexts, items which about half the people get correct are the best (other things being equal). The validity of a test refers to whether the test measures what it is supposed to measure. If a test included primarily questions about American history, it would have little or no face validity as a test of Asian history. A test has convergent validity if it correlates with other tests that are also measures of the construct in question; this could happen if the other measure were a perfectly reliable test of the same construct as the test in question. More information on reliability is available from William Trochim's Knowledge Source.

## Standard Error of the Mean (a related but different quantity)

Dividing the sample standard deviation by the square root of the sample size gives the standard error of the mean, also often abbreviated SE or SEM; an estimate with a lower standard error is a more precise measurement. Example problem: estimate the standard error for the sample data 78.53, 79.62, 80.25, 81.05, 83.21, ...
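A minimal worked example of the SEM formula quoted above; the standard deviation and reliability are illustrative values (chosen so the ±3 SEM band reproduces the 19-to-31 example), not data from any particular test.

```python
# Small worked example of the formula quoted above, SEM = SDo x sqrt(1 - r).
# The numbers are illustrative (chosen so that +/-3 SEM around a score of 25 gives 19..31).
import math

def sem(sd_observed, reliability):
    return sd_observed * math.sqrt(1 - reliability)

sd_o, r, observed = 10.0, 0.96, 25
e = sem(sd_o, r)                      # 10 * sqrt(0.04) = 2.0
for k, label in [(1, "68%"), (2, "95%"), (3, "99%")]:
    print(f"{label} band: {observed - k * e:.0f} to {observed + k * e:.0f}")
# -> 23 to 27, 21 to 29, and 19 to 31 respectively
```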
]> 16.2 Linear Restoring Force 16.2 Linear Restoring Force An ordinary spring has behavior described by a linear restoring force. The spring possesses a normal length, $x e$ , and if stretched or compressed, it experiences a force of strength $k | x − x e |$ pushing or pulling it back toward its normal position. This can be described by the following force law $F = − k ( x − x e )$ We can describe such a force by a potential energy function $V$ given by $k ( x − x e ) 2 2$ , and so again have conservation of energy. If the motion is characterized by mass $m$ , the differential equation obeyed by the system is $m d 2 x d t 2 = F = − k ( x − x e )$ which can be rewritten as $d 2 ( x − x e ) d t 2 = − k m ( x − x e )$ This is exactly the differential equation obeyed by $sin ⁡ w t$ with $w 2 = k m$ (and $cos ⁡ w t$ as well). We may therefore conclude that $x − x 0 = c sin ⁡ ( w ( t − t 0 ) + d )$ where $c$ and $d$ are constants that depend on the initial values of position and velocity of the spring .The general solution can also be written as the sum of a sine and a cosine term. This system has the interesting property that the parameter $w$ that appears here, which by the way is a measure of the frequency of the sinusoidal motion, and its period, depends only on $m$ and $k$ , and not on the initial conditions. The Potential Energy function for this system is $− k ( x − x e ) 2 2$ . You can again use the conservation of energy to deduce the relation between speed and extension of the spring once you know the energy in the system. The solution here, that the spring oscillates on for ever is obviously unrealistic. Springs stop moving. This is because the model we have just used obeys conservation of energy in the spring motion, while in reality there is air resistance and some friction in the spring motion, and some of the energy in it when you start gets converted into heat. Similarly air resistance changes the motion of a thrown ball, for example in the constant gravity case. We will consider force laws that model friction and air resistance soon. First we consider an important reformulation of Newton's Laws of motion that can be applied to conservative systems, that is those that are like the two we have considered here, in which energy is conserved. To understand a physical system you want to develop a feeling for how it behaves when left alone, and how that depends on its parameters, and also how it responds to outside stimuli, that is outside forcing functions. At first glance you might think that there are so many different possible stimuli that this is an impossible task. The standard approach to investigating this response is to examine response to a periodic forcing function at a single frequency as a function of that frequency. We then hope to describe response to more general stimuli by using this information. This is done by a process called Fourier Analysis, which is briefly discussed in Section 30.6 . In Section 33.3 we describe how to solve the second order differential equation next mentioned, on a spreadsheet. It is quite easy to do, and you can use the method discussed there to investigate behavior of an oscillator. The differential equation for the oscillator can be written as $M x " = − k x − f x ' + c sin ⁡ w t$ where $x$ is the variable hitherto called $x − x 0 , M$ is the spring mass, $k$ its spring constant, $f$ measures the frictional loss experienced by the system, $w$ is the frequency of the forcing function and $c$ its amplitude. In the free system, $c$ is 0. 
Exercises:

16.1 First follow the scheme described in Section 33.3 to examine what happens when there is no friction, so that $f = 0$. You can choose your scale for $x$ such that $M = 1$ and your scale for $t$ such that $k = 1$ as well. You can chart $x$ and $x'$ vs $t$, and $x$ vs $x'$, and look at the following questions:

16.2 What happens when you multiply $k$ by 4?

16.3 Now what happens when you introduce a small positive value for $f$?

16.4 What happens as you increase $f$ with $k$ and $M$ fixed?

The oscillation with $f > 0$ and $c = 0$ is said to be transient. When $f$ and $c$ are both positive, there is a transient behavior not unlike that for $c = 0$, but also a steady state response to the forcing function. An interesting parameter is the ratio of the amplitude of the steady state response to $c$, as a function of $w$ (with the other parameters fixed).

16.5 Find the value of $w$ which makes this parameter maximum, for $M = k = 1$ and $f = 0.1$.

16.6 What happens to it when you increase $f$ to 0.2? 0.3? 0.5? 1?
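For exercises 16.5 and 16.6, one way to estimate the steady-state amplitude ratio is to run the simulation long enough for the transient to die out and then measure the remaining oscillation. A rough sketch, reusing simulate() from the block above (the settling time and frequency grid are arbitrary assumptions):

```python
# Sweep the forcing frequency w and record the steady-state amplitude ratio (response / c).
def amplitude_ratio(w, M=1.0, k=1.0, f=0.1, c=1.0):
    traj = simulate(M=M, k=k, f=f, c=c, w=w, x0=0.0, v0=0.0, dt=0.01, t_end=200.0)
    tail = [x for (t, x, v) in traj if t > 150.0]   # discard the transient part
    return (max(tail) - min(tail)) / (2.0 * c)

ws = [0.05 * i for i in range(1, 41)]               # w from 0.05 to 2.0
ratios = [(w, amplitude_ratio(w)) for w in ws]
print(max(ratios, key=lambda p: p[1]))              # peak expected near w = sqrt(k/M) for small f
```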
# How can I know the relative number of moles of each substance with chemical equations? May 13, 2014 The relative numbers of moles (called the stoichiometry) are given by the numbers in front of the chemical formulae in the equation - the numbers used to balance the equation. For example, the reaction of sodium with oxygen is represented by the balanced equation: 4Na + $O_2$ --> 2$Na_2O$. This equation reads "four moles of sodium react with one mole of oxygen to produce two moles of sodium oxide". Of course you can now use these relative numbers of moles to scale any given quantities up or down, e.g. 0.2 moles of sodium will produce 0.1 moles of sodium oxide, or 20 moles of oxygen will react with 80 moles of sodium...
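If you want to do this scaling mechanically, a tiny helper works; the coefficient dictionary below is just the balanced equation above, and the function name is only illustrative:

```python
# Scale the 4 Na + O2 -> 2 Na2O ratios to any given amount.
coeffs = {"Na": 4, "O2": 1, "Na2O": 2}

def moles(known_species, known_moles, wanted_species):
    """Moles of wanted_species implied by known_moles of known_species."""
    return known_moles * coeffs[wanted_species] / coeffs[known_species]

print(moles("Na", 0.2, "Na2O"))   # 0.1
print(moles("O2", 20, "Na"))      # 80.0
```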
# How to compute the average distance till intersection within a triangle in R^2? Lots of simple questions because I am a noob. You are given 3 points in R^2; A, B, C forming a triangle with area > 0. You pick an arbitrary point inside ABC and an arbitrary direction. After some distance d you will intersect some side of the triangle. The task is to compute the expected d for a given triangle. Also, it would be nice to know the whole distribution. What kind of distribution (e.g. uniform, normal) would it be? Next, what if I have a complex polygon? Can I combine my knowledge from individual triangles somehow to compute mean and distr. for the convex polygon? Finally, how about a non-convex polygon? - If you call the random variable you described X, then it's much easier to compute the expected value of $X^2$. Indeed, for any point P inside triangle ABC, the expected value of $X^2$, where X is the length of a ray through P intersected with the interior of ABC, is just the area of ABC divided by π; imagine trying to compute the area of ABC using polar coordinates with origin at P. You can use a similar trick to write the expected value of X in terms of the average value of 1/d(P, Q) where P and Q are selected uniformly at random from the interior of triangle ABC, but this doesn't seem terribly helpful. - We can start by solving the problem for an arbitrary triangle and a fixed direction, say the upward direction, as we can always rotate the whole figure into that position. In the generic case, no edge of the triangle is horizontal. Thus the triangle has a "top" vertex at (0, 0) -- this is the vertex with the largest y-coordinate. Without loss of generality, rescale so that the edge opposite the vertex at (0, 0) passes through (0, -1). We'll say that the "height" of this triangle is 1, where by height we mean the length of the vertical dropped from the top vertex. Then the set of points of vertical distance at least r from the top of the original triangle forms a triangle of height 1-r, similar to the original triangle. Thus $$Prob(\hbox{vertical distance from top } > r) = (1-r)^2$$ and this gives the distribution. In particular the expected value of the vertical distance from a random point to the top of the triangle (after the rescaling given above) is 1/3. If you want to pick a random direction, though, then this gets a lot harder because the rescaling step will act differently for different directions. - Trying to draw this. What is r? –  S. Donovan Jan 2 '10 at 5:30 - For a triangle, this seems relatively easy: construct the Voronoi diagram of the sides, which in this case is merely the partition formed by the angle bisectors at each vertex. They intersect at the center of the incircle, forming three triangles. In each triangle, the 'distance to boundary' is really the distance to the corresponding side of the original triangle, and the answer is a straight integration. - I think the ray is in a uniformly random direction, not towards the nearest point on the boundary. –  Reid Barton Jan 2 '10 at 5:04 ah dang. good point. missed that –  Suresh Venkat Jan 2 '10 at 5:08 x is in the triangle. Draw segments that connect x to each vertex and segments perpendicular to each side from x. Then, starting from the nearest perpendicular segment, measure the angle formed by your vector and call it $\theta$. Note the length of the perpendicular, call it R. Then the distance, from x to the side in the direction of the vector, is: $R \sec (\theta)$. - This works for the convex... but not for the non-convex. –  S.
Donovan Jan 2 '10 at 5:03
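The closed-form claims above are easy to sanity-check numerically. Here is a minimal Monte Carlo sketch for a concrete triangle; the specific triangle, sample size and tolerances are arbitrary assumptions, and the last line compares the estimate of E[X^2] with the area/π value from the first answer:

```python
# Monte Carlo estimate of E[X] and E[X^2] for a random point and random direction in a triangle.
import math, random

def ray_length(P, d, tri):
    """Distance from P along unit direction d to the boundary of triangle tri."""
    best = math.inf
    for i in range(3):
        A, B = tri[i], tri[(i + 1) % 3]
        ex, ey = B[0] - A[0], B[1] - A[1]
        det = d[0] * (-ey) + d[1] * ex          # solve P + t d = A + s (B - A)
        if abs(det) < 1e-12:
            continue
        rx, ry = A[0] - P[0], A[1] - P[1]
        t = (rx * (-ey) + ry * ex) / det
        s = (d[0] * ry - d[1] * rx) / det
        if t > 1e-12 and -1e-9 <= s <= 1 + 1e-9:
            best = min(best, t)
    return best

def sample_point(tri):
    """Uniform point inside the triangle."""
    u, v = random.random(), random.random()
    if u + v > 1:
        u, v = 1 - u, 1 - v
    A, B, C = tri
    return (A[0] + u * (B[0] - A[0]) + v * (C[0] - A[0]),
            A[1] + u * (B[1] - A[1]) + v * (C[1] - A[1]))

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
area = 0.5
n, tot, tot2 = 100_000, 0.0, 0.0
for _ in range(n):
    P = sample_point(tri)
    th = random.uniform(0.0, 2.0 * math.pi)
    L = ray_length(P, (math.cos(th), math.sin(th)), tri)
    tot, tot2 = tot + L, tot2 + L * L
print("E[X]   ~", tot / n)
print("E[X^2] ~", tot2 / n, " vs area/pi =", area / math.pi)
```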
## New publication in Topics in Catalysis | categories: | tags: | View Comments Single atom alloys are alloys in the extreme dilute limit, where single atoms of a reactive metal are surrounded by comparatively unreactive metals. This makes the single reactive atoms like single atom sites where reactions can occur. These sites are interesting because they are metallic, but their electronic structure is different than the atoms in more concentrated alloys. This means there is the opportunity for different, perhaps better catalytic performance for the single atom alloys. In this paper, we studied the electronic structure and some representative reaction pathways on a series of single atom alloy surfaces. @article{Thirumalai2018, author = "Thirumalai, Hari and Kitchin, John R.", title = "Investigating the Reactivity of Single Atom Alloys Using Density Functional Theory", journal = "Topics in Catalysis", year = "2018", month = "Jan", day = "25", abstract = "Single atom alloys are gaining importance as atom-efficient catalysts which can be extremely selective and active towards the formation of desired products. They possess such desirable characteristics because of the presence of a highly reactive single atom in a less reactive host surface. In this work, we calculated the electronic structure of several representative single atom alloys. We examined single atom alloys of gold, silver and copper doped with single atoms of platinum, palladium, iridium, rhodium and nickel in the context of the d-band model of Hammer and N{\o}rskov. The reactivity of these alloys was probed through the dissociation of water and nitric oxide and the hydrogenation of acetylene to ethylene. We observed that these alloys exhibit a sharp peak in their atom projected d-band density of states, which we hypothesize could be the cause of high surface reactivity. We found that the d-band centers and d-band widths of these systems correlated linearly as with other alloys, but that the energy of adsorption of a hydrogen atom on these surfaces could not be correlated with the d-band center, or the average reactivity of the surface. Finally, the single atom alloys, with the exception of copper--palladium showed good catalytic behavior by activating the reactant molecules more strongly than the bulk atom behavior and showing favorable reaction pathways on the free energy diagrams for the reactions investigated.", issn = "1572-9028", doi = "10.1007/s11244-018-0899-0", url = "https://doi.org/10.1007/s11244-018-0899-0" } org-mode source Org-mode version = 9.1.6 ## New publication in Molecular Simulation | categories: | tags: | View Comments This paper is our latest work using neural networks in molecular simulation. In this work, we build a Behler-Parinello neural network potential of bulk zirconia. The potential can describe several polymorphs of zirconia, as well as oxygen vacancy defect formation energies and diffusion barriers. We show that we can use the potential to model oxygen vacancy diffusion using molecular dynamics at different temperatures, and to use that data to estimate the effective diffusion activation energy. This is further evidence of the general utility of the neural network-based potential for molecular simulations with DFT accuracy. @article{wang-2018-densit-funct, author = {Chen Wang and Akshay Tharval and John R. 
Kitchin}, title = {A Density Functional Theory Parameterised Neural Network Model of Zirconia}, journal = {Molecular Simulation}, volume = 0, number = 0, pages = {1-8}, year = 2018, doi = {10.1080/08927022.2017.1420185}, url = {https://doi.org/10.1080/08927022.2017.1420185}, eprint = { https://doi.org/10.1080/08927022.2017.1420185 }, publisher = {Taylor \& Francis}, } org-mode source Org-mode version = 9.1.5 ## 2017 in a nutshell for the Kitchin Research group | categories: news | tags: | View Comments Since the last update a lot of new things have happened in the Kitchin Research group. Below are some summaries of the group accomplishments, publications and activities for the past year. ## 1 Student accomplishments Jacob Boes completed his PhD and began postdoctoral work with Thomas Bligaard at SLAC/Suncat at Stanford. Congratulations Jake! Four new PhD students joined the group: 1. Jenny Zhan will work on simulation of molten superalloys 2. Mingjie Liu will work on the design of single atom alloy catalysts 3. Yilin Yang will work on segregation in multicomponent alloys under reaction conditions 4. Zhitao Guo is also joining the group and will be co-advised by Prof. Gellman. He will work on multicomponent alloy catalysts. Welcome to the group! ## 2 Publications Our publications and citation counts have continued to grow this year. Here is our current metrics according to Researcher ID. We have eight new papers that are online, and two that are accepted, but not online yet. There are brief descriptions below. ### 2.1 Collaborative papers larsen-2017-atomic-simul This is a modern update on the Atomic Simulation Environment Python software. We have been using and contributing to this software for about 15 years now! saravanan-2017-alchem-predic This collaborative effort with the Keith group at UPitt and Anatole von Lilienfeld explored a novel approach to estimating adsorption energies on alloy surfaces. xu-2017-first-princ We used DFT calculations to understand epitaxial stabilization of titania films on strontium titanate surfaces. wittkamper-2017-compet-growt We previously predicted that tin oxide should be able to form in the columbite phase as an epitaxial film. In this paper our collaborators show that it can be done! kitchin-2017-autom-data This paper finally came out in print. It shows an automated approach to sharing data. Also, it may be the only paper with data hidden inside a picture of a library in the literature. ### 2.2 Papers on neural networks in molecular simulation boes-2017-neural-networ We used neural networks in conjunction with molecular dynamics and Monte Carlo simulations to model the coverage dependent adsorption of oxygen and initial oxidation of a Pd(111) surface. boes-2017-model-segreg We used neural networks in conjunction with Monte Carlo simulations to model segregation across composition space for a Au-Pd alloy. geng-2017-first-princ We used a cluster expansion with Monte Carlo simulations to resolve some inconsistencies in simulated Cu-Pd phase diagrams. There is an interesting transition from an fcc to bcc to fcc structure across the composition space that is subtle and difficult to compute. ### 2.3 Papers accepted in 2017 but not yet in press 1. Chen Wang, Akshay Tharval, John R. Kitchin, A density functional theory parameterized neural network model of zirconia, Accepted in Molecular Simulation, July 2017. 2. Hari Thirumalai, John R. 
Kitchin, Investigating the Reactivity of Single Atom Alloys using Density Functional Theory, Topics in Catalysis, Accepted November 2017. ## 3 New courses After a five year stint of teaching Master's and PhD courses, I taught the undergraduate chemical engineering course again. This was the first time I taught the course using Python. All the lectures and assignments were in Jupyter notebooks. You can find the course here: https://github.com/jkitchin/s17-06364. The whole class basically ran from a browser using a Python Flask app to serve the syllabus, lectures and assignments. Assignments were submitted and returned by email through the Flask app. It was pretty interesting. I did not like it as much as using Emacs/org-mode like I have in the past, but it was easier to get 70 undergraduates up and running. I did not teach in the Fall, because I was on Sabbatical! In August 2017 I started my first sabbatical! I am spending a year in the Accelerated Science group at Google in Mountain View, California. I am learning about machine learning applications in engineering and science. This is a pivotal year in my research program, so stay tuned for our new work! It has been great for my family, who moved out here with me. We have been seeing a lot of California. I have been biking to work almost every day, usually 15-20 miles. I have logged over 1200 commuting miles already since August. ## 5 Emacs and org-mode org-ref remains in the top 15% of downloaded MELPA packages, with more than 24,000 downloads since it was released. It has been pretty stable lately. It remains a cornerstone of my technical writing toolbox. I have spent some time improving org-mode/ipython interactions including inline images, asynchronous execution and export to jupyter notebooks. It is still a work in progress. I spent a fair bit of time learning about dynamic modules for writing compiled extensions to Emacs to bring features like linear algebra, numerical methods and database access to it. I wish I had more time to work on this. I think it will be useful to make org-mode even better for scientific research and documentation. ## 6 Social media I have continued exploring the use of social media to share my work. It still seems like a worthwhile use of time, but we need continued efforts to make this really useful for science. ### 6.1 kitchingroup.cheme.cmu.edu I use my blog to share technical knowledge and news about the group. We had 48 blog posts in 2017. A lot of them were on some use of org-mode and Emacs. I also introduced a new exporter for org-mode to make jupyter notebooks. I spent November exploring automatic differentiation and applications of it to engineering problems. Visits to the site continue to grow. Here is the growth over the past two years. The big spike in Oct 2017 is from this article on Hacker News about one of my posts! I continue to think that technical blogging is a valuable way to communicate technical knowledge. It provides an easy way to practice writing, and with comments enabled to get feedback on your ideas. It has taken several years to develop a style for doing this effectively that is useful to me, and to others. I have integrated my blog into Twitter so that new posts are automatically tweeted, which helps publicize the new posts. It has some limitations, e.g. it is not obvious how to cite them in ways that are compatible with the current bibliometric driven assessment tools used in promotion and tenure. 
Overall, I find it very complementary to formal publications though, and I wish more people did it. ### 6.2 Github I was a little less active on Github this year than last year, especially this fall as I started my sabbatical. Github remains my goto version control service though, and we continue using it for everything from code development and paper writing to course serving. scimax finally has more Github stars than jmax does! Another year with over 100,000 minutes of Youtube watch time on our videos. org-mode is awesome was most popular, with almost 50,000 views. We have six videos with over 2500 views for the past year! I have not made too many new videos this year. Hopefully there will be some new ones on the new features in scimax in the next year. org-mode source Org-mode version = 9.1.3 ## Solving an eigenvalue differential equation with a neural network | categories: | tags: | View Comments The 1D harmonic oscillator is described here. It is a boundary value differential equation with eigenvalues. If we let let ω=1, m=1, and units where ℏ=1. then, the governing differential equation becomes: $$-0.5 \frac{d^2\psi(x)}{dx^2} + (0.5 x^2 - E) \psi(x) = 0$$ with boundary conditions: $$\psi(-\infty) = \psi(\infty) = 0$$ We can further stipulate that the probability of finding the particle over this domain is equal to one: $$\int_{-\infty}^{\infty} \psi^2(x) dx = 1$$. In this set of equations, $$E$$ is an eigenvalue, which means there are only non-trivial solutions for certain values of $$E$$. Our goal is to solve this equation using a neural network to represent the wave function. This is a different problem than the one here or here because of the eigenvalue. This is an additional adjustable parameter we have to find. Also, we have the normalization constraint to consider, which we did not consider before. ## 1 The neural network setup Here we setup the neural network and its derivatives. This is the same as we did before. import autograd.numpy as np def init_random_params(scale, layer_sizes, rs=npr.RandomState(42)): """Build a list of (weights, biases) tuples, one for each layer.""" return [(rs.randn(insize, outsize) * scale, # weight matrix rs.randn(outsize) * scale) # bias vector for insize, outsize in zip(layer_sizes[:-1], layer_sizes[1:])] def swish(x): "see https://arxiv.org/pdf/1710.05941.pdf" return x / (1.0 + np.exp(-x)) def psi(nnparams, inputs): "Neural network wavefunction" for W, b in nnparams: outputs = np.dot(inputs, W) + b inputs = swish(outputs) return outputs psip = elementwise_grad(psi, 1) # dpsi/dx psipp = elementwise_grad(psip, 1) # d^2psi/dx^2 ## 2 The objective function The important function we need is the objective function. This function codes the Schrödinger equation, the boundary conditions, and the normalization as a cost function that we will later seek to minimize. Ideally, at the solution the objective function will be zero. We can't put infinity into our objective function, but it turns out that x = ± 6 is practically infinity in this case, so we approximate the boundary conditions there. Another note is the numerical integration by the trapezoid rule. I use a vectorized version of this because autograd doesn't have a trapz derivative and I didn't feel like figuring one out. We define the params to vary here as a dictionary containing neural network weights and biases, and the value of the eigenvalue. 
# Here is our initial guess of params: nnparams = init_random_params(0.1, layer_sizes=[1, 8, 1]) params = {'nn': nnparams, 'E': 0.4} x = np.linspace(-6, 6, 200)[:, None] def objective(params, step): nnparams = params['nn'] E = params['E'] # This is Schrodinger's eqn zeq = -0.5 * psipp(nnparams, x) + (0.5 * x**2 - E) * psi(nnparams, x) bc0 = psi(nnparams, -6.0) # This approximates -infinity bc1 = psi(nnparams, 6.0) # This approximates +infinity y2 = psi(nnparams, x)**2 # This is a numerical trapezoid integration prob = np.sum((y2[1:] + y2[0:-1]) / 2 * (x[1:] - x[0:-1])) return np.mean(zeq**2) + bc0**2 + bc1**2 + (1.0 - prob)**2 # This gives us feedback from the optimizer def callback(params, step, g): if step % 1000 == 0: print("Iteration {0:3d} objective {1}".format(step, objective(params, step))) ## 3 The minimization Now, we just let an optimizer minimize the objective function for us. Note, I ran this next block more than once, as the objective continued to decrease. I ran this one at least two times, and the loss was still decreasing slowly. params = adam(grad(objective), params, step_size=0.001, num_iters=5001, callback=callback) print(params['E']) Iteration 0 objective [[ 0.00330204]] Iteration 1000 objective [[ 0.00246459]] Iteration 2000 objective [[ 0.00169862]] Iteration 3000 objective [[ 0.00131453]] Iteration 4000 objective [[ 0.00113132]] Iteration 5000 objective [[ 0.00104405]] 0.5029457355415167 Good news, the lowest energy eigenvalue is known to be 0.5 for our choice of parameters, and that is approximately what we got. Now let's see our solution and compare it to the known solution. Interestingly we got the negative of the solution, which is still a solution. The NN solution is not indistinguishable from the analytical solution, and has some spurious curvature in the tails, but it is approximately correct, and more training might get it closer. A different activation function might also work better. %matplotlib inline import matplotlib.pyplot as plt x = np.linspace(-6, 6)[:, None] y = psi(params['nn'], x) plt.plot(x, -y, label='NN') plt.plot(x, (1/np.pi)**0.25 * np.exp(-x**2 / 2), 'r--', label='analytical') plt.legend() ## 4 The first excited state Now, what about the first excited state? This has an eigenvalue of 1.5, and the solution has odd parity. We can naively change the eigenvalue, and hope that the optimizer will find the right new solution. We do that here, and use the old NN params. params['E'] = 1.6 Now, we run a round of optimization: params = adam(grad(objective), params, step_size=0.003, num_iters=5001, callback=callback) print(params['E']) Iteration 0 objective [[ 0.09918192]] Iteration 1000 objective [[ 0.00102333]] Iteration 2000 objective [[ 0.00100269]] Iteration 3000 objective [[ 0.00098684]] Iteration 4000 objective [[ 0.00097425]] Iteration 5000 objective [[ 0.00096347]] 0.502326347406645 That doesn't work though. The optimizer just pushes the solution back to the known one. Next, we try starting from scratch with the eigenvalue guess. nnparams = init_random_params(0.1, layer_sizes=[1, 8, 1]) params = {'nn': nnparams, 'E': 1.6} step_size=0.003, num_iters=5001, callback=callback) print(params['E']) Iteration 0 objective [[ 2.08318762]] Iteration 1000 objective [[ 0.02358685]] Iteration 2000 objective [[ 0.00726497]] Iteration 3000 objective [[ 0.00336433]] Iteration 4000 objective [[ 0.00229851]] Iteration 5000 objective [[ 0.00190942]] 0.5066213334684926 That also doesn't work. We are going to have to steer this. 
The idea is pre-train the neural network to have the basic shape and symmetry we want, and then use that as the input for the objective function. The first excited state has odd parity, and here is a guess of that shape. This is a pretty ugly hacked up version that only roughly has the right shape. I am counting on the NN smoothing out the discontinuities. xm = np.linspace(-6, 6)[:, None] ym = -0.5 * ((-1 * (xm + 1.5)**2) + 1.5) * (xm < 0) * (xm > -3) yp = -0.5 * ((1 * (xm - 1.5)**2 ) - 1.5) * (xm > 0) * (xm < 3) plt.plot(xm, (ym + yp)) plt.plot(x, (1/np.pi)**0.25 * np.sqrt(2) * x * np.exp(-x**2 / 2), 'r--', label='analytical') Now we pretrain a bit. def pretrain(params, step): nnparams = params['nn'] errs = psi(nnparams, xm) - (ym + yp) return np.mean(errs**2) step_size=0.003, num_iters=501, callback=callback) Iteration 0 objective [[ 1.09283695]] Here is the new initial guess we are going to use. You can see that indeed a lot of smoothing has occurred. plt.plot(xm, ym + yp, xm, psi(params['nn'], xm)) That has the right shape now. So we go back to the original objective function. params = adam(grad(objective), params, step_size=0.001, num_iters=5001, callback=callback) print(params['E']) Iteration 0 objective [[ 0.00370029]] Iteration 1000 objective [[ 0.00358193]] Iteration 2000 objective [[ 0.00345137]] Iteration 3000 objective [[ 0.00333]] Iteration 4000 objective [[ 0.0032198]] Iteration 5000 objective [[ 0.00311844]] 1.5065724128094344 I ran that optimization block many times. The loss is still decreasing, but slowly. More importantly, the eigenvalue is converging to 1.5, which is the known analytical value, and the solution is converging to the known solution. x = np.linspace(-6, 6)[:, None] y = psi(params['nn'], x) plt.plot(x, y, label='NN') plt.plot(x, (1/np.pi)**0.25 * np.sqrt(2) * x * np.exp(-x**2 / 2), 'r--', label='analytical') plt.legend() We can confirm the normalization is reasonable: # check the normalization print(np.trapz(y.T * y.T, x.T)) [ 0.99781886] ## 5 Summary This is another example of using autograd to solve an eigenvalue differential equation. Some of these solutions required tens of thousands of iterations of training. The groundstate wavefunction was very easy to get. The first excited state, on the other hand, took some active steering. This is very much like how an initial guess can change which solution a nonlinear optimization (which this is) finds. There are other ways to solve this particular problem. What I think is interesting about this is the possibility to solve harder problems, e.g. not just a harmonic potential, but a more complex one. You could pretrain a network on the harmonic solution, and then use it as the initial guess for the harder problem (which has no analytical solution). org-mode source Org-mode version = 9.1.2 ## Solving ODEs with a neural network and autograd | categories: | tags: | View Comments In the last post I explored using a neural network to solve a BVP. Here, I expand the idea to solving an initial value ordinary differential equation. The idea is basically the same, we just have a slightly different objective function. $$dCa/dt = -k Ca(t)$$ where $$Ca(t=0) = 2.0$$. Here is the code that solves this equation, along with a comparison to the analytical solution: $$Ca(t) = Ca0 \exp -kt$$. 
import autograd.numpy as np def init_random_params(scale, layer_sizes, rs=npr.RandomState(0)): """Build a list of (weights, biases) tuples, one for each layer.""" return [(rs.randn(insize, outsize) * scale, # weight matrix rs.randn(outsize) * scale) # bias vector for insize, outsize in zip(layer_sizes[:-1], layer_sizes[1:])] def swish(x): "see https://arxiv.org/pdf/1710.05941.pdf" return x / (1.0 + np.exp(-x)) def Ca(params, inputs): "Neural network functions" for W, b in params: outputs = np.dot(inputs, W) + b inputs = swish(outputs) return outputs # Here is our initial guess of params: params = init_random_params(0.1, layer_sizes=[1, 8, 1]) # Derivatives k = 0.23 Ca0 = 2.0 t = np.linspace(0, 10).reshape((-1, 1)) # This is the function we seek to minimize def objective(params, step): # These should all be zero at the solution # dCadt = -k * Ca(t) zeq = dCadt(params, t) - (-k * Ca(params, t)) ic = Ca(params, 0) - Ca0 return np.mean(zeq**2) + ic**2 def callback(params, step, g): if step % 1000 == 0: print("Iteration {0:3d} objective {1}".format(step, objective(params, step))) step_size=0.001, num_iters=5001, callback=callback) tfit = np.linspace(0, 20).reshape(-1, 1) import matplotlib.pyplot as plt plt.plot(tfit, Ca(params, tfit), label='soln') plt.plot(tfit, Ca0 * np.exp(-k * tfit), 'r--', label='analytical soln') plt.legend() plt.xlabel('time') plt.ylabel('$C_A$') plt.xlim([0, 20]) plt.savefig('nn-ode.png') Iteration 0 objective [[ 3.20374053]] Iteration 1000 objective [[ 3.13906829e-05]] Iteration 2000 objective [[ 1.95894699e-05]] Iteration 3000 objective [[ 1.60381564e-05]] Iteration 4000 objective [[ 1.39930673e-05]] Iteration 5000 objective [[ 1.03554970e-05]] Huh. Those two solutions are nearly indistinguishable. Since we used a neural network, let's hype it up and say we learned the solution to a differential equation! But seriously, note that although we got an "analytical" solution, we should only rely on it in the region we trained the solution on. You can see the solution above is not that good past t=10, even perhaps going negative (which is not even physically correct). That is a reminder that the function we have for the solution is not the same as the analytical solution, it just approximates it really well over the region we solved over. Of course, you can expand that region to the region you care about, but the main point is don't rely on the solution outside where you know it is good. This idea isn't new. There are several papers in the literature on using neural networks to solve differential equations, e.g. http://www.sciencedirect.com/science/article/pii/S0255270102002076 and https://arxiv.org/pdf/physics/9705023.pdf, and other blog posts that are similar (https://becominghuman.ai/neural-networks-for-solving-differential-equations-fa230ac5e04c, even using autograd). That means to me that there is some merit to continuing to investigate this approach to solving differential equations. There are some interesting challenges for engineers to consider with this approach though. When is the solution accurate enough? How reliable are derivatives of the solution? What network architecture is appropriate or best? How do you know how good the solution is? Is it possible to build in solution features, e.g. asymptotes, or constraints on derivatives, or that the solution should be monotonic, etc. These would help us trust the solutions not to do weird things, and to extrapolate more reliably. org-mode source Org-mode version = 9.1.2
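The autograd snippets in the two posts above rely on a preamble and a few optimizer calls that are not shown in full. A minimal version that would make them runnable might look like the following; this is a reconstruction based on the autograd package as commonly packaged, not the authors' original code, and the exact import path for adam depends on the autograd version:

```python
# Assumed preamble for the autograd examples above (reconstruction, not copied from the posts).
import autograd.numpy as np
import autograd.numpy.random as npr
from autograd import grad, elementwise_grad
from autograd.misc.optimizers import adam   # some autograd releases expose this as autograd.optimizers

# Derivative helpers the snippets call, defined via elementwise_grad, e.g. for the ODE post:
# dCadt = elementwise_grad(Ca, 1)            # d(Ca)/dt
# psip  = elementwise_grad(psi, 1)           # dpsi/dx, as shown in the eigenvalue post

# Each training step has the same shape as the calls that do appear in full, e.g.:
# params = adam(grad(objective), params, step_size=0.003, num_iters=5001, callback=callback)
```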
## Trigonometry (11th Edition) Clone Published by Pearson # Chapter 8 - Complex Numbers, Polar Equations, and Parametric Equations - Section 8.1 Complex Numbers - 8.1 Exercises - Page 364: 90 #### Answer The solution set in standard form is $$\Big\{-\frac{3}{4}\pm\frac{\sqrt7}{4}i\Big\}$$ #### Work Step by Step $$2x^2+3x=-2$$ First, write the equation in standard form. $$2x^2+3x+2=0$$ Now use the quadratic formula. $$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$ As $a=2, b=3, c=2$ $$x=\frac{-3\pm\sqrt{3^2-4\times2\times2}}{2\times2}$$ $$x=\frac{-3\pm\sqrt{9-16}}{4}$$ $$x=\frac{-3\pm\sqrt{-7}}{4}$$ Now we rewrite $\sqrt{-7}=i\sqrt7$ $$x=\frac{-3\pm i\sqrt7}{4}$$ The solution set in standard form is $$\Big\{-\frac{3}{4}\pm\frac{\sqrt7}{4}i\Big\}$$
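A quick numerical cross-check of the roots is possible as well; this short sketch is only an illustration and is not part of the textbook's worked solution:

```python
# Numerical check of the roots of 2x^2 + 3x + 2 = 0.
import numpy as np

roots = np.roots([2, 3, 2])
print(roots)                 # approx -0.75 +/- 0.6614j, i.e. -3/4 +/- (sqrt(7)/4) i
print(np.sqrt(7) / 4)        # 0.6614...
```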
For Alison's approach: What happens to the numbers as you go down the rows? What happens as you go up the rows? For Bernard's approach: Which numbers end in a 0 in row $A_2$? Which numbers end in a 0 in row $A_3$? Which of these sequences will hit 1000? For Charlie's approach: Can you find a similar method to Charlie's to describe the other rows? Which descriptions include 1000?
What Do dx and dy Mean? We’ve looked at the meaning of the derivative, and of its various notations, including dy/dx. This leads to the next question: What does dx or dy mean on its own? This was touched on last time, but there’s a lot more to say that I couldn’t fit there. We’ll look at more advanced approaches to differentials in themselves, then at two perspectives on what they mean in integrals. Differentials as functions We’ll start with the page two of us referred to in our answers last time, which comes from 1998: Differentials I have to reach this conclusion: If you can get the differentials of a function, you can differentiate it, but if you can differentiate it, you can not necessarily get its differentials. Please help. As we’ve already seen, differentials can be discussed from several different perspectives. This question, lacking clear context, doesn’t indicate what kind of function is in view, or what approach to differentials is being taken. What does it mean here to “get a function’s differentials”? Doctor Jerry answered by suggesting one possible context, giving a definition that is  quite different from what we’ve seen so far, where differentials were just infinitesimal numbers: Hi Maria, The standard definition of the differential of a real-valued function f of a real variable is: At a given point x, the differential df_x (df sub x; usually the x is omitted) of f is the linear function defined on R by: df_x(h) = f'(x) * h Everyday usage of the differential often suppresses the fact that the differential is a linear function. For example, if y = f(x) = x^2, then we write: dy = df = 2x * dx where dx is used instead of h. This is for good reason. The finite numbers dy and dx appearing in dy = 2x * dx can be manipulated to obtain: dy/dx = 2x I feel that I haven't replied directly to your question. I think that this is because I don't fully understand your question. Please write again if my answer has not helped. This definition takes the differential of a function to be itself a function, namely the function whose value is the vertical change $$\Delta y$$ along the tangent line for a given horizontal change (h or $$\Delta x$$ or dx). In this way, we don’t have to think of dy as a number-that-is-not-really-a-number (an infinitesimal), yet we get the action of multiplying the derivative by any number dx. In his example, the differential of $$f(x) = x^2$$ at x = 3 is $$df(h) = df_3(h) = f'(3)\cdot h = 6h$$. From this perspective, the usual way of writing the differential as if it were a number is just a shortcut. Retaining the variable x, we could say, fully, $$df_x(dx) = 2x dx$$, or briefly, just $$dy = 2x dx$$. For a very slightly different version of this definition, see here. Maria asked for more, giving a little more context but still not quite making it clear what level she is at: Thanks for your answer. I know that the question is a little bit confusing, and at the beginning I thought it was a problem of the translation from English of the Math books. Your answer helped a little, so I am going to try to rephrase it. What is the difference between finding the derivatives of a function (dy/dx), and finding its differentials (dy, dx)? In the books I've seen they define differentials supposing that f(x) is differentiable. My teacher gave a hint to reach this conclusion: if you can find the differentials of f, then f is differentiable, but if f is differentiable you can't necessarily find its differentials. 
That is why I can prove this, starting with a function that is differentiable. It is still  unclear what “the derivatives of a function” means; perhaps she doesn’t intend a plural. Doctor Jerry started his answer by restating the previous definition: Hi Maria, Suppose f(x) = x^2. To find the derivative of f we use the definition of derivative: f'(x) is the limit as h->0 of the quotient f(x+h) - f(x) ------------- h For this function, f'(x) = 2x. Okay, this much is clear; there is no possible ambiguity. The differential of f at x is defined to be the linear function df, which is defined on all of R by: df(h) = f'(x) * h Often, the notation df(h) is shortened to df or, if y = f(x), then we write dy instead of df. Then the above definition is: dy = f'(x)*dx or dy/dx = f'(x) Unless you are studying differential geometry, in which dx is interpreted slightly differently, dx is not the differential of a function. It is a variable, the same as h. I’m going to omit the rest of the answer, because I don’t think the question and its context were ever clarified, so it isn’t clear what answer is needed. If you want to dig deeper … Doctor Jerry mentioned differential geometry in passing, as a place where differentials are defined more deeply. We have only occasionally gone into that territory; I want to just quote the conclusion to an unarchived answer to a question about differentials, by Doctor Fenton in 2009, in case you are interested: There is also a more sophisticated viewpoint in which what is integrated is not a function f(x), but rather what is called a "differential form". This viewpoint involves a lot of complicated mathematical structure and is more commonly seen in calculus of functions of several variables (see, for example, http://en.wikipedia.org/wiki/Differential_form ) but it can also be used in one-dimensional calculus as well (e.g. in David Bressoud's book _Second Year Calculus_). So, the easiest viewpoint is the purely formal one, in which you do useful but basically meaningless computations (du=g'(x)dx which does the bookkeeping), but there is also a more complicated viewpoint in which the computations are not meaningless, but they require you to learn more abstract mathematics. For example, the one-dimensional differential form dx becomes a mapping from intervals on the real line to R, and dx([a,b]) = b-a , while the differential form 3x^2dx (to use one of Bressoud's examples) is the mapping which takes the interval [a,b] to b / b^3 a^3 | 3x^2 dx = --- - --- . / 3 3 a This becomes the viewpoint used in modern differential geometry. Differentials in definite integral notation Last week we talked about the use of differentials within symbols for the derivative. Let’s look at a couple questions about their use in integration. First, we have this, from 2002: The Meaning of 'dx' in an Integral No matter how many times it's explained to me, and even though I've taken several advanced math courses (diff eq, linear algebra, etc), nobody has ever given me a satisfactory explanation for the meaning of the notation in which an integral has dx appended to the end if x is the variable which we are integrating with respect to. In physics, for example, dx seems to mean a very small amount of x, and then we use it in an integral to integrate whatever physical quantity is being discussed. I just don't understand. Or, when a differential is defined, all of a sudden the dx has a meaning, but then when an integral is being evaluated, the teacher says, "Oh, the dx is just a formality." 
So, sometimes it's a formality, sometimes a vital concept, sometimes a physical quantity, sometimes a derivative: What is it? When we write $$\int f(x) dx$$, we read it as “the integral [or antiderivative] of f(x) with respect to x,” assigning no meaning to “dx” other than telling us what variable we care about. (In fact, sometimes the dx can just be omitted entirely, when the variable is clear!) This is not very different from its use in a derivative, where it also means “with respect to x“. What does it mean here? Doctor Jeremiah took the question, focusing on the idea of a definite integral: Hi Nosson, An integral gives you the area between the horizontal axis and the curve. Most of the time this is the x axis. y | | --|-- ----|---- f(x) / | \ / | / | -------- | | / | | -----|------- | | | | | | | | ----------|--------------+--------------------|----- x a b And the area enclosed is: b / Area = | f(x) dx / a This is a definition of the definite integral, in a broad sense; what follows defines how it can be calculated in principle (and therefore, how it is formally defined): But say you didn't want to use an integral to measure the area between the x axis and the curve. Instead you just calculate the average value of the graph between a and b and draw a straight flat line y = avg(x) (the average value of x in that range). Now you have a graph like this: y | | - | - - - | - - f(x) | / | \ / | -----|-----------------------------------|---- avg(x) | / | | - - -|- - - - | | | | | | | | ----------|--------------+--------------------|----- x a b And the area enclosed is a rectangle: Area = avg(x) w where w is the width of the section The height is avg(x) and the width is w = b-a or in English, "the width of a slice of the x axis going from a to b." His width w would often be called $$\Delta x$$; we’ll see that later. But say you need a more accurate area. You could break the graph up into smaller sections and make rectangles out of them. Say you make 4 equal sections: y | | |----|---| |-------|---- f(x) | | | | | | | |--------| | | | | | | | -----|---------| | | | | | | | | | | | | | | | | ----------|---------|----+---|--------|-------|----- x a b And the area is: Area = section 1 + section 2 + section 3 + section 4 = avg(x,1) w + avg(x,2) w + avg(x,3) w + avg(x,4) w where w is the width of each section. The sections are all the same size, so in this case w=(b-a)/4 or in English, "the width of a thin slice of the x axis or 1/4 of the width from a to b." We are starting to develop the Reimann integral (though many details are needed to make a complete definition, as for example the widths don’t really have to be the same). And if we write this with a summation we get: 4 +--- \ Area = / avg(x,n) w +--- n=1 But it's still not accurate enough. Let's use an infinite number of sections. Now our area becomes a summation of an infinite number of sections. Since it's an infinite sum, we will use the integral sign instead of the summation sign: / Area = | avg(x) w / where avg(x) for an infinitely thin section will be equal to f(x) in that section, and w will be "the width of an infinitely thin section of the x axis." So instead of avg(x) we can write f(x), because they are the same if the average is taken over an infinitely small width. Again, a lot of details are being omitted to keep things intuitive. And we can rename the w variable to anything we want. The width of a section is the difference between the right side and the left side. 
The difference between two points is often called the delta of those values. So the difference of two x values (like a and b) would be called delta-x. But that is too long to use in an equation, so when we have an infinitely small delta, it is shortened to dx. If we replace avg(x) and w with these equivalent things: / Area = | f(x) dx / So, as in the infinitesimal approach to the derivative, dx is thought of (informally) as a very small change in x. So what the equation says is: Area equals the sum of an infinite number of rectangles that are f(x) high and dx wide (where dx is an infinitely small distance). So you need the dx because otherwise you aren't summing up rectangles and your answer wouldn't be total area. dx literally means "an infinitely small width of x". This, of course, applies specifically to the definite integral. From this perspective, we can think of the indefinite integral as inheriting the same notation via the Fundamental Theorem of Calculus, which ties the two together. The differential doesn’t have to be at the end! One consequence of teaching students that the differential in an integral means only “… with respect to x” can be seen in the following question, from 2003, about a relatively unusual variation in the notation: Integral Notation - Missing Integrands I have seen some integral notation used that I am not familiar with. It looks like this: / | dx f(x) + ... / There does not seem to be an integrand (i.e. a function being integrated). I'm not sure if f(x) is to be integrated. I have two theories, but I can't see the point in writing the expression as it is if either of my theories is correct. My theories about what this might mean: 1) The above notation is the same as writing: / | 1 dx f(x) + ... (note the explicit 1 here) / = (x + C) * f(x) + ... (where C is a constant of integration) 2) The rest of the expression is to be integrated with respect to x. If (1) is correct, then what was the point of writing the integral - why wasn't (x + C) just written instead? If (2) is correct, then how does one know when to "stop integrating" (i.e. if there is some term to be added on to the expression that is not to be integrated, how is it distinguished?). I have seen this recently in multi-variate calculus, i.e. when x is in R^n rather than R: does this situation justify the use of the integral notation somehow? Chris’s first guess is that the dx closes off the integral, so that what follows is to be multiplied; the second (which is correct) is that it doesn’t matter where the dx is placed. He is right that this notation is particularly common in calculus with more than one variable. One might write, for example, $$\int_0^b dy\int_0^a dx f(x,y)$$ or $$\int_0^b dy\int_0^a f(x,y) dx$$ rather than $$\int_0^b\int_0^a f(x,y)dx dy$$ to indicate that we are to integrate first with respect to x, and then integrate the result with respect to y. One benefit is that it makes it easier to see which limits go with which variable. Hi, Chris. It is common to learn about integration in such a way that the "dx" seems to be a marker for the end of the integral, as if the "long S" were a left parenthesis and the "dx" were the right parenthesis. But it doesn't work that way. In fact, what you are integrating is the product of a function and dx; and multiplication is commutative! 
So these mean the same thing: $$\int f(x)\, dx \quad\text{and}\quad \int dx\, f(x)$$ If you then add something, you must use parentheses if it is to be part of the integral: $$\int dx\, f(x) + g(x) = \left[\int f(x)\, dx\right] + g(x)$$ is the sum of an integral and a function, while $$\int dx\, (f(x) + g(x)) = \int (f(x) + g(x))\, dx$$ is the integral of the sum of two functions. That is, presumably the integral has higher precedence than addition, so you "stop integrating" at the first plus sign. But even then, I'm not positive that this rule I just made up is always followed; let me know if you think it doesn't fit the practice in your text, and show me an example. Seeing the differential as part of a product is necessary in order to understand the notation. This can be done whether you think of dx as a mere notation, so that the "product" is as illusory as the "quotient" in a derivative, or you think explicitly about the Riemann sum. I don't see my ideas about parentheses followed universally; it is not uncommon to see $$\int x^2-2x+3 dx$$ rather than $$\int (x^2-2x+3) dx$$. This is probably due to the common use of the differential to terminate the integrand, and the fact that it would be meaningless to take the dx as associated only with the last term, despite the usual order of operations. This laxity may carry over into integrals where dx is written first, though the ambiguity is much greater there. Too often, as in some other aspects of order of operations, you ultimately just have to recognize what interpretation makes sense in context. In writing this, it has occurred to me that my reference to commutativity is not quite valid, specifically when it comes to definite integrals. The following are not the same: $$\int_0^b\int_0^a f(x,y)dx dy\ne\int_0^b\int_0^a f(x,y)dy dx$$ That's because the order of the differentials determines the meaning of the limits of integration. Everything about calculus notation is a little slippery. Chris replied, Doctor Peterson, I was indeed taught that integration begins with the "long S" and ends with the (for example) dx. I have, however, seen the following notation: $$\int \frac{dx}{f(x) + g(x)}$$ and assumed it was a convenient notation rather than being a justifiable mathematical expression. Perhaps I need to go and look at calculus from first principles again to see why this is the case. That is both convenient notation and justifiable! Again, we are thinking of the dx as being multiplied by a fraction, and therefore equivalent to part of the numerator. A particularly good example of the usefulness of the differential in an indefinite integral arises in the substitution method, where we can replace the dx with an expression that we actually multiply: Why Does Integration by Substitution Work? I looked at that page in the post Integration by Substitution.
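The Riemann-sum reading quoted earlier, "sum up rectangles that are f(x) high and dx wide," can also be checked numerically. A minimal sketch (the choice of f, interval, and rule is arbitrary):

```python
# Approximate the area under f(x) = x^2 on [0, 1] with ever-thinner slices of width dx.
def riemann(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))   # midpoint rule: f(x) * dx, summed

f = lambda x: x ** 2
for n in (4, 100, 10_000):
    print(n, riemann(f, 0.0, 1.0, n))   # approaches the exact value 1/3 as dx shrinks
```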
# Question 0297a Jan 9, 2017 $\text{Molarity} = 4\ \text{mol}\cdot\text{L}^{-1}$ #### Explanation: $\text{Molarity} = \dfrac{\text{Moles of solute}}{\text{Volume of solution}} = \dfrac{2\ \text{mol}}{0.500\ \text{L}} = 4\ \text{mol}\cdot\text{L}^{-1}$. So two questions for you: (i) What is the concentration in $\text{g}\cdot\text{L}^{-1}$? (ii) What are (i) the $\text{pOH}$, and (ii) the $\text{pH}$ of this solution?
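A one-line check of the arithmetic, using the numbers shown in the worked line (illustrative only; the helper name is not from the original answer):

```python
# Molarity = moles of solute / litres of solution.
def molarity(moles, litres):
    return moles / litres

print(molarity(2.0, 0.500))   # 4.0 mol/L
```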
# low disk space on "filesystem root"

I have a warning that says low disk space on "filesystem root". I use df -h to check, and the output is like this:

Filesystem      Size  Used Avail Use% Mounted on
rootfs          5.0G  4.5G  195M  96% /
devtmpfs        5.9G     0  5.9G   0% /dev
tmpfs           5.9G  240K  5.9G   1% /dev/shm
tmpfs           5.9G  804K  5.9G   1% /run
/dev/sda12      5.0G  4.5G  195M  96% /
tmpfs           5.9G     0  5.9G   0% /sys/fs/cgroup
tmpfs           5.9G     0  5.9G   0% /media
/dev/sda9       194M   78M  107M  43% /boot
/dev/sda11       20G  311M   19G   2% /usr/local
/dev/sda10       30G  2.2G   26G   8% /home
/dev/sda13      2.0G  725M  1.2G  38% /var
/dev/sda15     1007M   18M  939M   2% /tmp

What should I do to free some space in rootfs? Thank you

you should use LVM ( 2013-03-02 16:37:27 -0500 )

The free space is probably being used up by multiple kernel versions. You only need one to run the system, but it is handy to keep older versions in case the current one stops working (which does happen.) Run this command to see how many you have:

rpm -q kernel

To remove older kernels, run this command:

sudo yum remove kernel-version

You can also configure your system to only keep a specific number of kernels. Once this is enabled, future kernel updates will automatically remove older kernels. To do that, update /etc/yum.conf, and modify this line to the number that you want to keep. If you change that to 2, you will have 1 backup version (I have never needed more than that.)

installonly_limit=2

There is a separate /boot partition so all the kernels are there. ( 2013-03-02 21:09:36 -0500 )

You seem to have an unused /usr/local partition which is conveniently located between the partitions / and /home.

/dev/sda10       30G  2.2G   26G   8% /home
/dev/sda11       20G  311M   19G   2% /usr/local
/dev/sda12      5.0G  4.5G  195M  96% /

You could delete that /usr/local partition and expand the / partition, but since it is located after the empty space, this is a bit tricky. You could use something like Gparted LiveCD (http://sourceforge.net/projects/gparted/) to move and resize the / partition. After this operation the names of the partitions after sda10 will change so you might have to adjust your /etc/fstab.

Can I just delete the /usr/local partition? Using Gparted LiveCD is kinda complicated and I am trying to take an easier method to free some space. Thank you so much for your reply. ( 2013-04-15 21:11:23 -0500 )

Download, install and use bleachbit: http://bleachbit.sourceforge.net/ The first time you run it, you'll have, if I'm not mistaken, the chance to download both some extras and a version that runs as Administrator (i.e., root). Get all of that. (You should see two new entries on your System menu.) Running it as a normal user will let you clean up a remarkable amount of cruft in your own files, including browser caches and so on, although I'd suggest that you exit such programs first. Running it as Admin, of course, requires the appropriate password. This will allow you to clean up lots and lots of stuff on your root file system that you can't otherwise touch, and it's quite possible that this is all you'll need.
# Conflict between arydshln and tabu and X column type?

\documentclass{article}
\usepackage{tabu}
\usepackage{arydshln}
\begin{document}
\begin{table}
\begin{tabu}{l X}
a & b
\end{tabu}
\end{table}
\end{document}

This gives me the following two error messages over and over:

! Missing \cr inserted.
<inserted text>
\cr
l.10 \end{tabu}
I'm guessing that you meant to end an alignment here.

! Misplaced \cr.
<inserted text>
\cr
l.10 \end{tabu}
I can't figure out why you would want to use a tab mark or \cr or \span just now. If something like a right brace up above has ended a previous alignment prematurely, you're probably due for more error messages, and you might try typing `S' now just to see what is salvageable.

If I do one of the following, the error disappears:
• Change the X column to l
• Remove \usepackage{arydshln}

However, I have a big document that uses a lot of tabu X columns in some sections, and I need arydshln for the \hdashline command when drawing complex (not in the sqrt(-1) sense) mathematical matrices in other places. Can this conflict be resolved? How?

You have to swap the loading order as tabu modifies some internals arydshln uses (as noted in section 9 of the documentation).

\documentclass{article}
\usepackage{arydshln}
\usepackage{tabu}
\begin{document}
\begin{table}
\begin{tabu}{l X}
a & b
\end{tabu}
\end{table}
\end{document}

• Swapping arydshln and tabu works when using the article class, but not when using memoir. Any ideas? Should I post a separate question? – rudolfbyker Apr 16 '18 at 12:16
• @rudolfbyker Yes, you should, because memoir has completely different mechanics and implements many features right away (maybe even table features). – TeXnician Apr 16 '18 at 12:16
Cache - Maple Programming Help

Cache TemporaryIndices: return a sequence of the temporary indices

Calling Sequence
TemporaryIndices( cache )

Parameters
cache - cache table or procedure: the object whose entries are to be returned

Description
• The TemporaryIndices command returns the temporary indices of the given cache table. The cache table can be given directly as cache, or cache can refer to a procedure that has, or can have, a cache remember table. If such a procedure is given and it has a cache remember table, the temporary indices from that table are returned. If the procedure does not have a remember table, NULL is returned.
• TemporaryIndices returns the indices in the same format as indices, that is, a sequence of lists where the contents of each list is the key of a temporary entry from the table.
• The TemporaryEntries command can be used to get the values of the temporary entries.

Examples
> c1 := Cache()
        c1 := Cache(512)                                (1)
> c1[1] := 2
        c1[1] := 2                                      (2)
> c1[2] := 3
        c1[2] := 3                                      (3)
> Cache:-TemporaryEntries(c1)
        [2], [3]                                        (4)
> Cache:-TemporaryIndices(c1)
        [1], [2]                                        (5)
> p := proc( x ) option cache; x^2; end proc;
        p := proc(x) option cache; x^2 end proc         (6)
> p(2)
        4                                               (7)
> p(3)
        9                                               (8)
> Cache:-TemporaryEntries(p)
        [4], [9]                                        (9)
> Cache:-TemporaryIndices(p)
        [2], [3]                                        (10)
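For readers more familiar with Python than Maple, a rough analogy (not the Maple semantics) is a memoised function with a bounded cache whose entries can be evicted, much like the temporary entries of a cache table; the 512 below only mirrors the default cache size shown above:

```python
# Rough Python analogy: a bounded memoisation cache standing in for "option cache".
from functools import lru_cache

@lru_cache(maxsize=512)
def p(x):
    return x ** 2

p(2)
p(3)
print(p.cache_info())   # hits/misses/currsize, the closest analogue to inspecting temporary entries
```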
### A structural alternative to the deformable brain atlas paradigm Jean-François Mangin Commissariat à l'Énergie Atomique Orsay France December 6, 2001 at  3:00 PM MC 437 Abstract: --------- The talk will describe a complete system allowing automatic recognition of the main sulci of the human cortex. Sulci are folds of the cortical surface that are the only macroscopic features that can be used to match individual cortices. These sulci, however, present very different shapes across individuals, which makes their recognition a challenging problem. Our system relies on a complex preprocessing of MR images leading to abstract structural representations of the cortical folding. This preprocessing consists of a sequence of automatic algorithms mainly based on Mathematical Morphology. The representation nodes are cortical folds, which are given a sulcus name by a contextual pattern recognition method. This method can be interpreted as a graph matching approach, which is driven by the minimization of a global function made up of local potentials. Each potential is a measure of the likelihood of the labelling of a restricted area. This potential is given by a multi-layer perceptron trained on a learning database. A base of 26 brains manually labelled by a neuroanatomist is used to validate our approach. The system developed for the right hemisphere is made up of 265 neural networks. The whole system is a symbolic alternative to the usual deformable atlas principle. This alternative consists of using a higher level of representation of the data to overcome some of the difficulties induced by the complexity and the striking variability of the cortical folding. Reference: ---------- D. Rivière, J.-F. Mangin, D. Papadopoulos, J.-M. Martinez, V. Frouin and J. Régis, Automatic recognition of cortical sulci using a congregation of neural networks, MICCAI'2000, Pittsburgh, LNCS-1935, Springer Verlag, pp 40-49 Speaker background: ------------------ Jean-François Mangin received the engineering degree from École Centrale Paris in 1989, the M.Sc. degree in numerical analysis from Pierre et Marie Curie University (Paris VI) in 1989, and the PhD degree in signal and image processing from École Nationale Supérieure des Télécommunications of Paris in 1995. Since October 1991, he has been working with Service Hospitalier Frédéric Joliot, Commissariat à l'Énergie Atomique, Orsay, France, on image analysis problems related to brain mapping. Since 1999, he has been leading a group whose project consists of the development of a brand-new set of brain mapping methods designed from a structural point of view. The goal is to get closer to the class of neuroscience approaches dealing with structural models. The underlying image analysis tools are mainly Mathematical Morphology, Markovian random fields and graph-based representations.
Chain conditions in quotients of power sets

Several days ago a friend asked me the following:

We know that in $\mathcal P(\mathbb N)$ we can find a family of size continuum such that any two [distinct] members intersect in a finite set. Can we do that with $\mathcal P(\mathbb R)$, that is, a family of $2^{\frak c}$ many subsets of the real numbers such that the intersection of any two [distinct] members is finite, or at least of size less than $\frak c$?

The question, if so, asks about the $(2^{\frak c})^+$-c.c. in the Boolean algebras $\mathcal B_\kappa=\mathcal P(\mathbb R)/\sim_\kappa$, where $\sim_\kappa$ is the equivalence relation defined by $A\sim B\iff |A\triangle B|<\kappa$. The first question asks about $\mathcal B_\omega$ and the latter about $\mathcal B_\mathfrak c$.

Assuming GCH (or at least that $2^{\frak c}=\aleph_2$) gives a relatively simple positive answer to the latter question: Consider the tree $2^{<\omega_1}$; it has size $\aleph_1$, so we can encode its nodes as real numbers. This tree has $2^{\omega_1}=\aleph_2$ many branches, each of which defines a subset of $\mathbb R$ via the encoding, and any two distinct branches meet in at most a countable set of points.

I consulted with several other folks from the department and I was told that most of these questions are very well known, so an answer about consistency and provability is almost certainly out there. A naive Google search got me nowhere, so I came to ask here the following:

1. In the particular case of the question above, can we say anything in ZFC about the chain condition of $\mathcal B_\kappa$ for $\omega\leq\kappa\leq\frak c$?

2. My partial answer above shows that under GCH we have an answer for $\cal B_\frak c$, but does that also answer $\cal B_\omega$, or do we need to assume stronger principles such as $\lozenge$ for suitable cardinals?

3. How far does this generalize when we replace $2^\omega$ by an arbitrary infinite cardinal $\mu$ and ask the analogous question about the $(2^\mu)^+$-c.c. in the corresponding quotients?

I'd be glad to have a reference to a survey of such results, if one exists.

-

There is a nice but old survey by Milner and Prikry in Surveys in combinatorics 1987, LMS Lecture Notes 123 - ams.org/mathscinet-getitem?mr=905279 – François G. Dorais Jun 8 '12 at 14:18

Indeed, if $X$ is an infinite set and $I$ has cardinality greater than that of $X^{\aleph_0}$, then $X$ can't contain $I$ distinct subsets with pairwise finite intersection. This answers your question, since $\frak c^{\aleph_0}=\frak c$.

Indeed, let $(A_i)_{i\in I}$ be a family of subsets of $X$ with pairwise finite intersections. Let $B_i$ be the set of countably infinite subsets of $X$ contained in $A_i$. Then the $B_i$ are pairwise disjoint. Moreover, $B_i$ is empty only when $A_i$ is finite, and we can remove such exceptional $i$'s because the number of finite subsets of $X$ is only the cardinality of $X$. The $B_i$ live in the set of countably infinite subsets of $X$, which has cardinality $X^{\aleph_0}$. So $I$ is at most the cardinal of $X^{\aleph_0}$.

Edit: the obvious generalization of the argument is the following: if $\alpha,\beta,\gamma$ are infinite cardinals, and if $\alpha$ admits $\beta$ subsets with pairwise intersection of cardinal $<\gamma$, then $\beta\le\alpha^\gamma$. In particular, if $\alpha=2^\delta$ and $\gamma\le\delta$ then $\alpha^\gamma=\alpha$, so the conclusion reads as: $2^\delta$ does not admit more than $2^\delta$ subsets with pairwise intersection of cardinal $<\delta$.

- So this argument can be extended to "countable intersection" if $X$ is large enough, right?
– Asaf Karagila Jun 8 '12 at 17:43

@Asaf: you mean, if $|X|^{\aleph_1}=|X|$, where $|X|$ is the cardinal of $X$; this is true if $|X|=2^\alpha$ with $\alpha\ge\aleph_1$, but this does not mean it is true for every cardinal large enough (at least I don't claim it). – YCor Jun 8 '12 at 21:53

It is consistent with ZFC that a set of size $\aleph_1$ does not have $2^{\aleph_1}$ subsets, each of size $\aleph_1$, with all pairwise intersections countable. This is an old result of Jim Baumgartner; see "Almost-disjoint sets, the dense-set problem, and the partition calculus", Ann. Math. Logic 9 (1976), 401-439, particularly Theorem 5.6(d) and the remark on page 422 after it. [Caution: I can't check the paper itself now; I'm going by an old e-mail from Jim.]

-

I'll add that Shelah has used pcf theory to investigate related questions. Typically these results are tucked away inside long papers dealing with other questions, but I know that the last section of [Sh:410] explicitly deals with "strongly almost disjoint families", and characterizes their existence in terms of pcf. For example, if $\aleph_0<\kappa\leq\kappa^{\aleph_0}<\lambda$, then the existence of a family of $\lambda^+$ sets in $[\lambda]^{\kappa}$ with pairwise finite intersection is equivalent to a "pcf statement". I'm not sure which version of the paper to link to, as the published version has been reworked a few times. I THINK that the most recent version is here: Sh:410

-
Typography with TeX and LaTeX

## Texmaker 1.7 released

April 25th, 2008 by Stefan Kottwitz

Version 1.7, released April 24, 2008, provides new features and changes:

• Spell checking is now based on hunspell and uses OpenOffice.org dictionaries
• New LaTeX log error detection
• New “search” interface
• Indentation “memory”
• Code completion

Texmaker is a free LaTeX IDE running under Linux, Mac OS X and Windows and is published under the GPL 2.

## Ubuntu Linux 8.04 LTS released

April 24th, 2008 by Stefan Kottwitz

Ubuntu Linux version 8.04 (code name Hardy Heron) with Long Term Support has been released today. It brings TeXlive 2007-13, Kile 2.0 and Texmaker 1.6. It still comes with pgf/TikZ version 1.18; I recommend installing version 2.00 (2008-02-20).

Category: Linux/ Ubuntu Linux | No Comments »

## How to declare the appendix

April 13th, 2008 by Stefan Kottwitz

Some LaTeX tutorials and at least one well-known online reference manual explain the declaration of an appendix by an environment; they recommend writing:

\begin{appendix} … \end{appendix}

Though this compiles without error, it does not behave like an ordinary environment: everything that follows \end{appendix} will also be treated as part of the appendix! The correct usage is:

\appendix

This command is defined by the standard classes by \newcommand, not by \newenvironment; there's no \endappendix. For example, here's the original code of book.cls:

\newcommand\appendix{\par
  \setcounter{chapter}{0}%
  \setcounter{section}{0}%
  \gdef\@chapapp{\appendixname}%
  \gdef\thechapter{\@Alph\c@chapter}}

If you want to end the appendix and add further chapters or sections, like the list of figures etc., you would have to undo the changes made by \appendix, or just use a common chapter or section labeled as appendix. The appendix package provides more facilities for typesetting appendices and even allows subappendices.

This topic was discussed in the LaTeX Community Forum and on Matheplanet.

Category: Sectioning | 1 Comment »

## eqnarray vs. align

April 12th, 2008 by Stefan Kottwitz

There's a lot of freely available documentation for LaTeX, but there's a pitfall: some documents that are still online are outdated and therefore contain obsolete information. Documents like “Obsolete packages and commands” (”l2tabu”) address the need for up-to-date information. For instance, the obsolete eqnarray environment frequently appears in questions of new LaTeX users, and many people including me usually answer: don't use eqnarray, and advise using the align environment of amsmath instead.

Here's a summary of the problems with eqnarray:

• the spacing around relation symbols is inconsistent,
• long equations might collide with the equation numbers,
• there could be problems with labels and references.

Here is one small example document just to illustrate the spacing inconsistency problem:

\documentclass[a4paper,12pt]{article}
\usepackage{amsmath}
\begin{document}
\begin{minipage}{0.5\textwidth}
equation:
\begin{equation*}
z_0 = d = 0
\end{equation*}
\begin{equation*}
z_{n+1} = z_n^2+c
\end{equation*}
align:
\begin{align*}
z_0 &= d = 0 \\
z_{n+1} &= z_n^2+c
\end{align*}
eqnarray:
\begin{eqnarray*}
z_0 &=& d = 0 \\
z_{n+1} &=& z_n^2+c
\end{eqnarray*}
\end{minipage}
\end{document}

Compile it yourself and examine it, if you want.
For a quick look, here's a screenshot of the output:

Notice the difference in spacing around the equal sign in the eqnarray environment compared to equation, and even compared to the other equal sign inside the first eqnarray line. If you try to repair the spacing by adjusting \arraycolsep, you will notice that all other arrays, including matrices, will be affected too. So the best solution is to use amsmath; this package provides even more environments useful for multiline formulas and a lot more enhancements for mathematical typesetting. See the amsmath user's guide. (A short align example with labels follows below.)

For further information regarding this topic you may have a look at the article “Avoid eqnarray!” by Lars Madsen, published in the PracTeX Journal 4/2006.

This topic was discussed on MatheBoard.

Category: Mathematics | 87 Comments »
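A short follow-up on the label and reference problems mentioned above: with amsmath you can label and reference individual lines of align reliably. A minimal sketch (untested; the label names are chosen just for illustration):

\documentclass[a4paper,12pt]{article}
\usepackage{amsmath}
\begin{document}
\begin{align}
z_0     &= 0 \label{eq:start} \\
z_{n+1} &= z_n^2 + c \label{eq:step}
\end{align}
The iteration starts with \eqref{eq:start} and continues by \eqref{eq:step}.
\end{document}

Use \nonumber on any line of align that should not get an equation number, or use align* if no line should be numbered at all.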
Virtual Lab 05 - Uniformly Accelerated Motion in 1 Dimension

Orange Coast College
Physics 185
Dr. Arnold Guerra III

Uniformly Accelerated Motion in 1-Dimension

I. Introduction.

When a particle of mass m has a net force pointing in a certain direction, the particle also has an acceleration vector pointing in the same direction as that of the net force. If the acceleration vector points in the same direction as the velocity vector, then the particle speeds up. If the acceleration vector points in the opposite direction to the velocity vector, then the particle slows down. If the acceleration vector points in a direction perpendicular to the direction of the velocity vector, then the particle turns (changes direction of travel).
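To make the sign relations above concrete, here is a small illustrative Python sketch (not part of the lab handout; the numbers are made up). It compares the closed-form kinematic formulas for constant acceleration, x(t) = x0 + v0*t + (1/2)*a*t^2 and v(t) = v0 + a*t, with a simple step-by-step integration, for a particle whose acceleration points opposite to its velocity (so it slows down):

# Illustrative sketch only: not from the lab handout; numbers are invented.

def analytic(x0, v0, a, t):
    """Closed-form kinematics for constant (uniform) acceleration."""
    x = x0 + v0 * t + 0.5 * a * t**2
    v = v0 + a * t
    return x, v

def stepwise(x0, v0, a, t_final, dt=1e-4):
    """Simple forward-Euler integration, for comparison with the formulas."""
    x, v, t = x0, v0, 0.0
    while t < t_final:
        x += v * dt   # position changes according to the current velocity
        v += a * dt   # velocity changes according to the constant acceleration
        t += dt
    return x, v

# Acceleration opposite to velocity: the particle slows down but keeps moving forward.
x0, v0, a, t = 0.0, 12.0, -3.0, 2.0
print(analytic(x0, v0, a, t))   # (18.0, 6.0): speed dropped from 12 to 6
print(stepwise(x0, v0, a, t))   # approximately the same values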
# LieElement @ LieElement -- formal multiplication of Lie elements

## Synopsis

• Operator: @
• Usage: u = x @ y
• Inputs:
  • x, an instance of the type LieElement; $x$ is of type $L$, where $L$ is of type LieAlgebra
  • y, an instance of the type LieElement; $y$ is of type $L$
• Outputs:
  • u, an instance of the type LieElement; u is of type $L$, the formal Lie product of $x$ and $y$

## Description

The "at sign" $@$ is used as infix notation for a "formal" Lie multiplication (and also formal multiplication by scalars) where no simplifications are performed. (The formal addition is written as ++ and / is used as formal subtraction.) In this sense, it is different from the use of SPACE as the multiplication operator, which always gives an object in normal form as output. The formal operations are useful when relations are introduced in a big free Lie algebra, since then it might be too hard to compute the normal form of the relations, which is not needed in order to define a quotient Lie algebra. For an example, see Minimal models, Ext-algebras and Koszul duals.

i1 : L = lieAlgebra{a,b}

o1 = L

o1 : LieAlgebra

i2 : (b@b)@a/3@b@a@b++2@a@b@b

o2 = (b b a) - (b b a) - 3 (b a b) + 2 (a b b)

o2 : L

i3 : (b b) a - 3 b a b + 2 a b b

o3 = 3 (b b a)

o3 : L
# A Characterization of Circle Graphs in Terms of Multimatroid Representations

### Journal contribution - Journal article

The isotropic matroid M[IAS(G)] of a looped simple graph G is a binary matroid equivalent to the isotropic system of G. In general, M[IAS(G)] is not regular, so it cannot be represented over fields of characteristic other than 2. The ground set of M[IAS(G)] is denoted W(G); it is partitioned into 3-element subsets corresponding to the vertices of G. When the rank function of M[IAS(G)] is restricted to subtransversals of this partition, the resulting structure is a multimatroid denoted Z(3)(G). In this paper we prove that G is a circle graph if and only if for every field F, there is an F-representable matroid with ground set W(G) which defines Z(3)(G) by restriction. We connect this characterization with several other circle graph characterizations that have appeared in the literature.

Journal: ELECTRONIC JOURNAL OF COMBINATORICS
ISSN: 1077-8926
Issue: 1
Volume: 27
Number of pages: 35
Year of publication: 2020
Difficult

When Celsius Equals Fahrenheit

ACTMAT-V4ETYE

The formula used to convert a temperature from degrees Fahrenheit ( F ) to degrees Celsius ( C ) is:

$$C=\frac { 5 }{ 9 } (F-32)$$

For which of the following Celsius temperatures are the Celsius and Fahrenheit temperatures the same?

A $57.6°$

B $212°$

C $-98.6°$

D $-40°$

E $-25.6°$
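A quick way to check the choices is to set $C = F$ in the given formula and solve (worked here as a sketch):

$$F = \frac{5}{9}(F - 32) \quad\Rightarrow\quad 9F = 5F - 160 \quad\Rightarrow\quad 4F = -160 \quad\Rightarrow\quad F = -40$$

So the two scales agree only at $-40°$, which corresponds to choice D.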