# Vector

class compas.geometry.Vector(x, y, z=0.0, **kwargs) [source]

A vector is defined by XYZ components and a homogenisation factor.

Parameters

• x (float) – The X component of the vector.
• y (float) – The Y component of the vector.
• z (float) – The Z component of the vector.

Attributes

• data (dict) – The data representation of the vector.
• x (float) – The component along the X axis of the vector.
• y (float) – The component along the Y axis of the vector.
• z (float) – The component along the Z axis of the vector.
• length (float, read-only) – The length of the vector.

Examples

>>> u = Vector(1, 0, 0)
>>> v = Vector(0, 1, 0)
>>> u
Vector(1.000, 0.000, 0.000)
>>> v
Vector(0.000, 1.000, 0.000)
>>> u.x
1.0
>>> u[0]
1.0
>>> u.length
1.0
>>> u + v
Vector(1.000, 1.000, 0.000)
>>> u + [0.0, 1.0, 0.0]
Vector(1.000, 1.000, 0.000)
>>> u * 2
Vector(2.000, 0.000, 0.000)
>>> u.dot(v)
0.0
>>> u.cross(v)
Vector(0.000, 0.000, 1.000)

Methods

• Xaxis() – Construct a unit vector along the X axis.
• Yaxis() – Construct a unit vector along the Y axis.
• Zaxis() – Construct a unit vector along the Z axis.
• angle(other) – Compute the smallest angle between this vector and another vector.
• angle_signed(other, normal) – Compute the signed angle between this vector and another vector.
• angle_vectors(left, right) – Compute the smallest angle between corresponding pairs of two lists of vectors.
• angles(other) – Compute both angles between this vector and another vector.
• angles_vectors(left, right) – Compute both angles between corresponding pairs of two lists of vectors.
• copy() – Make a copy of this vector.
• cross(other) – The cross product of this vector and another vector.
• cross_vectors(left, right) – Compute the cross product of two lists of vectors.
• dot(other) – The dot product of this vector and another vector.
• dot_vectors(left, right) – Compute the dot product of two lists of vectors.
• from_data(data) – Construct a vector from a data dict.
• from_json(filepath) – Construct an object from serialized data contained in a JSON file.
• from_jsonstring(string) – Construct an object from serialized data contained in a JSON string.
• from_start_end(start, end) – Construct a vector from start and end points.
• invert() – Invert the direction of this vector.
• inverted() – Return an inverted copy of this vector.
• length_vectors(vectors) – Compute the length of multiple vectors.
• scale(n) – Scale this vector by a factor n.
• scaled(n) – Return a scaled copy of this vector.
• sum_vectors(vectors) – Compute the sum of multiple vectors.
• to_data() – Convert an object to its native data representation.
• to_json(filepath[, pretty]) – Serialize the data representation of an object to a JSON file.
• to_jsonstring([pretty]) – Serialize the data representation of an object to a JSON string.
• transform(T) – Transform this vector.
• transform_collection(collection, X) – Transform a collection of vector objects.
• transformed(T) – Return a transformed copy of this vector.
• transformed_collection(collection, X) – Create a collection of transformed vectors.
• unitize() – Scale this vector to unit length.
• unitized() – Return a unitized copy of this vector.
• validate_data() – Validate the object's data against its data schema (self.DATASCHEMA).
• validate_json() – Validate the object's data against its json schema (self.JSONSCHEMA).
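The behaviour shown in the examples above can be sketched in plain Python. This is a hypothetical stand-in that mirrors the documented semantics (component access, addition with a plain list, scaling, `dot`, `cross`, `length`); it is not the compas implementation.

```python
import math

class Vec:
    """Minimal stand-in for the documented Vector behaviour (not compas itself)."""
    def __init__(self, x, y, z=0.0):
        self.x, self.y, self.z = float(x), float(y), float(z)

    def __getitem__(self, i):
        # allows u[0], and lets Vec be consumed wherever a list would be
        return (self.x, self.y, self.z)[i]

    def __add__(self, other):
        # 'other' may be another Vec or a plain [x, y, z] list
        return Vec(self.x + other[0], self.y + other[1], self.z + other[2])

    def __mul__(self, n):
        return Vec(self.x * n, self.y * n, self.z * n)

    @property
    def length(self):
        return math.sqrt(self.x**2 + self.y**2 + self.z**2)

    def dot(self, other):
        return self.x * other[0] + self.y * other[1] + self.z * other[2]

    def cross(self, other):
        return Vec(self.y * other[2] - self.z * other[1],
                   self.z * other[0] - self.x * other[2],
                   self.x * other[1] - self.y * other[0])

u = Vec(1, 0, 0)
v = Vec(0, 1, 0)
assert (u + v).y == 1.0
assert (u + [0.0, 1.0, 0.0]).y == 1.0
assert (u * 2).x == 2.0
assert u.dot(v) == 0.0
assert u.cross(v).z == 1.0
```

The real class adds homogenisation, serialization, and transformation support on top of these basics.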
###### Question:

[OCR-garbled beyond recovery: an education savings account problem involving monthly compounding, a fixed monthly deposit, and a stated savings goal; the answer is to be rounded to decimal places.]

#### Similar Solved Questions

##### To see if social support enhances recovery from surgery, researchers asked 15 people who had recently had heart surgery how many days they used pain pills after their surgery. The data below are those numbers of days for a group of people who have many friends, a group who have few friends, and a group who had no friends. Test the hypothesis that social support influences recovery. (Groups: many friends, few friends, no friends.)

##### Question 1: Chanda wants to invest 8500 dollars in an investment account with APR 4.6% compounded 7 times per year. Show all your work to compute how long it will take to quadruple her investment. Your answer should be given as the smallest year greater than or equal to the actual time.

##### Quant Changa is the operations manager at a food processing plant, and they recently started working with a new customer. This customer is specifically interested in rice, flour, and granola bars. The current demands for these items are 150 lbs, 120 lbs, and 100 boxes, respectively. The company policy states that the amount of flour processed (produced) cannot be more than the amount of rice produced. The cost of processing each lb of rice is $0.50, whereas it costs $0.45 to process each lb of flour.

##### Use the Ratio Test to determine if each series converges absolutely or diverges. $\sum_{n=2}^{\infty} \frac{3^{n+2}}{\ln n}$

##### The function of a wave is $E(x, t) = 5\,\mathrm{N/C}\cdot\sin[(2\pi/3\,\mathrm{m})(x - 3\times10^{8}\,\mathrm{m/s}\cdot t - 10\,\mathrm{m})]$. Indicate the magnitude of: e) its initial phase (note: it is NOT 10 m, but it is related to that); f) its phase at x = 2.50 m when t = 3.00 s, in radians; g) its phase at x = 2.50 m when t = 3.00 s, subtracting from it all integer multiples of $2\pi$ that fit in it, in radians; h) its amplitude, in N/C.

##### Required information: The general ledger of Red Storm Cleaners at January 1, 2021, includes the following account balances (debits): Cash $13,000; Accounts Receivable $6,600; Supplies $2,600; Equipment $17,000. Credit-balance accounts include Accumulated Depreciation, Salaries Payable, and Common Stock.

##### Combining 0.342 mol Fe$_2$O$_3$ with excess carbon produced 18.3 g Fe: Fe$_2$O$_3$ + 3C → 2Fe + 3CO. What is the actual yield of iron in moles? What is the theoretical yield of iron in moles? What is the percent yield?

##### What is one reason for the recombinant GST-DHFR-His protein to not be functional after finishing the experiment? What is one issue that went wrong to cause the protein to become non-functional? Where in the procedure would this issue have occurred?

##### [4 pts] $[T]_{B_1}$ and $[T]_{B_2}$ are the matrix representations of a linear operator relative to bases $B_1$ and $B_2$. Find the transition matrix and show that the matrices are similar, where $B_1 = \{e_1, e_2\}$ and $B_2 = \{\dots\}$.

##### ZILLDIFFEQ9 8.3.013: Use variation of parameters to solve the given nonhomogeneous system: $dx/dt = 5x - 5y + 6$, $dy/dt = 4x - 4y - 1$. Find $(x(t), y(t))$.

##### ZILLDIFFEQ9 8.3.014: Use variation of parameters to solve the given nonhomogeneous system: $dx/dt = 2x - y$, $dy/dt = 3x - 2y + 8t$. Find $(x(t), y(t))$.

##### What force, applied tangentially to the Earth along 45° of latitude in the direction of rotation for 1 day, would result in a new day length that is shorter by 5.3 seconds? The Earth has mass $5.98\times10^{24}$ kg and radius 6380 km. (Answer: ___ $\times10^{18}$ N.)

##### A proton travels through uniform magnetic and electric fields. The magnetic field is $\vec{B}=-2.50 \hat{\mathrm{i}}\ \mathrm{mT}$. At one instant the velocity of the proton is $\vec{v}=2000 \hat{\mathrm{j}}\ \mathrm{m/s}$. At that instant and in unit-vector notation, what is the net force acting on the proton if the electric field is (a) $4.00 \hat{\mathrm{k}}\ \mathrm{V/m}$, (b) $-4.00 \hat{\mathrm{k}}\ \mathrm{V/m}$, and (c) $4.00 \hat{\mathrm{i}}\ \mathrm{V/m}$?

##### Claim sizes for a population follow a distribution with hazard rate $\Lambda$, where $\Lambda$ varies by individual. The distribution of $\Lambda$ is a gamma distribution with $\alpha = 2$ and $\theta = 0.001$. Calculate the median claim size over the entire population.

##### 1 m³ of saturated liquid water at 190°C is expanded isothermally in a closed system until its quality is 76 percent. Determine the total work produced by this expansion, in kJ. (Answer: ___ $\times10^{5}$ kJ.)

##### What are two examples of intensive properties?

##### Julie-Annie Corporation is in the mature stage of its corporate life cycle. The firm has a current price of $65.00 and its expected dividend, $D_1$, is $4.55. If you require a 12 percent rate of return, then what is the expected price of the firm's common stock in five years?

##### (a) A function $f$ has first derivative $f'(x)$ and second derivative $f''(x)$ as given. It is also known that $f$ has an x-intercept at (−3, 0) and a y-intercept at (0, 0). (i) Find all critical points, and use them to identify the intervals over which you will examine the behaviour of the first deri...

##### A manufacturer of sprinkler systems used for fire protection in office buildings claims that the true average system-activation temperature is 130°F. A sample of 9 systems, when tested, yields a sample average activation temperature of 131.08°F. Assume that the distribution of activation ...

##### Find the Taylor series about the indicated center and determine the interval of convergence. $f(x)=\frac{1}{x}, c=1$

##### (a) What is nuclear fusion? (b) Give an example (different than those in this problem) of a nuclear fusion reaction. (c) When a tritium nucleus and a deuterium nucleus undergo a fusion reaction, they produce a helium nucleus and a neutron. If the masses of the species are ...

##### How many grams of potassium hydroxide are required to prepare 600. mL of 0.450 M KOH solution? 4.50 g of KOH; 15.1 g of KOH; 56.2 g of KOH; 12.4 g of KOH.

##### Sketch a continuous function that is not differentiable.

##### Spectrum 19 and Spectrum 20: [IR spectra; the peak wavenumber listings are garbled in extraction and are not recoverable.]

##### How do you use the order of operations to answer 18200×100÷91000?

##### Compute the following integrals by approximating them with finite sums. Compute the following definite integrals by recognizing what they represent, including $\int |x|\,dx$. Use properties of definite integrals and the examples above to compute the remaining definite integrals, including $\int (4-x)\,dx$.

##### Let demand be given by Q = 150 − P + 2Y. This is the same for all problems of this type. Let r = 10%. Let Y = 50 in the present but Y = 100 in the future. Let MC = 0. Let reserves = 200. Consider the basic two-period model. What is consumption of the resource in the present? 48.32; 55.23; ...

##### Enzyme that catalyzes phosphodiester bond formation between a nucleoside triphosphate and the 3′ nucleotide in a newly synthesized complementary strand of DNA: (a) Primase (b) Topoisomerase (c) DNA polymerase (d) DNA helicase

##### (Just questions 3a and 3b.) Simple harmonic motion: pendulums. Equipment: pendulum bobs; string and support; meter stick and mass balance; photogate and Smart Timer. Preliminary questions: 1. Describe simple harmonic motion in your own words. 2. Describe how mechanical energy is conserved during a pendulum...

##### In the figure, two large, thin insulating plates are parallel and close to each other. They have surface charge densities of opposite signs and magnitude 8.40 x 10 C/m². Find the magnitude of the electric field at points between them.

##### Consider the matrix $A$ (entries garbled in the original). (a) Calculate $A^2$, $A^3$, $A^4$. (b) What is $A^{2018}$?

##### A sample of 76 obese adults was put on a special low-carbohydrate diet for a year. The average weight loss was 11 lb and the standard deviation was 19 lb (note that positive weight loss implies reduced weight over the period). a. Calculate the 99% confidence interval for the true mean reduction. b. Do you ...
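Several of the quantitative items above reduce to one-line calculations. As an illustration, the potassium hydroxide question can be checked numerically; the molar mass of KOH (≈ 56.11 g/mol) is an assumed standard value, not given in the question.

```python
volume_l = 0.600        # 600. mL converted to litres
molarity = 0.450        # mol/L
molar_mass_koh = 56.11  # g/mol, assumed: K 39.10 + O 16.00 + H 1.01

# grams = volume (L) x molarity (mol/L) x molar mass (g/mol)
grams = volume_l * molarity * molar_mass_koh
print(round(grams, 1))  # → 15.1, matching the "15.1 g of KOH" choice
```

The same volume-times-molarity pattern applies to any solution-preparation question of this form.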
# 1: Preliminary Concepts

A field is the continuum of values of a quantity as a function of position and time. The quantity that the field describes may be a scalar or a vector, and the scalar part may be either real- or complex-valued. In electromagnetics, the electric field intensity $${\bf E}$$ is a real-valued vector field that may vary as a function of position and time, and so might be indicated as “$${\bf E}(x,y,z,t)$$,” “$${\bf E}({\bf r},t)$$,” or simply “$${\bf E}$$.” When expressed as a phasor, this quantity is complex-valued but exhibits no time dependence, so we might say instead “$$\widetilde{\bf E}({\bf r})$$” or simply “$$\widetilde{\bf E}$$.” An example of a scalar field in electromagnetics is the electric potential, $$V$$; i.e., $$V({\bf r},t)$$. A wave is a time-varying field that continues to exist in the absence of the source that created it and is therefore able to transport energy.

• 1.1: What is Electromagnetics? The topic of this book is applied engineering electromagnetics. This topic is often described as “the theory of electromagnetic fields and waves,” which is both true and misleading. The truth is that electric fields, magnetic fields, their sources, waves, and the behavior of these waves are all topics covered by this book. The misleading part is that our principal aim shall be to close the gap between basic electrical circuit theory and the more general theory.
• 1.2: Electromagnetic Spectrum Electromagnetic fields exist at frequencies from DC (0 Hz) to at least $$10^{20}$$ Hz – that’s at least 20 orders of magnitude! At DC, electromagnetics consists of two distinct disciplines: electrostatics, concerned with electric fields; and magnetostatics, concerned with magnetic fields. At higher frequencies, electric and magnetic fields interact to form propagating waves. Waves having frequencies within certain ranges are given names based on how they manifest as physical phenomena.
• 1.3: Fundamentals of Waves In this section, we formally introduce the concept of a wave and explain some basic characteristics.
• 1.4: Guided and Unguided Waves Broadly speaking, waves may be either guided or unguided. Unguided waves include those that are radiated by antennas, as well as those that are unintentionally radiated. Once initiated, these waves propagate in an uncontrolled manner until they are redirected by scattering or dissipated by losses associated with materials. Examples of guided waves are those that exist within structures such as transmission lines, waveguides, and optical fibers.
• 1.5: Phasors In many areas of engineering, signals are well-modeled as sinusoids. Also, devices that process these signals are often well-modeled as linear time-invariant (LTI) systems. The response of an LTI system to any linear combination of sinusoids is another linear combination of sinusoids having the same frequencies.
• 1.6: Units The term “unit” refers to the measure used to express a physical quantity.
• 1.7: Notation The list below describes notation used in this book.

Thumbnail: Examples of phasors, displayed here as points in the real-imaginary plane.
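The phasor idea of Section 1.5 can be made concrete with a small numerical sketch: a sinusoid $$A\cos(\omega t + \phi)$$ is represented by the complex amplitude $$Ae^{j\phi}$$, with the time dependence $$e^{j\omega t}$$ factored out and restored only when an instantaneous value is needed. The amplitude, phase, and frequency below are arbitrary illustrative values.

```python
import cmath
import math

A, phi = 2.0, math.pi / 3          # arbitrary amplitude and phase
w = 2 * math.pi * 50.0             # arbitrary angular frequency (50 Hz)

phasor = A * cmath.exp(1j * phi)   # complex amplitude: no time dependence

# Restore the time dependence and compare with the direct time-domain form
t = 0.004
instantaneous = (phasor * cmath.exp(1j * w * t)).real
direct = A * math.cos(w * t + phi)
assert abs(instantaneous - direct) < 1e-12
```

Because LTI operations act on the phasor alone, sums and scalings of sinusoids at one frequency reduce to complex arithmetic.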
1-1-2021 Dissertation, Ph.D. in Physics. Breese Quinn, Nathan Hammer.

#### Relational Format

dissertation/thesis

#### Abstract

The Muon g-2 experiment at Fermilab (E989) aims to measure the anomalous magnetic moment of the muon, $a_{\mu}= (g-2)/2$, to a groundbreaking precision of $140$ ppb, obtaining a near four-fold increase in precision over the previous experiment, E821, at Brookhaven National Laboratory (BNL). The value of $a_{\mu}$ from BNL currently differs from the Standard Model prediction by $\sim 3.7$ standard deviations, suggesting the potential for new physics and therefore motivating a new experiment. Because the theory predicts this number with high precision, testing the g-factor through experiment provides a stringent test of the SM and can suggest physics beyond the Standard Model. The goal of the Fermilab Muon $g-2$ experiment is to increase the statistical precision by more than a factor of 20 and reduce systematic errors by a factor of 3. By measuring the muon precession rate ($\omega_a$) in an external magnetic field, the anomalous magnetic moment will be calculated. This is an incredibly challenging experiment with a unique opportunity to provide new insight into nature.

The $g-2$ data also provide a great opportunity for setting the most stringent limits on some of the Standard Model Extension CPT- and Lorentz-violating (LV) parameters in the muon sector. One of the CPT- and Lorentz-violating signatures that we can look for using $g-2$ data is a sidereal variation of $\omega_a(t)$. Extensive simulation studies confirm that the sensitivity to the sidereal variation roughly scales with the $\omega_a$ uncertainty. Hence, the $g-2$ experiment at FNAL should be able to reach limits of $\sim 5\times10^{-25}$ GeV. Because the CPT and LV analyses are essentially studies of variations in $\omega_a$ as a function of time and charge, performing an $\omega_a$ analysis sets the stage for the CPT and LV measurement.
This dissertation focuses on the methodology of a fully functioning analysis framework and on analyzing the Fermilab Muon $g-2$ Run 2 data, containing $\sim 11$ billion events above an energy threshold of $1.7$ GeV.
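A sidereal-variation search looks for a modulation of $\omega_a(t)$ at the sidereal frequency. As a toy numerical aside (the sidereal-day length is an assumed standard value, not taken from the dissertation), that frequency is:

```python
import math

SIDEREAL_DAY_S = 86164.0905               # seconds; assumed standard value

f_sidereal = 1.0 / SIDEREAL_DAY_S         # ~1.1606e-5 Hz
omega_sidereal = 2 * math.pi * f_sidereal # ~7.2921e-5 rad/s

# A Lorentz-violating signal would appear as a periodic term in omega_a(t)
# at this frequency, e.g. amplitude * cos(omega_sidereal * t + phase).
print(f_sidereal, omega_sidereal)
```

Fitting the time series of $\omega_a$ for a component at exactly this frequency (and at DC for the charge comparison) is the essence of the CPT/LV analysis described above.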
## Are some seasons warming more than others? #### 23 November 2015 /posted in: R I ended the last post with some pretty plots of air temperature change within and between years in the Central England Temperature series. The elephant in the room1 at the end of that post was: is the change in the within-year (seasonal) effect over time statistically significant? This is the question I’ll try to answer, or at least show how to answer, now. 1. well, one of the elephants; I also wasn’t happy with the AR(7) for the residuals ## Climate change and spline interactions #### 21 November 2015 /posted in: R In a series of irregular posts1 I’ve looked at how additive models can be used to fit non-linear models to time series. Up to now I’ve looked at models that included a single non-linear trend, as well as a model that included a within-year (or seasonal) part and a trend part. In this trend plus season model it is important to note that the two terms are purely additive; no matter which January you are predicting for in a long timeseries, the seasonal effect for that month will always be the same. The trend part might shift this seasonal contribution up or down a bit, but all Januarys are the same. In this post I want to introduce a different type of spline interaction model that will allow us to relax this additivity assumption and fit a model that allows the seasonal part of the model to change in time along with the trend. 1. here, here, and here ## User-friendly scaling #### 08 October 2015 /posted in: R Back in the mists of time, whilst programming early versions of Canoco, Cajo ter Braak decided to allow users to specify how species and site ordination scores were scaled relative to one another via a simple numeric coding system.
This was fine for the DOS-based software that Canoco was at the time; you entered 2 when prompted and you got species scaling, while -1 got you site or sample scaling with Hill’s scaling or correlation-based scores, depending on whether your ordination was a linear or unimodal method. This system persisted; even in the Windows era of Canoco these numeric codes can be found lurking in the .con files that describe the analysis performed. This use of numeric codes for scaling types was so pervasive that it was logical for Jari Oksanen to include the same system when the first cca() and rda() functions were written, and in doing so Jari perpetuated one of the most frustrating things I’ve ever had to deal with as a user and teacher of ordination methods. But, as of last week, my frustration is no more… ## ESA's publishing deal with Wiley: Notes from ESA Council #### 11 August 2015 /posted in: Science One of the big announcements about the society made by ESA in the run up to the annual meeting in Baltimore this week was the news that ESA has chosen to partner with John Wiley & Sons as publisher of the society journals. At the time of the announcement few details about the deal or the process by which this decision was made were available. I was attending the ESA Council as the incoming Chair of the Paleoecology Section, where some further details were provided and members of Council were able to ask questions about the deal. These are my notes from that meeting. ## My aversion to pipes #### 03 June 2015 /posted in: R At the risk of coming across as even more of a curmudgeonly old fart than people already think I am, I really do dislike the current vogue in R that is the pipe family of binary operators; e.g. %>%. Introduced by Hadley Wickham and popularised and advanced via the magrittr package by Stefan Milton Bache, the basic idea brings the forward pipe of the F# language to R.
At first, I was intrigued by the prospect, and initial examples suggested this might be something I would find useful. But as time has progressed and I’ve seen the use of these pipes spread, I’ve grown to dislike the idea altogether. Here I outline why. ## Something is rotten in the state of Denmark #### 02 June 2015 /posted in: R On Twitter and elsewhere there has been much wailing and gnashing of teeth for some time over one particular aspect of the R ecosphere: CRAN. I’m not here to argue that everything is peachy — far from it, in fact — but I am going to argue that the problems we face do not begin and end with CRAN or one or more of its maintainers. ## Drawing rarefaction curves with custom colours #### 16 April 2015 /posted in: R I was sent an email this week by a vegan user who wanted to draw rarefaction curves using rarecurve() but with different colours for each curve. The solution to this one is quite easy as rarecurve() has argument col, so the user could supply the appropriate vector of colours to use when plotting. However, they wanted to distinguish all 26 of their samples, which is certainly stretching the limits of perception if we only used colour. Instead we can vary other parameters of the plotted curves to help with identifying individual samples. ## At the frontiers of palaeoecology #### 31 March 2015 /posted in: Science A couple of weeks ago, I had the pleasure of attending and participating in a symposium held to honour John Birks as he retires from the University of Bergen and becomes Professor Emeritus. The symposium, titled “At the Frontiers of Palaeoecology”, took place on 19–20th March in Bergen, Norway, and was a wonderful mix of colleagues old and new discussing John’s contributions to the field of palaeoecology and their collaborations with him. Alongside this reminiscing were several presentations describing new areas of research by colleagues and collaborators of John.
# GnuplotTex: automatic plotting and vertical line indication

I am using pdflatex with the gnuplottex package to plot a function with a local maximum; I'd like to find this maximum automatically and label it. For the most part, I'm already there (using the special "pseudo" file '+' in gnuplot >4.4), apart from a couple of problems -- and I'm not sure whether the problem is in LaTeX or gnuplot ...

Here is a minimal working example:

```
% build with:
% pdflatex -shell-escape test.tex
\documentclass{article}
\makeatletter\newwrite\verbatim@out\makeatother
\usepackage{gnuplottex}
\begin{document}

\section{Test}
Here a brief test...

\begin{figure}[h]
\centering
\begin{gnuplot}[scale=0.95]
# Define helper functions
ismax(x) = (x>max)?max=x:0
isxmax(x) = (ismax(f(x))!=0)?xmax=x:0

# Initialise the 'global' vars
max=-1e38
xmax=-1e38
min=1e38

set grid
set title 'gnuplottex test'
set ylabel '$y$'
set xlabel '$x$'
set xrange [0:2000]
set yrange [0:ymaxrange] # MUST set this!

# define the function
f(x) = (20*x)/(100000 + 50*x + x**2)

set multiplot
# plot f(x) # OK, works as usual

# to turn off the annoying label in the upper right corner - also f($0) will cause latex crash
set nokey

# plot the function - which will also calculate xmax (first pass)
plot '+' using ($1):(f($1)) with linespoints, \
     '+' using ($1):(isxmax($1)) with lines linecolor 2
#    '+' using ($1):(ismax(f($1))) with lines linecolor 2

# second part of plot - which needs xmax
set grid noxtics noytics # prevent double plot ?!
set arrow from xmax,0 to xmax,3 nohead lt 1 linewidth 2 # linewidth doesn't change ?!
set label "X" at xmax,f(xmax)
set label "(%.0f;",xmax,"%f)",f(xmax) at 0.6*xmax,f(xmax)+ymaxrange/10
plot '+' using ($1)
# replot # nope, doubles
unset multiplot
\end{gnuplot}
\end{figure}

... End of test.

\end{document}
```

This code results in a rendering like this (a screenshot of evince rendering the PDF):

And these are my problems:

• It seems that upon execution of the second plot, the labels and axes get repeated on top of each other, in spite of a set grid noxtics noytics (notice they are a bit darker in the screenshot) -- can this be prevented?
• In principle, without the set yrange ... line, the second plot may contain a different automatic range (though that is not visible in this example). Is there a way to "copy"/duplicate a range of an axis that was computed automatically in gnuplot?
• The linewidth (lw) argument of set arrow (which is for implementing a vertical line) seems not to have any effect -- how can I manipulate that?
• The line from set arrow and the (very thin dotted) line from the isxmax($1) plot do not match; seemingly it is the arrow that is off -- how to fix this?
• The linecolor argument seems to have no effect -- how to fix this? (btw, color seems to work fine for \begin{gnuplot}[terminal=pdf,...]; however, I'd like to keep the LaTeX 'terminal')

References:

## migrated from stackoverflow.com Apr 28 '11 at 9:15

This question came from our site for professional and enthusiast programmers.

I would use the pgfplots package for this. You can generate the data with gnuplot by using \addplot gnuplot {<expression>};, and then read the generated data using \pgfplotstableread{\jobname.pgf-plot.table}\table. After sorting this table, you just access the first element, which now contains the local maximum.
```
\documentclass{article}
\usepackage{pgfplots}
\usepackage{pgfplotstable}
\begin{document}
\begin{tikzpicture}
  \begin{axis}[ymin=0,ymax=0.05,xmin=0,xmax=2000,grid=both]
    \addplot gnuplot {(20*x)/(100000 + 50*x + x**2)};
    \pgfplotstableread{\jobname.pgf-plot.table}\table
    \pgfplotstablesort[sort cmp={float >},sort key={[index] 1}]\sorted{\table}
    \pgfplotstablegetelem{0}{[index] 1}\of{\sorted}
    \let\maxy=\pgfplotsretval
    \pgfplotstablegetelem{0}{[index] 0}\of{\sorted}
    \let\maxx=\pgfplotsretval
    \node at (axis cs:\maxx,\maxy) [circle, fill, red, inner sep=1.5pt,
      pin={[fill=white]40:{(\pgfmathprintnumber{\maxx}, \pgfmathprintnumber{\maxy})}}
    ] {};
    \draw [red] (axis cs:\maxx,0) -- (axis cs:\maxx,1);
  \end{axis}
\end{tikzpicture}
\end{document}
```

- Hi @Jake - thanks a lot for this answer! It seems like the optimal solution that will utilize the gnuplot syntax... – sdaau May 8 '11 at 19:21

Many of your questions concern how GnuPlot constructs graphs and don't have much to do with TeX. Unless there are some GnuPlot experts around here, more insight may be found on their mailing list.

As for not getting colored output from the latex terminal -- that question I can answer: the GnuPlot latex terminal does not include any color information in the picture code that it generates. You need to use a different terminal.

### The epslatex terminal:

```
\begin{gnuplot}[scale=0.95,terminal=epslatex,terminaloptions=color]
```

If using pdflatex, you will also need to convert the resulting EPS files to PDF. This can be done by installing the epstopdf tool and adding the following to your preamble:

```
\usepackage[suffix=]{epstopdf}
```

However, feeding EPS to pdflatex just feels like a hack. There are other GnuPlot terminals you can use to get around this.

### The tikz terminal:

```
\begin{gnuplot}[scale=0.95,terminal=tikz,terminaloptions=createstyle]
```

You also need to change the upper y value on your arrow from:

```
set arrow from xmax,0 to xmax,3 nohead lt 1 linewidth 2
```

To:

```
set arrow from xmax,0 to xmax,ymaxrange nohead lt 1 linewidth 2
```

Or it will shoot off into space.
The following has to be added to the preamble of your document:

\makeatletter
% Tell gnuplottex to bring TikZ output in using \input rather
% than \includegraphics
\def\gnuplottexextension@tikz{\string tex}
\makeatother
\usepackage{tikz}
\usepackage{gnuplot-lua-tikz} % Generated by the createstyle option

The results are:

Other GnuPlot terminals that may be of interest generate Metapost and PS-Tricks output.

- Hi @Sharpie, thanks a lot for the detailed color and terminals explanation! I guess, I wanted to use gnuplot syntax for function plotting, along with calculation of a max value - and yet have the whole thing coded in a LaTeX file (i.e. without intermediary image files) ... Cheers! –  sdaau May 8 '11 at 19:24
# Please rate my essay ! exam in few days - Kudos for rating

Intern
Joined: 24 May 2013
Posts: 9
Followers: 0
Kudos [?]: 3 [0], given: 22

Please rate my essay ! exam in few days - Kudos for rating [#permalink]  02 Aug 2013, 01:35

ESSAY QUESTION:
The following appeared in a strategy memorandum of an investment company:

“Over the past several years, investment in precious metals, such as gold and silver, has proven to be one of the most profitable investment strategies for our firm. Over the next decade, the demand for these metals is expected to be strong, largely driven by the economic growth of large emerging markets--China, India, and Russia. Thus, our investors are best served by increasing their exposure to precious metals to take advantage of this unique profit-making opportunity.”

Discuss how well reasoned you find this argument. Point out flaws in the argument's logic and analyze the argument's underlying assumptions. In addition, evaluate how supporting evidence is used and what evidence might counter the argument's conclusion. You may also discuss what additional evidence could be used to strengthen the argument or what changes would make the argument more logically sound.

The strategy cited by the investment company is based on the analysis of the previous few years.
The memorandum states that valuable metals such as gold and silver have proven to be the most lucrative investment strategies for the company to book higher profits in coming years. The memorandum also predicts a future increase in demand from emerging markets such as India and China. Hence it concludes that to book higher profits the firm should invest more in precious metals. I personally feel that the argument rests on a lot of unstated assumptions to come to such a concrete conclusion, and with reasoned arguments I shall prove that the strategy might not meet the needs of the firm.

First, the strategy has been based only on the results of the previous several years, and the author of the memorandum assumes that since past results were positive, future results will also be positive. However, the strategy does not cite any evidence or data or statistics that can provide any correlation of the various factors which affect the price of precious metals. The memorandum assumes that the world markets will remain in favor of precious metals and that the previous key metrics which determine the price of valuable metals such as gold and silver will show a positive correlation or increase with time.

Secondly, the author assumes that because demand in emerging markets is high, it will drive the price of precious metals to new highs in the coming future. However, the author forgets that the price of gold depends on the price of the dollar and the overall demand for precious metals. For example, if the currency of these emerging markets depreciates by a very big factor when compared to the dollar, perhaps due to oil prices, it is quite possible that demand for precious metals will fall as their price goes unreasonably high and purchasing power goes down.

In addition to the above, the author has also used demand in emerging markets as one of the key factors to assert that the price of precious metals will increase manifold.
Although the demand and supply rule does apply and could increase the price further, we are not certain whether the demand will remain constant or increase in the coming years. If demand remains constant, it is quite possible that the price remains constant. Also, no evidence is given to suggest that emerging markets will only invest in precious metals and not in stock markets or bonds, which can yield a higher return, thereby reducing the demand for precious metals.

Also, the author assumes that investment in one instrument can help his firm achieve great profits. Greater exposure to only one instrument of investment, such as precious metals, can increase risk for the firm.

The argument could have been strengthened had the author provided the key factors that govern the price of precious metals. Also, the author should have provided some more details on the demand and supply in emerging markets and some evidence to prove that future demand will persist in such markets. Apart from these, a reasonable view of the return on other instruments when compared to precious metals, if provided, could have helped make a sound judgement on the effectiveness of the strategy.

Thus I find the argument flawed, as it rests on a lot of unstated assumptions. Hence the company's strategy of taking advantage of this opportunity might not be as lucrative as suggested by the memorandum.

Intern
Joined: 24 May 2013
Posts: 9
Followers: 0
Kudos [?]: 3 [0], given: 22

Re: Please rate my essay ! exam in few days - Kudos for rating [#permalink]  02 Aug 2013, 07:40

Hi All,

Regards,
gmat2805

Princeton Review Representative
Joined: 17 Jun 2013
Posts: 163
Followers: 130
Kudos [?]: 225 [1], given: 0

Re: Please rate my essay ! exam in few days - Kudos for rating [#permalink]  03 Aug 2013, 13:11
1 KUDOS Expert's post

It is clear that you understand the task of the question and your essay lists many assumptions and is well organized.
However, your essay could have been improved by listing fewer points and adding more analysis. As written it reads like a list of assumptions. For instance, in the first body paragraph you could explain more directly what could happen to make past results not indicative of future results. You could also have added how that would affect the investors. Each later paragraph is analyzed less, and at the end you simply use one sentence to explain an assumption. If you worked on only 3 points and analyzed them better, your essay would receive higher marks. However, overall your essay does a lot of what it is supposed to. Therefore I would rate it a 4 or 4.5 out of 6.

Intern
Joined: 24 May 2013
Posts: 9
Followers: 0
Kudos [?]: 3 [0], given: 22

Re: Please rate my essay ! exam in few days - Kudos for rating [#permalink]  06 Aug 2013, 10:23

Thank you Becky. I appreciate your help and advice. I will work on these aspects of my essay.
Thanks once again.
## Point domain to cPanel with "A" record

Hi, I have a client who has a domain with another company and changed the "A" records to point to my server. I tried to add this domain in cPa… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1830370&goto=newpost

## agile – In fixed-price contracts, how can a story be “negotiable and represent a starting point of conversation with business”?

Our company paid for an agile coach, who – without knowing much about our company – went like this:

• A story should just encapsulate the value and then be clarified with the business
• Avoid detailed specs, that is not agile!

We were thinking about that, but we think that just cannot work in standard fixed-price, fixed-scope delivery projects:

• To be able to come up with a precise estimation, even the proposal must be detailed enough
• Any unclarities must be resolved as soon as possible to prevent rework effort

I do not truly believe this “minimum story level and conversation later” works anywhere in service delivery, and the same goes for postponing design and any other decisions as late as possible. Sure, it can be done, but the rework (and cost) due to e.g. design changes might be enormous…

Why is it “not agile” to know very well upfront what you need to do? What value is there in reworking and redesigning a solution (pointing to adaptability) in response to changes, when the resultant solution will not be well thought-through but rather a hybrid?

## microsoft powerpoint – How to remove this annoying text from PowerPoint?

I found a PowerPoint on a Telegram channel and I want to edit and work on it, but there is a text box at the top left of the presentation that I try to delete and can't. I also tried clicking on the Section Header, but then the glowing line below the arrows disappeared. So how can I fix it?

## eigenvalues – How to make a cross-correlation between 2 Fisher matrices from a pure mathematical point of view?
Firstly, I want to give you a maximum of information and precision about my issue. If I can't manage to get the expected results, I will launch a bounty; maybe some experts, or simply people who have already faced a similar problem, will be able to help me.

1) I have 2 known covariance matrices $$Cov_1$$ and $$Cov_2$$ that I want to cross-correlate. (The covariance matrix is the inverse of the Fisher matrix.) I describe my approach to cross-correlate the 2 covariance matrices (the constraints are expected to be better than the constraints inferred from a “simple sum” (element by element) of the 2 Fisher matrices).

• For this, I have performed a diagonalisation of each Fisher matrix $$F_1$$ and $$F_2$$ associated with the covariance matrices $$Cov_1$$ and $$Cov_2$$.

• So, I have 2 different linear combinations of random variables that are uncorrelated, i.e. just related by eigenvalues ($$1/\sigma_i^2$$) with respect to their combination. These eigenvalues from the diagonalisation are contained in the diagonal matrices $$D_1$$ and $$D_2$$.

2) I can't build a “global” Fisher matrix directly by summing the 2 diagonal matrices, since the linear combination of random variables differs between the 2 Fisher matrices. I have eigenvectors represented by the $$P_1$$ and $$P_2$$ matrices. That's why I think that I could perform a “global” combination of eigenvectors where I respect the MLE (Maximum Likelihood Estimator) for each eigenvalue:

$$\dfrac{1}{\sigma_{\hat{\tau}}^{2}}=\dfrac{1}{\sigma_1^2}+\dfrac{1}{\sigma_2^2}\quad(1)$$

because $$\sigma_{\hat{\tau}}$$ corresponds to the best estimator from the MLE method.
So, I thought a convenient linear combination of the eigenvectors $$P_1$$ and $$P_2$$ that could allow me to achieve this would be a new matrix P whose columns each represent a new global eigenvector, like this:

$$P = aP_1 + bP_2$$

3) PROBLEM: But there too, I can't sum the eigenvalues in the form $$D_1 + D_2$$, since the new matrix $$P = aP_1 + bP_2$$ can't have at the same time the eigenvalues $$D_1$$ and also the eigenvalues $$D_2$$, can it? I mean, I wonder how to build a new diagonal matrix $$D'$$ such that I could write:

$$P^{-1} \cdot F_{1} \cdot P + P^{-1} \cdot F_{2} \cdot P = D'$$

If $$a$$ and $$b$$ could be scalars, I could for example start from the relations

$$P^{-1} \cdot F_{1} \cdot P = a^2 D_1\quad(1)$$

and

$$P^{-1} \cdot F_{2} \cdot P = b^2 D_2\quad(2)$$

with $$(1)$$ and $$(2)$$ making the following relation appear:

$$Var(aX+bY) = a^2\,Var(X) + b^2\,Var(Y) + 2ab\,Cov(X,Y) = a^2\,Var(X) + b^2\,Var(Y)$$

since we are in a new basis $$P$$ that respects $$(1)$$ and $$(2)$$. But the issue is that $$a$$ and $$b$$ seem to be matrices and not scalars, so I don't know how to proceed to compute $$D'$$.

4) CONCLUSION: Is this approach correct, building a new basis $$P = aP_1 + bP_2$$ and $$D' = a.a.D_1 + b.b.D_2$$, assuming $$a$$ and $$b$$ are matrices?

The key point is: if I can manage to build this new basis, I could go back to the starting space, the one of single parameters (no more combinations of them), by simply doing

$$F_{\text{cross}} = P \cdot D' \cdot P^{-1}$$

and estimate the constraints with the covariance matrix $$C_{\text{cross}} = F_{\text{cross}}^{-1}$$.

If my approach seems correct, the main difficulty will be to determine the $$a$$ and $$b$$ parameters (which are in matrix form, at least I think, since in scalar form there are too many equations compared to the 2 unknowns). Sorry if there is no code for the moment, but I wanted to set out the problematic of this approach correctly before trying to implement it. Hoping I have been clear enough.
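As a side note on the “simple sum” mentioned above: if the two Fisher matrices constrain the same parameter vector and come from independent data sets, the textbook MLE combination is the plain element-wise sum in the shared parameter basis, so no common eigenbasis is required. A minimal numpy sketch (my own illustration; the matrices are made-up toy values, not taken from the question):

```python
import numpy as np

def combine_covariances(cov1, cov2):
    """Combine two covariance matrices of the same parameters,
    assuming independent measurements: Fisher matrices add in the
    shared parameter basis, so no diagonalisation is needed."""
    f1 = np.linalg.inv(cov1)  # Fisher matrix = inverse covariance
    f2 = np.linalg.inv(cov2)
    return np.linalg.inv(f1 + f2)

# toy 2x2 example (made-up numbers)
cov1 = np.array([[2.0, 0.3], [0.3, 1.0]])
cov2 = np.array([[1.5, -0.2], [-0.2, 0.8]])
combined = combine_covariances(cov1, cov2)

# the combined constraints are at least as tight as either input:
# (F1 + F2)^{-1} <= cov1 and <= cov2 in the Loewner (matrix) order
assert np.all(np.linalg.eigvalsh(cov1 - combined) >= -1e-12)
assert np.all(np.linalg.eigvalsh(cov2 - combined) >= -1e-12)
```

The sum happens before any change of basis, which is why the mismatch between the eigenbases $$P_1$$ and $$P_2$$ never enters; one can diagonalise the summed matrix afterwards if decorrelated combinations are wanted.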
Any help/suggestion/track/clue to solve this problem is welcome.

## unity – Transparent objects in the scene only get lit half way by point light

I have encountered a weird graphics problem, where all of our transparent objects in the scene only get lit half way (only from the y position downward) by a point light. This happens with all of our grass and tree shaders. Also, when looking through translucent objects, they become really desaturated and gray-ish, which is an undesired effect. Does anyone know how to deal with this problem? Please refer to the screenshots below.

I'm using Unity 2020.1.14f, built-in render pipeline with a forward renderer. For grass and trees we are using the Fantasy Adventure Environment, but all other grass shaders from other packages have the same problem.

## algorithm – Given a 2D array, how do I generate a random path of fixed length from a random point at length 0 to a random point at max length?

I'm trying to generate a random path on a 2D grid given that:

• The width and height of the grid are given
• The length of the path to generate is given
• The path can't move "back"
• The path starts from a random point at height 0 and ends at a random point at max height
• A path segment cannot "touch" a path segment at its height – 1 that is not the latest generated segment of the previous height

This is what two paths generated from the parameters {Width:11, Length:17, PathLength:30} would look like:

and this is an example of a path that should not be generated:

The result of the algorithm should be a list of value pairs, such as (8,6), in any order, indicating the segments of the path. I've been trying to solve this problem for a while, but I have trouble understanding how to make the path have a given length. If the given length were not a requirement, I could just generate it with a nested for cycle and some rules. Please help!
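One way to get the fixed total length is to grow the path row by row, bounding each horizontal run so that every remaining row can still receive at least one cell, and retrying if the last row cannot absorb the remainder. The following is a rejection-sampling sketch of my own (function names and the retry bound are arbitrary); it respects the no-moving-back rule, and it avoids touching the previous row's run by keeping the run direction whenever a run is longer than one cell (diagonal contact is not handled):

```python
import random

def generate_path(width, height, path_length, max_tries=10000):
    """Return a list of (x, y) cells: one horizontal run per row,
    climbing up from the end of each run, with exactly path_length cells."""
    if not (height <= path_length <= width * height):
        raise ValueError("path_length out of range")
    for _ in range(max_tries):
        x = random.randrange(width)
        direction = random.choice((-1, 1))
        remaining, path, ok = path_length, [], True
        for y in range(height):
            rows_left = height - y - 1
            # cells available in the current direction from x (inclusive)
            room = x + 1 if direction == -1 else width - x
            # leave at least one cell for every remaining row
            max_run = min(room, remaining - rows_left)
            if y == height - 1:
                run = remaining           # last row must take the rest
                if run > max_run:
                    ok = False            # does not fit -> retry
                    break
            else:
                run = random.randint(1, max_run)
            path += [(x + i * direction, y) for i in range(run)]
            x = path[-1][0]               # climb straight up from here
            remaining -= run
            if run == 1:                  # single cell: free to turn around
                direction = random.choice((-1, 1))
        if ok and remaining == 0:
            return path
    raise RuntimeError("no path found for these parameters")

if __name__ == "__main__":
    random.seed(0)
    print(generate_path(11, 17, 30))
```

Keeping the direction for runs longer than one cell means new cells always extend away from the previous row's run, so the only vertical contact is with that run's latest segment.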
## visual studio 2013 – Manual debugging of SharePoint application: breakpoint does not hit on new code change lines

A SharePoint solution is installed on our dev server. We are working on new changes and unfortunately we cannot deploy and test our changes on the provided dev server. I am trying to debug manually and understand the existing application execution by attaching to the w3wp process. So I made some changes in my solution and tried debugging manually. Unfortunately the breakpoint does not hit on my new code changes; it only breaks on the lines that are part of the deployed solution. Will I not be able to test my changes while debugging manually, unless the changes are deployed?

## linear algebra – 3 non-collinear points can represent every point in R2

So this is supposed to be a quite easy problem using an orthogonal basis, but I just can't figure it out :(. So let

$$A, B, C \in \mathbb{R}^2 \text{ be three non-collinear points, different from each other.}$$

Show every point $$P \in \mathbb{R}^2$$ can be represented as

$$p = \lambda a + \mu b + \nu c$$

with

$$\lambda + \mu + \nu = 1$$

The hint given by my professor was to think about orthogonal bases and orthogonal coordinate systems, but I just can't wrap my head around it… It would be amazing if you had some ideas on how to approach this ^^

## air travel – What's the point of flight ticket cancellation charges?

Imagine a world where there were no change fees, and no cancellation fees. If you bought a ticket and then changed your mind, you could just cancel or change it. In this world, tickets would not be cheaper if you bought them in advance. After all, I could buy a ticket for a year from now then change it the day I was going to fly, and the airline would have to accommodate me. They wouldn't get a benefit from my making firm plans in advance, so they wouldn't motivate me with money to make my plans in advance.
You probably wouldn't like this world, because all plane tickets would cost about what "I need to fly this week" plane tickets cost today, which is about 5x what you pay if you plan far enough in advance.

Now, imagine the same world with no change or cancellation fees, but with no refunds either. You buy a ticket, use it or not, we don't care, but it's paid for. A bit like putting a subway token in a turnstile but then not going through. You wouldn't like this world either: plans do change and people don't want to lose all the money they paid for a plane ticket. Travel insurance exists, but doesn't cover everything.

So, ok, the airline is going to charge you some money to change or cancel your plans. There are two ways to establish that charge. One is "what does it cost them", which is a few pennies in IT stuff and then possibly thousands in switching to a bigger plane for the route or whatever. That's too much of a lottery for passengers to take on. A sort of average charge of a few hundred might be fairer. But the other approach is "what will deter this behaviour?" If changes cost hundreds, you won't book until you're really very sure you're going to do it. (Example: I book hotel rooms for events I might or might not attend, since they book up fast and can be cancelled at no charge. I don't buy the plane tickets until I know for sure I'm going.)

Then on top of that you have to think about the system-gamers. You fly once or twice a year. But there are people who fly every week. And they want to get upgraded, they want maximum status miles, they want to be home half an hour earlier than they would normally be, and all kinds of things that aren't an option for you or don't matter to you. They invest time and energy into gaming systems. They book three flights from A to B on the same day, so they can decide on the day which one they want, and that's cheaper than buying a last-minute ticket on the day.
They do "nested returns" and "hidden cities" and a ton of things you'd never do. The fees have to be robust against that kind of nonsense too.

So what this adds up to is that fees must exist, mostly to control your behaviour and make your plans firmer, so that they can plan their staff and equipment usage properly. Sometimes it seems like they would do better if they didn't charge you that fee — but that's because you haven't thought about how to game that if you fly that route every single week.

## dnd 5e – What is the point of origin for a square area of effect?

The spellcasting rules for areas of effect state:

A spell's description specifies its area of effect, which typically has one of five different shapes: cone, cube, cylinder, line, or sphere. Every area of effect has a point of origin, a location from which the spell's energy erupts. The rules for each shape specify how you position its point of origin. Typically, a point of origin is a point in space, but some spells have an area whose origin is a creature or an object. A spell's effect expands in straight lines from the point of origin. If no unblocked straight line extends from the point of origin to a location within the area of effect, that location isn't included in the spell's area. To block one of these imaginary lines, an obstruction must provide total cover.

Notably, square is not one of the shapes defined, yet there exist several spells which have a square area of effect, such as entangle or Evard's black tentacles. The spell grease tells us in its description:

Slick grease covers the ground in a 10-foot square centered on a point within range.

But this clarification is not present in the descriptions of entangle and Evard's black tentacles. So what is the point of origin of a square area of effect when it is not specified in the spell description?
# ASSESSMENT 3 in MUSIC 9, 3rd Quarter

By Screenbooks. Posted on June 30, 2022

Directions: Copy the table and identify what is being asked in each of the following statements. Locate and encircle the word of the correct answer inside the box. The word/s may be arranged horizontally, vertically, diagonally or inverted.

1. August

[word-search letter grid not legible in the source]
Variational discretization of a control-constrained parabolic bang-bang optimal control problem

Nikolaus von Daniels and Michael Hinze
Schwerpunkt Optimierung und Approximation, Universität Hamburg, Bundesstraße 55, 20146 Hamburg, Germany
[email protected], [email protected]

July 5, 2019

Abstract: We consider a control-constrained parabolic optimal control problem without Tikhonov term in the tracking functional. For the numerical treatment, we use variational discretization of its Tikhonov regularization: For the state and the adjoint equation, we apply Petrov-Galerkin schemes from [DanielsHinzeVierling] in time and usual conforming finite elements in space. We prove a-priori estimates for the error between the discretized regularized problem and the limit problem. Since these estimates are not robust if the regularization parameter tends to zero, we establish robust estimates, which — depending on the problem’s regularity — enhance the previous ones. In the special case of bang-bang solutions, these estimates are further improved. A numerical example confirms our analytical findings.

Keywords: Optimal control, Heat equation, Control constraints, Finite elements, A-priori error estimates, Bang-bang controls.

1 Introduction

In this article we are interested in the numerical solution of the optimal control problem

$$\min_{u \in U_{ad}} \; \frac12 \, \| S(Bu, y_0) - y_d \|_{L^2(I,L^2(\Omega))}^2.$$

Here, $S$ is basically the (weak) solution operator of the heat equation, the set of admissible controls $U_{ad}$ is given by box constraints, and $y_d$ is a given function to be tracked. Often, the solutions of this problem possess a special structure: They take values only on the bounds of the admissible set and are therefore called bang-bang solutions.
Theoretical and numerical questions related to this control problem attracted much interest in recent years, see, e.g., [deckelnick-hinze], [wachsmuth1], [wachsmuth2], [wachsmuth3], [wachsmuth4], [wachsmuth5], [gong-yan], [felgenhauer2003], [alt-bayer-etal2], [alt-seydenschwanz-reg1], and [seydenschwanz-regkappa]. The last four papers are concerned with $S$ being the solution operator of an ordinary differential equation, the former papers with $S$ being a solution operator of an elliptic PDE or being a continuous linear operator. In [dissnvd], a brief survey of the content of these and some other related papers is given at the end of the bibliography.

The problem above is in general ill-posed, meaning that a solution does not depend continuously on the datum $y_d$, see [wachsmuth2, p. 1130]. The numerical treatment of a discretized version of the problem is also challenging, e.g., due to the absence of formula (10) in the case $\alpha = 0$, which corresponds to the unregularized problem. Therefore we use Tikhonov regularization to overcome these difficulties. The regularized problem is given by

$$\min_{u \in U_{ad}} \; \frac12 \, \| S(Bu, y_0) - y_d \|_{L^2(I,L^2(\Omega))}^2 + \frac{\alpha}{2} \|u\|_U^2,$$

where $\alpha > 0$ denotes the regularization parameter. Note that for $\alpha = 0$, the regularized problem reduces to the original problem.

For the numerical treatment of the regularized problem, we then use variational discretization introduced by Hinze in [Hinze2005], see also [hpuu, Chapter 3.2.5]. The state equation is treated with a Petrov-Galerkin scheme in time using a piecewise constant Ansatz for the state and piecewise linear, continuous test functions. This results in variants of the Crank-Nicolson scheme for the discretization of the state and the adjoint state, which were proposed recently in [DanielsHinzeVierling]. In space, usual conforming finite elements are taken. See [dissnvd] for the fully discrete case and [SpringerVexler2013] for an alternative discontinuous Galerkin approach.

The purpose of this paper is to prove a-priori bounds for the error between the discretized regularized problem and the limit problem, i.e. the continuous unregularized problem.
We first derive error estimates between the discretized regularized problem and its continuous counterpart. Together with Tikhonov error estimates recently obtained in [daniels], see also [dissnvd], one can establish estimates for the total error between the discretized regularized solution and the solution of the continuous limit problem. Here, second order convergence in space is not achievable and (without coupling) the estimates are not robust if $\alpha$ tends to zero. Using refined arguments, we overcome both drawbacks. In the special case of bang-bang controls, we further improve those estimates. The obtained estimates suggest a coupling rule for the regularization parameter $\alpha$ and the time and space discretization parameters to obtain optimal convergence rates, which we numerically observe.

The paper is organized as follows. In the next section, we introduce the functional analytic description of the regularized problem. We recall several of its properties, such as existence of a unique solution for all $\alpha \ge 0$ (thus especially in the limit case we are interested in), an explicit characterization of the solution structure, and the function space regularity of the solution. We then introduce the Tikhonov regularization and recall some error estimates under suitable assumptions. In the special case of bang-bang controls, we recall a smoothness-decay lemma which later helps to improve the error estimates for the discretized problem.

The third section is devoted to the discretization of the optimal control problem. First, the discretization of the state and adjoint equation is introduced and several error estimates needed in the later analysis are recalled. Then, the analysis of variational discretization of the optimal control problem is conducted. The last section discusses a numerical example where we observe the predicted orders of convergence.
2 The continuous optimal control problem

2.1 Problem setting and basic properties

Let $\Omega$ be a spatial domain which is assumed to be bounded and convex with a polygonal boundary $\partial\Omega$. Furthermore, a fixed time interval $I := (0,T)$ with $T > 0$, a desired state $y_d$, a non-negative real constant $\alpha$, and an initial value $y_0$ are prescribed. With the Gelfand triple $H^1_0(\Omega) \hookrightarrow L^2(\Omega) \hookrightarrow H^{-1}(\Omega)$ we consider the following optimal control problem

$$\min_{u \in U_{ad}} \; \frac12 \, \| S(Bu, y_0) - y_d \|_{L^2(I,L^2(\Omega))}^2 + \frac{\alpha}{2} \|u\|_U^2,$$

where $U$ is the control space, the (closed and convex) set of admissible controls is defined by

$$U_{ad} := \{\, u \in U \mid a \le u \le b \ \text{a.e. in } \Omega_U \,\}$$

with fixed control bounds $a, b$ fulfilling $a \le b$ almost everywhere in $\Omega_U$,

$$Y := W(I) := \{\, v \in L^2(I, H^1_0(\Omega)) \mid v_t \in L^2(I, H^{-1}(\Omega)) \,\}$$

is the state space, and the control operator $B$ as well as the control region $\Omega_U$ are defined below. Note that we use the notation $v_t$ and $\partial_t v$ for weak time derivatives and f.a.a. for "for almost all". The operator

$$S : L^2(I, H^{-1}(\Omega)) \times L^2(\Omega) \to W(I), \quad (f,g) \mapsto y := S(f,g), \tag{2}$$

denotes the weak solution operator associated with the heat equation, i.e., the linear parabolic problem

$$\partial_t y - \Delta y = f \ \text{ in } I \times \Omega, \qquad y = 0 \ \text{ in } I \times \partial\Omega, \qquad y(0) = g \ \text{ in } \Omega.$$

The weak solution is defined as follows. For $(f,g) \in L^2(I, H^{-1}(\Omega)) \times L^2(\Omega)$ the function $y \in W(I)$ satisfies the two equations

$$y(0) = g \tag{3a}$$

$$\frac{d}{dt}(y(t), v)_{L^2(\Omega)} + a(y(t), v) = \langle f(t), v \rangle_{H^{-1}(\Omega), H^1_0(\Omega)} \quad \text{f.a.a. } t \in I, \ \forall\, v \in H^1_0(\Omega). \tag{3b}$$

Note that by the embedding $W(I) \hookrightarrow C([0,T], L^2(\Omega))$, see, e.g., [evans, Theorem 5.9.3], the first relation is meaningful. In the preceding equation, the bilinear form $a$ is given by

$$a(f,g) := \int_\Omega \nabla f(x) \cdot \nabla g(x) \, dx.$$

We show below that (3) yields an operator in the sense of (2). For the control region $\Omega_U$ and the control operator $B$ we consider two situations.

1. (Distributed controls) We set $\Omega_U := I \times \Omega$, and define the control operator by the identity mapping induced by the standard Sobolev embedding $L^2(\Omega) \hookrightarrow H^{-1}(\Omega)$.

2. (Located controls) We set the control region $\Omega_U := I$. With a fixed functional $g_1$ the linear and continuous control operator is given by

$$B : U = L^2(I) \to L^2(I, H^{-1}(\Omega)), \quad u \mapsto (t \mapsto u(t)\, g_1). \tag{4}$$

The case of several fixed functionals with vector-valued controls and a corresponding control operator is a possible generalization. To streamline the presentation we restrict ourselves to the case of a single functional here and refer to [dissnvd] for the general case.
For later use we observe that the adjoint operator is given by

$$B^* : L^2(I, H^1_0(\Omega)) \to U = L^2(I), \quad (B^* q)(t) = \langle g_1, q(t) \rangle_{H^{-1}(\Omega), H^1_0(\Omega)}.$$

If furthermore $g_1 \in L^2(\Omega)$ holds, we can consider $B$ as an operator into $L^2(I, L^2(\Omega))$ and get the adjoint operator

$$B^* : L^2(I, L^2(\Omega)) \to U = L^2(I), \quad (B^* q)(t) = (g_1, q(t))_{L^2(\Omega)}.$$

Note that the adjoint operator (and also the operator $B$ itself) preserves time regularity, i.e., arguments that are smooth in time are mapped to images of the same time regularity, e.g., $H^1$ or $C^0$ in time.

Lemma 1 (Properties of the solution operator S).

1. For every $(f,g) \in L^2(I, H^{-1}(\Omega)) \times L^2(\Omega)$ a unique state $y \in W(I)$ satisfying (3) exists. Thus the operator $S$ from (2) exists. Furthermore the state fulfills

$$\|y\|_{W(I)} \le C \big( \|f\|_{L^2(I,H^{-1}(\Omega))} + \|g\|_{L^2(\Omega)} \big). \tag{5}$$

2. Consider the bilinear form $A$ given by

$$A(y,v) := \int_0^T \big( -(y, \partial_t v)_{L^2(\Omega)} + a(y,v) \big) \, dt + (y(T), v(T))_{L^2(\Omega)} \tag{6}$$

with $v \in W(I)$. Then for $y \in W(I)$, equation (3) is equivalent to

$$A(y,v) = \int_0^T \langle f, v \rangle \, dt + (g, v(0))_{L^2(\Omega)} \quad \forall\, v \in W(I). \tag{7}$$

Furthermore, $y$ is the only function in $L^2(I, H^1_0(\Omega))$ fulfilling equation (7).

Proof. This can be derived using standard results, see [dissnvd, Lemma 1]. ∎

An advantage of the formulation (7) in comparison to (3) is the fact that the weak time derivative of $y$ is not part of the equation. Later in discretizations of this equation, it offers the possibility to consider states which do not possess a weak time derivative. We can now establish the existence of a solution to the optimal control problem.

Lemma 2 (Unique solution of the o.c.p.). The optimal control problem admits for fixed $\alpha \ge 0$ a unique solution $\bar u_\alpha$, which can be characterized by the first order necessary and sufficient optimality condition

$$(\alpha \bar u_\alpha + B^* \bar p_\alpha, \, u - \bar u_\alpha)_U \ge 0 \quad \forall\, u \in U_{ad}, \tag{8}$$

where $B^*$ denotes the adjoint operator of $B$, and the so-called optimal adjoint state $\bar p_\alpha$ is the unique weak solution defined and uniquely determined by the equation

$$A(v, \bar p_\alpha) = \int_0^T \langle h, v \rangle_{H^{-1}(\Omega), H^1_0(\Omega)} \, dt \quad \forall\, v \in W(I). \tag{9}$$

Proof. This follows from standard results, see, e.g., [dissnvd, Lemma 2]. ∎

As a consequence of the fact that $U_{ad}$ is a closed and convex set in a Hilbert space we have the following lemma.

Lemma 3. In the case $\alpha > 0$ the variational inequality (8) is equivalent to

$$\bar u_\alpha = P_{U_{ad}} \Big( -\tfrac{1}{\alpha} B^* \bar p_\alpha \Big), \tag{10}$$

where $P_{U_{ad}} : U \to U_{ad}$ is the orthogonal projection.

Proof. See [hpuu, Corollary 1.2, p. 70].
∎ The orthogonal projection in (10) can be made explicit in our setting. Lemma 4. Let us for with consider the projection of a real number into the interval , i.e., . There holds for Proof. See [dissnvd, Lemma 4] for a proof of this standard result in our setting. ∎ We now derive an explicit characterization of the optimal control. Lemma 5. If , then for almost all there holds for the optimal control ¯uα(x)=⎧⎪⎨⎪⎩a(x)if B∗¯pα(x)+αa(x)>0,−α−1B∗¯pα(x)if B∗¯pα(x)+α¯uα(x)=0,b(x)if B∗¯pα(x)+αb(x)<0. (11) Suppose is given. Then the optimal control fulfills a.e. ¯u0(x)={a(x)if B∗¯p0(x)>0,b(x)if B∗¯p0(x)<0. (12) Proof. We refer to [dissnvd, Lemma 5] for a proof of this standard result in our setting. ∎ Remark 6. As a consequence of (12) we have: If vanishes only on a subset of with Lebesgue measure zero, the optimal control only takes values on the bounds of the admissible set . In this case is called a bang-bang solution. Assuming more regularity on the data than stated above, we get regularity for the optimal state and the adjoint state needed for the convergence rates in the numerical realization of the problem. We use here and in what follows the notation ∥⋅∥:=∥⋅∥L2(Ω),∥⋅∥I:=∥⋅∥L2(I,L2(Ω)), (⋅,⋅):=(⋅,⋅)L2(Ω),and(⋅,⋅)I:=(⋅,⋅)L2(I,L2(Ω)). Assumption 7. Let with and . Furthermore, we expect . In the case of distributed controls, we assume , . In the case of located controls, we assume , and . Lemma 8 (Regularity of problem (P), α>0). Let Assumption 7 hold and let . For the unique solution of () and the corresponding adjoint state there holds • in the case of located controls or • in the case of distributed controls. With some constant independent of , we have the a priori estimates ∥∂2t¯y∥I+∥∂tΔ¯y∥I+maxt∈[0,T]∥∇∂t¯y(t)∥≤d1(¯u):=C(∥B¯u∥H1(I,L2(Ω))+∥∇B¯u(0)∥+∥∇Δy0∥),∥∂2t¯p∥I+∥∂tΔ¯p∥I+maxt∈[0,T]∥∇∂t¯p(t)∥≤d0(¯u):=C(∥yd∥H1(I,L2(Ω))+∥∇yd(T)∥+∥B¯u∥I+∥∇y0∥), and∥∂3t¯p∥I+∥∂2tΔ¯p∥I+maxt∈[0,T]∥∇∂2t¯p(t)∥≤d+1(¯u):=d1(¯u)+C(∥∂2tyd∥I+∥∇∂tyd(T)∥+∥∇Δyd(T)∥+∥∇B¯u(T)∥). (13) Proof. 
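The projection formulas of Lemmas 4 and 5 are easy to sanity-check numerically. The following is a minimal Python sketch, not from the paper: the grid values standing in for B∗p̄α and the constant bounds a, b are made-up illustrations. It clamps −α⁻¹B∗p̄α pointwise to [a, b] for α > 0 and, for α = 0, takes the bang-bang values from the sign of B∗p̄0 as in (12); where B∗p̄0 vanishes, (12) gives no information, so the midpoint is used as an arbitrary placeholder.

```python
def project(v, a, b):
    """Pointwise projection P_[a,b](v) = min(b, max(a, v)) from Lemma 4."""
    return min(b, max(a, v))

def control_from_adjoint(Bstar_p, a, b, alpha):
    """Optimal control from the adjoint state via (11)/(12).

    Bstar_p, a, b: lists of pointwise values of B*p and the bounds.
    alpha > 0:  u = P_[a,b](-Bstar_p / alpha)  (projection formula (10)/(11)).
    alpha == 0: bang-bang values from the sign of B*p (formula (12));
                where B*p = 0 the formula is silent -- midpoint as placeholder.
    """
    if alpha > 0:
        return [project(-q / alpha, ai, bi) for q, ai, bi in zip(Bstar_p, a, b)]
    return [ai if q > 0 else bi if q < 0 else 0.5 * (ai + bi)
            for q, ai, bi in zip(Bstar_p, a, b)]

# Toy data: values of B*p at five grid points, constant bounds with a <= 0 <= b as in (14).
Bstar_p = [-2.0, -0.5, 0.0, 0.5, 2.0]
a = [-1.0] * 5
b = [1.0] * 5
print(control_from_adjoint(Bstar_p, a, b, alpha=1.0))  # clamped values of -B*p
print(control_from_adjoint(Bstar_p, a, b, alpha=0.0))  # bang-bang: only bound values (and 0 where B*p = 0)
```

Note how, for α = 0, the control takes only the values a and b wherever B∗p̄0 ≠ 0, which is exactly the bang-bang behaviour discussed in Remark 6.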
See [dissnvd, Lemma 12]. ∎ Remark 9 (Regularity in the case α=0). In the case , we have less regularity: • , and Since (10) does not hold, we cannot derive regularity for from that of as above. We only know from the definition of that , but might be discontinuous as we will see later. 2.2 Tikhonov regularization For this subsection, it is useful to rewrite problem () in the reduced form () with , fixed data and the linear and continuous control-to-state operator , . From now on we assume a≤0≤b (14) in a pointwise almost everywhere sense, where and are the bounds of the admissible set . For the limit problem (), which we finally want to solve, this assumption can always be met by a simple transformation of the variables. To prove rates of convergence with respect to , we rely on the following assumption. Assumption 10. There exist a set , a function with , and constants and , such that the inclusion holds for the complement of and in addition 1. (source condition) 2. ((-)measure condition) ∀ ϵ>0:meas({x∈A|0≤|B∗¯p0(x)|≤ϵ})≤Cϵκ (16) with the convention that if the left-hand side of (16) is zero for some . For a discussion of this assumption we refer to the texts subsequent to [daniels, Assumption 7] or [dissnvd, Assumption 15]. A key ingredient in the analysis of the regularization error, and also of the discretization error considered later, is the following lemma; see [daniels, Lemma 8] or [dissnvd, Lemma 16] for a proof. Lemma 11. Let Assumption 10.2 hold. For the solution of (), there holds with some constant independent of and Using this lemma, we can now state regularization error estimates. Theorem 12. For the regularization error there holds, with positive constants and independent of , the following. 1. Let Assumption 10.2 be satisfied with (measure condition holds a.e. on the domain). Then the estimates ∥¯uα−¯u0∥L1(ΩU) ≤Cακ (18) ∥¯uα−¯u0∥U ≤Cακ/2 (19) ∥¯yα−¯y0∥H ≤Cα(κ+1)/2 (20) hold true.
If holds and in addition T∗:range(T)→L∞(ΩU) exists and is continuous, (21) we can improve (20) to ∥¯yα−¯y0∥H≤Cακ. (22) 2. Let Assumption 10 be satisfied with (source and measure condition on parts of the domain). Then the following estimates ∥¯uα−¯u0∥L1(A) ≤Cαmin(κ,21+1/κ) (23) ∥¯uα−¯u0∥U ≤Cαmin(κ,1)/2 (24) ∥¯yα−¯y0∥H ≤Cαmin((κ+1)/2,1) (25) hold true. If furthermore and (21) hold, we have the improved estimate ∥¯uα−¯u0∥L1(A)≤Cακ. (26) For a proof of this recent result, we refer to [daniels, Theorem 11] and [dissnvd, Theorem 19], where a discussion can also be found. We only recall two points for convenience here: The assumption of the first case of the above theorem implies meas({x∈ΩU|B∗¯p0(x)=0})=0, (27) which induces bang-bang controls, compare Remark 6. By Lemma 8 and Remark 9 we can immediately see that the assumption (21) on is fulfilled for our parabolic problem. 2.3 Bang-bang controls We now introduce a second measure condition which leads to an improved bound on the decay of smoothness in the derivative of the optimal control when tends to zero. This bound will be useful later to derive improved convergence rates for the discretization errors. Definition 13 (¯pα-measure condition). If for the set Iα:={x∈ΩU|αa<−B∗¯pα<αb} (28) the condition ∃ ¯α>0 ∀ 0<α<¯α:meas(Iα)≤Cακ (29) holds true (with the convention that if the measure in (29) is zero for all ), we say that the -measure condition is fulfilled. Theorem 14. Let us assume the -condition ∃ σ>0 ∀′ x∈ΩU:a≤−σ<0<σ≤b. (30) If the -measure condition (29) is valid, Theorem 12.1 holds, omitting its first sentence (“Let Assumption…”). Proof. See [daniels, Theorem 15] or [dissnvd, Theorem 24]. ∎ If the limit problem is of certain regularity, both measure conditions coincide: Corollary 15. Let a bang-bang solution be given, i.e., (27) holds true. In the case of , (21), and the -condition (30), both measure conditions are equivalent. Proof. See [daniels, Corollary 18] or [dissnvd, Corollary 27].
∎ Let us now consider located controls. Since for by Lemma 8 and Remark 9, we conclude ∥∂tB∗¯pα∥L∞(I)≤C∥∂t¯pα∥L∞(I,L2(Ω))≤C+C∥¯uα∥U≤C with a constant independent of due to the definition of . Recall that , by Assumption 7. With this estimate, the projection formula (10) and the stability of the projection (see [dissnvd, Lemma 11]) we obtain the bound ∥∂t¯uα∥L∞(I)≤1α∥∂tB∗¯pα∥L∞(I)+∥∂ta∥L∞(I)+∥∂tb∥L∞(I)≤C1α, (31) if is sufficiently small. If the -measure condition (29) is valid, this decay of smoothness in terms of can be relaxed in weaker norms, as the following Lemma shows. Lemma 16 (Smoothness decay in the derivative). Let the -measure condition (29) be fulfilled and located controls be given. Then for sufficiently small there holds ∥∂t¯uα∥Lp(I)≤Cmax(Cab,ακ/p−1) (32) with a constant independent of . Here, and . Note that in the case of constant control bounds and . Proof. See [daniels, Lemma 19] or [dissnvd, Lemma 28]. ∎ The question of necessity of Assumption 10 and the -measure condition (28) to obtain the convergence rates of Theorem 12.1 is discussed in [daniels, sections 4 and 5] and [dissnvd, sections 1.4.3 and 1.4.4]. The results there show that in several cases the conditions are in fact necessary to obtain the convergence rates from above. 3 The discretized problem 3.1 Discretization of the optimal control problem Consider a partition of the time interval . With we have . Furthermore, let for denote the interval midpoints. By we get a second partition of , the so-called dual partition, namely , with . The grid width of the first mentioned (primal) partition is given by the parameters and k=max1≤m≤Mkm. Here and in what follows we assume . We also denote by (in a slight abuse of notation) the grid itself. We need the following conditions on sequences of time grids. Assumption 17. There exist constants and independent of such that there holds ∀ m∈{1,2,…,M−1}:κ1≤kmkm+1≤κ2andk≤μminm=1,2,…,Mkm. 
On these partitions of the time interval, we define the ansatz and test spaces of the Petrov–Galerkin schemes. These schemes will replace the continuous-in-time weak formulations of the state equation and the adjoint equation, i.e., (7) and (9), respectively. To this end, we first define for an arbitrary Banach space the semidiscrete function spaces Pk(X):={v∈C([0,T],X) | v|Im∈P1(Im,X)}↪H1(I,X), (33a) P∗k(X):={v∈C([0,T],X) | v|I∗m∈P1(I∗m,X)}↪H1(I,X), (33b) and Yk(X):={v:[0,T]→X∗ | v|Im∈P0(Im,X)}. (33d) Here, , , , is the set of polynomial functions in time with degree of at most on the interval with values in . We note that functions in can be uniquely determined by elements from . The same holds true for functions , but with only uniquely determined in by definition of the space. The reason for this is given in the discussion below [dissnvd, (2.16), p. 41]. Furthermore, for each function we have where denotes the equivalence class with respect to the almost-everywhere relation. In the sequel, we will frequently use the following interpolation operators. 1. (Orthogonal projection) PYk(X)v|Im:=1km∫tmtm−1vdt, m=1,…,M, PYk(X)v(T):=0 (34) 2. (Piecewise linear interpolation on the dual grid) πP∗k(X)v|I∗1∪I∗2 :=v(t∗1)+t−t∗1t∗2−t∗1(v(t∗2)−v(t∗1)), (35) πP∗k(X)v|I∗m for m=3,…,M−1, πP∗k(X)v|I∗M∪I∗M+1 The interpolation operators are obviously linear mappings. Furthermore, they are bounded, and we have error estimates, as [dissnvd, Lemma 31] shows. In addition to the notation introduced after Remark 6, adding a subscript to a norm will indicate an norm in the following. Inner products are treated in the same way. Note that in all of the following results denotes a generic, strictly positive real constant that does not depend on quantities which appear to the right of or below it. Note that we can extend the bilinear form of (6) in its first argument to , and thus consider the operator A:(W(I)∪Yk(H10(Ω)))×W(I)→R, with A given by (6).
(36) Using continuous piecewise linear functions in space, we can formulate fully discretized variants of the state and adjoint equation. We consider a regular triangulation of with mesh size h:=maxT∈Thdiam(T), see, e.g., [brenner-scott, Definition (4.4.13)], and triangles. We assume that . We also denote by (in a slight abuse of notation) the grid itself. With the space Xh:={ϕh∈C0(¯Ω) | ϕh|T∈P1(T,R) ∀ T∈Th} (37) we define to discretize . For the space grid we make use of a standard grid assumption, as we did for the time grid, sometimes called quasi-uniformity. Assumption 18. There exists a constant independent of such that h≤μminT∈Thdiam(T). We fix fully discrete ansatz and test spaces, derived from their semidiscrete counterparts from (33), namely Pkh:=Pk(Xh0),P∗kh:=P∗k(Xh0),and Ykh:=Yk(Xh0). (38) With these spaces, we introduce fully discrete state and adjoint equations as follows. Definition 19 (Fully discrete adjoint equation). For find such that A(~y,pkh)=∫T0⟨h(t),~y(t)⟩H−1(Ω)H10(Ω)dt∀ ~y∈Ykh. (39) Definition 20 (Fully discrete state equation). For find , such that A(ykh,vkh)=∫T0⟨f(t),vkh(t)⟩H−1(Ω)H10(Ω)dt+(g,vkh(0))∀ vkh∈Pkh. (40) Existence and uniqueness for these two schemes follow as in the semidiscrete case discussed in [DanielsHinzeVierling] or [dissnvd, section 2.1.2]. Let us recall some stability results and error estimates for these schemes. The first result is [dissnvd, Lemma 56]. Lemma 21. Let solve (39) with . Then there exists a constant independent of and such that ∥pkh∥H1(I,L2(Ω))+∥∇pkh∥C(¯I,L2(Ω))≤C∥h∥I. For stability of a fully discrete state and an error estimate, we recall [dissnvd, Lemma 59]. Lemma 22. Let be the solution of (7) for some and let be the solution of (40) for the same . Then, with a constant independent of and , it holds ∥ykh∥I≤C(∥f∥L2(I,H−1(Ω))+∥g∥). If furthermore the regularity as well as is fulfilled, we have the error estimate ∥y−ykh∥I≤C(h2+k)(∥f∥I+∥∇g∥).
(41) Let us now consider the error of the fully discrete adjoint state. We begin with an norm result, which is [dissnvd, Lemma 62]. Lemma 23. Let solve (9) for some such that has the regularity . Let furthermore
# Study of the data acquisition network for the triggerless data acquisition of the LHCb experiment and new particle track reconstruction strategies for the LHCb upgrade

Pisani, Flavio (2020) Study of the data acquisition network for the triggerless data acquisition of the LHCb experiment and new particle track reconstruction strategies for the LHCb upgrade, [Dissertation thesis], Alma Mater Studiorum Università di Bologna. Dottorato di ricerca in Fisica, 32 Ciclo. DOI 10.6092/unibo/amsdottorato/9456.

## Abstract

The LHCb experiment will receive a major upgrade by the end of February 2021. This upgrade will allow the recording of proton-proton collision data at $\sqrt{s} = 14\ \text{TeV}$ with an instantaneous luminosity of $2 \cdot 10^{33}\ \text{cm}^{-2}\text{s}^{-1}$, making possible measurements of unprecedented precision in the $b$- and $c$-quark flavour sectors. To take advantage of the increased luminosity, the data acquisition system will receive a substantial upgrade. The upgraded system will be capable of processing the full collision rate of $30\ \text{MHz}$, without any low-level hardware preselection. This new design constraint poses a non-trivial technological challenge, both from a networking and a computing point of view. A possible design of a $32\ \text{Tb/s}$ data acquisition network is presented, and low-level network simulations are used to validate the design.
Those simulations use an accurate behavioural model developed and optimised for this specific purpose. To perform the online reconstruction of the full $30\ \text{MHz}$ $pp$ collision rate, it is mandatory to optimise the reconstruction algorithms from both a computing and a physics standpoint. A new parametrisation of the bending of charged particles generated by the dipole magnet of the LHCb experiment is presented. The accuracy of the model is tested against Monte Carlo data. This strategy can reduce the size of the search windows needed in the SciFi sub-detector by a factor of four. The LookingForward algorithm in the Allen framework uses this model.

Document type: Doctoral thesis (Tesi di dottorato)
Author: Pisani, Flavio
PhD cycle: 32
Keywords: high energy physics, data acquisition systems, networking
DOI: 10.6092/unibo/amsdottorato/9456
Date of defence: 27 March 2020
## plugin development – How to render a time-of-day string like '16:42' with a site's chosen time format?

Question

The trick here is using wp_date() in a peculiar way: give it the time of day in seconds (as if it were 1970-01-01 16:42:18), then ask for it to be formatted in UTC.

```php
$time = '16:42:18';
if ( preg_match( '/^\d\d:\d\d:\d\d$/', $time ) ) { // validate $time
	$ts  = intval( substr( $time, 0, 2 ) ) * 3600; // hours
	$ts += intval( substr( $time, 3, 2 ) ) * 60;   // minutes
	$ts += intval( substr( $time, 6, 2 ) );        // seconds
	if ( $ts >= 0 && $ts < 86400 ) {
		$time = wp_date( get_option( 'time_format' ), $ts, new DateTimeZone( 'UTC' ) );
	}
}
```
# “Peakedness” of a skewed probability density function I would like to describe the "peakedness" and tail "heaviness" of several skewed probability density functions. The features I want to describe, would they be called "kurtosis"? I've only seen the word "kurtosis" used for symmetric distributions? • Indeed, the measures of kurtosis are typically applied to symmetric distributions. You can calculate it for skewed ones as well but the interpretation changes since this value varies when the asymmetry is introduced. In fact, these two concepts are difficult to separate. Recently, a skewness-invariant measure of kurtosis was proposed in this paper. – user10525 Jan 3 '13 at 16:22 • High kurtosis is associated with peakedness and with heavy tailedness (it's also characterized as 'lack of shoulders'). One of the volumes of Kendall and Stuart discuss these issues at some length. But such interpretations, are, as you note, generally given in the situation of near-symmetry. In nonsymmetric cases, the standardized 4th moment is usually highly correlated with the square of the standardized third moment, so they're mostly measuring much the same kind of thing. – Glen_b -Reinstate Monica Jan 3 '13 at 23:44 • Indeed, given the particular way I phrased it in my earlier comment, it's true even of symmetric distributions - the square of the sample standardized third moment (squared moment skewness) is highly correlated with the sample standardized fourth moment ('kurtosis'), even at say the normal. – Glen_b -Reinstate Monica Jun 19 '13 at 0:27 With variance being defined as the second moment $\mu_{2}$, skewness being defined as the third moment $\mu_{3}$ and the kurtosis being defined as the fourth moment $\mu_{4}$, it is possible to describe the properties of a wide range of symmetric and non-symmetric distributions from the data. This technique was originally described by Karl Pearson in 1895 for the so-called Pearson Distributions I to VII. 
This has been extended by Egon S Pearson (date uncertain) as published in Hahn and Shapiro in 1966 to a wide range of symmetric, asymmetric and heavy tailed distributions that include Uniform, Normal, Students-t, Lognormal, Exponential, Gamma, Beta, Beta J and Beta U. From the chart of p. 197 of Hahn and Shapiro, $B_{1}$ and $B_{2}$ can be used to establish descriptors for skewness and kurtosis as: $\mu_{3} = \sqrt {B_{1}\ \mu_{2}^{3}}$ $\mu_{4} = B_{2}\ \mu_{2}^{2}$ If you just wanted simple relative descriptors then by applying a constant $\mu_{2} = 1$ the skewness is $\sqrt {B_{1}}$ and the kurtosis is $B_{2}$. We have attempted to summarize this chart here so that it could be programmed, but it is better to review it in Hahn and Shapiro (pp 42-49,122-132,197). In a sense we are suggesting a little bit of reverse engineering of the Pearson chart, but this could be a way to quantify what you are seeking. The main issue here is, what is "peakedness"? Is it curvature at the peak (2nd derivative?) Does it require standardization first? (You would think so, but there is a stream of literature starting with Proschan, Ann. Math. Statist. Volume 36, Number 6 (1965), 1703-1706, that defines peakedness in a way that normal with smaller variance are more "peaked"). Or is it probability concentration within a standard deviation of the mean, as implicit in Balanda and Macgillivray (The American Statistician, 1988, Vol 42, 111-119)? Once you settle on a definition, then it should be trivial to apply it. But I would ask, "why do you care?" Of what relevance is "peakedness", however defined? BTW, Pearson's kurtosis measures tails only, and does not measure any of the above mentioned "peakedness" definitions. You can change the data or distribution within a standard deviation of mean as much as you want (keeping the mean=0 and variance=1 constraint), but the kurtosis can only change within a maximum range of 0.25 (usually much less). 
So you can rule out using kurtosis to measure peakedness for any distribution, even though kurtosis is indeed a measure of tails for any distribution, no matter whether the distribution is symmetric, asymmetric, discrete, continuous, discrete/continuous mixture, or empirical. Kurtosis measures tails for all distributions, and virtually nothing about the peak (however defined). A possible, very practical approach could be to calculate the ratio of the survival function of the distribution $\Pr\left(\tilde X \gt 1- \alpha \right)$ against the normal one, showing that it is far greater. Another approach can be calculating the ratio of percentiles $w_1=\frac{\tilde{x_{99}}-\tilde{x_{50}}}{\tilde{x_{75}}-\tilde{x_{50}}}$ of the distribution $\tilde x$ of interest and dividing it by the corresponding ratio of normal quantile values, $w_2=\frac{\tilde{\Phi_{99}}-\tilde{\Phi_{50}}}{\tilde{\Phi_{75}}-\tilde{\Phi_{50}}}$, giving $\tau=\frac{w_1}{w_2}$. I am not sure I get your understanding of peakedness and heaviness. Kurtosis means "excess" in German, so it describes the "head" or "peak" of a distribution, describing whether it is very wide or very narrow. Wikipedia states that "peakedness" is actually described by the kurtosis, whereas peakedness does not appear to be a real word, so you should use the term "kurtosis". So I think you might have gotten everything right: the head is the kurtosis. The "heaviness" of the tail might be the skewness. Here is how you find it: $$a_3 = \frac{\Sigma^{N}_{i=1}(x_i - \overline x)^3}{N * s^3_x}$$ with s as the standard deviation for x. The values indicate: Negative skew: $$a_3 < 0$$ Positive skew: $$a_3 > 0$$ No skew: $$a_3 = 0$$ You can get a value for the kurtosis with: $$a_4 = \frac{\Sigma^{N}_{i=1}(x_i - \overline x)^4}{N * s^4_x}$$ The values indicate: Platykurtic: $$a_4 < 3$$ Leptokurtic: $$a_4 > 3$$ Normal: $$a_4 = 3.0$$ Did that help? • I'm afraid this answer in its current form may be less than helpful due to errors in it.
Skewness is a standard measure of asymmetry. It is not closely related to heaviness of tails: it is possible for the tails to be extremely heavy and the skewness to be zero (which is the case for any symmetric distribution, for instance). Please note, too, that it is impossible for $a_4$ to be negative, so the second half of this answer makes little sense. (Perhaps you confused kurtosis with excess kurtosis?) – whuber Jan 3 '13 at 17:53 • Thank you for clarifying. There might indeed be some errors in the formulas, I just copied them from the scripts they provide at uni. I oversaw the fact that a4 can't be negative. – Johannes Hofmeister Jan 3 '13 at 22:46 • I looked up why my answer is wrong - it is a translational error, I apologize for that. My slides are all in German, mixing Kurtosis and Excess. – Johannes Hofmeister Jan 3 '13 at 22:52 • @Peter As Peter Westfall keeps pointing out, your comment is incorrect: "peakedness" (of any mode), thought of vaguely as pointiness or height, has absolutely nothing to do with the tails of any distribution, nor is it measured by any finite combination of moments (such as the kurtosis). It may happen to be connected to heaviness of tails for a family of distributions, but that's a completely different matter. – whuber Jan 3 '18 at 13:44 Kurtosis is definitely associated with the peakedness of the curve. I henceforth believe that you are really looking for kurtosis which does exist whether the distribution is symmetric or not. (user10525) has definitely said it right ! I hope your problem is resolved by now. Do share its outcome, all opinions are welcome. • I'm not sure how this constitutes a helpful answer beyond what was already written here. How about you expand more on kurtosis and peakedness of the curve? – Momo Oct 18 '13 at 18:58 • Wanted to give clear cut clarification to the query. The discussion seemed to be confusing @Momo – Vani Oct 18 '13 at 19:00
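The moment formulas $a_3$ and $a_4$ quoted in the answers above can be checked with a short, self-contained Python sketch. It uses the population standard deviation, as those formulas do; the sample data below are made up for illustration.

```python
import math

def skewness(xs):
    """Sample skewness a3 = sum((x - mean)^3) / (N * s^3)."""
    n = len(xs)
    mean = sum(xs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)  # population s.d.
    return sum((x - mean) ** 3 for x in xs) / (n * s ** 3)

def kurtosis(xs):
    """Sample kurtosis a4 = sum((x - mean)^4) / (N * s^4); 3.0 for the normal."""
    n = len(xs)
    mean = sum(xs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum((x - mean) ** 4 for x in xs) / (n * s ** 4)

print(skewness([-2, -1, 0, 1, 2]))  # 0.0: symmetric data, no skew
print(kurtosis([-2, -1, 0, 1, 2]))  # ≈ 1.7, below 3: platykurtic
print(skewness([1, 1, 1, 1, 10]))   # ≈ 1.5: positive (right) skew
```

Note that, as discussed above, $a_4$ is computed on the standardized data and can never be negative; subtracting 3 gives the "excess kurtosis" that is 0 for the normal distribution.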
# So, let’s talk about rule intent and question closing

Admittedly a bit slower than we’d have preferred (our bad), this is a continuance of the rule intent discussion. (See why I’m avoiding saying revisit below). However, there is another point which needs to be brought up at the same time, which leads me to not call it just a revisit. The question type was originally deemed off-topic in response to a specific problem, which was that explicitly asking for designers’ reasons for specific rules led to rampant speculation (requiring increased moderation). However, with time it seems that sometimes any possibility that designer reasons could be relevant caused closure, or at least close votes. And there’s a bit of a thing with History of Gaming questions. As a highlight, it seemed sometimes the mere use of the word “why” would be enough to garner close votes. “Why” thus gets read as “why was this designed this way” as opposed to “what does this achieve within the system”. This suggests we need a bit of back to basics together, and agreement on how to read and evaluate questions for this particular concern. And this back to basics is useful as part of (or preceding) the revisit, because we need clarity in what we’re considering the topicality of. I’ll note that we don’t have a specific closure reason for designer reasons. This leads to questions either being closed as “opinion based” or “off-topic” with a custom reason (usually whichever got cast first). A consequence of this is that asker feedback is either conflicting or highly dependent on comments explaining things well. A custom close reason might be a good way to explain the off-topic reason better to both askers and close voters. Whether to ask for an additional slot or to rework some of our existing reasons is probably best left for its own discussion, as might well the full scope of the wording, but I’d like to offer it to answers here.
### Outcomes for this discussion

With the aim of clarity, I’d like to give guidance towards the intended scope of discussion. This is not intended as extra rules for the discussion.

• We’re open to the possibility of unbanning designer reasons. Such an answer should include the guidance for such questions and answers so they can be answered and curated properly. See the previous discussions for the issues that do come with the question type.

• We’re open to refocusing the actual practice to the actual issue. This would be keeping the question type off-topic, but hopefully reducing the collateral. An answer should more clearly identify the problematic question type, and give good guidance for how to navigate around this. Since the current state of the FAQ is a bit messy, it would be good for answers to give at least an outline for what the new FAQ should cover. (The existing “how can I ask instead” meta is quite good for this / may be a good starting point / could be included.)

• We’re open to expanding the scope of designer reasons. Such an answer should clearly identify the problems for which the expanded scope is needed, which questions need to be made off-topic, and an outline for ongoing guidance. Though I’m mostly including this for completeness; I have very little expectation that it is the prevailing opinion.

†: This is the moderators speaking as moderators on behalf of the community. As much as possible we’d like for this to be read as the community or the discussion being open for these possibilities, and that this discussion is not an exercise in convincing the moderators.

• Note: you can use the [designer-reasons] tag to find many questions that would be affected by a change.
Feb 21, 2022 at 23:02 • @Laurel fyi you can write [tag:designer-reasons] in comments and here for an easy link :) there's also [meta-tag:featured] etc for meta site tags Feb 21, 2022 at 23:04 • We had the same problem with the word should a few years ago which you identify as the problem with the word why - which tells me that we may have a few trigger happy close voters who get triggered by a single word. Feb 22, 2022 at 17:21 • @KorvinStarmast To be fair, the asking words (why, should, what, where, etc.) tend to do a lot of heavy lifting in the framing of a question and it's a bit snappier I suppose to focus the discussion/correction on a single word. It's possible there's something to address in the use of heuristics like that, but I can't say I know how to usefully go about it so for now we can deal with them when we see it (seemingly) going wrong. – Someone_Evil Mod Feb 22, 2022 at 18:09 • @Someone_Evil What bothers me about that is the behavior of some of our community ... but I'm not going to further derail this well crafted post/question. Feb 22, 2022 at 21:46 • Resulting from this discussion, there has (finally) come a declaration from the mods: Are questions about rule intent on topic? [2022] – Someone_Evil Mod Jul 28, 2022 at 17:07 ### On this one, I think it actually should be up to ♦ mods Obviously, having a ♦ doesn’t allow the elected moderators to dictate site policy; it never has, and it never should. I, for one, am still very salty about the times when a ♦ has overridden the community, and I don’t think I’m alone in that: community consensus is crucial to the site. If the rules were changed so that elected moderators dictated policy, I’d probably leave. But this is a special case. Specifically, designer-intent questions are on-topic, answerable, back-up-able, and at least occasionally of interest. 
Moreover—as I’ve discussed in more detail in another answer—the ban on designer-intent questions has been overinterpreted by some users as a ban on a much broader swath of questions, leading to arguments and bad feelings. The only reason we banned designer-intent questions in the first place was ♦ moderator overload. That was it: the ♦ moderators at the time brought up a rash of designer-intent questions that hit HNQ and led to a ton of speculative answers getting upvoted, no matter how hard the question tried to emphasize that answers had to be backed up. It led to the ♦ having to delete high-rated answers, which is never a good thing, and a lot of the questions got closed anyway just because they were attracting too many problems, no matter how good the question itself was. This situation also caused bad feelings and arguments. The ♦ moderators didn’t want to be put in the middle of it, and the community agreed. So if the current ♦ moderators are amenable to allowing designer-intent questions, then I think we should unban them. If the current ♦ mods want to go into the backlog and history and determine that the situation isn’t one they want to re-open, I still accept that: designer-intent questions aren’t crucially important, and those that are useful can often be edited into acceptable questions even with the ban in place. (This doesn’t happen far too often, but that’s a separate concern.) I would say, though, that the community is vastly more moderation-focused than it was at the time, and also, the ban on designer-intent questions has had far more negative outcomes than I, for one, expected when I voted for it the first time. Also, ♦ moderators can now remove a question from HNQ—that was a feature that wasn’t available to ♦ moderators when the ban was put in place.
If the ♦ moderators are OK with it, I definitely think we should unban them, and I don’t think we need any special policies around them—I think it would be appropriate to just treat them as we treat any other question, as long as ♦ moderators are up for that. If not, well, like I’ve been saying, I have another answer about that. • +1 for the low HNQ issue being a root cause. Matches my memory as well. Feb 22, 2022 at 17:22 • I think this is a pretty good take overall, I too feel comfortable letting the mods make the call here. However, it does not address one of S_E's concerns from the post, "Such an answer should include the guidance for such questions and answers so they can be answered and curated properly." What direction can you give for handling this? Is it "mods make the call, mods make the guidance", or something like "mods make the call, then ask the community to construct the guidance", or something else? Feb 22, 2022 at 17:23 • @ThomasMarkov My proposal is that there not be any special rules. Just treat them as topical questions. If they specifically invite speculation, are unclear, too broad, etc. etc., treat them accordingly, like any other question. Feb 22, 2022 at 19:10 • I don't think anyone is asking for special policies or rules for them, S_E is asking for helpful guidance we can point to for making good questions and answers of this kind, just as we have for other types of questions. Feb 22, 2022 at 19:19 • For example, with homebrew reviews, we have this guidance. When someone asks a homebrew review question that just presents features bare bones, we can say "Check out our guidance for homebrew review, it would be helpful for answers if you could include some of the additional details discussed there." Or for unsupported answers, we can leave a comment suggesting improvements based on our guidance for supporting answers. These guidance posts aren't rules or policies, but they help give direction to posts that need some direction. 
Feb 22, 2022 at 19:46 • @ThomasMarkov “Can this question be answered well within our format?” is the only guiding question I think anyone should ever have interfacing in any way with any question on this site. Feb 22, 2022 at 19:59 • I think that's essentially what Someone_Evil is looking for here - "Can this question be answered well within our format? Yes, here is how: ...". Feb 22, 2022 at 20:05 • @ThomasMarkov I don’t believe that’s so, and moreover, I’m staunchly opposed to us having any such post. I do not want, or think we should have, anything that smacks of “this is how you must write such a question.” Already we have explicit cases of “here are some ideas for making your question the best it can be” that people are treating as “this is how you must write such a question,” and it’s hard to imagine a more deleterious situation. I am definitely not in favor of adding more fuel to the fire. Feb 22, 2022 at 20:34 • I haven't really seen any of that in many months, I was probably the chief offender there for a while, and I quit doing it thanks to some counsel from you and doppel on the matter. Feb 22, 2022 at 21:06 • @ThomasMarkov You definitely were the chief offender, but you weren’t alone. I can’t say I’ve specifically noted a continuation of that behavior in recent months, so you may be right (but then, I didn’t notice a reduction either; I can’t actually recall the last homebrew-review question that caught my eye). But I also just don’t really know what you’re looking for here. Don’t invite people to speculate; consider reminding them that they shouldn’t do so. Maybe include an explicit list of acceptable sources. But that could go for any question on the site. Feb 22, 2022 at 21:22 • Another useful point in support of this answer is that mods now have the ability to easily remove a question from the HNQ list. If we suspect that will cause issues for a certain question we can get in early before the answers pile up. 
That tool didn't exist when these questions were first banned. Feb 22, 2022 at 23:54 • @linksassin Perhaps deserves more highlighting, but I do have that in there. Feb 23, 2022 at 2:25 • @ThomasMarkov Since this answer is delegating the decision to the mod team, providing details around guidance becomes moot (or implicitly also delegated). Part of the intent was for a decision to allow the question type would come with guidance and a promise for the community to uphold our general standards (which should be sufficient). And depending on what we decide, it's quite possible any needed guidance would be much more directed at close voters than askers. – Someone_Evil Mod Feb 23, 2022 at 13:05 • Addl note about HNQ, I believe it was also reworked so questions could only be on it for 3 days tops as opposed to staying on there for a week (weeks?). That should mean the scope of any HNQ-caused messes will be smaller, though perhaps they'll end up being more frequent? – Someone_Evil Mod Feb 23, 2022 at 13:07 • @Someone_Evil My idea here was more, if the moderators only feel comfortable with allowing designer-intent questions in some qualified manner and want to state rules for that, then yes. But only if that’s the difference between “this is acceptable to us” and “this is too much of a burden for ♦ moderators.” If you just think some kind of guidelines would be a good idea, but they aren’t necessary, then I think that should still be up to the community. (And I, for one, would prefer not having any.) Feb 23, 2022 at 20:00

## What Is Needed Is Both A Clarification Of Guidelines....

I think any of the three paths laid out in the Question above are reasonable and I don't really have a preference between them. They all seem to lead to roughly the same place, and require roughly the same type of deliberation and refinement. For lack of a better term, I'll borrow from existing terminology and refer to it as the good-designer-intent/bad-designer-intent question dichotomy.
To indulge in tautology for a moment, good designer intent questions are ones this stack can generally answer, while bad designer intent questions are ones we can't. To back off from that tautology, good designer intent questions are ones that fall within the realm of the expertise of the community. I don't think we'll ever get a sharp, bright-line answer (more on that below), but based on past experience and past discussions here, I think the following are at least a starting point:

1. A rant disguised as a question ('This rule sucks! What was Gygax thinking?') is a bad designer intent question. It's arguably not even a question, even though it ends in a question mark.
2. A designer intent question that is, or can reasonably be construed as, a game balance question, or a question of mechanical interactions or unintended side effects, or a homebrew rules question ("Why does this rule exist? What happens if it goes away?") is probably/often going to be a good designer intent question. I don't take part in them often, but I suspect that the guidelines for homebrew rules questions might be a good place to look for insights.
3. A question of designer intent that hinges on lore ("Why did the designers make elves so powerful and settle Dwarves only on these terrible plots of land?") is going to be at least as dicey as similar questions on the lore itself.
4. A question of actual design intent-- a question that strongly expects a personalized answer in the form of "This person or persons at this company enacted this rule definitively for this purpose"-- is very likely to be a bad designer intent question.

I should note, here, that my confidence in these four points is not absolute, nor am I intentionally trying to limit consideration to these four categories-- this is what I have after thinking about this overnight.
Further, I am pretty confident in my approach to the first two points; less so on the third and fourth: It is not impossible that answers exist in the form of interviews, podcasts, design documents, errata; it is not impossible that answers could be had, if the designers are still active and accessible, as is sometimes the case. (And for all of these reasons, I think that knowing where to find information is a form of expertise.) But it is definitely not impossible that there will be nothing but vacuum to draw on, outside the published texts themselves. On the other hand, I also care much more about the first two points than the last two points.

## ...And A Cultural Shift

But I don't think a simple clarification of policy/guidelines is going to fix this problem. Looking back through the historical discussions on this specific issue, this sentiment pops up: Well, that may have been true then, and may be true now (although four years is an eternity in net-culture years). But the corollary seems to be "We can't seem to stop turning policies or guidelines into straitjackets." This is, arguably, a systemic flaw in the stack system at large which can't fully be addressed here: As a community grows, the five votes necessary to close become a smaller and smaller fraction of the community at large. That is a double-edged sword, in that a growing community will see growth in the number of bad questions being posed, but also in that increasingly smaller fractions of activist vote-to-closers can really gum up the works. And this is definitely something that we do here. Just because it's not happening out of malice or bad faith doesn't mean it doesn't happen-- we founder on these rocks regularly. This question itself acknowledges 'why' as a trigger word. The previous round of discussion had two answers recognizing that trigger or general overuse of the policy.
And this phenomenon is not limited to this issue: We've gone around this tree with good-subjective/bad-subjective questions, where for a time we were turning the policy into an anecdote tax and every question that could remotely be considered subjective would gather comments insisting on the personal experience criterion, with ever-increasing narrowness and specificity. And we've seen it show up in supported/unsupported questions, where some stances were taken that were so inflexible that they generated the following highly upvoted comments:

...The issue here is that you are interpreting a best-practice guide for "How to cite a good answer" as "How to enforce good answers" which was never the intent of that guidance. Perhaps it is time for a new guidance meta "When should answers be deleted?"

and

...after reading through all this, I cannot tell if (1) you are voting to delete because you want to, and you believe the references you cite give you permission, or (2) you believe the references you cite require or obligate you to vote to delete, or (3) some other case I have not considered. Can you clarify your mindset on this?

I'm not trying to single anyone out or accuse anyone of bad faith, especially for things that took place almost half a year ago. To the contrary, my point is that this issue comes up time after time after time. It is not an issue with a single policy, or a single user. It is a community-generated issue. I'll say that again, louder: It is a community-generated issue. And unless the community starts thinking differently about policies and guidelines, we will end up here having another discussion that looks exactly like this in six months, nine months, twelve months. It might be about this issue, if we don't "fix it" well enough.
But if we do, it is my firm conviction based on years of lurking, participating in the main stack, and participating in meta, that we will as a community transfer our collective obsession with policy enforcement to some other policy. So let me make the following points on that topic: 1. We are experts, and I do not believe for a minute that the best use of anyone's expertise (diamond mods possibly excepted) is in turning policies and guidelines into finer and finer filters to apply to questions or answers. 2. We are experts, and while experts make use of policies and guidelines to inform our responses, we do not need-- and it is counter-productive to seek-- policies that are mechanically precise and cover all situations without the need for critical thought and interpretation. That is, in fact, the very opposite of expertise itself! Clarification of policy is all well and good, and we could probably use a little here. But unless we change the way the community at large thinks about policies and guidelines, we will be right back here in a few months. So I implore the community: Please, start thinking of your expertise as a way to generate good answers, and not as a way to just apply the VTC to another question or answer. • "I'm not trying to single anyone out" Look mom, I'm on TV! I'll own this - the comments you quote are in response to my ill-conceived understanding of what those guidelines were trying to do and how to use them. Feb 22, 2022 at 21:40 • @ThomasMarkov fair enough-- I felt I had to link them because they were my best examples, but because I was not trying to beat on you specifically the best I could was put your name under ellipses. Feb 22, 2022 at 21:41 • No worries, the example is spot-on; I have since internalized the attitudes you're looking to encourage here, and am working on "being the change you want to see", so to speak, that you talk about in your Cultural Shift section. Feb 22, 2022 at 21:48 • Could you put that last para in bold? 
The part that begins with "So I Implore the community ..." up to you. Also, we had a meta two or three years ago with a question on the word 'should' that seems to be related and might be a useful reference. Here is a link Feb 22, 2022 at 21:50 • @KorvinStarmast I think I can do that. Feb 22, 2022 at 21:52 • Strong agree with most of this, but I disagree with your last bullet point on good-designer-intent/bad-designer-intent: such questions are difficult, because it is difficult to find an appropriate source to back up an answer to such questions, but they do leverage our expertise—because we’ll be able to identify those sources. Particularly with more historical questions, knowing if and where someone discussed their thinking behind something is definitely something one can, and many here do, have expertise in. Feb 22, 2022 at 22:15 • @KRyan I don't really disagree with what you wrote, but my sense is that those are also where a lot of the actual bad answers are coming from. I'm not sure, but that's my sense of it. That's why that one is the least strongly held opinion of anything I've written, and I value everyone's thoughts on it. Feb 22, 2022 at 22:31 • I wish I could upvote this more, and not just because you quoted me. You have done a fantastic job of both answering the question here and highlighting a topic we have been dancing around for far too long. Feb 23, 2022 at 0:01 • This answer has already changed the way I use this site and my mindset completely. If nothing else comes of it, thank you. Mar 1, 2022 at 15:59 • Thank you for the analogue example of good/bad subjective. As long as an answer is backed up, I don't think these questions are bannable. Mar 2, 2022 at 8:04

## No need for a special policy, and good reasons to avoid one

I don’t think it’s worthwhile to debate any particular special policy for designer-intent questions:

• The original reasons for the ban on these questions are basically gone. Without those, there isn’t a compelling reason to have any special policy.
• The original, very simple, blanket ban on designer-intent questions has been misunderstood and misconstrued a lot, suggesting that we can’t even define well what we’re talking about.
• Any refinement of the policy would be more nuanced, and thus harder to articulate and easier to misunderstand.

Thus, there is simultaneously no need for a special policy, and a high likelihood of causing problems. Combining “little chance of doing good” with “high chance of doing harm” yields pretty guaranteed “more harm than good.”

### The original purpose of banning designer-intent is gone

My primary answer suggests that the ♦ moderators should be the ultimate arbiters of how true this is, but for myself, it certainly seems true to me:

• The ♦ moderators have expressed openness to re-allowing designer-intent questions.
• The community moderation is much stronger than it was, leaving less of the load on ♦ moderators.
• HNQ only applies for a few days instead of a week or more.
• ♦ moderators have an option to remove a question from HNQ that they didn’t have before.

All of this suggests that speculative answers are unlikely to get problematically upvoted by visitors coming via HNQ. That was the whole reason why we ever even discussed a ban in the first place. If that had never been true, it probably would never have even come up. Now that it’s no longer true, things should revert to that default.

### The existing blanket ban already gets misunderstood

Already, with the blanket, “everything looks like a nail” policy, we have enormous disagreements about which questions we’re even talking about. There have been serious debates as to whether rule-intent questions were, or should be, silently included in the ban, because some portions of the community don’t see a distinction between those and designer-intent questions. Even with the Meta consensus firmly on the side of seeing a distinction, and espousing the continued topicality of rule-intent questions, we routinely see comments and close votes that seem, to me as someone who was there for the initial problems that led to the ban, patently absurd on their face.
I don’t have a good solution to this. No, I’ll go beyond that—I don’t think there is a good solution to this. I wrote the answers to a lot of the Meta discussions about whether things should be closed per the ban, but it hasn’t helped, and quite frankly I never claimed to be able to offer any hard-and-fast definition. This policy has always been “I’ll know it when I see it” and that’s not really great for, ya know, policy. It would take an enormous amount of work to try to debate a meaningful hard-line definition, and I’m all-but-certain that effort would be wasted because it would most likely end in failure to come up with one. Per the above, that level of effort isn’t warranted.

### A more-nuanced policy would be even more complicated

Going beyond black-and-white banning of some set of questions, and getting into shades of gray among that set, amplifies the above problems immensely. It also massively complicates the policy’s stated response: no longer are we necessarily talking about the simple “just close it,” we’re talking about having a more nuanced response. That would be hard. Much harder than what we’ve already attempted, and largely done poorly at. Just as we can’t well define where the line is, we can’t well define the various shades in play, nor the appropriate response to each. And again, there’s no compelling reason to do so.

## In the end, designer-intent is just like any other tag

A question inviting speculation—about designer intent or any other subject—is a bad question, and should be edited and/or closed. Thinly-disguised rants don’t become good questions just because we’d be unbanning designer-intent questions. And so on. I’m not suggesting—no one is suggesting—that designer-intent become a get-out-of-jail-free card. Just that it no longer be “go directly to jail, do not collect \$200” as it has been. This is about a reversion to our norm. We don’t have special policies for the overwhelming majority of topics.
In fact, with game recommendations banned, other tags burninated, and our special policy about editing system tags eliminated, I’m not sure we have any. And that’s a good thing. I supported special policies in some or all of those cases, and might personally feel some of them are still worthy of special exceptions, but it’s unquestionable that it’s a very nice state of affairs to avoid having any. And that’s what this proposal would accomplish. • Your framing about "special policies" is odd - that game recs (tool recs, and some alignment) questions are banned are special policies, and so is our policy regarding piracy, and those are good policies. Since you make a good case here, you might also want to answer the request to provide guidance on what makes a good designer-intent question. Mar 22, 2022 at 19:43 • @Akixkisu A good designer-intent question is a good question that happens to be about designers’ intentions, nothing more and nothing less. That is the entire point of this answer, we do not need more than that and there are very good reasons to avoid having more than that. Mar 22, 2022 at 20:11 • Would you say that what makes a good lore question good is exactly the same as what makes a good character optimisation question good, which would be exactly the same as what makes a good homebrew-review question good? If so, then sure, a good thing is good — I don't think that would be a satisfying answer, but certainly not wrong. Mar 22, 2022 at 21:42 • @Akixkisu The same general principles apply. Beyond that, it’s a mistake to try to pin down specifics. Mar 22, 2022 at 22:16

## Let us not only talk about rule intent, but about question closing too!

We have not resolved the original reasons for closure. Why exactly should we act as if this wasn't a Stack Exchange format? Using this site comes with a steep and obscure learning curve, and whether we wordsmith and slightly-to-moderately improve the accessibility of specific reasoning doesn't change that fact.
A participant has to be willing to go through that sieve. They have to make their peace with the sieving process and the included limitations of what the stack does and does not cater to, compromise with it, or leave. We should equip querents with suggestions so that they may rephrase their question. We shouldn't do that for them unless that teaches them adequately — participation always comes with collateral damage. We are willing to sacrifice, and nobody here has made a persuasive case on why we ought to treat this particular collateral damage differently. If a question doesn't meet the eligibility criteria, the querent should rework it or take it elsewhere — such as our chat or the myriad places on the internet where different criteria are at work. We do not need to polish sand or highlight the odd allure of noise — we ought not to do either; that is what we accept as a guiding principle of the Stack Exchange format. I'd rather not see volunteers break their backs to accommodate solicitants, or eager volunteers pounce and transform questions so that they fit the mould while overconfidently guessing at the intent of the querent. This is a matter of mutual respect, and sometimes that means telling someone that their question is better suited to a different place (where what we consider "sand" here may sparkle). Let us keep doing well what we do well, and let us fail and compromise and accept collateral damage where we have a proven track record of not doing well. • "We do not need to polish sand or highlight the odd allure of noise — we ought not to do either; that is what we accept as a guiding principle of the Stack Exchange format." – Could you clarify what this sentence means? – V2Blast StaffMod Mar 17, 2022 at 22:32 • I am not understanding the relevance of anything you are saying. This is a discussion about topicality—it’s sort of presumed that the question otherwise meets our criteria, topic aside.
If it didn’t, whether or not it was on topic would be a moot question. And if you are saying that a designer-intent question requires “transformation,” then you are saying they should be off-topic—in which case you should just say that, and nothing in this discussion would seem terribly relevant after that? Mar 17, 2022 at 23:43 • @V2Blast Yes, we have a culture of transforming questions that are unclear into "a question that would work" to "save" a question where we shouldn't do that, because the querent didn't meet the baseline of our format, and that doesn't work in favour of the querent and reflects poorly on the signal-to-noise principle. We polish noisy questions that then generate internet points but don't solve the problem — and there is the context to designer intent — in a way that makes them formally acceptable. I don't think that we should continue to do that. Mar 18, 2022 at 8:42 • @KRyan I'd prefer structures built on foundations instead of cloud-castles that ignore them, and yes you are exactly correct - a discussion without that would be moot. Mar 18, 2022 at 10:33

For me, a question asking for a game designer's reasons holds little value. It does not play into the strengths of our community. A good answer is nothing more than a block quote from some source; the only experience necessary to produce such an answer is the knowledge of where to find it. It is absolute: no voting should theoretically be necessary. Either the one person said what they thought at that point in time, or they didn't. It also does not seem to be a practical question for an existing problem a person has. Stack Exchange was made to answer practical questions that an actual person needs an answer to. "What was the designer thinking" is not such a question. It's just idle curiosity. We all have it, but that doesn't mean it has a place on SE. On the other hand, "why is that rule this way and what would happen if I changed it" in my eyes makes an excellent question.
Assuming that is actually a question the person has because they have a problem with the original rule, then we have everything that SE is made to do: we have a real-world problem of a real person that can be answered by subject-matter experts using their experience. Maybe people did that and can share their findings. Or maybe there is a theoretical construct in the rules that makes it obviously unbalanced without even trying, just by playing through an example situation. So, I am in favor of leaving game designer reasons out, because the question is just a resource request, not a request for experience from experts. I am also in favor of allowing people to ask "why" or "what would happen if I changed it", because that is the exact opposite: a request for the experience of other users that have already done so and can report back their findings. • Are "Why does X rule exist?" questions not also answered by block quotes from sources? If not, how are their answers not primarily opinion based and nothing more than conjecture about why a rule might exist? Feb 22, 2022 at 12:12 • @Exempt-Medic the point of the Q is that we seem to be overzealously interpreting any "why" question to be "what was the designer thinking" when we can instead use an analytical perspective - "what changes about the system if the rule doesn't exist or is different?" Questions that want to know a designer's thoughts are off-topic. Questions that want to understand the ramifications of a rule are on-topic. Both questions can be written "why does rule X exist" - so let's stop closing questions that can be reasonably answered the latter way just because someone might read them in the former. Feb 22, 2022 at 13:17 • @Carcer If that is what nvoigt intended, then I'd like to see it expressed. I was genuinely confused because, as far as I could tell, they never said anything about an effort to reinterpret "Why?" questions.
If what you've commented is what nvoigt meant in this answer, that's perfectly fine; I just do not see this answer stating that that is actually what they mean Feb 22, 2022 at 13:23 • FWIW I don't think that designer-reasons questions have no value and do think that knowing where to find those answers is valid expertise that questions could draw on, as Praxiteles' answer to a previous meta describes well; the issue with these questions was never that they cannot produce high-value answers (because they can and have) but that they tend to attract so many bad answers that they required a disproportionate amount of moderation. (Maybe now the community is larger it wouldn't be an overwhelming amount anymore, but who knows.) Feb 22, 2022 at 13:25 • @Exempt-Medic Here is one of my questions that asks "why" that was alleged to be a designer reasons question that clearly is not: Why does the Amulet of the Black Skull specify that you cannot use it to teleport to another plane of existence? Might be a good example of the kind of thing nvoigt is talking about here. Feb 22, 2022 at 13:40 • @Carcer It's worth noting that we probably should be closing "why" questions. If the asker is OK with "what is the impact on the game system of X" rather than "what were the designers thinking when they wrote X" then the question can be promptly cleaned up & reopened. That's technically what closing is for: temporarily blocking answers while we improve the question into an acceptable state. – Oblivious Sage Mod Feb 22, 2022 at 13:49 • @ObliviousSage Unless the question is clearly asking for a designer's reasons I think it's already in an answerable state and shouldn't be closed. Edit it to clarify, sure, but it doesn't need to be closed before that happens, if the only purpose of editing is to make sure we rule out designer-reasons answers. Feb 22, 2022 at 13:57 • @Carcer in a world with infinite moderation resources, absolutely. 
In practice, I think the word "Why" may be an attractive nuisance for bad answers and comments. Feb 22, 2022 at 16:07 • @fectin You underestimate the extent of my lurking on the site. Feb 22, 2022 at 17:26 • @ThomasMarkov I would note that of the three questions that Black Skull question originally asked, I objected to only the "why was it designed like this?" question - and since that has been removed by a recent edit, I have no objections to the remaining questions. If you think the edit still captures your intent, then I was just objecting to the original wording, which made it seem like a designer intent question. – Kirt Feb 25, 2022 at 8:36 • I wouldn't say I was 'triggered' by the use of 'why' in the title, but within the context of the question it did make it seem to me like you were actually asking why the magic item was designed the way it was. If you were asking only 'is this property redundant / is there a case in which it is not' - then at the very least stating it as a 'why' question was the source of misinterpretation on my part. – Kirt Feb 25, 2022 at 16:02

# Who among us designs the games we play?!

Only very few in the community actually design the games they play. BESW might be the only prolific one. Everyone else can only be guessing and grasping at clues about what the designer really wanted. As such, the biggest problem becomes apparent:

# Designer Reasons attracts unsubstantiated answers

Unless the designers have said "Yea, I thought about this or that when designing X" or "I wanted to make a game that does this", there is no way to know. However, people just can't stop themselves from attempting to answer questions... and then injecting their own interpretation. Of course, it must have been Mr. G's cat that inspired him to make cats the most deadly creature for a commoner, right?! WRONG! That's just wishful thinking!
We are a Stack Exchange; we need to adhere to citation rules of some sort so we can claim that ours are expert answers, and unsubstantiated answers are BAD. There is a tiny bit of gold in Designer Reasons questions, but too much rubble!
# Properties

- Label: 400.2.l.f
- Level: $400$
- Weight: $2$
- Character orbit: 400.l
- Analytic conductor: $3.194$
- Analytic rank: $0$
- Dimension: $12$
- CM: no
- Inner twists: $2$

## Newspace parameters

- Level: $$N$$ $$=$$ $$400 = 2^{4} \cdot 5^{2}$$
- Weight: $$k$$ $$=$$ $$2$$
- Character orbit: $$[\chi]$$ $$=$$ 400.l (of order $$4$$, degree $$2$$, minimal)

## Newform invariants

- Self dual: no
- Analytic conductor: $$3.19401608085$$
- Analytic rank: $$0$$
- Dimension: $$12$$
- Relative dimension: $$6$$ over $$\Q(i)$$
- Coefficient field: 12.0.4767670494822400.1
- Defining polynomial: $$x^{12} - 4 x^{11} + 7 x^{10} - 4 x^{9} - 8 x^{8} + 24 x^{7} - 38 x^{6} + 48 x^{5} - 32 x^{4} - 32 x^{3} + 112 x^{2} - 128 x + 64$$
- Coefficient ring: $$\Z[a_1, \ldots, a_{7}]$$
- Coefficient ring index: $$2^{2}$$
- Twist minimal: yes
- Sato-Tate group: $\mathrm{SU}(2)[C_{4}]$

## $q$-expansion

Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\ldots,\beta_{11}$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form.
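As a quick, independent sanity check (not part of the LMFDB data itself), the defining polynomial listed under the newform invariants can be verified to be irreducible of degree 12 with no real roots, which is consistent with the coefficient field label 12.0.4767670494822400.1 (degree 12, zero real embeddings). This is only a sketch assuming SymPy is available; the variable name `defining_poly` is mine, not LMFDB's:

```python
# Check the quoted defining polynomial against the stated invariants,
# using SymPy rather than LMFDB's own tooling.
from sympy import Poly, QQ, symbols

x = symbols("x")
defining_poly = Poly(
    x**12 - 4*x**11 + 7*x**10 - 4*x**9 - 8*x**8 + 24*x**7
    - 38*x**6 + 48*x**5 - 32*x**4 - 32*x**3 + 112*x**2 - 128*x + 64,
    x, domain=QQ,
)

assert defining_poly.degree() == 12      # matches the stated dimension
assert defining_poly.is_irreducible      # so Q[x]/(p) is a genuine degree-12 field
assert defining_poly.real_roots() == []  # totally imaginary: 0 real embeddings
```

If the assertions pass, the quoted polynomial is at least consistent with the degree and signature encoded in the field label.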
$$f(q)$$ $$=$$ $$q -\beta_{1} q^{2} -\beta_{6} q^{3} + \beta_{2} q^{4} + ( \beta_{1} - \beta_{2} + \beta_{3} - \beta_{4} + \beta_{6} - \beta_{7} - \beta_{10} ) q^{6} + ( \beta_{1} + \beta_{3} - \beta_{4} - \beta_{7} ) q^{7} + ( 1 - \beta_{1} - \beta_{2} + 2 \beta_{3} - \beta_{5} - \beta_{6} - \beta_{8} - \beta_{10} ) q^{8} + ( 1 - \beta_{1} + \beta_{2} + \beta_{4} + \beta_{5} + \beta_{10} - \beta_{11} ) q^{9} + ( -1 + \beta_{1} + \beta_{2} - 2 \beta_{3} + \beta_{4} + \beta_{5} + 2 \beta_{8} - \beta_{9} + \beta_{10} - \beta_{11} ) q^{11} + ( -\beta_{1} + \beta_{2} - \beta_{3} + 2 \beta_{4} + 2 \beta_{10} ) q^{12} + ( 1 - \beta_{1} + \beta_{3} + \beta_{4} - 2 \beta_{6} ) q^{13} + ( 2 - \beta_{2} + 2 \beta_{3} - 2 \beta_{7} - \beta_{8} ) q^{14} + ( -\beta_{4} - \beta_{5} + 2 \beta_{6} + \beta_{7} - \beta_{8} + \beta_{9} + \beta_{11} ) q^{16} + ( 1 - \beta_{1} + \beta_{2} + \beta_{7} - 2 \beta_{8} - \beta_{10} + \beta_{11} ) q^{17} + ( -1 + \beta_{3} - \beta_{4} - \beta_{6} - \beta_{7} - \beta_{8} + \beta_{9} - \beta_{10} + \beta_{11} ) q^{18} + ( -1 - 2 \beta_{1} + \beta_{4} - \beta_{5} - \beta_{6} - \beta_{7} + 2 \beta_{11} ) q^{19} + ( -1 - \beta_{1} + 2 \beta_{2} - 3 \beta_{3} + 2 \beta_{4} + \beta_{5} + 2 \beta_{7} + 2 \beta_{10} ) q^{21} + ( \beta_{4} + 2 \beta_{5} - \beta_{6} + \beta_{7} + \beta_{8} - \beta_{10} - 2 \beta_{11} ) q^{22} + ( 1 - 2 \beta_{1} + 2 \beta_{2} - 2 \beta_{3} + 3 \beta_{4} + \beta_{5} - \beta_{6} + \beta_{7} - \beta_{9} + 2 \beta_{10} - 2 \beta_{11} ) q^{23} + ( -1 + \beta_{1} - 2 \beta_{4} + \beta_{5} - \beta_{6} + 2 \beta_{7} -
\beta_{10} ) q^{24} + ( -2 + \beta_{1} - \beta_{2} + \beta_{3} - 2 \beta_{4} + 2 \beta_{6} - \beta_{8} - 2 \beta_{10} ) q^{26} + ( -1 + 3 \beta_{1} - \beta_{2} + 2 \beta_{3} - \beta_{4} - \beta_{5} - 2 \beta_{7} + 2 \beta_{8} + \beta_{9} - \beta_{10} - \beta_{11} ) q^{27} + ( -1 + \beta_{3} - \beta_{4} + \beta_{6} - \beta_{7} - \beta_{8} + \beta_{9} + \beta_{10} + \beta_{11} ) q^{28} + ( -\beta_{1} - \beta_{3} - \beta_{5} - 2 \beta_{6} + \beta_{7} - 2 \beta_{11} ) q^{29} + ( -2 \beta_{3} - \beta_{6} + 2 \beta_{8} + \beta_{9} + 2 \beta_{10} ) q^{31} + ( 2 - 2 \beta_{1} + \beta_{2} - 2 \beta_{3} + \beta_{4} - \beta_{5} - 2 \beta_{6} + \beta_{7} - \beta_{9} + 2 \beta_{10} + \beta_{11} ) q^{32} + ( -2 - 2 \beta_{1} + 2 \beta_{2} - 2 \beta_{3} + \beta_{4} - \beta_{5} + \beta_{7} + 2 \beta_{10} + 2 \beta_{11} ) q^{33} + ( -1 - \beta_{1} - 2 \beta_{2} + 3 \beta_{3} - \beta_{4} - 4 \beta_{5} - \beta_{6} - \beta_{7} - \beta_{8} + \beta_{9} - \beta_{10} + \beta_{11} ) q^{34} + ( -1 + 2 \beta_{1} - 2 \beta_{2} + 3 \beta_{3} - \beta_{4} - 2 \beta_{5} + \beta_{6} - 3 \beta_{7} - \beta_{8} - \beta_{9} - \beta_{10} + \beta_{11} ) q^{36} + ( -\beta_{1} - 2 \beta_{2} + 3 \beta_{3} - \beta_{4} - 2 \beta_{5} + \beta_{7} - 4 \beta_{8} - 2 \beta_{10} + 2 \beta_{11} ) q^{37} + ( -2 + \beta_{1} + \beta_{2} + \beta_{3} - \beta_{4} + \beta_{6} + 3 \beta_{7} - 2 \beta_{9} - \beta_{10} - 2 \beta_{11} ) q^{38} + ( 1 - \beta_{1} + \beta_{3} + \beta_{5} - \beta_{6} + 4 \beta_{7} - \beta_{9} ) q^{39} + ( 1 - 4 \beta_{1} - 2 \beta_{3} + 3 \beta_{4} + \beta_{5} - 2 \beta_{6} + 2 \beta_{7} - 2 \beta_{9} ) q^{41} + ( -2 + 2 \beta_{1} - \beta_{2} - 2 \beta_{4} - 2 \beta_{6} + \beta_{8} - 2 \beta_{10} ) q^{42} + ( 1 - \beta_{1} + \beta_{2} - \beta_{4} + \beta_{5} + 2 \beta_{8} + 2 \beta_{9} + \beta_{10} - \beta_{11} ) q^{43} + ( -5 + \beta_{1} - 2 \beta_{3} + \beta_{4} + \beta_{6} - \beta_{7} + \beta_{9} - \beta_{10} + \beta_{11} ) q^{44} + ( -1 + \beta_{1} - \beta_{2} + 2 \beta_{3} - 3 \beta_{4} - 
\beta_{6} + \beta_{7} + 3 \beta_{9} - 3 \beta_{10} + \beta_{11} ) q^{46} + ( -1 + \beta_{1} - 2 \beta_{2} - \beta_{3} - 2 \beta_{4} + \beta_{5} + \beta_{6} + 2 \beta_{8} - \beta_{9} - 2 \beta_{11} ) q^{47} + ( 1 + 2 \beta_{1} - 2 \beta_{2} + \beta_{3} - \beta_{5} + \beta_{6} - 6 \beta_{7} - \beta_{10} ) q^{48} + ( 4 \beta_{1} - 4 \beta_{3} + \beta_{4} + 3 \beta_{5} - \beta_{7} + 2 \beta_{8} + 2 \beta_{10} ) q^{49} + ( 2 - \beta_{1} - \beta_{2} + 2 \beta_{3} + 2 \beta_{4} - \beta_{6} - \beta_{7} + \beta_{10} - 3 \beta_{11} ) q^{51} + ( 1 + \beta_{2} - \beta_{3} + 3 \beta_{4} - 2 \beta_{5} - \beta_{6} - \beta_{7} + \beta_{9} + 3 \beta_{10} + \beta_{11} ) q^{52} + ( 1 - 2 \beta_{2} - 2 \beta_{3} + 2 \beta_{5} - \beta_{7} - 2 \beta_{9} - 2 \beta_{10} ) q^{53} + ( 2 - 2 \beta_{1} - 2 \beta_{3} + 3 \beta_{4} + 2 \beta_{5} + \beta_{6} + 3 \beta_{7} - \beta_{8} - 2 \beta_{9} + \beta_{10} ) q^{54} + ( 1 + 2 \beta_{1} + \beta_{3} - \beta_{4} - \beta_{6} - 3 \beta_{7} - \beta_{8} - \beta_{9} + \beta_{10} + \beta_{11} ) q^{56} + ( 2 - 3 \beta_{1} - 3 \beta_{2} + 2 \beta_{3} + 2 \beta_{6} - 2 \beta_{8} + 2 \beta_{9} - \beta_{10} + 3 \beta_{11} ) q^{57} + ( 4 + \beta_{1} - \beta_{2} + \beta_{3} - 2 \beta_{4} + 2 \beta_{6} + \beta_{8} + 2 \beta_{9} - 2 \beta_{10} + 2 \beta_{11} ) q^{58} + ( 3 \beta_{1} - \beta_{2} + 2 \beta_{4} - 2 \beta_{5} - 3 \beta_{7} - 2 \beta_{8} + 2 \beta_{9} - \beta_{10} + \beta_{11} ) q^{59} + ( -3 \beta_{1} - \beta_{3} + \beta_{4} - 2 \beta_{5} + \beta_{7} + 2 \beta_{11} ) q^{61} + ( 1 + \beta_{1} + \beta_{2} - 2 \beta_{3} - \beta_{4} + 4 \beta_{5} + \beta_{6} - \beta_{7} + 2 \beta_{8} - 3 \beta_{9} - \beta_{10} - \beta_{11} ) q^{62} + ( -1 + 4 \beta_{1} - 2 \beta_{2} + 2 \beta_{3} + \beta_{4} + \beta_{5} + 2 \beta_{6} - 3 \beta_{7} + 2 \beta_{8} - 2 \beta_{9} - 2 \beta_{11} ) q^{63} + ( 3 - \beta_{2} + 3 \beta_{3} - 4 \beta_{4} + \beta_{5} + \beta_{6} + \beta_{8} - 3 \beta_{10} - 2 \beta_{11} ) q^{64} + ( 2 + \beta_{1} + 2 \beta_{3} - 2 \beta_{4} - 2 
\beta_{6} + 2 \beta_{7} - 2 \beta_{9} - 2 \beta_{10} - 2 \beta_{11} ) q^{66} + ( -4 + \beta_{1} - \beta_{2} + \beta_{6} - 5 \beta_{7} + \beta_{10} - \beta_{11} ) q^{67} + ( 5 + \beta_{2} - \beta_{3} - \beta_{4} + 3 \beta_{6} + 5 \beta_{7} - \beta_{8} - \beta_{9} + \beta_{10} + \beta_{11} ) q^{68} + ( -3 + 4 \beta_{1} + 4 \beta_{3} - 2 \beta_{4} - 2 \beta_{5} - \beta_{7} + 4 \beta_{8} - 2 \beta_{9} - 2 \beta_{11} ) q^{69} + ( \beta_{1} + 2 \beta_{2} + \beta_{3} - \beta_{4} + 2 \beta_{5} - \beta_{6} - \beta_{7} + 2 \beta_{8} - \beta_{9} - 2 \beta_{11} ) q^{71} + ( 3 + \beta_{3} + \beta_{4} + \beta_{6} + 3 \beta_{7} - \beta_{8} + \beta_{9} + 3 \beta_{10} - \beta_{11} ) q^{72} + ( -1 + \beta_{1} + \beta_{2} - 2 \beta_{3} + \beta_{4} - \beta_{5} + 2 \beta_{6} + 3 \beta_{7} + 2 \beta_{9} + \beta_{10} - \beta_{11} ) q^{73} + ( 2 \beta_{1} - \beta_{2} - 2 \beta_{4} - 4 \beta_{5} + 2 \beta_{6} - \beta_{8} + 2 \beta_{9} + 2 \beta_{10} + 2 \beta_{11} ) q^{74} + ( 6 - \beta_{1} - \beta_{2} + \beta_{3} + 2 \beta_{4} - 2 \beta_{5} - 2 \beta_{6} - 2 \beta_{8} + 4 \beta_{9} ) q^{76} + ( -1 - 5 \beta_{1} + 2 \beta_{2} - 3 \beta_{3} - 3 \beta_{5} + 4 \beta_{7} - 2 \beta_{10} + 2 \beta_{11} ) q^{77} + ( -1 + \beta_{1} - 2 \beta_{3} - \beta_{4} + \beta_{6} - 3 \beta_{7} - \beta_{8} + \beta_{9} - \beta_{10} - \beta_{11} ) q^{78} + ( 2 - 3 \beta_{1} + 2 \beta_{2} - 7 \beta_{3} - 3 \beta_{4} + 2 \beta_{5} + 2 \beta_{6} + 5 \beta_{7} - 2 \beta_{8} - 2 \beta_{9} + 2 \beta_{11} ) q^{79} + ( \beta_{1} + \beta_{2} + \beta_{4} + \beta_{5} - 2 \beta_{6} - 2 \beta_{8} + 2 \beta_{9} - \beta_{10} + \beta_{11} ) q^{81} + ( -6 + 2 \beta_{1} + 2 \beta_{2} - \beta_{3} - 2 \beta_{4} + 2 \beta_{6} + 2 \beta_{7} + 2 \beta_{8} + 2 \beta_{9} - 2 \beta_{10} - 2 \beta_{11} ) q^{82} + ( -1 + 3 \beta_{1} - \beta_{2} - 4 \beta_{3} - 3 \beta_{4} - \beta_{5} + \beta_{6} + 2 \beta_{7} + \beta_{10} + 3 \beta_{11} ) q^{83} + ( 1 + 2 \beta_{1} - 2 \beta_{2} + \beta_{3} + \beta_{4} + 3 \beta_{6} - 3 \beta_{7} + 
\beta_{8} - \beta_{9} - \beta_{10} - \beta_{11} ) q^{84} + ( 1 - 2 \beta_{1} + 2 \beta_{2} - \beta_{3} + \beta_{4} + 2 \beta_{5} - \beta_{6} - 3 \beta_{7} - \beta_{8} - 3 \beta_{9} - \beta_{10} + \beta_{11} ) q^{86} + ( 3 \beta_{1} + 4 \beta_{2} + \beta_{3} - \beta_{4} + 2 \beta_{5} + \beta_{6} + 5 \beta_{7} + 2 \beta_{8} + \beta_{9} + 2 \beta_{10} - 4 \beta_{11} ) q^{87} + ( -5 + 3 \beta_{1} - 2 \beta_{3} + 2 \beta_{4} - \beta_{5} - \beta_{6} + 4 \beta_{7} + 2 \beta_{8} - 2 \beta_{9} + \beta_{10} ) q^{88} + ( 1 - \beta_{1} + \beta_{2} + 2 \beta_{3} - \beta_{4} + 3 \beta_{5} + 2 \beta_{6} - 3 \beta_{7} + 2 \beta_{8} + 2 \beta_{9} - \beta_{10} - \beta_{11} ) q^{89} + ( 2 \beta_{2} - 2 \beta_{3} + 2 \beta_{4} + 2 \beta_{10} ) q^{91} + ( -2 - \beta_{2} - 2 \beta_{3} + 2 \beta_{4} - 2 \beta_{5} + 2 \beta_{6} - 4 \beta_{7} - \beta_{8} - 4 \beta_{9} + 2 \beta_{11} ) q^{92} + ( 4 + 2 \beta_{3} + \beta_{4} + \beta_{5} + 2 \beta_{6} + 2 \beta_{7} ) q^{93} + ( 3 + \beta_{1} + 4 \beta_{2} - 4 \beta_{3} + 3 \beta_{4} + 4 \beta_{5} + \beta_{6} - 3 \beta_{7} + 3 \beta_{8} + \beta_{9} + 3 \beta_{10} - \beta_{11} ) q^{94} + ( -1 - 2 \beta_{1} + \beta_{2} + \beta_{3} + 2 \beta_{4} + \beta_{5} + \beta_{6} + 4 \beta_{7} + \beta_{8} + 3 \beta_{10} ) q^{96} + ( -1 + 4 \beta_{1} + 2 \beta_{2} - 4 \beta_{3} + 3 \beta_{4} + 3 \beta_{5} + 2 \beta_{6} - \beta_{7} - 2 \beta_{9} + 2 \beta_{10} + 2 \beta_{11} ) q^{97} + ( -6 + 3 \beta_{1} - 2 \beta_{2} - 2 \beta_{3} + 4 \beta_{5} - 4 \beta_{7} + 4 \beta_{8} - 2 \beta_{9} - 2 \beta_{11} ) q^{98} + ( 1 - 5 \beta_{1} + \beta_{2} + 2 \beta_{3} + 3 \beta_{4} - \beta_{5} + 2 \beta_{6} - \beta_{10} - \beta_{11} ) q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$12q - 4q^{2} - 2q^{3} + 2q^{4} + 6q^{6} + 8q^{8} + O(q^{10})$$ $$12q - 4q^{2} - 2q^{3} + 2q^{4} + 6q^{6} + 8q^{8} - 2q^{11} - 8q^{12} + 4q^{13} + 14q^{14} + 2q^{16} + 8q^{17} - 18q^{18} - 14q^{19} - 20q^{21} - 2q^{22} - 14q^{24} - 16q^{26} + 10q^{27} - 26q^{28} - 4q^{31} + 16q^{32} - 
28q^{33} - 6q^{34} + 2q^{36} - 8q^{37} - 10q^{38} - 10q^{42} - 44q^{44} - 10q^{46} - 8q^{47} + 28q^{48} + 4q^{49} + 10q^{51} + 12q^{52} + 16q^{53} + 10q^{54} + 6q^{56} + 60q^{58} + 20q^{59} + 4q^{61} + 18q^{62} + 8q^{63} + 38q^{64} + 32q^{66} - 50q^{67} + 60q^{68} + 14q^{72} + 10q^{74} + 60q^{76} + 8q^{77} - 4q^{78} + 12q^{79} - 8q^{81} - 42q^{82} + 2q^{83} + 34q^{84} + 6q^{86} - 30q^{88} + 2q^{92} + 44q^{93} + 32q^{94} - 34q^{96} - 64q^{98} + 12q^{99} + O(q^{100})$$ Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{12} - 4 x^{11} + 7 x^{10} - 4 x^{9} - 8 x^{8} + 24 x^{7} - 38 x^{6} + 48 x^{5} - 32 x^{4} - 32 x^{3} + 112 x^{2} - 128 x + 64$$: $$\beta_{0}$$ $$=$$ $$1$$ $$\beta_{1}$$ $$=$$ $$\nu$$ $$\beta_{2}$$ $$=$$ $$\nu^{2}$$ $$\beta_{3}$$ $$=$$ $$($$$$-\nu^{11} + 2 \nu^{10} + \nu^{9} - 2 \nu^{8} + 6 \nu^{5} - 4 \nu^{4} - 16 \nu^{3} + 16 \nu^{2} + 32 \nu - 32$$$$)/16$$ $$\beta_{4}$$ $$=$$ $$($$$$-\nu^{11} + 2 \nu^{10} + \nu^{9} - 6 \nu^{8} + 8 \nu^{7} - 12 \nu^{6} + 14 \nu^{5} - 4 \nu^{4} - 32 \nu^{3} + 56 \nu^{2} - 32 \nu + 16$$$$)/16$$ $$\beta_{5}$$ $$=$$ $$($$$$\nu^{11} - 2 \nu^{10} - 5 \nu^{9} + 18 \nu^{8} - 12 \nu^{7} + 10 \nu^{5} - 28 \nu^{4} + 88 \nu^{3} - 112 \nu^{2} - 16 \nu + 128$$$$)/32$$ $$\beta_{6}$$ $$=$$ $$($$$$-\nu^{11} + 2 \nu^{10} - 4 \nu^{8} + 5 \nu^{7} - 6 \nu^{6} + 10 \nu^{5} - 4 \nu^{4} - 18 \nu^{3} + 36 \nu^{2} - 16 \nu - 8$$$$)/8$$ $$\beta_{7}$$ $$=$$ $$($$$$\nu^{11} - 6 \nu^{10} + 11 \nu^{9} - 2 \nu^{8} - 12 \nu^{7} + 24 \nu^{6} - 38 \nu^{5} + 60 \nu^{4} - 40 \nu^{3} - 64 \nu^{2} + 144 \nu - 64$$$$)/32$$ $$\beta_{8}$$ $$=$$ $$($$$$-\nu^{11} + 4 \nu^{10} - 3 \nu^{9} - 4 \nu^{8} + 12 \nu^{7} - 16 \nu^{6} + 22 \nu^{5} - 24 \nu^{4} - 8 \nu^{3} + 72 \nu^{2} - 80 \nu + 32$$$$)/8$$ $$\beta_{9}$$ $$=$$ $$($$$$9 \nu^{11} - 22 \nu^{10} + 15 \nu^{9} + 22 \nu^{8} - 56 \nu^{7} + 80 \nu^{6} - 118 \nu^{5} + 124 \nu^{4} + 80 \nu^{3} - 336 \nu^{2} + 304 \nu - 64$$$$)/32$$ $$\beta_{10}$$ $$=$$ $$($$$$3 \nu^{11} - 14 \nu^{10} + 21 \nu^{9} + 6 
\nu^{8} - 56 \nu^{7} + 88 \nu^{6} - 114 \nu^{5} + 124 \nu^{4} - 16 \nu^{3} - 288 \nu^{2} + 496 \nu - 320$$$$)/32$$ $$\beta_{11}$$ $$=$$ $$($$$$-7 \nu^{11} + 30 \nu^{10} - 41 \nu^{9} + 2 \nu^{8} + 80 \nu^{7} - 144 \nu^{6} + 202 \nu^{5} - 252 \nu^{4} + 96 \nu^{3} + 400 \nu^{2} - 720 \nu + 480$$$$)/32$$ $$1$$ $$=$$ $$\beta_0$$ $$\nu$$ $$=$$ $$\beta_{1}$$ $$\nu^{2}$$ $$=$$ $$\beta_{2}$$ $$\nu^{3}$$ $$=$$ $$\beta_{10} + \beta_{8} + \beta_{6} + \beta_{5} - 2 \beta_{3} + \beta_{2} + \beta_{1} - 1$$ $$\nu^{4}$$ $$=$$ $$\beta_{11} + \beta_{9} - \beta_{8} + \beta_{7} + 2 \beta_{6} - \beta_{5} - \beta_{4}$$ $$\nu^{5}$$ $$=$$ $$-\beta_{11} - 2 \beta_{10} + \beta_{9} - \beta_{7} + 2 \beta_{6} + \beta_{5} - \beta_{4} + 2 \beta_{3} - \beta_{2} + 2 \beta_{1} - 2$$ $$\nu^{6}$$ $$=$$ $$-2 \beta_{11} - 3 \beta_{10} + \beta_{8} + \beta_{6} + \beta_{5} - 4 \beta_{4} + 3 \beta_{3} - \beta_{2} + 3$$ $$\nu^{7}$$ $$=$$ $$-\beta_{11} - 2 \beta_{10} - \beta_{9} + 2 \beta_{8} + 5 \beta_{7} + \beta_{5} - 5 \beta_{4} - 3 \beta_{2} + 6 \beta_{1} - 4$$ $$\nu^{8}$$ $$=$$ $$2 \beta_{11} - 3 \beta_{10} - 3 \beta_{8} + 8 \beta_{7} - 3 \beta_{6} - 3 \beta_{5} - 4 \beta_{4} + 7 \beta_{3} + \beta_{2} - 4 \beta_{1} - 5$$ $$\nu^{9}$$ $$=$$ $$-\beta_{11} + 4 \beta_{10} - \beta_{9} + 8 \beta_{8} + 5 \beta_{7} - 2 \beta_{6} + 7 \beta_{5} + 3 \beta_{4} - 4 \beta_{3} + 3 \beta_{2} - 4 \beta_{1} - 10$$ $$\nu^{10}$$ $$=$$ $$8 \beta_{11} + 5 \beta_{10} + 6 \beta_{9} - \beta_{8} + 6 \beta_{7} + \beta_{6} - 9 \beta_{5} - 2 \beta_{4} + 7 \beta_{3} - 7 \beta_{2} - 12 \beta_{1} + 11$$ $$\nu^{11}$$ $$=$$ $$\beta_{11} - 8 \beta_{10} + 13 \beta_{9} - 9 \beta_{7} - 6 \beta_{6} - 11 \beta_{5} + 5 \beta_{4} + 24 \beta_{3} - 19 \beta_{2} + 8 \beta_{1} - 6$$ ## Character values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/400\mathbb{Z}\right)^\times$$. 
$$n$$ $$101$$ $$177$$ $$351$$ $$\chi(n)$$ $$-\beta_{7}$$ $$1$$ $$1$$ ## Embeddings For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label. Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 101.1 1.35979 − 0.388551i 1.22306 + 0.710021i 0.719139 − 1.21772i 0.618969 + 1.27156i −0.507829 − 1.31989i −1.41313 − 0.0554252i 1.35979 + 0.388551i 1.22306 − 0.710021i 0.719139 + 1.21772i 0.618969 − 1.27156i −0.507829 + 1.31989i −1.41313 + 0.0554252i −1.35979 + 0.388551i −1.03997 + 1.03997i 1.69806 1.05670i 0 1.01006 1.81822i 1.49668i −1.89842 + 2.09667i 0.836925i 0 101.2 −1.22306 0.710021i 1.09156 1.09156i 0.991741 + 1.73679i 0 −2.11008 + 0.560012i 0.973926i 0.0202025 2.82835i 0.616985i 0 101.3 −0.719139 + 1.21772i 1.66783 1.66783i −0.965679 1.75142i 0 0.831547 + 3.23035i 1.87372i 2.82719 + 0.0835873i 2.56332i 0 101.4 −0.618969 1.27156i −2.16859 + 2.16859i −1.23375 + 1.57412i 0 4.09979 + 1.41521i 3.30519i 2.76525 + 0.594467i 6.40553i 0 101.5 0.507829 + 1.31989i −0.0623209 + 0.0623209i −1.48422 + 1.34056i 0 −0.113905 0.0506084i 0.375877i −2.52312 1.27824i 2.99223i 0 101.6 1.41313 + 0.0554252i −0.488516 + 0.488516i 1.99386 + 0.156646i 0 −0.717411 + 0.663259i 4.71540i 2.80889 + 0.331870i 2.52270i 0 301.1 −1.35979 0.388551i −1.03997 1.03997i 1.69806 + 1.05670i 0 1.01006 + 1.81822i 1.49668i −1.89842 2.09667i 0.836925i 0 301.2 −1.22306 + 0.710021i 1.09156 + 1.09156i 0.991741 1.73679i 0 −2.11008 0.560012i 0.973926i 0.0202025 + 2.82835i 0.616985i 0 301.3 −0.719139 1.21772i 1.66783 + 1.66783i −0.965679 + 1.75142i 0 0.831547 3.23035i 1.87372i 2.82719 0.0835873i 2.56332i 0 301.4 −0.618969 + 1.27156i −2.16859 2.16859i −1.23375 1.57412i 0 4.09979 1.41521i 3.30519i 2.76525 0.594467i 6.40553i 0 301.5 0.507829 1.31989i −0.0623209 0.0623209i −1.48422 1.34056i 0 −0.113905 + 0.0506084i 0.375877i −2.52312 + 
1.27824i 2.99223i 0 301.6 1.41313 0.0554252i −0.488516 0.488516i 1.99386 0.156646i 0 −0.717411 0.663259i 4.71540i 2.80889 0.331870i 2.52270i 0 ## Inner twists Char Parity Ord Mult Type 1.a even 1 1 trivial 16.e even 4 1 inner ## Twists By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 400.2.l.f 12 4.b odd 2 1 1600.2.l.g 12 5.b even 2 1 400.2.l.g yes 12 5.c odd 4 1 400.2.q.e 12 5.c odd 4 1 400.2.q.f 12 16.e even 4 1 inner 400.2.l.f 12 16.f odd 4 1 1600.2.l.g 12 20.d odd 2 1 1600.2.l.f 12 20.e even 4 1 1600.2.q.e 12 20.e even 4 1 1600.2.q.f 12 80.i odd 4 1 400.2.q.e 12 80.j even 4 1 1600.2.q.f 12 80.k odd 4 1 1600.2.l.f 12 80.q even 4 1 400.2.l.g yes 12 80.s even 4 1 1600.2.q.e 12 80.t odd 4 1 400.2.q.f 12 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 400.2.l.f 12 1.a even 1 1 trivial 400.2.l.f 12 16.e even 4 1 inner 400.2.l.g yes 12 5.b even 2 1 400.2.l.g yes 12 80.q even 4 1 400.2.q.e 12 5.c odd 4 1 400.2.q.e 12 80.i odd 4 1 400.2.q.f 12 5.c odd 4 1 400.2.q.f 12 80.t odd 4 1 1600.2.l.f 12 20.d odd 2 1 1600.2.l.f 12 80.k odd 4 1 1600.2.l.g 12 4.b odd 2 1 1600.2.l.g 12 16.f odd 4 1 1600.2.q.e 12 20.e even 4 1 1600.2.q.e 12 80.s even 4 1 1600.2.q.f 12 20.e even 4 1 1600.2.q.f 12 80.j even 4 1 ## Hecke kernels This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(400, [\chi])$$: $$T_{3}^{12} + \cdots$$ $$T_{7}^{12} + 40 T_{7}^{10} + 484 T_{7}^{8} + 2144 T_{7}^{6} + 3776 T_{7}^{4} + 2304 T_{7}^{2} + 256$$ ## Hecke characteristic polynomials $p$ $F_p(T)$ $2$ $$64 + 128 T + 112 T^{2} + 32 T^{3} - 32 T^{4} - 48 T^{5} - 38 T^{6} - 24 T^{7} - 8 T^{8} + 4 T^{9} + 7 T^{10} + 4 T^{11} + T^{12}$$ $3$ $$1 + 18 T + 162 T^{2} + 282 T^{3} + 243 T^{4} - 20 T^{5} + 36 T^{6} + 60
T^{7} + 51 T^{8} - 6 T^{9} + 2 T^{10} + 2 T^{11} + T^{12}$$ $5$ $$T^{12}$$ $7$ $$256 + 2304 T^{2} + 3776 T^{4} + 2144 T^{6} + 484 T^{8} + 40 T^{10} + T^{12}$$ $11$ $$85849 - 294758 T + 506018 T^{2} - 383110 T^{3} + 162307 T^{4} - 30308 T^{5} + 2212 T^{6} - 484 T^{7} + 619 T^{8} - 94 T^{9} + 2 T^{10} + 2 T^{11} + T^{12}$$ $13$ $$256 + 3328 T + 21632 T^{2} + 30784 T^{3} + 22800 T^{4} + 5888 T^{5} + 832 T^{6} + 384 T^{7} + 488 T^{8} + 112 T^{9} + 8 T^{10} - 4 T^{11} + T^{12}$$ $17$ $$( -823 - 1412 T + 631 T^{2} + 152 T^{3} - 49 T^{4} - 4 T^{5} + T^{6} )^{2}$$ $19$ $$29997529 + 20494934 T + 7001282 T^{2} + 1036582 T^{3} + 792387 T^{4} + 486580 T^{5} + 165412 T^{6} + 25044 T^{7} + 2219 T^{8} + 254 T^{9} + 98 T^{10} + 14 T^{11} + T^{12}$$ $23$ $$8248384 + 7631232 T^{2} + 2431600 T^{4} + 304192 T^{6} + 11868 T^{8} + 184 T^{10} + T^{12}$$ $29$ $$428655616 + 33788928 T + 1331712 T^{2} - 4592512 T^{3} + 5399440 T^{4} - 92800 T^{5} + 512 T^{6} - 2208 T^{7} + 7960 T^{8} - 32 T^{9} + T^{12}$$ $31$ $$( 2152 + 3688 T + 1100 T^{2} - 248 T^{3} - 82 T^{4} + 2 T^{5} + T^{6} )^{2}$$ $37$ $$6801664 + 9847808 T + 7129088 T^{2} - 8940288 T^{3} + 4476736 T^{4} - 781568 T^{5} + 51840 T^{6} + 4704 T^{7} + 3652 T^{8} - 512 T^{9} + 32 T^{10} + 8 T^{11} + T^{12}$$ $41$ $$86397025 + 111297670 T^{2} + 17569359 T^{4} + 1016212 T^{6} + 24687 T^{8} + 262 T^{10} + T^{12}$$ $43$ $$26214400 - 52428800 T + 52428800 T^{2} - 26214400 T^{3} + 7356416 T^{4} - 806912 T^{5} + 8192 T^{6} - 3840 T^{7} + 7108 T^{8} - 128 T^{9} + T^{12}$$ $47$ $$( -22016 + 9792 T + 4068 T^{2} - 472 T^{3} - 136 T^{4} + 4 T^{5} + T^{6} )^{2}$$ $53$ $$7225000000 - 4216000000 T + 1230080000 T^{2} - 168787200 T^{3} + 15410224 T^{4} - 2509056 T^{5} + 812032 T^{6} - 106368 T^{7} + 7436 T^{8} - 352 T^{9} + 128 T^{10} - 16 T^{11} + T^{12}$$ $59$ $$56712564736 - 33926946816 T + 10147995648 T^{2} - 963770368 T^{3} + 71756800 T^{4} - 14528000 T^{5} + 4040192 T^{6} - 403136 T^{7} + 21380 T^{8} - 712 T^{9} + 200 T^{10} - 20 T^{11} + T^{12}$$ 
$61$ $$473344 + 374272 T + 147968 T^{2} - 236544 T^{3} + 628544 T^{4} + 249344 T^{5} + 59776 T^{6} - 33248 T^{7} + 9476 T^{8} + 72 T^{9} + 8 T^{10} - 4 T^{11} + T^{12}$$ $67$ $$38626225 + 6152850 T + 490050 T^{2} + 17786890 T^{3} + 37824659 T^{4} + 22084652 T^{5} + 7133348 T^{6} + 1460572 T^{7} + 203043 T^{8} + 19386 T^{9} + 1250 T^{10} + 50 T^{11} + T^{12}$$ $71$ $$95257600 + 132730880 T^{2} + 18877456 T^{4} + 1009152 T^{6} + 24008 T^{8} + 256 T^{10} + T^{12}$$ $73$ $$192626641 + 287556114 T^{2} + 55765023 T^{4} + 2400924 T^{6} + 42511 T^{8} + 338 T^{10} + T^{12}$$ $79$ $$( 1250320 - 571120 T + 45632 T^{2} + 3976 T^{3} - 450 T^{4} - 6 T^{5} + T^{6} )^{2}$$ $83$ $$4583881 + 12601926 T + 17322498 T^{2} + 13230166 T^{3} + 5853795 T^{4} + 1106836 T^{5} + 14084 T^{6} + 13044 T^{7} + 40923 T^{8} + 494 T^{9} + 2 T^{10} - 2 T^{11} + T^{12}$$ $89$ $$2165692369 + 2006433682 T^{2} + 240821727 T^{4} + 7595612 T^{6} + 96847 T^{8} + 530 T^{10} + T^{12}$$ $97$ $$( 37504 + 63488 T + 14336 T^{2} - 1088 T^{3} - 324 T^{4} + T^{6} )^{2}$$
# How to draw a DC/DC Buck Converter in LaTeX using CircuiTikZ

A dc-dc buck converter is a dc voltage converter used to transform an unregulated dc input into a lower, regulated dc output. This transformation is achieved with semiconductor devices that turn on and off at a high switching frequency. In this tutorial, we will learn how to draw a dc-dc buck converter in LaTeX using the CircuiTikZ package. The idea is to recreate the circuit diagram shown in Fig. 1, which was recently published in IEEE Xplore.

Buck converter circuit diagram (published in IEEE Xplore 2019)

## What motivates me to use TikZ

Some of the drawbacks of adding an image directly into a LaTeX document are:

• The font is not the same as the rest of the document,
• The font size is affected when the image is scaled,
• If you would like to modify your illustration, you have to go back to the drawing tool you used, which is one more distraction.

Well, we assume that the image in question is a good one, like mine! For these reasons, I prefer to use TikZ to draw my illustrations and benefit from its features such as precision, reusability and automation. And when it comes to drawing circuits in LaTeX, CircuiTikZ is the best option.

My first illustration, December 2011.

## CircuiTikZ, minimal code

The CircuiTikZ package can be loaded as follows:

\documentclass[border=0.2cm]{standalone}
\usepackage{circuitikz}

\begin{document}

\begin{circuitikz}[american]
\end{circuitikz}

\end{document}

• The CircuiTikZ package is built on top of PGF/TikZ, which means there is no need to load the TikZ package separately, as it is already loaded by CircuiTikZ.

The circuit code will be added inside the circuitikz environment, which is an alias for tikzpicture. As an option, we have chosen the american style for the electrical components.

## The Origin, a successful journey depends on it!

Choosing the starting point of your illustration (the origin) is important, as it makes positioning easy to deduce.
Sometimes, to go faster, I draw my illustration on paper and add coordinates to it. These are then adjusted by trial and error until I get a satisfactory result. Here is my hand-drawn version of a dc-dc buck converter:

Hand drawn version of a dc-dc buck converter

I have chosen the starting point at the bottom left of the circuit diagram. From there, I will draw a dc source, then a switch, an inductor, a resistor, a ground, and go back to the starting point. Along this path, I save coordinates to use them later to draw the diode and the capacitor (a1 and a2).

Ready for details? Let's go!

## DC/DC buck converter circuit diagram

Before going further with details, and for curious minds, here is the buck converter schematic drawn in LaTeX using the CircuiTikZ package:

DC-DC Buck converter drawn in LaTeX using CircuiTikZ package

And the corresponding code is:

\documentclass[border=0.2cm]{standalone}
\usepackage{circuitikz}

\begin{document}

\begin{circuitikz}[american]
% Change components size
\ctikzset{
  resistors/scale=0.7,
  capacitors/scale=0.7,
  diodes/scale=0.7,
  inductors/coils=6
}
\node[nigfete,rotate=90,label=S] (switch) at (1.7,3){};
% Draw DC source
\draw (0,0) to[battery1,invert,l=$v_{in}$] ++(0,3) -- (switch.D);
% Draw the inductor
\draw (switch.E) -- ++(1,0) coordinate(a1);
\draw (a1) to[cute inductor,l=L,i>^=$i_L$] ++(3,0) coordinate(a2);
% Draw the resistor
\draw (a2) -- ++(1.5,0) to[R,l_=R,v^>=$\:v_o$,i>_=$i_o$] ++(0,-3) to[short] ++(-1.5,0) coordinate(a3);
% Draw the ground
\node[ground] at (a3) {};
% Close the circuit
\draw (a3) -- ++(-3,0) coordinate(a4);
\draw (a4) -- (0,0);
% Draw the capacitor
\draw (a3) to[C,invert,*-*,l=C,v<=$\:v_c$] (a2);
% Draw the diode
\draw (a4) to[D*,l_=D,*-*] (a1);
\end{circuitikz}

\end{document}

• In CircuiTikZ, the electrical components are separated into two main categories:

1. Bipoles, which are placed along a path.
In this example, this corresponds to the resistor, diode, DC voltage source, inductor and capacitor.

2. Components that have any number of poles or connections and are placed as nodes. In the buck converter case, this corresponds to the transistor and the ground elements.

Let's start drawing our circuit from scratch!

### Step 1: Add a transistor as a node

From the CircuiTikZ manual, the transistor type is named nigfete, and it can be added with the \node command at any coordinate. It has four predefined connectors, shown in the next illustration, where we can link paths to it.

Transistor anchors

In the circuit code, we have added the transistor as a node at the point with coordinates (1.7,3) and we named it (switch) to get access to its four connectors. This corresponds to line (17) of the code:

\node[nigfete,rotate=90,label=S] (switch) at (1.7,3){};

By default, the transistor orientation is as shown above; to get the right one, we have rotated it by 90 degrees using the option rotate=90. In addition, the switch has a label S, which is set by the option label=S.

### Step 2: Draw dc voltage source

From the CircuiTikZ manual, the dc voltage source corresponds to the element named battery1, which belongs to the first components category. Thus, it will be placed along a path. It is drawn from the origin to the point (0,3) and then linked to the transistor through its connector (switch.D):

\draw (0,0) to[battery1,invert,l=$v_{in}$] ++(0,3) -- (switch.D);

• To flip a component in CircuiTikZ, we add the option invert to the component in question.

Inverting the DC voltage source polarity

The DC voltage source has the label v_{in}, which is defined by the option l=$v_{in}$. We can specify the label position (right or left of the electrical component) using l^=$v_{in}$ or l_=$v_{in}$.
### Step 3: Draw an inductor

Now, we have to move from the right of the transistor (switch.E) by 1cm along the x-axis and save the coordinate using the command coordinate, as follows:

\draw (switch.E) -- ++(1,0) coordinate(a1);

From the point (a1), we draw an inductor along the x-axis (3cm) and there we save the second point (a2):

\draw (a1) to[cute inductor,l=L,i>^=$i_L$] ++(3,0) coordinate(a2);

The inductor shape is obtained using the option cute inductor. In the same manner as for the DC voltage source, we have added the label to the inductor using the option l=L.

Inductor styles: L vs cute inductor

The inductor current is drawn at its input, where its direction and label position can be specified as follows:

Current arrow and label positions (left to right direction).

Current arrow and label positions (right to left direction).

### Step 4: Draw a resistor

From the point (a2), we draw a straight line along the x-axis and from there we draw a resistor (using the option resistor, or simply R) along the y-axis, as follows:

\draw (a2) -- ++(1.5,0) to[R,l_=R,v^>=$v_o$,i>_=$i_o$] ++(0,-3) -- ++(-1.5,0) coordinate(a3);

### Step 5: Add a ground

The ground is added as a node at the point named (a3):

\node[ground] at (a3) {};

Ground added as a node in two cases: with and without rotation

From the point (a3), we have drawn a straight line to the origin and we have saved the point (a4) to be used later to draw the diode:

\draw (a3) -- ++(-3,0) coordinate(a4);
\draw (a4) -- (0,0);

### Step 6: Draw a capacitor and a diode

Now, we draw the diode between the points (a1) and (a4), and the capacitor between the points (a2) and (a3):

\draw (a3) to[C,invert,*-*,l=C,v<=$\:v_c$] (a2);
\draw (a4) to[D*,l_=D,*-*] (a1);

• We have added the option *-* to both components to highlight the connection points. For a one-sided connection, you can use -* or *-. For small circles, you can use o instead of *.

Different Capacitor styles

### Step 7: Change components size

Components can be scaled using the command \ctikzset, as follows:

\ctikzset{
  resistors/scale=0.7,
  capacitors/scale=0.7,
  diodes/scale=0.7,
  inductors/coils=6
}

Without components scaling: DC-DC buck converter drawn in LaTeX using CircuiTikZ

At this level, we have reached the end of this tutorial. Share with us your thoughts, reach us at [email protected], we will be happy to hear from you!
# How to create a scatterplot in R using ggplot2 with transparency of points?

A scatterplot is used to observe the relationship between two continuous variables. If the sample size is large, the points on the plot overlap and the plot does not look appealing. Interpreting such scatterplots is also not an easy task; therefore, we can increase the transparency of the points to make the plot more readable. We can do this with the alpha argument of geom_point in ggplot2, where alpha ranges from 0 (fully transparent) to 1 (fully opaque).

## Example

Consider the below data frame:

> set.seed(123)
> x <- rnorm(5000)
> y <- rnorm(5000, 0.5)
> df <- data.frame(x, y)
> library(ggplot2)
> ggplot(df, aes(x, y)) + geom_point()

## Output

> ggplot(df, aes(x, y)) + geom_point(alpha = 0.10)

## Output

> ggplot(df, aes(x, y)) + geom_point(alpha = 0.05)

## Output

Published on 12-Aug-2020 12:59:34
# Condition on a vector field to be a diffeomorphism

1. Jun 5, 2016

### kroni

Hi everybody,

Let $V(x)$ be a vector field on a manifold ($\mathbb{R}^2$ in my case). I am looking for a condition on $V(x)$ under which the map $x^\mu \rightarrow x^\mu + V^\mu(x)$ is a diffeomorphism. I read some documents about flows and integral curves for solving ODEs, but I fail to find a generic condition that prevents $V$ from sending two points to the same coordinate. I thought about the generators of the diffeomorphism group, but they are only defined infinitesimally.

Thanks,
Clément

2. Jun 5, 2016

### wrobel

There are a lot of possible approaches to sufficient conditions. For example, if $\sup_{x\in\mathbb{R}^2}\Big\|\frac{\partial V}{\partial x}\Big\|$ is small enough, then it is a diffeomorphism. A monotonicity assumption can also help. Perhaps the book Topics in Nonlinear Functional Analysis by L. Nirenberg would be of use.

3. Jun 5, 2016

### kroni

Monotonicity works only in 1D, and $\sup_{x\in\mathbb{R}^2}\Big\|\frac{\partial V}{\partial x}\Big\|$ is a non-local condition. I will look in the book you advise. I find this problem really interesting; it may have been treated again and again, but it is interesting.
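The smallness condition in post #2 can be made precise with a short argument. The following sketch is an addition of the editor, not from the thread, and assumes $V$ is $C^1$ on $\mathbb{R}^2$ with $\sup_x \|DV(x)\| \le k < 1$ (so $V$ is Lipschitz with constant $k$), showing that $F(x) = x + V(x)$ is injective:

```latex
% If F(x) = F(y), then x - y = V(y) - V(x), which contradicts the bound below
% unless x = y. Injectivity follows from the reverse triangle inequality:
\begin{align*}
\|F(x) - F(y)\|
  &= \|(x - y) + (V(x) - V(y))\| \\
  &\ge \|x - y\| - \|V(x) - V(y)\| \\
  &\ge (1 - k)\,\|x - y\| \; > \; 0 \quad \text{for } x \neq y.
\end{align*}
% Surjectivity: for each target z, the map x \mapsto z - V(x) is a contraction,
% so by the Banach fixed-point theorem it has a fixed point x with x + V(x) = z.
```

Smoothness of the inverse then follows from the inverse function theorem, since $\|DV(x)\| < 1$ makes $DF(x) = I + DV(x)$ invertible at every point.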
# zbMATH — the first resource for mathematics

On isopart parameters of complete bipartite graphs and n-cubes. (English) Zbl 0585.05027

A graph H is said to be G-decomposable if H can be decomposed into subgraphs $$H_ 1,H_ 2,...,H_ n$$ such that they are all isomorphic to G. Fink introduced three "isopart parameters", $$p_ 0(G)$$, $$r_ 0(G)$$, and $$f_ 0(G)$$. The numbers $$p_ 0(G)$$ and $$r_ 0(G)$$ are respectively the minimum order and minimum degree of regularity among all connected, regular, G-decomposable graphs. The parameter $$f_ 0(G)$$ is the smallest number t ($$\geq 2$$) for which there exists a connected regular graph H decomposable into t copies of G. The authors determine the three parameters for all complete bipartite graphs and the n-cube.

Reviewer: Z. Ma

##### MSC:

05C70 Edge subsets with special properties (factorization, matching, partitioning, covering and packing, etc.)
05C99 Graph theory

##### References:

[1] BEHZAD M., CHARTRAND G., LESNIAK-FOSTER L.: Graphs & Digraphs. Wadsworth International, Belmont, CA, 1979. · Zbl 0403.05027
[2] FINK J. F.: Every graph is an induced isopart of a connected regular graph. Submitted for publication. · Zbl 0614.05052
[3] FINK J. F.: On smallest regular graphs with a given isopart. Submitted for publication. · Zbl 0607.05039
[4] FINK J. F., RUIZ S.: Every graph is an induced isopart of a circulant. Submitted for publication. · Zbl 0614.05052
[5] KÖNIG D.: Über Graphen und ihre Anwendung auf Determinantentheorie und Mengenlehre. Math. Ann. 77, 1916, 453-465. · JFM 46.0146.03
[6] PETERSEN J.: Die Theorie der regulären Graphen. Acta Math. 15, 1891, 193-220. · JFM 23.0115.03
[7] REISS M.: Über eine Steinersche kombinatorische Aufgabe. J. reine u. angew. Mathematik 56, 1859, 326-344.
[8] WILSON R. M.: Decompositions of complete graphs into subgraphs isomorphic to a given graph. Proceedings of the Fifth British Combinatorial Conference (Univ. Aberdeen, Aberdeen, 1975) 647-659. Congressus Numerantium, No. XV, Utilitas Math., Winnipeg, Man. (1976).
# Problem with a Resource Manager

## Recommended Posts

I'm trying to utilize a variation of Scripts and Resource Management from the "FPS in DirectX" book. There is one thing in particular in the book that I wanted to change that is not working for me, and I know *what* is going on, but I'm not sure why. Hopefully someone here can throw the info my way ;)

Basically, in my global engine there is a function to get a pointer to my Script Resource Manager. The function is public, and the Manager* is protected. Now I have another global class that gets the pointer to my Manager via the GetScriptManager() call, and then asks it to add a Script...

ResourceManager* manager = GetScriptManager();
Script* setup = manager->Add("setup.txt");

When the Manager adds a Script, it uses the push_back of its vector to store the new Script, and then returns a pointer to it. My problem is that when the pointer comes back, it is invalid. I debugged enough to see that right before the return, the most recently added element of the vector is OK, so the only thing I can think of is that there is a problem since the Manager itself (and therefore its members) is protected. If that is the case, then I'm just not really clear on why its member function can return a pointer, but that pointer becomes invalid.

The basic gist of my code is as such (inside the Manager's Add function):

Script *resource = new Script("setup.txt");
m_Variables->push_back(*resource);
return &(m_Variables->back());

I'd appreciate any explanation anyone can give as to why this happens... hopefully my description is clear enough.

Thanks for any input,
Antrim

##### Share on other sites

I think your problem lies here:

return &(m_Variables->back());

as m_Variables->back() will be generated as a local value, and loses its value when you leave the function.
Why don't you just return the iterator for your vector? That works similarly to pointers; otherwise the best approach would be to store pointers in your vector.

##### Share on other sites

Quote: I think your problem lies here: return &(m_Variables->back()); as m_Variables->back() will be generated as a local value, and loses its value when you leave the function.

back() returns a reference, which has the same address as the object inside the vector. Pointers to objects stored in a vector are not stable: they can become invalid when you insert, add or remove items. Make sure you don't do anything at all with the vector until you are done using the pointer to the script. The pointer to the script is stable if:

- You use a list
- You store pointers to scripts instead of scripts

##### Share on other sites

By the way:

Script *resource = new Script("setup.txt");
m_Variables->push_back(*resource);
return &(m_Variables->back());

You know you have a memory leak there? Correct it would be:

Script *resource = new Script("setup.txt");
m_Variables->push_back(*resource);
delete resource;
return &(m_Variables->back());

##### Share on other sites

Ok, I think that switching the vector to store pointers to Scripts might be the easiest fix (less code to change). Also, I hadn't thought about the implications of manipulating the vector again before I was done using the pointer I just grabbed... not sure how likely it is, but better to be safe.

I also didn't realize that sending &(m_Variables->back()) as the return value would lose its value. I thought returning it would just give me a pointer to the location of the object that was referenced by back(). I suppose where I'm confused is that a function such as:

char* test() { char* value = "abc"; return value; }

would successfully return the value, even though value is a locally declared variable. However, returning &(m_Variables.back()) loses its value. Something about that just doesn't seem right to me.
Maybe someone can beat some sense into me as to why this is. As for the memory leak... heh, yeah, thanks for pointing that out. I, for some reason, was just thinking that deleting resource would get rid of the object in the vector. Forgot that it makes its own copy. Thanks for the info, Antrim

##### Share on other sites

char* test() { char* value = "abc"; return value; }

this also only works if you are lucky... as there is no guarantee when the memory that value is pointing to will be reused.... A way to get this to work would be:

static char* value;

or by returning a string, which will then be copied.

##### Share on other sites

Really? Man, I always thought that was an OK thing to do. Anyway, thanks for clearing stuff up for me. I just switched the vector to holding pointers and everything is working fine now. Glad I got straightened out before I completely butchered my code.

By the way, just to make sure I understand how the vector works... Now that I'm using pointers, not deleting "resource" doesn't cause a memory leak, correct... doing so would actually delete the Script I'm wanting to use, right?

Script* resource = new Script("setup.txt");
m_Variables->push_back(resource);
return m_Variables.back();

Just wanted to make sure I'm understanding vector correctly... in that it just makes its own copy of whatever you push back... in this case, the pointer, which will take care of itself on function exit, right? Thanks again for all the help, it's much appreciated, Antrim

##### Share on other sites

In the function itself you no longer have a memory leak, but formally you still have a memory leak for your whole program ;-) But this is not really bad, as all memory still allocated by a program will be "freed" at exit. Correct would be a loop somewhere over your vector, where you call delete for each element, and then a clear() for the vector itself.
##### Share on other sites

ah, yeah, I actually do have a function in the ResourceManager that iterates through the vector and deletes each pointer, then does a clear(). I also remembered to put the "delete" call in ResourceManager::Remove() as well, so I think all the leaks should be covered ;)
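The fix the thread settles on — storing pointers in the vector and deleting them in the manager — can be sketched like this. Script, ResourceManager and the member names are simplified stand-ins for the code discussed above, not the actual classes from the book:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal stand-in for the Script class from the thread.
struct Script {
    explicit Script(std::string filename) : name(std::move(filename)) {}
    std::string name;
};

// The vector stores pointers, so the Script objects themselves never move:
// a returned Script* stays valid even when a later push_back forces the
// vector to reallocate its storage.
class ResourceManager {
public:
    Script* Add(const std::string& filename) {
        Script* resource = new Script(filename);
        m_scripts.push_back(resource);  // copies only the pointer
        return m_scripts.back();        // same address as 'resource'
    }
    ~ResourceManager() {
        // Delete every element, then clear the vector itself.
        for (Script* s : m_scripts) delete s;
        m_scripts.clear();
    }
private:
    std::vector<Script*> m_scripts;
};
```

Returning `resource` directly would work just as well; either way there is no extra copy of the Script and no dangling pointer.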
# First week in-class assignments

Make sure that you have looked at the preliminaries and review section. We will work on solutions to the following problems from the text.

Chapter 1.

1. (Problem 11) Prove that $(A\cup B)\times C = (A\times C)\cup(B\times C)$
2. (Problem 19) Let $f:A\to B$ and $g:B\to C$ be invertible mappings, that is, functions such that $f^{-1}$ and $g^{-1}$ exist. Prove that $(g\circ f)^{-1} = f^{-1}\circ g^{-1}$.
3. (Problem 24) Let $f:X\to Y$ be a map with $A_1,A_2\subset X$ and $B_1, B_2\subset Y$.
   • Prove that $f(A_1\cup A_2) = f(A_1)\cup f(A_2)$.
   • Prove that $f(A_1\cap A_2)\subset f(A_1)\cap f(A_2)$. Give an example in which equality fails.
   • Prove that $f^{-1}(B_1\cup B_2)=f^{-1}(B_1)\cup f^{-1}(B_2)$ where $f^{-1}(B) = \{x\in X : f(x)\in B\}$
   • Prove that $f^{-1}(B_1\cap B_2)=f^{-1}(B_1)\cap f^{-1}(B_2)$.
   • Prove that $f^{-1}(Y-B_1) = X-f^{-1}(B_1)$.
4. (Problem 25) Which of the following relations are equivalence relations? For those which are, describe the associated partition. For those which aren't, explain why not.
   • $x\sim y$ in $\mathbb{R}$ if $x\ge y$.
   • $m\sim n$ in $\mathbb{Z}$ if $mn>0$.
   • $x\sim y$ in $\mathbb{R}$ if $\vert x-y\vert\le 4$.
   • $m\sim n$ in $\mathbb{Z}$ if $m\equiv n\pmod{6}$

Chapter 2.

1. (Problem 4) Prove that $x+4x+7x+\cdots+(3n-2)x=\frac{n(3n-1)x}{2}$ for all $n\in \mathbb{N}$.
2. (Problem 15) For each of the following pairs of numbers $a$, $b$, find integers $r$ and $s$ so that $ar+bs=\gcd(a,b)$.
   • $14$, $39$
   • $234$, $165$
   • $1739$, $9923$
   • $471$, $562$
   • $23771$, $19945$
   • $-4357$, $3754$
3. (Problem 17) Define the Fibonacci numbers by the recurrence relation $f_n=f_{n-1}+f_{n-2}$ with $f_1=1$ and $f_2=1$. Prove the following:
   • $f_{n}<2^{n}$.
   • $f_{n+1}f_{n-1}=f_{n}^2+(-1)^{n}$
   • $f_{n} = (\phi^{n}-\overline{\phi}^{n})/\sqrt{5}$ where $\phi=(1+\sqrt{5})/2$ and $\overline{\phi}=(1-\sqrt{5})/2$.
   • Prove that $\lim_{n\to\infty} f_{n}/f_{n+1} = -\overline{\phi}$.
   • Prove that successive Fibonacci numbers are relatively prime.
4. (Problem 22) Let $n\in \mathbb{N}$. Use the division algorithm to prove that every integer is congruent mod $n$ to exactly one of the integers $0,1,\ldots, n-1$. Conclude that if $r$ is an integer, then there is exactly one $s$ in $\mathbb{Z}$ such that $0\le s<n$ and $[r]=[s]$. Conclude that the integers are partitioned into $n$ disjoint congruence classes mod $n$.
5. (Problem 25) Show that the least common multiple of two integers $a$ and $b$ is their product if and only if their greatest common divisor is one.
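For Problem 15 the coefficients $r$ and $s$ fall out of the extended Euclidean algorithm: run the usual Euclidean divisions and back-substitute. A small sketch (my own illustration, not part of the assignment text), stated for non-negative inputs:

```cpp
#include <cassert>
#include <tuple>

// Extended Euclidean algorithm: returns (g, r, s) with a*r + b*s == g,
// where g = gcd(a, b) for non-negative a and b.
std::tuple<long long, long long, long long> extended_gcd(long long a, long long b) {
    if (b == 0) return {a, 1, 0};             // gcd(a, 0) = a = a*1 + b*0
    auto [g, r, s] = extended_gcd(b, a % b);  // b*r + (a % b)*s == g
    return {g, s, r - (a / b) * s};           // back-substitute one division step
}
```

For example, for the first pair this yields $14\cdot 14 + 39\cdot(-5) = 1 = \gcd(14,39)$.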
# On the completeness of topologically isomorphic spaces

Let $(E_1, \tau_1)$ be a locally convex space and let $(E_2, \tau_2)$ be a complete locally convex space. Assume that $T: (E_1, \tau_1) \longrightarrow (E_2, \tau_2)$ is a topological isomorphism (i.e. $T$ is linear, bijective, continuous and its inverse $T^{-1}$ is continuous too). Is it true that the space $(E_1, \tau_1)$ is necessarily complete? Thank you for all the advice / comments.
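For what it is worth, the answer is yes, by a standard argument (my own sketch, not taken from the post): continuous linear maps between topological vector spaces are automatically uniformly continuous, so $T$ identifies the two canonical uniform structures.

```latex
\begin{enumerate}
  \item A continuous linear map between topological vector spaces is
        uniformly continuous, so $T$ and $T^{-1}$ carry Cauchy nets to
        Cauchy nets.
  \item If $(x_\alpha)$ is a Cauchy net in $E_1$, then $(Tx_\alpha)$ is
        Cauchy in $E_2$; by completeness of $E_2$, $Tx_\alpha \to y$
        for some $y \in E_2$.
  \item By continuity of $T^{-1}$,
        $x_\alpha = T^{-1}(Tx_\alpha) \to T^{-1}y$,
        so every Cauchy net in $E_1$ converges and $E_1$ is complete.
\end{enumerate}
```

Note that local convexity plays no role here; the argument works for arbitrary topological vector spaces.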
# Problem with \mkbibdateapalongextra of biblatex-apa

biblatex-apa gives me this:

! Undefined control sequence.
<argument> \mkbibdateapalongextra {labelyear}{labelmonth}{labelday}\iffieldu...

I'm using the default (english) language, and have done the:

\DeclareLanguageMapping{norsk}{norsk-apa}
\DeclareLanguageMapping{english}{american-apa}

thing. Here's a short example:

\documentclass[english]{memoir}
\usepackage{babel}
\usepackage{roffe}
\usepackage[backend=biber,date=short,maxcitenames=2,style=apa]{biblatex}
\DeclareLanguageMapping{norsk}{norsk-apa}
\DeclareLanguageMapping{english}{american-apa}
\begin{document}
\cite{R-base}
\printbibliography
\end{document}

This one replicates the problem.

• Without a MWE we can only guess. Did you put \DeclareLanguageMapping{english}{american-apa} after babel and biblatex? Do you load the language english or american in babel? – moewe Sep 17 '13 at 7:49
• Welcome to TeX.SX! Your question would require for a definite/specific answer a Minimal Working Example, or MWE – alandella Sep 17 '13 at 7:51
• I've tried to build an MWE, but, and here's the rub, none that replicates this error. Which is why I'm stymied. Is there anything I should look for in the .log file? – roffe Sep 17 '13 at 15:10
• @roffe Well, if you have no way of reproducing the error, it is quite hard to track down the problem. You could start off with the affected document and delete the unnecessary parts, thereby arriving at a MWE. Did you try deleting all the temporary files and recompile? Maybe an update can help. – moewe Sep 17 '13 at 16:36
• (pastebin.com/gNDrMnMk) replicates the problem. – roffe Sep 18 '13 at 12:42

Starting from biblatex v3.8, biblatex-apa v7.5 an explicit \DeclareLanguageMapping should not be needed any more. The mapping is automatically done for you with \DeclareLanguageMappingSuffix{-apa}. Of course this can only work properly if biblatex-apa comes with an .lbx file for your language.
Update biblatex, Biber and biblatex-apa to their newest versions if you experience problems with an undefined \mkbibdateapalongextra. The old version of this answer is left below in case you are stuck with an old version of biblatex or biblatex-apa.

If you use biblatex-apa you will need a language mapping for each used language (at least the main language) to its -apa counterpart

\DeclareLanguageMapping{american}{american-apa}

if your document is american. See also problems using apa6e with biblatex-apa. This is pointed out in the biblatex-apa documentation, § 3:

Specify the style in the usual way when loading biblatex. If you are using babel:

\usepackage[american]{babel}
\usepackage{csquotes}
\usepackage[style=apa]{biblatex}
\DeclareLanguageMapping{american}{american-apa}

Refer to section 3.2 Localisation for a few more hints.

That means for each language you load with babel or polyglossia (but there things are a bit more complicated), you will need a mapping. You will also have to provide a language mapping if you don't load babel at all. In that case the default language is English and you need \DeclareLanguageMapping{english}{english-apa}.

Whenever you declare a language mapping, biblatex uses the new file (in our case british-apa.lbx) if need be, that is if the mapped language is requested (in our case english). british-apa.lbx contains some additional "BibliographyExtras" declared by \DefineBibliographyExtras{british}. These extras are only available for the exact language they are specified for (here british). So even though we have forced biblatex to load british-apa.lbx instead of english.lbx we cannot use the "BibliographyExtras" since our document requests them for english only, but they are only available for british.

The relevant part of the documentation, § 4.11.8 Custom Localization Modules, p. 232 states:

Note that \DeclareLanguageMapping is not intended to handle language variants (e.g., AmericanEnglish vs.
BritishEnglish) or babel language aliases (e.g., USenglish vs. american). For example, babel offers the USenglish option which is similar to american. Therefore, biblatex ships with an USenglish.lbx file which simply inherits all data from american.lbx (which in turn gets the 'strings' from english.lbx).

In other words, the mapping of language variants and babel language aliases happens on the file level, the point being that biblatex's language support can be extended simply by adding additional lbx files.

The simplest solution would be to use british or american instead of the "generic" english. The following MWE works on my machine.

\documentclass[british]{article}
\usepackage{babel}
\usepackage{csquotes}
\usepackage[backend=biber,date=short,maxcitenames=2,style=apa]{biblatex}
\DeclareLanguageMapping{british}{british-apa}
\begin{filecontents}{\jobname.bib}
@Manual{R-base,
  title = {R: A Language and Environment for Statistical Computing},
  author = {{R Development Core Team}},
  organization = {R Foundation for Statistical Computing},
  year = {2008},
  isbn = {3-900051-07-0},
  url = {http://www.R-project.org},
}
\end{filecontents}
\begin{document}
\cite{R-base}
\nocite{*}
\printbibliography
\end{document}

If you do not want to switch to a language other than english, you can go with the fix suggested in Polyglossia and biblatex-apa. Copy british-apa.lbx to a place LaTeX can find it, rename it to english-apa.lbx and replace all occurrences of british with english (the most important of which is \DefineBibliographyExtras{british} which becomes \DefineBibliographyExtras{english})
# x64 Assembly zeroing an array (8 bytes at a time)

Is there a better way of implementing this other than using SIMD instructions? What is the best way of dealing with arrays not divisible by 8, as in the code where if there are less than 8 bytes left to zero they just get zeroed 1 by 1? Maybe it is faster to check how many bytes there are left and then zero them 2 bytes or 4 bytes at a time? Does the checking outweigh the cost of doing them 1 by 1? This is just a test for me to try to learn assembly, so any, even small, improvements and tips are greatly appreciated. Thank you

.code
ZeroArray proc
    cmp edx, 0
    jle Finished               ; Check if count is 0
    cmp edx, 8
    jl SetupLessThan8Bytes     ; Check if counter is less than 8
    mov r8d, edx               ; Storing the original count
    shr edx, 3                 ; Bit shifts the counter to the right by 3 (equal to dividing by 8), works because 2^3 is equal to 8
    mov r9d, edx               ; Stores the divided count to be able to check how many single byte zeros the program has to do
MainLoop:
    mov qword ptr [rcx], 0     ; Set the next 8 bytes (qword) to 0
    add rcx, 8                 ; Move pointer along the array by 8 bytes
    dec edx                    ; Decrement the counter
    jnz MainLoop               ; If counter is not equal to 0 jump to MainLoop
    shl r9d, 3                 ; Bit shifts the stored divided counter to the left by 3 (equal to multiplying by 8), 2^3 again
    sub r8d, r9d               ; Subtracts the counts from each other; if it equals zero all bytes are zeroed, otherwise r8d equals the amount of bytes left
    je Finished
SetFinalBytesLoop:
    mov byte ptr [rcx], 0      ; Sets the last byte of the array to 0
    inc rcx
    dec r8d
    jnz SetFinalBytesLoop
Finished:
    ret
SetupLessThan8Bytes:
    mov r8d, edx               ; Mov the value of edx into r8d so the same code can be used in SetFinalBytesLoop
    jmp SetFinalBytesLoop
ZeroArray endp
end

• You are using MASM and Visual Studio, is this correct? – xvk3 Sep 12 '17 at 18:49
• Yes, I am, and I'm calling the function from C++. @Will – Signekatt Sep 12 '17 at 18:50
• Is the second parameter of ZeroArray the number of qwords or number of bytes?
– xvk3 Sep 12 '17 at 19:05
• There are lots of different ways of going about this. Which one is fastest tends to change from CPU to CPU. For example, looking at the source for the MSVC memset function (which is basically what you are doing), you can see it testing whether the current CPU supports "Enhanced Fast Strings" as it selects which approach to use. As you say this is for educational purposes, how about looking at the stosb/stosw/stosd/stosq instructions? Combined with the rep prefix they can produce small, easy-to-understand code that is a common alternative if you don't want to use SIMD instructions. – David Wohlferd Sep 12 '17 at 22:53

## Shave off a byte

    cmp edx, 0
    jle Finished               ; Check if count is 0

Using cmp is certainly not wrong, but the optimal way to check for any inappropriate counter value would be to use the test instruction.

    test edx, edx
    jle Finished               ; Check if count is 0

Bypassing when the counter is zero is fine, but perhaps a negative counter value should rather be considered an error and handled accordingly?

## Don't lose yourself in jumping around

    cmp edx, 8
    jl SetupLessThan8Bytes     ; Check if counter is less than 8
    mov r8d, edx               ; Storing the original count
    ...
SetupLessThan8Bytes:
    mov r8d, edx
    jmp SetFinalBytesLoop

When the counter in EDX is smaller than 8, you jump to SetupLessThan8Bytes where you just make a convenient copy of the counter and then jump again to SetFinalBytesLoop. If you move the instruction that makes a copy of the original counter to right before where you compare the counter to 8, you can save yourself from writing 3 lines of code (a label, a mov, and a jmp). Moreover the program becomes clearer.

    mov r8d, edx               ; Storing the original count
    cmp edx, 8
    jl SetFinalBytesLoop       ; Check if counter is less than 8

## You don't even have to compare to 8 at all!

When you shift the counter in EDX 3 times to the right in order to find out how many qwords you have to process, you can look at the zero flag.
If the ZF is set (meaning no qwords at all), you instantly know that the counter is in the range [1,7], and so the above snippet becomes:

    mov r8d, edx               ; Storing the original count
    shr edx, 3                 ; Equal to dividing by 8
    jz SetFinalBytesLoop       ; Jump if counter is less than 8

## Easier calculation of leftovers

    mov r9d, edx
    ...
    shl r9d, 3
    sub r8d, r9d
    je Finished
SetFinalBytesLoop:

The way you find out about the number of left over bytes is too complicated. It's correct but needlessly involved. Basically all it takes is ANDing the original counter with 7 to extract the lowest 3 bits. Simpler, shorter, and using one register less, which in future programs will always be handy:

    and r8d, 7
    jz Finished
SetFinalBytesLoop:

## Smaller instructions are generally better

With the 32-bit immediate value, the mov instruction in the MainLoop is quite long (7 bytes). You can store the zero in RAX and move that to memory. This also eliminates the need for the "qword ptr" qualifier:

    xor rax, rax               ; Equivalent to MOV EAX, 0
MainLoop:
    mov [rcx], rax             ; Set the next 8 bytes (qword) to 0
    add rcx, 8                 ; Move pointer along the array by 8 bytes
    dec edx                    ; Decrement the counter
    jnz MainLoop               ; If counter is not equal to 0 jump to MainLoop

## Your program with all the above applied

    xor rax, rax
    test edx, edx
    jle Finished               ; Check if count is LE 0
    mov r8d, edx               ; Copy of the original count
    shr edx, 3                 ; Gives number of qwords
    jz SetFinalBytesLoop       ; Jump if counter is less than 8
MainLoop:
    mov [rcx], rax             ; RAX=0 Set the next 8 bytes (qword) to 0
    add rcx, 8                 ; Step per 8 bytes
    dec edx                    ; Dec the counter
    jnz MainLoop
    and r8d, 7                 ; Remainder from division by 8
    jz Finished
SetFinalBytesLoop:
    mov [rcx], al              ; AL=0 Sets the last bytes of the array to 0
    inc rcx                    ; Step per 1 byte
    dec r8d                    ; Dec counter
    jnz SetFinalBytesLoop
Finished:
    ret

I've moved the xor rax, rax higher up in the code so SetFinalBytesLoop can benefit from using the register AL vs the immediate 0.
## The optimization

The most important optimization that you can apply to your program is making sure that the qword value that you write is aligned on a qword boundary, so a memory address that is divisible by 8. The extra alignment loop will at most iterate 7 times.

    xor rax, rax
    test edx, edx
    jle Finished               ; Check if count is LE 0
    jmp TestAligned
AlignLoop:
    mov [rcx], al
    inc rcx
    dec edx
    jz Finished
TestAligned:
    test rcx, 7                ; Is this a qword aligned address?
    jnz AlignLoop              ; Not yet!
    mov r8d, edx               ; Copy of the (reduced) original count
    shr edx, 3                 ; Gives number of qwords
    jz SetFinalBytesLoop       ; Jump if counter is less than 8
MainLoop:
    mov [rcx], rax             ; RAX=0 Set the next 8 bytes (qword) to 0
    add rcx, 8                 ; Step per 8 bytes
    dec edx                    ; Dec the counter
    jnz MainLoop
    and r8d, 7                 ; Remainder from division by 8
    jz Finished
SetFinalBytesLoop:
    mov [rcx], al              ; AL=0 Sets the last bytes of the array to 0
    inc rcx                    ; Step per 1 byte
    dec r8d                    ; Dec counter
    jnz SetFinalBytesLoop
Finished:
    ret
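To make the control flow of the final routine easy to check against a reference, here is the same strategy in C++ (my own sketch, not the reviewed code and not the MSVC memset): zero byte-by-byte up to an 8-byte boundary, then 8 bytes at a time, then the remainder (count & 7) byte-by-byte.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// C++ mirror of the assembly above. std::memset on 8 bytes stands in for
// the aligned qword store (and sidesteps strict-aliasing issues in C++).
void ZeroArray(unsigned char* p, int count) {
    if (count <= 0) return;                      // jle Finished
    while (count > 0 &&
           (reinterpret_cast<std::uintptr_t>(p) & 7) != 0) {
        *p++ = 0;                                // AlignLoop
        --count;
    }
    int qwords = count >> 3;                     // shr edx, 3
    while (qwords-- > 0) {
        std::memset(p, 0, 8);                    // mov [rcx], rax
        p += 8;
    }
    for (int rest = count & 7; rest > 0; --rest) // and r8d, 7
        *p++ = 0;                                // SetFinalBytesLoop
}
```

Zeroing an intentionally misaligned 31-byte slice of a larger buffer exercises all three loops (alignment, qword, tail) at once.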
Find the remainder when $1201×1203×1205×1207$ is divided by $6.$

If $578xy6$ is divisible by $18,$ then the maximum value of $x$ can be?

A four digit number N is of the form XYZZ. Each letter stands for a digit. The successor of N is of the form XPQQ. The predecessor of N is of the form XYZU. Can you find the value of Z+Q+U?

How many numbers are there between $100$ and $1000$ which have exactly one digit equal to $7\ ?$

If $x=\left(a+\sqrt{a^2+b^3}\right)^{1/3}+\left(a-\sqrt{a^2+b^3}\right)^{1/3},$ then what is the value of $x^3+3bx-2a?$

If $9^{3x+2}=27^{5x+1},$ then what will be the value of $x?$

What will be the remainder when $495149514951\cdots$ up to $900$ digits is divided by $101?$

The difference between the two digits of a number less than $100$ is $2.$ If $\dfrac{3}{2}$ times the sum of the digits be diminished from it, the digits will be reversed. Find the number.

A number consists of two digits; the digit in the ten's place exceeds that in the unit's place by $5,$ and if $5$ times the sum of the digits be subtracted from the number, the digits will be reversed. Find the number.
## School of Physics

Paper IPM / Physic / 11553

Title: Calculating the jet-quenching parameter in STU background

Author(s):
1 K. Bitaghsir Fadafan
2 B. Pourhassan
3 J. Sadeghi

Status: Preprint
Journal:
Year: 2010
Supported by: IPM

Abstract: In this paper we use the AdS/CFT Correspondence to compute the jet-quenching parameter in an N = 2 thermal plasma. We add a constant electric field to the background and find the effect of the electric field on the jet-quenching parameter. Also we include higher derivative terms and obtain the first-order correction for the jet-quenching parameter.
# Dual space of l-infinity

Why can an element of the dual space of l-infinity be represented as a sum of l1 and c0 elements?

- But that is not true, as is well known. (Hint: Ultrafilters on $\mathbb{N}$.) – Harald Hanche-Olsen Mar 22 '11 at 9:46

It is the Fremlin and Talagrand paper, and I didn't understand it. – Ravil Mudarisov Mar 22 '11 at 10:18

Perhaps you could be more specific in your question - you could at least provide info about the article you're studying. Did you mean D. H. Fremlin and M. Talagrand: A Gaussian Measure on $l^\infty$ jstor.org/stable/2243023 ? – Martin Sleziak Mar 22 '11 at 11:00

Sorry. Yeah, I meant the Fremlin and Talagrand article. – Ravil Mudarisov Mar 22 '11 at 11:59

Obviously, the OP intended to ask about this sentence "$f\in\ell_\infty^*$ is the sum of an element of $\ell_1$ and an element null on $c_0$" from the paper D. H. Fremlin and M. Talagrand: A Gaussian Measure on $l^\infty$ http://jstor.org/stable/2243023 (which is a different claim from what was in the question). The authors refer to the book Day, M. (1973). Normed Linear Spaces. Springer, Berlin. I was not able to find the exact place in Day's book where this is shown, but I think that for this special case it is relatively easy.

For $f\in\ell_\infty^*$ put $a_i=f(e^i)$. Then the sequence $a=(a_i)$ belongs to $\ell_1$. (Since $\sum\limits_{i=1}^n |a_i| = \sum\limits_{i=1}^n |f(e^i)| = f(\sum\limits_{i=1}^n \varepsilon_ie^i) \le \lVert f \rVert$, where $\varepsilon_i=\pm1$ are chosen according to the signs of $f(e^i)$.) Now, if $x_n\to 0$, then $$f(x)-a^*(x)= \lim\limits_{n\to\infty} \left[f\Big(\sum\limits_{i=1}^n x_ie^i\Big)-\sum\limits_{i=1}^n a_ix_i\right]=0.$$ I hope I haven't overlooked something and that someone will provide the reference to the result (probably more general) which the authors of the above-mentioned paper had in mind.

- This answers my question. Thank you. – Ravil Mudarisov Mar 22 '11 at 12:05

+1 for finding the question that was supposed to be asked.
(The proof is tantamount to showing that the dual of $c_0$ is $\ell_1$ and then saying that $\ell_\infty^* = c_0^\perp \oplus c_0^*$.) – Yemon Choi Mar 22 '11 at 19:27

Thanks. Sometimes I think that the main aim of a student is to find that kind of questions and defects.) Good comment. – Ravil Mudarisov Mar 22 '11 at 23:03

Now I accidentally stumbled upon the Hewitt-Yosida decomposition of a finitely additive measure into purely additive and $\sigma$-additive parts. See e.g. books.google.com/… If I understand it correctly, after representing the functionals as finitely additive measures it is basically the same thing. It is summarized nicely in Theorem 6.31 of Aliprantis-Border - the page is not viewable at Google Books, but you can find the claim here: thales.doa.fmph.uniba.sk/sleziak/texty/rozne/pozn/books/… – Martin Sleziak May 10 '11 at 14:01

The fact stated above by Martin is a special case of the general property of a bounded functional on a von Neumann algebra - it can always be decomposed into a sum of a normal functional (in other words an image of a functional in the predual, in this case a functional represented by a sequence in $l^1$) and a singular functional (a 'highly non-normal' functional, in the special case a functional vanishing on $c_0$). One can even achieve the decomposition respecting the functional norms in a suitable sense. The general result together with some discussion can be found in the first volume of Takesaki's 'Theory of Operator Algebras'.
# Polymethanal (paraformaldehyde)

Polymethanal is a common fixative in biology. It is used to reduce degradation in cells, tissue and entire organisms before further experiments, e.g. antibody staining.

## Recipes for polymethanal buffers

Some people make 16 or 24% stocks to be diluted at a later point, but the solid polymethanal / PFA is very hard to dissolve at these high concentrations. Stocks or working solutions can be preserved by freezing for later use. Otherwise, methanal reacts with itself to form methanol and methanoate (acidification).

### PFA 4% for 50 ml

2.0 g PFA
+5 ml H2O
+drop of NaOH (0.5-1.0 M)
+45 ml PBS (correct pH to 7.4 with HCl)

### PFA 1% for 10 ml

0.1 g paraformaldehyde powder in a small glass tube
+0.5 ml distilled water
+drop of 0.5-1.0 M sodium hydroxide
heat to ~80°C for 2-3 mins
shake in water bath until PFA dissolved (beaker of very hot water)
+9.5 ml PBS
correct pH with HCl if necessary

Can also be dissolved by heating to 80°C for 3 hours.

## Too many names

There's plenty of confusion regarding this chemical because too many synonyms exist. Paraformaldehyde is probably still the most common term among biologists, but it ignores attempts to introduce meaningful and systematic names in chemistry. Polymethanal is clearer: poly - polymer, meth - single carbon, -al - aldehyde group. Here's a list of equivalent terms: polymethanal, polyoxymethylene, polyformaldehyde, paraformaldehyde, paraform, polytrioxane, .. (regards from Babel) Common abbreviations are PFA, pMeO. CAS number = 30525-89-4

## Comparison with methanal

Polymethanal is often preferred over methanal, which is also used in fixation of live material. This is because polymethanal is a solid, which makes it easier to transport and a little safer to use. The chemistry of fixation is similar due to the partial depolymerisation of polymethanal in the buffer making process (basic hydrolysis).

## Chemistry of fixation

The aldehyde group, especially of methanal, is very reactive.
It readily combines with amino groups to form amides or with alcohol groups to form esters. These reactions are mostly irreversible. Unreacted molecules have to be removed prior to subsequent experiments since they easily damage proteins like antibodies.

### Problems with Fixation

There are issues in fixing free GFP in the cytoplasm of cells. First of all, 4% PFA reduces GFP fluorescence to some extent, but this is tolerable for most experiments. 4% PFA can also destroy the subcellular localization of GFP, i.e. exclusion from the nucleus. In our hands, fixation with more than 2% PFA, or in combination with >0.5% glutaraldehyde or >0.5% acrolein, destroys inclusion/exclusion of IRF3-GFP from the nucleus. Moreover, 2% PFA fixed cells expressing IRF3-GFP lose integrity 2-5 hours post fixation during storage in 1x PBS, meaning that cytoplasm containing GFP leaks out of the cells and remains as a bubble outside the cell. It can be washed away but does not distribute in the PBS buffer on its own. This issue was not solved by higher/lower PFA concentrations during fixation, by storage in H2O / 1x PBS / 2x PBS / fixation solution, or by permeabilization of cells after fixation with Tx100 or 0.2% saponin.
By Topic # IEEE Transactions on Space Electronics and Telemetry ## Filter Results Displaying Results 1 - 14 of 14 • ### [Front cover] Publication Year: 1965, Page(s): c1 | PDF (425 KB) • ### IEEE Space Electronics and Telemetry Group Publication Year: 1965, Page(s): nil1 | PDF (96 KB) • ### [Breaker page] Publication Year: 1965, Page(s): nil1 | PDF (96 KB) • ### A Consideration of VCO and Thermal Phase Noise in a Coherent Two-Way Doppler Communication System Publication Year: 1965, Page(s):1 - 6 Cited by:  Papers (1)  |  Patents (1) | | PDF (752 KB) An analysis of a coherent two-way (transponder) Doppler communication system composed of a ground transmitter, a spacecraft transponder, and a ground receiver has been made to determine the effects of VCO phase and thermal phase noise at the ground receiver output (See Fig. 2). This system is typical of those used on lunar and planetary space programs. The analysis shows that the phase noise contr... View full abstract» • ### Coded Noncoherent Communications Publication Year: 1965, Page(s):6 - 13 Cited by:  Papers (10) | | PDF (1367 KB) This paper presents detailed results on the relative merits of encoding blocks of binary digits into a set of equiprobable, equal energy, orthogonal signals each containing n bits of information. During a time interval of T seconds, one signal from this set is selected and transmitted over the Rician'' channel, further perturbed by additive white Gaussian noise and noncoherently detected at the ... View full abstract» • ### Tracking Instrumentation and Accuracy on the Eastern Test Range Publication Year: 1965, Page(s):14 - 23 Cited by:  Papers (1) | | PDF (2342 KB) The Air Force Eastern Test Range (ETR) is, in essence, a huge laboratory extending from the Florida mainland to the Indian Ocean. It is instrumented to collect, record, analyze and communicate data for missile and space missions. This is achieved through a variety of highly sophisticated electronic and optical techniques. 
It is the purpose of this paper to describe briefly some of this primary ins... View full abstract» • ### A Strategy for Obtaining Explicit Estimators of Signal Delay Publication Year: 1965, Page(s):23 - 28 Cited by:  Papers (2) | | PDF (972 KB) A new strategy for extracting the delay parameter τ from observed data of the form s(t - τ) + n(t), where s(t) is a deterministic signal and n(t) is a sample function from an arbitrary random process, is presented and analyzed. The strategy requires no a priori statistics of the delay parameter and only second-order statistics (the autocorrelation function) of the additive noise pr... View full abstract» • ### Sampled-Data Prediction for Telemetry Bandwidth Compression Publication Year: 1965, Page(s):29 - 36 Cited by:  Papers (10) | | PDF (1569 KB) A portion of an exploratory investigation conducted recently at Lockheed Missiles and Space Company, Sunnyvale, Calif., into the comparative effectiveness of various prediction techniques for data compression is described. The comparisons were made by simulation with an IBM 7094 digital computer with the use of approximately 150,000 samples of actual vehicle telemetry data received during a typica... View full abstract» • ### Oscillator Stability Publication Year: 1965, Page(s):37 - 39 | | PDF (490 KB) The definition and significance of oscillator stability continually arise in communication and radar system design. This paper relates the conventional definition of oscillator stability to the more fundamental frequency error process of the oscillator through an integral equation. Both short-and long-term stability are encompassed by the definition. Since in general the integral equation cannot b... 
View full abstract» • ### A Simple Technique for Improving the Pull-in Capability of Phase-Lock Loops Publication Year: 1965, Page(s):40 - 46 Cited by:  Papers (2)  |  Patents (1) | | PDF (1630 KB) This paper presents a simple technique for improving the pull-in capability of phase-lock loops. This technique, called derived rate rejection or DRR, differs from those which use an external AFC loop in simplicity of implementation and design rationale, although the end result is the same. If, as is usually the case, a coherent detector accompanies the phase-lock loop, the implementation of the D... View full abstract» • ### A Note on Cascaded Limiters Publication Year: 1965, Page(s):47 - 49 Cited by:  Papers (5) | | PDF (519 KB) First Page of the Article View full abstract» • ### Noise in Digital-to-Analog Conversion Due to Bit Errors Publication Year: 1965, Page(s):49 - 50 | | PDF (349 KB) First Page of the Article View full abstract» • ### Contributors Publication Year: 1965, Page(s):51 - 52 | PDF (1373 KB) • ### [Front cover] Publication Year: 1965, Page(s): c2 | PDF (26 KB) ## Aims & Scope This Transactions ceased publication in 1965. The new retitled publication is IEEE Transactions on Aerospace and Electronic Systems. Full Aims & Scope
# I don't know how to solve this limit? Can you do it for me?

Find $$\lim\limits_{x \to 0}{\frac{1-\cos(1-\cos x)}{x^4}}$$

My teacher already told us that the result is $1/8$.

- Got to applaud the honest approach. – Will Jagy Nov 10 '12 at 23:21

I use that $$\lim\limits_{x\to 0}\frac{1-\cos x}{x^2}=\frac 1 2$$

Now, consider the following manipulation: $$\lim\limits_{x\to 0}\frac{1-\cos(1-\cos x )}{x^4}=\\ \lim\limits_{x\to 0}\frac{1-\cos(1-\cos x )}{(1-\cos x )^2}\cdot\frac {(1-\cos x )^2}{x^4}=\\ \lim\limits_{x\to 0}\frac{1-\cos(1-\cos x )}{(1-\cos x )^2}\left(\frac {1-\cos x }{x^2}\right)^2$$

When $x\to 0$, $1-\cos x \to 0$, so $$\lim\limits_{u\to 0}\frac{1-\cos u}{u^2}\cdot\lim\limits_{x\to 0}\left(\frac {1-\cos x }{x^2}\right)^2=\frac 1 2 \cdot\frac 1 4=\frac 1 8$$

- Nice solution.. – Berci Nov 11 '12 at 11:30

What about L'Hospital's rule? $$\frac{1-\cos(1-\cos x)}{x^4}$$ Differentiate both the numerator and the denominator: $$\big(1-\cos(1-\cos x)\big)' = \sin(1-\cos x)\cdot(1-\cos x)' = \sin(1-\cos x)\cdot\sin x$$ and so on..

- @Cameron Buie $0\over0$ seems indeterminate to me... – Daryl Nov 10 '12 at 23:39
- @CameronBuie, the first person to edit the question got it wrong. Berci's version, a single fraction, is what was originally posted. – Will Jagy Nov 10 '12 at 23:45
- @CameronBuie $1-\cos(1-\cos(0))=1-\cos(1-1)=1-\cos(0)=1-1=0$. The denominator is clearly $0$. Have I made an error? – Daryl Nov 10 '12 at 23:52
- @Daryl, Cameron was reacting to an incorrect version of the expression that was visible for about ten minutes, until I fixed it to agree with the actual question asked. You are doing fine. – Will Jagy Nov 10 '12 at 23:57
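The algebra can be sanity-checked numerically. Below is a quick sketch (not from the original thread) that evaluates the quotient for a few small values of $x$ and watches it approach $1/8$:

```python
import math

def quotient(x):
    # (1 - cos(1 - cos x)) / x^4, the expression from the question
    return (1 - math.cos(1 - math.cos(x))) / x**4

# The values approach 1/8 = 0.125 as x -> 0.
for x in (0.5, 0.1, 0.01):
    print(x, quotient(x))
```

At `x = 0.01` the quotient already agrees with $0.125$ to several decimal places.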
# Problems of the Week

Contribute a problem

# 2018-10-22 Basic

Circles have the strange property that if you increase the size of any circle by even the tiniest amount, you need at least three circles of the original size in order to completely cover the new, slightly larger circle. The 3 overlapping orange circles have the same radius as the black circle. It takes 3 of these circles to cover the slightly larger dashed circle.

Note 1: For other shapes, rather than increasing their radius, which is generally only associated with circles, you are scaling up their shape proportionally.

Note 2: You can rotate the shapes any way you like.

A king has challenged you to a contest against a scoundrel. You have to divide 30 rubies into 3 piles. (You can choose how many rubies go in each pile.) The scoundrel will then choose and keep 2 of those piles. Assuming the scoundrel is greedy and wants as many rubies as possible, what is the maximum number of rubies that you can keep?

Animation courtesy: TED-Ed and Artrake Studio

Bella holds the end of a rope tied to a vertical wall. Point $A$ is marked on the rope. She gives the rope a small flick to start a wave pulse, which travels towards the wall. At some instant, the rope with the wave pulse will look as shown below. In which direction is the point $A$ moving at this instant?

I have a book with regularly numbered pages, starting with 1 and increasing by 1 all the way to the last number. Which of the following is a possible value for the total number of digits used in all the page numbers? For example, in a book with page numbers 1 through 15, a total of 21 digits are used: $\underbrace{1+1+1+1+1+1+1+1+1}_{\stackrel{1\text{ through }9}{\text{(nine 1-digit numbers)}}}+\underbrace{2+2+2+2+2+2}_{\stackrel{10\text{ through }15}{\text{(six 2-digit numbers)}}}=21.$

• $1^5 - 1 = 0$ is divisible by 30.
• $2^5 - 2 = 30$ is divisible by 30.
• $3^5 - 3 = 240$ is divisible by 30.

True or False?
$n^{5}-n$ is divisible by 30 for all positive integers $n.$
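The claim above is easy to spot-check by brute force. Here is a quick sketch (the general proof factors $n^5-n$ and uses Fermat's little theorem for the factor of 5):

```python
# n^5 - n = (n-1) n (n+1) (n^2+1) is always divisible by 2 and 3
# (three consecutive integers) and by 5 (Fermat's little theorem),
# hence by 2*3*5 = 30. Brute-force check for n = 1..10000:
assert all((n**5 - n) % 30 == 0 for n in range(1, 10001))
print("n^5 - n is divisible by 30 for n = 1..10000")
```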
## Thinking Mathematically (6th Edition)

Let $P$ be the original price. If the price is decreased by 30%, then the reduced price is $0.7~P$. If the price is then reduced by another 20%, the final price is 80% of the reduced price. We can find the final price: $final~price = (0.8)(0.7~P) = 0.56~P$. The total reduction from the original price is therefore $0.44~P$, which is a reduction of 44%. The salesperson is not using percentages properly: the actual percent reduction from the original price is 44%.
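The compounding of the two discounts can be checked with a concrete price; a small sketch using an illustrative $P = 100$:

```python
P = 100.0                  # original price (illustrative value)
reduced = 0.7 * P          # after the 30% reduction
final = 0.8 * reduced      # after the further 20% reduction
total_reduction = 1 - final / P

print(final)               # about 56.0, i.e. 0.56 P
print(total_reduction)     # about 0.44, a 44% total reduction
```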
Common angles

The radian (SI symbol rad) is the SI unit for measuring angles, and is the standard unit of angular measure used in many areas of mathematics. The length of an arc of a unit circle is numerically equal to the measurement in radians of the angle that it subtends; one radian is just under 57.3 degrees. The unit was formerly an SI supplementary unit, but this category was abolished in 1995 and the radian is now considered an SI derived unit.

A radian is the central angle subtending an arc of a circle whose length is equal to the radius of the circle. That is, the central angle corresponding to the full circumference of a circle is $2\pi$ radians, a straight angle is $\pi$ radians, and a right angle is $\pi/2$ radians. One radian equals $180/\pi$ degrees, so multiplying an angle in radians by $180/\pi$ gives the angle in degrees. In other words, degrees are obtained by multiplying the angle in radians by 180 and dividing by $\pi$: angle in degrees = angle in radians $\times\ 180/\pi$. Conversely, multiplying an angle in degrees by $\pi/180$ gives the angle in radians.

The following table shows the conversion of several commonly used angles:

| Degrees | 0° | 30° | 45° | 60° | 90° | 180° | 270° | 360° |
|---------|-----|-----|-----|-----|-----|------|------|------|
| Radians | 0 | π/6 | π/4 | π/3 | π/2 | π | 3π/2 | 2π |

Definition

Radian describes the plane angle subtended by a circular arc as the length of the arc divided by the radius of the arc. One radian is the angle subtended at the center of a circle by an arc that is equal in length to the radius of the circle. More generally, the magnitude in radians of such a subtended angle is equal to the ratio of the arc length to the radius of the circle; that is, θ = s / r, where θ is the subtended angle in radians, s is arc length, and r is radius. Conversely, the length of the enclosed arc is equal to the radius multiplied by the magnitude of the angle in radians; that is, s = rθ.

As stated, one radian is equal to 180/π degrees. Thus, to convert from radians to degrees, multiply by 180/π; for example, 1 rad × 180/π ≈ 57.2958°. Conversely, to convert from degrees to radians, multiply by π/180; for example, 30° × π/180 = π/6 rad. Radians can be converted to turns (complete revolutions) by dividing the number of radians by 2π.
The length of the circumference of a circle is given by $2\pi r$, where $r$ is the radius of the circle. So the following equivalent relation is true: an arc of length $2\pi r$ corresponds to a sweep of 360° [since a 360° sweep is needed to draw a full circle]. By the definition of radian, a full circle represents an angle of $\frac{2\pi r}{r} = 2\pi$ radians. Combining both the above relations: $2\pi\text{ rad} = 360°$, so $1\text{ rad} = \frac{180°}{\pi} \approx 57.2958°$.

Conversion of common angles

| Turns | Radians | Degrees | Gradians |
|-------|---------|---------|----------|
| 0 | 0 | 0° | 0g |
| 1/24 | π/12 | 15° | 16 2/3g |
| 1/12 | π/6 | 30° | 33 1/3g |
| 1/10 | π/5 | 36° | 40g |
| 1/8 | π/4 | 45° | 50g |
| 1/6 | π/3 | 60° | 66 2/3g |
| 1/5 | 2π/5 | 72° | 80g |
| 1/4 | π/2 | 90° | 100g |
| 1/3 | 2π/3 | 120° | 133 1/3g |
| 2/5 | 4π/5 | 144° | 160g |
| 1/2 | π | 180° | 200g |
| 3/4 | 3π/2 | 270° | 300g |
| 1 | 2π | 360° | 400g |

Some common angles, measured in radians. All the large polygons in this diagram are regular polygons.

In calculus and most other branches of mathematics beyond practical geometry, angles are universally measured in radians. This is because radians have a mathematical "naturalness" that leads to a more elegant formulation of a number of important results. Most notably, results in analysis involving trigonometric functions are simple and elegant when the functions' arguments are expressed in radians. For example, the use of radians leads to the simple limit formula $$\lim_{h\to 0}\frac{\sin h}{h}=1,$$ which is the basis of many other identities in mathematics, including $$\frac{d}{dx}\sin x = \cos x.$$ Because of these and other properties, the trigonometric functions appear in solutions to mathematical problems that are not obviously related to the functions' geometrical meanings (for example, the solutions to the differential equation $\frac{d^2y}{dx^2} = -y$, the evaluation of the integral $\int \frac{dx}{1+x^2}$, and so on). In all such cases it is found that the arguments to the functions are most naturally written in the form that corresponds, in geometrical contexts, to the radian measurement of angles.
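The conversion rules above can be sketched in a few lines (`deg_to_rad` and `rad_to_deg` are hypothetical helper names for illustration):

```python
import math

def deg_to_rad(degrees):
    # to convert from degrees to radians, multiply by pi/180
    return degrees * math.pi / 180

def rad_to_deg(radians):
    # to convert from radians to degrees, multiply by 180/pi
    return radians * 180 / math.pi

# Spot-check a few rows of the conversion table above.
assert math.isclose(deg_to_rad(30), math.pi / 6)
assert math.isclose(deg_to_rad(90), math.pi / 2)
assert math.isclose(rad_to_deg(2 * math.pi), 360)
print(rad_to_deg(1))  # one radian is just under 57.3 degrees
```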
The trigonometric functions also have simple and elegant series expansions when radians are used; for example, the following Taylor series for sin x: $$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$ If x were expressed in degrees then the series would contain messy factors involving powers of π/180: if x is the number of degrees, the number of radians is y = πx / 180, so $$\sin x_{\mathrm{deg}} = \sin y = \frac{\pi}{180}x - \left(\frac{\pi}{180}\right)^3\frac{x^3}{3!} + \left(\frac{\pi}{180}\right)^5\frac{x^5}{5!} - \cdots$$ Mathematically important relationships between the sine and cosine functions and the exponential function (see, for example, Euler's formula) are, again, elegant when the functions' arguments are in radians and messy otherwise.

Dimensional analysis

Although the radian is a unit of measure, it is a dimensionless quantity. This can be seen from the definition given earlier: the angle subtended at the centre of a circle, measured in radians, is equal to the ratio of the length of the enclosed arc to the length of the circle's radius. Since the units of measurement cancel, this ratio is dimensionless. Although polar and spherical coordinates use radians to describe coordinates in two and three dimensions, the unit is derived from the radius coordinate, so the angle measure is still dimensionless.
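The rapid convergence of the radian series can be seen directly; a small sketch (`sin_taylor` is a hypothetical helper, not part of the article):

```python
import math

def sin_taylor(x, terms=10):
    # Partial sum of sin x = x - x^3/3! + x^5/5! - ..., with x in radians
    return sum((-1)**k * x**(2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 1.0  # radians
print(sin_taylor(x), math.sin(x))  # the partial sum matches to full precision
```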
# How do I calculate d20 success probability using the Halfling ‘lucky’ trait with (dis)advantage?

Here is a comprehensive DPR calculator, and here is the mathematics behind it. I'm trying to follow along with the equations. At the bottom of the second page are formulas for the success probability $L$ of a Halfling (who has luck) in normal circumstances and with advantage and disadvantage:

$$L = P + \frac{1}{20}P,$$ $$L_{adv} = P_{adv} + \left(\frac{2}{20}(1 - P) - \frac{1}{400}\right)P,$$ $$L_{dis} = P_{dis} + \frac{2}{20}P^2,$$

where:

• $P$ is the probability of succeeding on any single roll,
• $P_{adv} = 1 - (1 - P)^2$ is the probability of succeeding with advantage (not failing both rolls), and
• $P_{dis} = P^2$ is the probability of succeeding with disadvantage (succeeding on both rolls).

The $P$'s are quite easy to derive, and $L$ is just passing outright OR [rolling a 1 AND THEN passing]: $$P + \left(\frac{1}{20}\cdot P\right).$$ But I'm struggling to derive $L_{adv}$ and $L_{dis}$. Can someone please show a derivation?
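One way to check (or discover) such derivations is exact enumeration over all d20 outcomes. The sketch below assumes the model implied by the quoted formulas: a natural 1 always fails, the lucky halfling rerolls a single die showing a 1 and must keep the new roll, and with advantage/disadvantage the final result is the max/min of the two kept dice. `succ`, `lucky_normal`, and `lucky_two_dice` are hypothetical helper names:

```python
from fractions import Fraction
from itertools import product

P20 = Fraction(1, 20)  # probability of any single d20 face

def succ(roll, t):
    # success on a DC-t check; a natural 1 always fails in this model
    return roll >= t and roll != 1

def lucky_normal(t):
    total = Fraction(0)
    for r in range(1, 21):
        if r == 1:  # reroll the 1, must use the new roll
            total += P20 * sum(P20 for r2 in range(1, 21) if succ(r2, t))
        elif succ(r, t):
            total += P20
    return total

def lucky_two_dice(t, agg):
    # agg = max for advantage, min for disadvantage
    total = Fraction(0)
    for a, b in product(range(1, 21), repeat=2):
        if a == 1 or b == 1:
            kept = b if a == 1 else a  # reroll one die showing a 1
            total += sum(P20**3 for r in range(1, 21) if succ(agg(kept, r), t))
        elif succ(agg(a, b), t):
            total += P20**2
    return total

# Compare against the closed forms from the question, e.g. for DC 11:
t = 11
P = Fraction(sum(1 for r in range(1, 21) if succ(r, t)), 20)
assert lucky_normal(t) == P + P / 20
assert lucky_two_dice(t, max) == (1 - (1 - P)**2) \
    + (Fraction(2, 20) * (1 - P) - Fraction(1, 400)) * P
assert lucky_two_dice(t, min) == P**2 + Fraction(2, 20) * P**2
print("all three formulas verified for DC", t)
```

Exact rational arithmetic via `Fraction` means the comparisons are equalities, not floating-point approximations, so a match confirms the enumeration and the closed forms agree.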
# How to Implement a Search Engine Part 3: Ranking tf-idf

Overview

We have come to the third part of our implementing-a-search-engine project: ranking. The first part was about creating the index, and the second part was querying the index. We basically have a search engine that can answer search queries on a given corpus, but the results are not ranked. Now, we will include ranking to obtain an ordered list of results, which is one of the most challenging and interesting parts. The first ranking scheme we will implement is tf-idf. In the following articles, we'll analyze Okapi BM25, which is a variant of tf-idf. We will also implement Google's PageRank. Then we will explore machine learning techniques such as the Naive Bayes Classifier, Support Vector Machines (SVM), Clustering, Decision Trees, and so forth.

Tf-idf is a weighting scheme that assigns each term in a document a weight based on its term frequency (tf) and inverse document frequency (idf). The terms with higher weight scores are considered to be more important. It's one of the most popular weighting schemes in Information Retrieval.

Term Frequency – tf

Let's first define how term frequency is calculated for a term t in document d. It is basically the number of occurrences of the term in the document.

$tf_{t,d} = N_{t,d}$

We can see that as a term appears more in the document it becomes more important, which is logical. However, there is a drawback: by using term frequencies we lose positional information. The ordering of terms doesn't matter; instead, the number of occurrences becomes important. This is known as the bag of words model, and it is widely used in document classification. In the bag of words model, the document is represented as an unordered collection of words. However, it doesn't turn out to be a big loss. Of course we lose the semantic difference between "Bob likes Alice" and "Alice likes Bob", but we still get the general idea.
We can use a vector to represent the document in the bag of words model, since the ordering of terms is not important. There is an entry for each unique term in the document, with the value being its term frequency. For the sake of an example, consider the document "computer study computer science". The vector representation of this document will be of size 3, with values [2, 1, 1] corresponding to computer, study, and science respectively. We can indeed represent every document in the corpus as a k-dimensional vector, where k is the number of unique terms in that document. Each dimension corresponds to a separate term in the document. Now every document lies in a common vector space. The dimensionality of the vector space is the total number of unique terms in the corpus. We will further analyze this model in the following sections.

The representation of documents as vectors in a common vector space is known as the vector space model, and it's very fundamental to information retrieval. It was introduced by Gerard Salton, a pioneer of information retrieval. Google's core ranking team is led by Amit Singhal, who was a PhD student of Salton at Cornell University.

While using term frequencies, if we use pure occurrence counts, longer documents will be favored more. Consider two documents with exactly the same content, but one twice as long because it is concatenated with itself. The tf weights of each word in the longer document will be twice those of the shorter one, although they essentially have the same content. To remedy this effect, we length-normalize term frequencies. So, the term frequency of a term t in document D now becomes:

$tf_{t,d} = \dfrac{N_{t,d}}{||D||}$

||D|| is known as the Euclidean norm and is calculated by taking the square of each value in the document vector, summing them up, and taking the square root of the sum. After normalizing the document vector, the entries are the final term frequencies of the corresponding terms.
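As a sketch (a hypothetical helper for illustration, not the article's actual source code), length-normalized term frequencies for the example document look like this:

```python
import math
from collections import Counter

def term_frequencies(tokens):
    # Raw counts N_{t,d}, then divide each by the Euclidean norm ||D||
    counts = Counter(tokens)
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {term: c / norm for term, c in counts.items()}

tf = term_frequencies("computer study computer science".split())
print(tf)  # counts [2, 1, 1] divided by sqrt(6); the vector now has length 1
```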
The document vector is also a unit vector, having a length of 1 in the vector space. Inverse Document Frequency – idf We can’t only use term frequencies to calculate the weight of a term in the document, because tf considers all terms equally important. However, some terms occur more rarely and they are more discriminative than others. Suppose we search for articles about computer vision. Here the term vision gives us more information about the intent of the query, instead of the term computer. We don’t simply want articles that are about computers, we want them to be about vision. If we purely use tf values then the term computer will dominate because it’s a more common term than vision, and the articles containing computer will be ranked higher. To mitigate this effect, we use inverse document frequency. Let’s first see what document frequency is. The document frequency of a term t is the number of documents containing the term: $df_t = N_t$ Note that the occurrence counts of the term in the individual documents is not important. We are only interested in whether the term is present in a document or not, without taking into consideration the counts. It’s like a binary 0/1 counting. If we were to consider the number of occurrences in the documents, then it’s called collection frequency. But document frequency proves to be more accurate. Also note that term frequency is a document-wise statistic while document frequency is collection-wise. Term frequency is the occurrence count of a term in one particular document only; while document frequency is the number of different documents the term appears in, so it depends on the whole corpus. Now let’s look at the definition of inverse document frequency. The idf of a term is the number of documents in the corpus divided by the document frequency of a term. 
Let's say we have N documents in the corpus; then the inverse document frequency of term t is:

$idf_t = \dfrac{N}{df_t} = \dfrac{N}{N_t}$

This is a very useful statistic, but it also requires a slight modification. Consider a corpus with 1000 documents. A term appears in 10 documents and another term appears in 100, so the document frequencies are 10 and 100 respectively, and the inverse document frequencies are 100 and 10: idf is 100 for the term that has a df of 10 (1000/10), and idf is 10 for the term with a df of 100 (1000/100). Now as we can see, the term that appears in 10 times more documents is considered to be 10 times less important. It's expected that the more frequent term is considered less important, but the factor of 10 seems too harsh. Therefore, we take the logarithm of the inverse document frequencies. With base-2 logarithms, the idf values above become $\log_2 100 \approx 6.6$ and $\log_2 10 \approx 3.3$, so the rarer term now carries only about twice the weight rather than 10 times. So, the idf of a term t becomes:

$idf_t = log\dfrac{N}{df_t}$

This is better, and since log is a monotonically increasing function we can safely use it. Notice that idf never becomes negative, because the denominator (the df of a term) is always less than or equal to the size of the corpus (N). When a term appears in all documents, its df = N, and its idf becomes log(1) = 0. This is fine, because if a term appears in all documents it doesn't help us to distinguish between them: it's basically a stopword, such as "the", "a", "an", etc. Also notice the resemblance between idf and the definition of entropy in information theory. In our case p(x) is df/N, which is the probability of seeing the term in a randomly chosen document, and idf is -log p(x). The important result to note is that as rarer events occur, the information gain increases; less frequent terms give us more information.
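Before combining tf and idf, here is a sketch of the log-damped idf just defined (hypothetical helper; `corpus` is a toy list of tokenized documents, not the article's data):

```python
import math

def idf(term, corpus):
    # log2(N / df_t); assumes the term occurs in at least one document
    df = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / df, 2)

corpus = [["computer", "science"],
          ["computer", "vision"],
          ["computer", "graphics"],
          ["art", "history"]]
print(idf("computer", corpus))  # common term, low idf: log2(4/3)
print(idf("vision", corpus))    # rare term, higher idf: log2(4/1) = 2.0
```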
We will again represent the document as a vector, with each entry being the tf-idf weight of the corresponding term in the document. The tf-idf weight of a term t in document d is simply the multiplication of its tf by its idf: $tf\mbox{-}idf_{t,d} = tf_{t,d} \cdot idf_t$ Let’s say we have a corpus containing K unique terms, and a document containing k unique terms. Using the vector space model, our document becomes a k-dimensional vector in a K-dimensional vector space. Generally k will be much less than K, because all terms in the corpus won’t appear in a single document. The values in the vector corresponding to the k terms that appear in the document will be their respective tf-idf weights, computed by the formula above. The entries corresponding to the K-k terms that don’t appear in the current document will be 0. Because their tf weight in the current document will be 0, since they don’t occur. Note that their idf scores won’t be 0, because idf is a collection-wise statistic, which depends on all the documents in the corpus. But tf is a document-wise statistic, which only depends on the current document. So, if a term doesn’t appear in the current document, it gets a tf score of 0. Multiplying tf and idf, the tf-idf weights of the missing K-k terms become 0. So, in the end we have a sparse vector with most of the entries being 0. To sum everything up, we represent documents as vectors in the vector space. A document vector has an entry for every term, with the value being its tf-idf score in the document. We will also represent the query as a vector in the same K-dimensional vector space. It will have much fewer dimensions though, since queries are generally much shorter than the documents. Now let’s see how to find relevant documents to a query. Since both the query and the documents are represented as vectors in a common vector space, we can take advantage of this. 
We will compute the similarity score between the query vector and all the document vectors, and select the ones with the top similarity values as the relevant documents for the query. Before computing the similarity scores between vectors, we perform one final operation, as we did before: normalization. We normalize both the query vector and all the document vectors, obtaining unit vectors.

Now that we have everything we need, we can finally compute the similarity scores between the query and document vectors, and rank the documents. The similarity score between two vectors in a vector space is based on the angle between them. If two documents are similar they will be close to each other in the vector space, with a small angle in between. So given the vector representations of the documents, how do we compute the angle between them? We can do it very easily if the vectors are already normalized, which is true in our case, and this technique is called cosine similarity. We take the dot product of the vectors, and the result is the cosine of the angle between them. Remember that when the angle is smaller its cosine value is larger, so when two vectors are similar their cosine similarity value will be larger. This gives us a great similarity metric, with higher values meaning more similar and lower values meaning less similar. Therefore, if we compute the cosine similarity between the query vector and all the document vectors, sort them in descending order, and select the documents with the top similarity values, we obtain an ordered list of relevant documents for the query. Voila! We now have a systematic methodology to get an ordered list of results to a query: ranking.

Source Code

Here is the source code. You also need to download the workspace from the create index post to obtain the necessary files. First run the create index program and then the query index.
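Putting the pieces together, the whole ranking scheme described above fits in a short sketch (hypothetical helper names and a toy corpus, not the article's downloadable source):

```python
import math
from collections import Counter

def tfidf_vector(tokens, idf):
    # tf-idf weights for the terms present, normalized to a unit vector
    counts = Counter(t for t in tokens if t in idf)
    vec = {t: c * idf[t] for t, c in counts.items()}
    norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
    return {t: w / norm for t, w in vec.items()}

def rank(query, docs):
    # idf over the whole corpus, log base 2 as in the text
    idf = {t: math.log(len(docs) / sum(1 for d in docs if t in d), 2)
           for d in docs for t in d}
    doc_vecs = [tfidf_vector(d, idf) for d in docs]
    q = tfidf_vector(query, idf)
    # cosine similarity = dot product of unit vectors
    sims = [(sum(w * dv.get(t, 0.0) for t, w in q.items()), i)
            for i, dv in enumerate(doc_vecs)]
    return sorted(sims, reverse=True)

docs = [["computer", "science"],
        ["computer", "vision", "research"],
        ["art", "history"]]
print(rank(["computer", "vision"], docs))  # the computer-vision doc ranks first
```

Because "vision" is rarer than "computer", its idf dominates, so the document about computer vision outranks the one that merely mentions computers, exactly the behavior motivated in the idf section.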
You can write your queries at the command prompt and the program will display the top 10 documents that match the query in order of decreasing tf-idf score. Enjoy..

This entry was posted in Information Retrieval, Search Engines, Web Search. Bookmark the permalink.

Comments:

- Ahmed Atif: The first two parts were explained in an easy way and with plenty of info to help complete understanding, but this part is not. I hope you explain the code and files like testIndex.dat and titleIndex.dat too...
- Berkay Celik: Hi Arden, nice post again. Can you give an example of how we can apply machine learning algorithms quickly?
- its_dark: Hi Arden, can you explain the machine learning algorithms also (as in how we can implement them here), please?
- its_dark: How do we perform phrase queries in this model?
- matsaleh: I'm certainly late to this party, but I want to say thanks for this excellent article! I've been trying to teach myself text search/analysis/NLP concepts for a project I'm working on. I'm not that good at math and, frankly, haven't been all that interested in it for years. But these topics have completely turned me around on that lately. I've been soaking up everything I can find on this, but it's been slow going. Suddenly, with this article, the penny finally dropped and the light bulb is flickering on. I'm excited! Cheers!
- Rachana Baldania: From the tf-idf weight matrix, how can we get the importance of each term (e.g. which is the most important term)?
- lolka_bolka: Here is a really simple description of this tf-idf with an example: http://www.tfidf.com/
# Why do we need resistors with LEDs?

I've researched, and it says that resistors limit the current flowing through the LED. But this statement confuses me, because we know that in a series circuit the current is constant at every point, so how come a resistor can limit the current flowing?

-

LEDs have a fairly constant voltage across them, like 2.2V for a red LED, which rises only slightly with current. If you supply 3V to this LED without a series resistor, the LED will try to settle at a voltage/current combination for this 3V. There's no sensible current that goes with this kind of voltage; theoretically it would be tens, maybe hundreds, of amperes, which would destroy the LED. And that's exactly what happens if your power supply can supply enough current.

So the solution is a series resistor. If your LED needs 20mA, you can calculate for the red LED in the example

$R = \dfrac{\Delta V}{I} = \dfrac{3V - 2.2V}{20mA} = 40 \Omega$

You may think that supplying 2.2V directly will also work, but that's not true. The slightest difference in LED or supply voltage may cause the LED to light very dim, very bright, or even be destroyed. A series resistor will ensure that slight differences in voltage have only a minor effect on the LED's current, provided that the voltage drop across the resistor is large enough.

- +1 because I once assumed that an LED would provide enough internal resistance and ended up with explosive shrapnel very nearly missing my eye. – fluffy Mar 20 '12 at 22:53

The point is an LED is a diode anyway, and diodes have very small internal resistance (in the "forward" direction, of course), so unless there's something else in series the overall resistance is very low, the current is barely limited, and this barely limited current can damage the LED and overload the circuit that powers it.
So yes, you're totally right that the current is the same at every point of the circuit when elements are connected in series, but when you add a resistor you increase the overall resistance of the series chain, and this decreases the current.

- Note that constant current around a loop only holds for a relatively small subset of possible circuits. It is an OK assumption for this example but a dangerous one in general. – Russell McMahon Mar 20 '12 at 11:21
- @Russell McMahon: I don't get it at all. Which assumption do you mean? – sharptooth Mar 20 '12 at 11:27
- Re subset of circuits - anything with reactive components and AC or time-varying anything will be able to have different currents at different places in a loop at any given time. An oscillator with e.g. a series LC would probably be a useful example. You understand that such things can happen even if we don't usually put things in those terms, but a raw beginner will have no concept of AC operation etc. – Russell McMahon Mar 20 '12 at 11:39
- @RussellMcMahon If I understood it correctly, I have to disagree: no matter how fancy the components are, the current in a branch (a set of components in series without other wires getting in or out) will be equal everywhere. – clabacchio Mar 20 '12 at 12:23
- @PortreeKid see the comment in Russell's answer: you have to consider each component in the series as a whole, because what happens inside breaks the rule of a closed system – clabacchio Mar 21 '12 at 13:32

Imagine that

• You had a water-powered motor whose speed was proportional to current flow.
• The motor itself offered very little resistance to current flow - you had to control the current flow external to the motor.
• You had a pump able to pump 10 litres per second through a 10 metre pipe to the motor, then through the motor, and then through another 10 metre pipe to the suction side of the pump. (Flow rate was related to the pressure that the pump made and pipeline resistance - i.e. NOT a positive displacement pump.)
• When the pump was operated, you found that the motor ran MUCH too fast and that you needed to limit the flow to about 1 litre/second.

To achieve this you could place a reducing valve in the circuit to drop most of the pressure and to limit the flow. The valve worked to drop a certain amount of pressure across it at a given flow rate, and was adjustable. (This is about how many real water valves do work.) You could place the valve ANYWHERE in the circuit and it would achieve the desired result. It could be at the pump inlet or exit, or at the motor exit or inlet, or anywhere in either pipe.

This is a close analogy to your LED question. The current needs to be limited, as it is too high without a limiter. The limiter may be placed anywhere in the circuit.

With the battery - resistor - LED circuit: the LED has a certain defined voltage drop at a chosen current. To be specific, let's say that at 20 mA the LED drops exactly 3.00 V. This is typical of some modern LEDs. If we wish to run the LED at 20 mA, we MUST arrange for it to drop 3 V - not more and not less. If we wish to use a 9V supply to operate the LED, we MUST "get rid of" 9 - 3 = 6 V somehow. The resistor does this. To drop 6V at 20 mA the resistor needed is R = V/I = 6 / 0.02 = 300 ohms. In this example a 9V battery + a resistor + an LED will operate at 20 mA. The resistor can be placed before or after the LED; the voltage is dropped across it in either location.

It is not relevant to this question, but extremely important to know, that your statement that

• "we know that in a series circuit, the current is constant at every point"

is incorrect. There are many circuits where this is true - but also many circuits where it is not true. In DC circuits with only resistive components, such as this 1 LED, 1 resistor circuit, it is true. BUT when there are reactive components present, such as inductors and capacitors or certain other non-linear elements, then it is often NOT true.
- I disagree with the last paragraph: in a series circuit (one wire in - one wire out) the current will be the same at every point outside the components (treating them as black boxes). – clabacchio Mar 20 '12 at 13:56
- Yes, I'm a bit confused. @Russell, could you give an example of a series circuit where the current is NOT equal through all elements? – exscape Mar 20 '12 at 17:10

Always with the complicated answers ;-). Look at it this way. What happens when you put a wire across the terminals of a battery? In a perfect world you get infinite current, which melts the wire. We call this a short circuit. Because diodes are designed to have minimal forward resistance, we get the same effect as a short. Put a resistor in there to provide something to resist against, to limit the current down from infinity.

- This may be understood, and your question may have been rooted in, a diode in a circuit with other components that are limiting current by their resistance. While you may get away with this - if anything changes in the circuit, the LED is on its own. Best to have its own R – VariableLost Mar 21 '12 at 21:29
- How can you say that the wire is experiencing infinite current? Why infinite in the first place? – IvanMatala Mar 21 '12 at 22:31
- A bit simplistic for an engineering site, and a subset of sharptooth's answer... welcome anyway! – clabacchio Mar 21 '12 at 23:42
Let's focus on what is important here: the LED (which is a diode) characteristic curve. Please look at this image from Wikipedia. As you can see, for positive voltages across the diode its current increases exponentially. Imagine now you connect your LED to a power supply without a resistor. You would have to set the exact voltage across the diode to get the exact current you need to light up the LED. If for any reason your power supply rises a little bit above the voltage you need, then the current will be exponentially higher than before, which may (it will!) damage your diode.

So, how can a resistor help us with this problem? FEEDBACK! One of the most important concepts in electronics! Let's go back to our example and add a resistor in series with the diode and the power supply. Now, every time your power supply exceeds its nominal voltage, the diode will increase its current exponentially again, but because the current got higher, the voltage across the resistor will be higher too, which means the voltage across the diode will decrease, compensating for the power supply voltage increase.
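Both answers can be illustrated numerically. The sketch below uses an exponential (Shockley-style) diode model with made-up round-number parameters (not data for any real LED): it first computes the series resistor from the rule $R = \Delta V / I$ used above, then solves the loop equation $V_{supply} = V_{diode} + I(V_{diode})\,R$ by bisection to show how the resistor's feedback tames a supply-voltage bump:

```python
import math

def series_resistor(v_supply, v_led, i_led):
    # R = (Vsupply - Vled) / I, as in the accepted answer
    return (v_supply - v_led) / i_led

print(series_resistor(3.0, 2.2, 0.020))  # about 40 ohms for the red-LED example

# Illustrative exponential diode model: I = I_S * (exp(V / N_VT) - 1).
# I_S and N_VT are invented round numbers, not a real datasheet.
I_S, N_VT = 1e-18, 0.05

def diode_current(v):
    return I_S * (math.exp(v / N_VT) - 1)

def led_current(v_supply, r, iters=200):
    # Bisect on the diode voltage until KVL holds: v_supply = V + I(V) * R
    lo, hi = 0.0, v_supply
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid + diode_current(mid) * r > v_supply:
            hi = mid
        else:
            lo = mid
    return diode_current((lo + hi) / 2)

# With a 40-ohm resistor, a 10% supply increase changes the current only
# modestly; applying the supply voltage directly to the diode would demand
# a wildly excessive current in this model.
print(led_current(3.0, 40.0), led_current(3.3, 40.0))
print(diode_current(3.0))  # the "no resistor" case
```

The bisection works because the loop voltage $V + I(V)R$ is strictly increasing in $V$, so there is exactly one operating point; the resistor term is what flattens the otherwise exponential sensitivity.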
# tianjara.net | Andrew Harvey's Blog ### Entries from March 2012. 10th March 2012 So as of now when I download the document at http://www.commbank.com.au/personal/international/travel-money-card/default.aspx using, wget --save-headers -U 'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET CLR 1.1.4322)' --server-response 'http://www.commbank.com.au/personal/international/travel-money-card/default.aspx' from both IP address (140.168.75.39, 140.168.129.72) that get resolved, within that page I get a link to https://www.commbank.prepaidcardsupport.com/cbacustomer/html/LoginFrameTravel.html Looks weird. The commbank linking to www.commbank.prepaidcardsupport.com? At first I thought I was been man in the middle'ed, so I tried retrieving this document from various vantage points in the Internet with the same results. So either it wasn't a MIM or the MIM was happening at a point common between both vantage points (ie. the banks network, or the telstra network above the banks network). So maybe this is legit? I checked the whois for prepaidcardsupport.com but it is registered by proxy (not a good sign) and its HTTPS certificate isn't trusted by the default iceweasel install (again not a good sign). Anyway this reinforced to me a big problem surrounding sites that think it is okay to not offer HTTPS for most of their site but switch to HTTPS just for parts of the site where you log in. This opens you up to man in the middle attacks against your plain HTTP pages allowing the attacker to replace the switch to HTTPS for areas that you log in with just plain HTTP (hence allowing further man in the middle attacks). -- Of course this is ignoring the issue that current implementation of PKI using CA's isn't terrible secure at all. No tags 4th March 2012 This is why open source development and open collaboration in a community is great: 2. 
I see this question, find it interesting, and have a go at writing a solution. I release this freely and openly to anyone on GitHub under a free software license (CC0): https://gist.github.com/1675606/eb39d06c948bae471fee902a3cb688f28cefc9da

3. The original poster gets back to me, thanking me and finding the solution I wrote useful.

4. Someone else comes along and forks my code https://gist.github.com/1953554 adding some cool extra functionality, building on my work to make something new and useful.

5. We continue to build on the solution collaboratively https://gist.github.com/1675606/e8bfe1525478ada610ebc7f4d14eb433ed2866b1

None of this would have been possible without a platform to openly and freely communicate inside a community (1), free licensing and open sourcing of solutions allowing others to legally build upon others' works (2), and git and GitHub, a program and platform that allow one to publish derivative works that are visible to the original author but without needing permission or interaction with the original author (4, 5). Albeit small, it is extremely rewarding to see this unfold upon my own work.

No tags

### 3rd March 2012

On my new sysadmin front, I've migrated my site tianjara.net to Linode's Tokyo facility, which has a better RTT than Fremont, where it was previously located. Along the way I learnt that I probably should have lowered the DNS TTL entry before the move, so that when the IP address changed, DNS servers didn't take as long to pick up on the change.

My site is also now IPv6 enabled. It took a little bit of work setting up lighttpd correctly (as it recently changed the way it could handle IPv6 network interfaces), and also a bit of confusion with ufw: although I had set IPV6=yes, I needed to re-add my rules to allow from Anywhere (v6) in addition to Anywhere. It is a shame most Australian ISPs are a little slow with IPv6 deployment... this made it tricky to test.
I've also deployed an SSL certificate for https://tianjara.net, self-signed and added to the web of trust via Monkeysphere (not that I could actually test it, though).

On a related note, I was looking through my server logs and found what looks like the Catholic Education Network proxy server (but run by http://www.editure.com.au/) telling me the school and login of every student that visits my site (over plain HTTP), using an HTTP header like:

X-SINA-ProxyUser: [school]/[username]

Sure, different people have their own expectations of privacy, and the pupils at schools using CENET services may all be fine with this, but some may not be, and many are probably not old enough to necessarily make the best decision on their own. I hope those students know that every site they visit gets told their school/username.

No tags
## anonymous, 5 years ago

How do I evaluate this? $\frac{5}{2}\log_4 4^{1/3}$

1. anonymous: Do you mean $\frac{5}{2}\log_4 4^{1/3}$?

2. anonymous: If so, then you need to use your log laws: $\log_4 4^{1/3}=\frac{1}{3}\log_4 4=\frac{1}{3}$

3. anonymous: So your expression is $\frac{5}{2}\log_4 4^{1/3}=\frac{5}{2}\times\frac{1}{3}=\frac{5}{6}$
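As a quick check (plain Python, not part of the original thread), the same value falls out numerically:

```python
import math

# (5/2) * log_4(4^(1/3)) should equal (5/2) * (1/3) = 5/6
value = 2.5 * math.log(4 ** (1 / 3), 4)
```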
Shared14 - Series Assignment / Series Notes.sagews (Open in CoCalc)

This material was developed by Aaron Tresham at the University of Hawaii at Hilo and is

### Prerequisites:

• Intro to Sage
• Sequences

# Infinite Series

An infinite series is simply an infinite sum of numbers.

## Example 1

The harmonic series is the sum of the reciprocals of the positive integers:

$$1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots$$

This may be written with summation notation as

$$\sum_{n=1}^{\infty}\frac{1}{n}$$

The decimal expansion of a real number can be thought of as a series.

## Example 2

In this example, we can see that an infinite sum of numbers may give you a finite answer. Such a series is called convergent. Of course, many infinite series do not give you a finite sum. Such series are called divergent. (We'll define these more precisely below.)

One of the easiest ways to get a divergent series is if the terms don't approach zero. That is, $\sum a_n$ diverges if $\lim_{n\to\infty}a_n\neq 0$. On the other hand, if $\lim_{n\to\infty}a_n=0$, this is no guarantee that $\sum a_n$ converges.

## Example 3

Even though $\lim_{n\to\infty}\frac{1}{n}=0$, the harmonic series diverges. This fact was proved as far back as the 14th century by Oresme. His approach was to compare the harmonic series to a series with smaller terms. If the smaller series diverges, then the harmonic series must as well.

Given

$$1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8}+\cdots$$

replace $\frac{1}{3}$ with $\frac{1}{4}$, replace each of $\frac{1}{5}$, $\frac{1}{6}$, and $\frac{1}{7}$ with $\frac{1}{8}$, and so on. What you get is the smaller series

$$1+\frac{1}{2}+\frac{1}{4}+\frac{1}{4}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\cdots=1+\frac{1}{2}+\frac{1}{2}+\frac{1}{2}+\cdots$$

This series diverges, since its partial sums keep growing by $\frac{1}{2}$ without bound. Compare this result with the convergent series $\sum_{n=1}^{\infty}\frac{1}{n^2}$.

[Side note: compare these to the improper integrals $\int_{1}^{\infty}\frac{1}{x}\,dx$ (divergent) and $\int_{1}^{\infty}\frac{1}{x^2}\,dx$ (convergent). Actually, each series is a Riemann sum for the corresponding integral.]

## Partial Sums

Given a series $\sum_{n=1}^{\infty}a_n$, we define a sequence $\{s_n\}$ as follows:

$$s_n=\sum_{k=1}^{n}a_k=a_1+a_2+\cdots+a_n$$

$s_n$ is called the nth partial sum of the series.

## Example 4

Find the 10th and 20th partial sums of the series $\sum_{n=1}^{\infty}\frac{2n+1}{3n^2+9}$.

We can use the sum command in Sage, which requires four arguments: sum(expression, index variable, starting value, ending value). Don't forget to declare the index variable first. Here is the 10th partial sum (I'll convert the answer to a decimal).
```
%var n
sum((2*n+1)/(3*n^2+9),n,1,10)
N(_)
```

28771121/20454564
1.40658686247236

Here is the 20th partial sum.

```
sum((2*n+1)/(3*n^2+9),n,1,20)
N(_)
```

12139706620041946362/6522879694663705009
1.86109620111088

We define convergence of a series in terms of the sequence of partial sums $\{s_n\}$. If $\lim_{n\to\infty}s_n$ exists, then we say the series converges (or is convergent), and we define the sum of the series to be this limit; that is,

$$\sum_{n=1}^{\infty}a_n=\lim_{n\to\infty}s_n$$

If the limit does not exist, then we say the series diverges (or is divergent).

Here is a graph of the first 50 partial sums for $\sum_{n=1}^{\infty}\frac{2n+1}{3n^2+9}$. Notice that the partial sums seem to get bigger and bigger without approaching a limit, which suggests this series diverges.

Sage can handle infinite series. In this case, Sage tells us the series is divergent.

```
sum((2*n+1)/(3*n^2+9),n,1,Infinity)
```

```
Error in lines 1-1
Traceback (most recent call last):
  File "/projects/sage/sage-7.5/local/lib/python2.7/site-packages/smc_sagews/sage_server.py", line 995, in execute
    exec compile(block+'\n', '', 'single') in namespace, locals
  File "", line 1, in <module>
  File "/projects/sage/sage-7.5/local/lib/python2.7/site-packages/sage/misc/functional.py", line 563, in symbolic_sum
    return expression.sum(*args, **kwds)
  File "sage/symbolic/expression.pyx", line 11601, in sage.symbolic.expression.Expression.sum (/projects/sage/sage-7.5/src/build/cythonized/sage/symbolic/expression.cpp:63737)
    return symbolic_sum(self, *args, **kwds)
  File "/projects/sage/sage-7.5/local/lib/python2.7/site-packages/sage/calculus/calculus.py", line 621, in symbolic_sum
    return maxima.sr_sum(expression,v,a,b)
  File "/projects/sage/sage-7.5/local/lib/python2.7/site-packages/sage/interfaces/maxima_lib.py", line 892, in sr_sum
    raise ValueError("Sum is divergent.")
ValueError: Sum is divergent.
```

## Example 5

Consider the series $\sum_{n=1}^{\infty}\frac{1}{n}$ (the harmonic series). Here is a graph of the first 50 partial sums of this series. It appears the partial sums are getting larger and larger without bound. This is what we expect, since we saw above that this series diverges.
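For readers without Sage, the growth of the harmonic partial sums can also be sketched in plain Python (illustrative only; the partial sums grow roughly like ln n, so they increase without bound but very slowly):

```python
def harmonic_partial_sum(n):
    """nth partial sum of the harmonic series 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

s50 = harmonic_partial_sum(50)      # about 4.5
s5000 = harmonic_partial_sum(5000)  # about 9.1, still climbing
```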
Sage will also tell us that this series diverges.

```
sum(1/n,n,1,Infinity)
```

This raises the same error as before:

```
ValueError: Sum is divergent.
```

## Example 6

Consider the series $\sum_{n=1}^{\infty}\frac{1}{n^2}$. Here is a graph of the first 50 partial sums for this series. Now the partial sums approach a limit around 1.6 (the exact answer is $\frac{\pi^2}{6}$). Here is the answer from Sage.

```
%var n
sum(1/n^2,n,1,Infinity)
```

1/6*pi^2

## Geometric Series

One common type of series is called a geometric series, because the terms form a geometric sequence (a sequence is geometric if the ratio of successive terms is a constant, called the common ratio). In other words, $\sum a_n$ is a geometric series if there exists a constant $r$ such that $\frac{a_{n+1}}{a_n}=r$ for all $n$. In general, a geometric series has the form

$$\sum_{n=0}^{\infty}ar^n=a+ar+ar^2+ar^3+\cdots$$

where $a$ is the first term and $r$ is the common ratio (note: it is customary to begin geometric series at $n=0$, although this is not necessary).

## Example 7

Consider the geometric series $\sum_{n=0}^{\infty}\frac{1}{2^n}$. Let's look at the partial sums.
I'll use the sum command in Sage: sum(formula, index variable, start, end). I'm going to separate out the initial 1 (the 0th term) to make the pattern easier to see. As $n\to\infty$, the partial sums approach 2.

Here is a graph of the sequence of partial sums of the series. It appears the limit of the sequence is 2.

## Sum of a Geometric Series

In general, if $|r|<1$, then

$$\sum_{n=0}^{\infty}ar^n=\frac{a}{1-r}$$

If $r\geq 1$ or $r\leq -1$, then this series diverges. [Note: the index must start at 0 for this formula. You can also think of it as $\frac{\text{first term}}{1-r}$, and then the index does not matter.]

## Example 8 (geometric series with $a=3$ and $r=\frac{1}{5}$)

```
%var n
sum(3/5^n,n,0,+Infinity)
```

15/4

## Example 9 (geometric series with $a=1$ and $r=\frac{3}{5}$)

```
%var n
sum(3^n/5^n,n,0,+Infinity)
```

5/2

## Example 10

Find the sum of the geometric series $\sum_{n=0}^{\infty}\frac{1}{3}\left(\frac{2}{5}\right)^n$. In this case, $a=\frac{1}{3}$ (first term) and $r=\frac{2}{5}$ (common ratio). So the sum is $\frac{1/3}{1-2/5}=\frac{5}{9}$.

```
sum(1/3*(2/5)^n,n,0,+Infinity)
```

5/9

```
1/3/(1-2/5)
```

5/9

## Example 11

Let's explore the series using Sage.
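The closed-form sum can be sanity-checked numerically in plain Python (a sketch using Example 8's values, a = 3 and r = 1/5):

```python
def geometric_partial_sum(a, r, n_terms):
    """Sum of the first n_terms terms of a + a*r + a*r^2 + ..."""
    return sum(a * r ** n for n in range(n_terms))

closed_form = 3 / (1 - 1 / 5)                 # a / (1 - r) = 15/4
approx = geometric_partial_sum(3, 1 / 5, 50)  # converges very quickly
```

Because |r| = 1/5 is well below 1, fifty terms already agree with the closed form to machine precision.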
Status: Not open for further replies.

#### dingleb115

##### Member

Logging became disabled on my CalCube Timer. I lost a few months' worth of times and no new ones are being saved. Having trouble figuring out how to fix this myself. Any tips?

#### Ninja Storm

##### Member

I believe it's the version 3. I remember ordering a v3 from Amazon, but I don't know if it's the same seller.

#### tasguitar7

##### Member

How does hair always get into my cubes, is it a problem, and will it affect performance?

#### drewsopchak

##### Member

How does hair always get into my cubes, is it a problem, and will it affect performance?

Hair gets into everybody's cubes. It doesn't seem to be a problem.

#### Ickathu

##### Member

How does hair always get into my cubes, is it a problem, and will it affect performance?

Not a problem, just annoying. I clean mine out fairly often, so it's not too bad for me.

#### drogg

##### Member

Most recons begin with W top G front. Most do W cross, which is the U face. Therefore you should put the green colour on U to solve with a green cross.

Thanks, got it now. Able to do some reconstructions of cubers faster than myself! Anyone else out there who uses a green-only cross who thinks life would have been a little easier just to use a white one! (for video and guide purposes!)

#### wytefury

##### Member

Anyone else out there who uses a green-only cross who thinks life would have been a little easier just to use a white one! (for video and guide purposes!)

Anyone else out there who is CN who laughs at everyone that can only solve the cross on 1 specific color?

#### Carson

Anyone else out there who uses a green-only cross who thinks life would have been a little easier just to use a white one! (for video and guide purposes!)

Yes, I fought with this a lot early on. It is also confusing because I use a different orientation than most for BLD.

Anyone else out there who is CN who laughs at everyone that can only solve the cross on 1 specific color?
I'm not sure if this was meant to be a joke, or if you are just trying to be a jerk?

#### Godmil

Anyone else out there who is CN who laughs at everyone that can only solve the cross on 1 specific color?

Dude, it's not cool to make fun of other cubers' speed/ability. You may not have meant to be rude, but that's certainly how it came across.

#### wytefury

##### Member

Everybody that reads this, my bad. I didn't think about what I posted and posted it anyway. In my head at the time it was funny; now I read it again and see it's really not. Sorry. I'll keep my future posts here positive, helpful, and/or uplifting.

#### Dacuba

##### Member

Is it possible to build any kind of commutative twisty puzzle? How would you prove it if not?

#### Zaterlord

##### Member

What is the highest tps in a solve?

What is the highest tps in a solve?

The highest I have seen is 9.43 in a solve by Feliks and 20.59 in a CLL alg by RCTACameron.

#### drewsopchak

##### Member

Anyone else out there who is CN who laughs at everyone that can only solve the cross on 1 specific color?

No, especially since there are many good one-color cubers. Besides, I switched to CN when I was just sub-20.

#### timeless

##### Member

Anyone know the tps for the WR of all events, like 3x3 to 7x7?

#### Brest

##### Moderator Staff member

anyone know the tps for the WR of all events like 3x3 to 7x7

Code:
```
Event              single                  average
Rubik's Cube       9.18 htps
4x4 Cube           5.75 stps / 6.24 etps   4.83 stps / 5.54 etps
5x5 Cube           4.49 stps / 5.50 etps
2x2 Cube           5.80 htps               7.76 htps
3x3 blindfolded    5.29 stps / 6.52 etps
3x3 one-handed     4.30 htps / 5.35 etps   4.26 stps / 5.28 etps
3x3 with feet      1.61 qtps / 2.25 etps
6x6 Cube           3.63 stps / 4.99 etps
7x7 Cube
4x4 blindfolded    1.69 stps / 2.49 etps
5x5 blindfolded
3x3 multi blind
```

#### drewsopchak

##### Member

Not sure.
I'd recommend looking for a listing that has the version in the title, or go to a proper cubing shop.

#### Neo63

##### Member

Are there any fast square-1 solvers that hold the cube sideways? I found this video today and thought it was really cool.

Last edited by a moderator:

#### Carson

Everybody that reads this, my bad. I didn't think about what I posted and posted it anyway. In my head at the time it was funny; now I read it again and see it's really not. Sorry. I'll keep my future posts here positive, helpful, and/or uplifting.

Things have a tendency to "sound" much different when written than when thought or spoken. I usually read my posts back to make sure they portray the tone I intended... No worries... not like you claimed that real men solve with Rubik's brand cubes or called Stefen a girlie man.

#### KingTim96

##### Member

*TO PEOPLE WHO HAVE MEMORIZED FULL CLL FOR 2x2*

1. In what order did you memorize your alg sets? (as in, like antisune then sune? or like bowtie then moved to pi? if you understand what I'm saying)
2. How long did it take you to reach full CLL?
3. What were your before and after averages on the 2x2? (before CLL and after CLL)
4. And does recognition improve after practice with CLL?

Thanks everybody!

#### Riley

##### Member

*TO PEOPLE WHO HAVE MEMORIZED FULL CLL FOR 2x2*

1. In what order did you memorize your alg sets? (as in, like antisune then sune? or like bowtie then moved to pi? if you understand what I'm saying)
2. How long did it take you to reach full CLL?
3. What were your before and after averages on the 2x2? (before CLL and after CLL)
4. And does recognition improve after practice with CLL?

Thanks everybody!

I don't know full CLL, but I'm learning it now (kinda).

1. Learning CLLs in groups always helps. Just like people learning OLLs, they usually learn them by shape. You can use whatever order of groups you want.

2 and 3 don't apply to me yet.

For number 4, yes, just like with learning OLL, recognition comes with practice.
My question: Anyone know what happened here, a bit after 0:12? http://www.youtube.com/watch?v=dKJltNcbgOs&feature=youtu.be My theory is that I hit the timer twice really quickly, but I tried doing it again, and it only stops the timer; it doesn't start inspection again.
Dakota Reference Manual, Version 6.4
Large-Scale Engineering Optimization and Uncertainty Analysis

lognormal_uncertain

Aleatory uncertain variable - lognormal

## Topics

This keyword is related to the topics:

## Specification

Alias: none
Argument(s): INTEGER
Default: no lognormal uncertain variables

| Required/Optional | Description of Group | Dakota Keyword | Dakota Keyword Description |
| --- | --- | --- | --- |
| Required (Choose One) | Group 1 | lambdas | First parameter of the lognormal distribution (option 3) |
| | | means | First parameter of the lognormal distribution (options 1 & 2) |
| Optional | | lower_bounds | Specify minimum values |
| Optional | | upper_bounds | Specify maximum values |
| Optional | | initial_point | Initial values |
| Optional | | descriptors | Labels for the variables |

## Description

If the logarithm of an uncertain variable X has a normal distribution, that is, $\ln X\sim N(\lambda,\zeta^2)$, then X is distributed with a lognormal distribution. The lognormal is often used to model:

1. time to perform some task
2. variables which are the product of a large number of other quantities, by the Central Limit Theorem
3. quantities which cannot have negative values.

Within the lognormal uncertain optional group specification, the number of lognormal uncertain variables, the means, and either standard deviations or error factors must be specified; the distribution lower and upper bounds and variable descriptors are optional specifications. These distribution bounds can be used to truncate the tails of lognormal distributions, which, as for the bounded normal, can result in the mean and the standard deviation of the sample data being different from the mean and standard deviation of the underlying distribution (see "bounded lognormal" and "bounded lognormal-n" distribution types in [89]).
For the lognormal variables, one may specify either the mean and standard deviation of the actual lognormal distribution (option 1), the mean and error factor of the actual lognormal distribution (option 2), or the mean ("lambda") and standard deviation ("zeta") of the underlying normal distribution (option 3).

The conversion equations from lognormal mean $\mu$ and either lognormal error factor $\epsilon$ or lognormal standard deviation $\sigma$ to the mean $\lambda$ and standard deviation $\zeta$ of the underlying normal distribution are as follows:

$$\zeta=\frac{\ln\epsilon}{1.645}$$

$$\zeta^2=\ln\left(\frac{\sigma^2}{\mu^2}+1\right)$$

$$\lambda=\ln\mu-\frac{\zeta^2}{2}$$

Conversions from $\lambda$ and $\zeta$ back to $\mu$ and $\epsilon$ or $\sigma$ are as follows:

$$\mu=e^{\lambda+\zeta^2/2}$$

$$\epsilon=e^{1.645\,\zeta}$$

$$\sigma^2=e^{2\lambda+\zeta^2}\left(e^{\zeta^2}-1\right)$$

The density function for the lognormal distribution is:

$$f(x)=\frac{1}{x\,\zeta\sqrt{2\pi}}\,e^{-\frac{(\ln x-\lambda)^2}{2\zeta^2}}$$

## Theory

When used with design of experiments and multidimensional parameter studies, distribution bounds are inferred. These bounds are $[0,\mu+3\sigma]$. For vector and centered parameter studies, an inferred initial starting point is needed for the uncertain variables. These variables are initialized to their means for these studies.
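These conversions are easy to script. The sketch below is not Dakota code (the function names are hypothetical), and it assumes the standard error-factor convention $\epsilon=e^{1.645\,\zeta}$:

```python
import math

def underlying_normal_params(mean, std_dev=None, error_factor=None):
    """Convert a lognormal's (mean, std dev) -- option 1 -- or
    (mean, error factor) -- option 2 -- into the (lambda, zeta)
    of the underlying normal distribution -- option 3."""
    if error_factor is not None:
        zeta = math.log(error_factor) / 1.645
    else:
        zeta = math.sqrt(math.log(1.0 + (std_dev / mean) ** 2))
    lam = math.log(mean) - 0.5 * zeta ** 2
    return lam, zeta

def lognormal_mean(lam, zeta):
    """Round trip: mean of the lognormal recovered from (lambda, zeta)."""
    return math.exp(lam + 0.5 * zeta ** 2)
```

A round trip through both functions should reproduce the lognormal mean exactly, which is a convenient check on the formulas.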
# Enthalpy and its use in Gibbs Free Energy

1. Nov 26, 2009

### cavalier

Lately I've been struggling with the idea of enthalpy and what it means conceptually, especially in its use in Gibbs free energy. There is nothing in the definition of change in enthalpy that would connect logically to spontaneity or free energy. After thinking about it for a couple of days, here are my ideas about spontaneity. I expect some or even all of them to be wrong, so it would help me very much if someone could correct my mistakes or show me where I managed to get things right.

1. Enthalpy is just a way of measuring the energy released by something at constant pressure. We could measure it using the change in internal energy at constant volume, but in that case pressure would not be constant.

2. Things tend to become disordered because disorder is statistically favored. Things also tend to want to lose potential energy. It takes a certain amount of the latter tendency to overcome the former tendency, and vice versa.

3. $$\Delta G=\Delta H - T\Delta S$$ is just a way of applying (2) to systems of gases, most of the time involving chemical reactions.

4. $$\Delta G=-nF\Delta E$$ is an application of (2) to electrochemistry.

5. Any spontaneous change where entropy decreases is accompanied and balanced out by the conversion of potential energy into other forms.

2. Nov 26, 2009

### Gerenuk

You ask good questions. I think that is the way to understand physics. You seem to have used "applications" of enthalpy only, which isn't the best way to understand its origins. I'll try to explain the view I'm comfortable with. First I should say that all laws are originally expressed as differentials, like

$$\mathrm{d}E=T\mathrm{d}S-p\mathrm{d}V$$

That is important. All other equations are special cases only.
The definition of enthalpy is

$$H=E+pV$$

That has only mathematical reasons, as with this definition the differential now uses another variable (Legendre transformation):

$$\mathrm{d}H=T\mathrm{d}S+V\mathrm{d}p$$

Now in a constant-pressure process ($\mathrm{d}p=0$, $\mathrm{d}Q=T\mathrm{d}S$) we have exactly

$$Q=\Delta H\qquad(\text{const }p)$$

That's why the heat in a chemical reaction is given by the change in enthalpy.

The free energy concept is something else. Maybe you can find a good statistical mechanics book (Reichl, maybe?) and look up the section about "availability". Basically, if you have a system in contact with an environment, then the total entropy of these two systems can only increase. Consequently the work that you can extract from the system will be equal to

$$A=E-T_\text{envir}S+p_\text{envir}V$$

Incidentally, for processes with some defined values in the final variables this availability will be equal to the change in one of the 4 thermodynamic potentials (E, F, H, G). Have a look at that section.

Enthalpy is rather the change of plain heat Q in a process where pressure is kept constant. The work at constant pressure is of course $W=-p\Delta V$.

It is not a law that things want to lose energy! The only real law says that total entropy (system + environment) wants to increase. The statement that "free energy" should be minimized derives from it under special circumstances. One can show that the increase of total entropy (system + environment) is equivalent to the statement that the Gibbs free energy of the system wants to decrease, if you consider only final states that have a defined pressure and temperature. And by the way, you can conclude that the normal internal energy of the system goes to a minimum if we have defined values of entropy and volume of the final state.
The importance for chemistry stems from the fact that a) if you want to know the heat at constant pressure, you need the difference in enthalpy, and b) since after the experiment you have a defined temperature and pressure, you want to minimize Gibbs free energy in order to find the final state.

The total entropy never decreases. There is no statement about the entropy of the components. Saying the individual entropy is balanced out is very hand-wavy, but maybe some people find it useful to think this way.

I hope I haven't mixed up stuff, so it's up to you to read and ask questions.

Last edited: Nov 26, 2009

3. Nov 29, 2009

### cavalier

This confuses me. I usually think of heat as energy being transferred from one thing to another. Can we really speak of a transfer of energy in all chemical reactions? Suppose we have a reaction of two solid chemicals that produces gas at constant temperature. The enthalpy for such a reaction will be positive, which makes sense because the gas produced has a lot of random molecular translational energy, but that energy simply arises from the chemical energy; I would argue there wasn't really a transfer.

Does E in this equation refer to something besides internal energy?

Would it make sense to apply enthalpy to changes occurring at changing pressure? Engines perform under nonconstant pressure. Can we determine the enthalpy of an engine cycle?

It was definitely hand-wavy. The idea came to me when I thought about a sealed container of muddy water. In space, the mud will never settle, because a mixture is statistically favored. On Earth, the mud will settle. If I have the correct idea of entropy, it seems like the second law is being violated.
It occurred to me that potential energy is being converted into some other form when the denser mud sinks, but I'm not sure how that fits in, and I don't know how to reconcile the sinking of mud in water with the second law and Gibbs free energy (or even if it makes sense to apply Gibbs free energy to this change).

4. Nov 29, 2009

### Gerenuk

Hmm, I haven't understood what you meant in this paragraph.

Yes, you are probably right that one should rather use U.

Enthalpy is defined for all equilibrium states (i.e. homogeneous temperature and pressure) and you can calculate it. Here are basically the tools that you have:

$$\Delta U=Q-\int p\,\mathrm{d}V$$
$$H=U+pV$$
$$\Delta H=Q+\int V\,\mathrm{d}p$$
$$W=-\int p\,\mathrm{d}V$$
$$\Delta U=Q-p\Delta V\qquad\text{(const p or V)}$$
$$\Delta H=Q+V\Delta p\qquad\text{(const p or V)}$$
$$W=-p\Delta V\qquad\text{(const p or V)}$$

Now you can use any of these equations purely mathematically without even understanding what they stand for. But take care that some of the equations apply only under certain constraints on the processes; in doubt you have to apply the more general equations. With these equations you see that only for constant-pressure processes is the change in enthalpy equal to the heat. In general these quantities aren't equal, but they exist. They might be useful for other purposes.

The second law of entropy only works for (energy-)isolated systems! However, the mud interacts with the gravitation of the earth, upon which both objects move closer together. The earth and the mud together could be considered an isolated system. The entropy of the earth plus the entropy of the mud will increase as a total, but what can we say about a "subcomponent" like the mud? One can show that

(maximum entropy of earth + mud) is equivalent to (minimum free energy of the mud)

This way we can make statements about the mud. [I have to think for a moment which free energy to take here. The problem is that mud isn't merely a gas with pressure and temperature only.
It has macroscopic particle positions and is also subjected to gravity. This might require some special free energy.]

5. Mar 12, 2011

### Zeppos10

In the posts above, no systematic distinction is made between the internal pressure and the environmental pressure. In post #3 Cavalier defines the free enthalpy / Gibbs free energy as A = G = U + p(env)V - T(env)S, but he does not define enthalpy as H = U + p(env)V. Maybe the attachment (post #7) at https://www.physicsforums.com/showthread.php?t=88987&referrerid=219693 can resolve some of the problems: the discussion above indicates many entangled problems.
# is this an annuity?

• Jul 28th 2010, 06:20 AM
jamesk486

Today is January 1, 2011. On January 1 of the years 2012 through 2021 you are to receive $50,000. If the cash flows are discounted at 10% a year, what is the present value of these cash flows as of today? (Give your answer to the nearest dollar.)

• Jul 28th 2010, 08:50 AM
SpringFan25

Yes, that is an annuity. The formula for the present value of an annuity is

$\displaystyle S \times a_{\bar{n|}} = S \times \frac{1-v^n}{i}$

S = payment amount
i = annual interest rate
v = 1/(1+i)
n = number of payments

It is difficult to believe that you would have been asked to do this question without being told the formula to do it. Review your course notes or read this: http://www.mathhelpforum.com/math-he...nt-values.html

• Jul 29th 2010, 09:12 PM
jamesk486

Do we count starting from 2012? Not sure if it is 9 years or 10 years.

• Jul 30th 2010, 03:11 PM
SpringFan25

I would read "2012 through 2021" as 2012-2021 inclusive, which is 10 years.
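A quick numerical check of the formula (plain Python; since payments start one year from today, this is an annuity-immediate with n = 10):

```python
# PV of 10 end-of-year payments of 50,000 discounted at i = 10%
S, i, n = 50_000, 0.10, 10
v = 1 / (1 + i)
pv = S * (1 - v ** n) / i  # about 307,228 "to the nearest dollar"
```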
I started blogging about crypto a year ago1. Since then, I’ve kept my process consistent: wake up, write about whatever is puzzling me, and publish regularly2. As this blog enters its sophomore year, I thought I’d spend some time analyzing the themes of my posts to see what I found most puzzling3. I took all my public posts4 and gave each of them a primary theme, which was difficult in some cases because many posts have many themes. But a post on fiatcoins, for example, is not really a post about fiatcoins but a post about “who has power.” The two themes I wrote about most were “who has power” and “value capture” by a wide margin. My top themes in order of frequency: 1. Who has power? (37%) 2. How do you capture value? (34%) 3. Scarcity in non-monetary assets (11%) 4. How do you get users? (6%) 5. How do you create value? (3%) 6. Other (9%) For this post, let’s talk about “who has power?” The idealistic application of blockchains takes power from the powerful operators of legacy systems and gives them to the people. I wrote in Stateful Protocols: State aggregators enabled incredible outcomes: all the magical internet apps we use today. There’s no question that the products and services enabled by Internet 2.0 had a massive positive impact on society. However, we’re starting to see some of the major drawbacks (e.g. privacy, monopolistic behavior) and consumers are starting to retaliate. Centralization is the root cause of these problems. When a state aggregator grows to a certain size, the incentives of the state aggregator, the apps built on top of it, and the users misalign. This is covered well by Chris Dixon in Why Decentralization Matters. And in Disaggregation Theory: Users will have to make a choice between the superior UX of today’s internet companies and the superior privacy and data sovereignty of decentralized internet companies. 
I cringe a bit at how buzzwordy my early posts were, but that’s where I was at in handling the topics: very punchy at the conceptual level (centralization bad, decentralization good; power go from them to us) and very wishy-washy at the practical level. I’m happier with my treatment of specific topics like “fees” vs “rents”, “incremental” vs “anarchy”, “voting” vs “governance”, and “decentralization” (and what does that even really mean?). In each case, there was a subject matter of e.g. fees, but the primary theme was power.

• Rents are charged by people who have too much power and abuse it; when they’re not rents, fees are charged by people who created value and are deservedly capturing it
• Crypto-anarchy is about making it impossible for people to have the power to marginalize groups
• Voting may seem like governance, but does not mean that power is distributed
• Decentralization is poorly defined and implies fair distribution of power; those with power can invoke it to manipulate perceptions of power

The launch of fiatcoins like GUSD, PAX, USDC and others offered useful real-world examples of “who has power?” The issuers clearly have power. And recently we saw Gemini use that power to close accounts that attempted to redeem GUSD for USD. As I predicted: use regulated stablecoins, get censorship.

Is it bad that Gemini can shut them down? Probably not. They probably have good reason to shut those accounts down. We rely on powerful people and platforms to take care of us. There are lots of benefits! We don’t know if we can replace those benefits if we take that power away. But if you don’t think anybody should have the power to censor, then perhaps you don’t care about losing those benefits. Who has power can be observed, but who should have power is harder to answer5.
It’s a question of trade-offs:

• If I give operators of a smart contract protocol more power but in exchange I get better performance and a better shot at getting users and developers, should I take it? What’s the optimal point on that spectrum6?
• Are there cases where I rely on benefits offered by powerful operators of platforms? Is there always a powerful operator-less working alternative that delivers replacements to those benefits?
• Is it overall better or worse for society if who has power changes? And does it change in the “more fair” way we have in mind? Or does it naturally pool again?7

I’m not sure what I think about most of these things yet, but I am eager to continue exploring topics like these as I enter my second year. Thanks for all your support, ideas and feedback.

1. My first post was on March 7th, 2018. Fun factoid: it came from a chart I made to respond to a Crypto Bobby thread on fundamental analysis.
3. Josh Wolfe has this awesome line in his recent interview with Shane Parrish that’s along the lines of: you know nobody knows the answer to a topic when there are a lot of books on that topic. The presence of those books is an existence proof that nobody knows what they’re talking about. If we buy this logic, the existence of many posts about a given theme could indicate I don’t know what I’m talking about for that theme. Makes you think…
4. Didn’t get around to labeling my member posts, but I’ll probably do that soon.
5. I’m not sure this question has a right answer, but I know it has many wrong ones.
6. I wrote of USDC in a member update: a programmable and stable money is a boon to crypto adoption as long as we remember that it’s censorable. Coinbase has the opportunity to increase the probability that any one of their users goes from crypto speculator to crypto user (the most important step in the funnel) so I guess I’m cautiously cheering them on.
7.
As I wrote here: “Niall Ferguson, a historian I admire, spoke at an event I attended a couple weeks ago. Feeling uncertain about the overall impact of crypto on society, I asked him how “democratizing” revolutions tend to work out. He said they almost never work because a new, even more hierarchical power structure emerges on the new “decentralized” foundation.”
A parallel plate capacitor is filled by a dielectric

Question: A parallel plate capacitor is filled by a dielectric whose relative permittivity varies with the applied voltage (U) as ε = αU, where α = 2 V⁻¹. A similar capacitor with no dielectric is charged to U₀ = 78 V. It is then connected to the uncharged capacitor with the dielectric. Find the final voltage on the capacitors.

Solution: Since the capacitors are connected in parallel, the potential difference across them is the same; call the final voltage U. If C is the capacitance of the capacitor without the dielectric, its charge is

Q₁ = CU

The dielectric-filled capacitor has capacitance εC = αUC, so its charge is

Q₂ = (αUC)·U = αCU²

The initial charge is

Q₀ = CU₀

By conservation of charge, Q₀ = Q₁ + Q₂:

CU₀ = CU + αCU²
αU² + U − U₀ = 0
2U² + U − 78 = 0

Solving this quadratic and taking the positive root, U = (−1 + √(1 + 624))/4 = 6 V
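As a quick numerical sanity check (not part of the original solution; `final_voltage` is a name introduced here), the charge-balance quadratic can be solved in a few lines of Python:

```python
# Charge conservation for the two parallel-connected capacitors:
#   C*U0 = C*U + alpha*C*U**2   (the dielectric capacitor has capacitance alpha*U*C)
# The common capacitance C cancels, leaving alpha*U**2 + U - U0 = 0.
import math

def final_voltage(u0, alpha):
    """Positive root of alpha*U^2 + U - U0 = 0 (quadratic formula)."""
    return (-1 + math.sqrt(1 + 4 * alpha * u0)) / (2 * alpha)

U = final_voltage(78, 2)  # the problem's numbers: U0 = 78 V, alpha = 2 / V
```

With U₀ = 78 V and α = 2 V⁻¹ the discriminant is 1 + 624 = 625, a perfect square, which is why the answer comes out to a clean 6 V.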
Subject: GK, asked on 5/11/12

# Riddle

This mother comes from a family of eight,
Supports her children in spite of their weight,
Turns around without being called,
Has held you since the time you crawled.
Who is she?

Answer: "Mother" Earth. The family of eight are the eight planets; all of the world's population is quite a load; the earth is always spinning (turning around); and unless you're an alien, it has held you since the time you crawled.
Math Dictionary

acute

Use the adjective acute to describe something sharp or extremely serious. It can mean sharp or penetrating in intellect, insight, or perception ("an acute critic of music"; "a critic with acute judgment"); extremely sensitive, even to slight details or impressions ("his hearing was unusually acute"; "in the dark my sense of smell and hearing become more acute"); or, of an undesirable situation or feeling, very severe or intense ("the war aggravated an acute economic crisis"; "the report has caused acute embarrassment to the government"). The word comes from the Latin acutus, meaning sharp or pointed. In some languages, the acute accent is the mark ´ placed over a vowel to show that it is pronounced in a certain way (as in French) or on a higher musical pitch relative to neighbouring syllables (as in ancient Greek).

In medicine, "acute" is a measure of the time scale of a disease, in contrast to "subacute" and "chronic": an acute disease or injury begins suddenly, with a rapid onset and a short but severe course, while "subacute" indicates longer duration or less rapid change. Acute care is the level of care in the health care system that consists of emergency treatment and critical care. Some serious illnesses formerly considered acute (such as myocardial infarction) are now recognized to be acute episodes of chronic conditions; an acute condition can sometimes become chronic, while a chronic condition may suddenly present with acute symptoms.

acute angle

An angle whose measure is greater than 0° and less than 90° (less than π/2 radians). Angles are formed by two rays that begin at the same point, and anytime you see a pointy, wedge-shaped angle, you have an acute angle; for example, an angle ∠ABC measuring 30° is acute. By contrast, an angle of exactly 90° is a right angle, and an angle measuring more than 90° but less than 180° is obtuse. (Hawaiian translation: Huina `Oi.)

acute triangle

A triangle in which all three interior angles measure less than 90°. For example, in an equilateral triangle all three angles measure 60°, making it an acute triangle. Related: for a right triangle with one acute angle θ, the tangent of θ is defined to be the ratio of the length of the side opposite θ to the length of the side adjacent to it.
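The classifications above translate directly into code. Here is a small Python sketch (the function names are my own) that labels an angle and a triangle by these definitions:

```python
# Classify an angle (in degrees) using the dictionary definitions:
# acute < 90, right = 90, obtuse between 90 and 180, straight = 180, reflex > 180.
def classify_angle(deg):
    if not 0 < deg < 360:
        raise ValueError("expected an angle strictly between 0 and 360 degrees")
    if deg < 90:
        return "acute"
    if deg == 90:
        return "right"
    if deg < 180:
        return "obtuse"
    if deg == 180:
        return "straight"
    return "reflex"

def classify_triangle_by_angles(a, b, c):
    """All three angles < 90 -> acute; largest == 90 -> right; largest > 90 -> obtuse."""
    if a + b + c != 180:
        raise ValueError("angles of a triangle must sum to 180 degrees")
    largest = max(a, b, c)
    if largest < 90:
        return "acute"
    if largest == 90:
        return "right"
    return "obtuse"
```

Only the largest angle matters for the triangle case: since the three angles sum to 180°, at most one of them can reach 90°.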
On zero-divisor graphs of quotient rings and complemented zero-divisor graphs

Document Type: Research Paper
Affiliation: Lorestan University

Abstract

For an arbitrary ring $R$, the zero-divisor graph of $R$, denoted by $\Gamma (R)$, is the undirected simple graph whose vertices are the nonzero zero-divisors of $R$, in which two distinct vertices $x$ and $y$ are adjacent if and only if either $xy=0$ or $yx=0$. It is well known that for any commutative ring $R$, $\Gamma (R) \cong \Gamma (T(R))$, where $T(R)$ is the (total) quotient ring of $R$. In this paper we extend this fact to certain noncommutative rings, for example reduced rings, right (left) self-injective rings, and one-sided Artinian rings. Necessary and sufficient conditions for two reduced right Goldie rings to have isomorphic zero-divisor graphs are given. We also extend some known results about zero-divisor graphs from the commutative to the noncommutative setting, in particular for complemented and uniquely complemented graphs.
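For readers new to the construction, here is an illustrative Python sketch (not from the paper, which treats noncommutative rings) that builds $\Gamma(\mathbb{Z}_n)$ for the commutative ring $\mathbb{Z}_n$ directly from the definition above:

```python
# Zero-divisor graph of Z_n: vertices are the nonzero zero-divisors of Z_n,
# and distinct vertices x, y are adjacent iff x*y = 0 (mod n).  Since Z_n is
# commutative, the condition "xy = 0 or yx = 0" collapses to one check.
def zero_divisor_graph(n):
    """Return (vertices, edges) of Gamma(Z_n) as a set and a set of frozensets."""
    vertices = {x for x in range(1, n)
                if any(x * y % n == 0 for y in range(1, n))}
    edges = {frozenset((x, y)) for x in vertices for y in vertices
             if x < y and x * y % n == 0}
    return vertices, edges
```

For example, in $\mathbb{Z}_6$ the nonzero zero-divisors are $\{2,3,4\}$, and $2\cdot 3 = 3\cdot 4 = 0$ while $2\cdot 4 = 2 \ne 0$, so $\Gamma(\mathbb{Z}_6)$ is a path on three vertices.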
# Math Help - Horizontal Asymptote Defied

1. ## Horizontal Asymptote Defied

I don't know whether to post this in an Algebra section or a Calculus section, so I just chose Algebra. I have a small understanding of limits, if it is possible to solve the following problem in that manner.

To the point: the following function should have a horizontal asymptote at y = 1 and vertical asymptotes at x = -2 and x = 1. All of these are true, except that the point (-3.5, 1) exists on the graph, and there is no possibility of it being a hole because there is no canceling factor in the numerator and the denominator, for two reasons:
1) The numerator cannot be factored
2) Both of the factors of the denominator are vertical asymptotes

If someone could attempt to rationalize this for me I would be very much appreciative. I love math and spend countless hours working with it, for fun. I was studying Pre-Calculus in my free time in my Algebra 2 w/ Trig. class; I had a lot of free time due to my extensive understanding of mathematics. Here is the original equation I am referring to:

(x^2 + 3x + 1)/(x^2 + x - 2)

2. I'm not sure what's troubling you about this. You have correctly identified the asymptotes, it seems, but plugging -3.5 into that expression does not give 1, though if it did I don't see why that would be an issue. Incidentally, the numerator does factor (apply the quadratic formula, etc).

3. Pardon me, I meant that the point I don't understand is at (-1.5, 1). Shouldn't this point not exist because there is a horizontal asymptote at y = 1? Therefore, the line should never cross/touch 1.

4. Originally Posted by sudox
Pardon me, I meant that the point I don't understand is at (-1.5, 1). Shouldn't this point not exist because there is a horizontal asymptote at y = 1? Therefore, the line should never cross/touch 1.

You have the misconception that a graph cannot touch or cross a horizontal asymptote.
Such a misconception is typically the result of either lazy teaching or incompetence (or both) at the lower levels (Teacher: "A graph can never cross an asymptote."). The fact that the curriculum at lower levels invariably includes only examples of graphs that do not cross their horizontal asymptotes (such as y = 1/x and y = 1/x^2) serves only to reinforce this unfortunate misconception. Better learning would occur if the curriculum included examples such as y = x/(x^2 + 1).

Now, listen very carefully:

A graph can never touch or cross a vertical asymptote. But it IS possible for a graph to touch or cross a horizontal asymptote. Go back and look very carefully at the definition of each type of asymptote to understand why.

The classic example of a graph crossing its horizontal asymptote is the graph of the function y = x/(x^2 + 1). The x-axis is a horizontal asymptote, but the graph obviously passes through the origin.

5. Thank you for revealing that information to me; it is truly pathetic that a mathematics teacher doesn't know this, and even more pathetic that it isn't taught in this manner. I looked up Horizontal Asymptotes and got a confirmation of your explanation here: Horizontal Asymptotes
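To make this concrete, here is a quick numerical check (my own snippet, not from the thread) that the rational function from the original post really does meet its horizontal asymptote:

```python
# f(x) = (x^2 + 3x + 1)/(x^2 + x - 2) has horizontal asymptote y = 1,
# yet f(x) = 1 exactly when x^2 + 3x + 1 = x^2 + x - 2, i.e. 2x = -3,
# so the graph crosses the asymptote at x = -1.5.
def f(x):
    return (x**2 + 3*x + 1) / (x**2 + x - 2)

crossing = -1.5          # root of 2x + 3 = 0
value = f(crossing)      # equals the asymptote height, y = 1
```

The asymptote only constrains the behaviour as x → ±∞ (f(x) → 1 from either side for large |x|); it says nothing about what the graph does at finite x, which is exactly the point the responder makes.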
# Topic Archive: game theory

Tuesday, April 30, 2013, 4:00 pm, Yeshiva University, Furst Hall, Amsterdam Ave. & 185th Street

# The theory of infinite games, with examples, including infinite chess

The City University of New York

This will be a talk on April 30, 2013 for a joint meeting of the Yeshiva University Mathematics Club and the Yeshiva University Philosophy Club. I will give a general introduction to the theory of infinite games, suitable for mathematicians and philosophers. What does it mean to play an infinitely long game? What does it mean to have a winning strategy for such a game? Is there any reason to think that every game should have a winning strategy for one player or another? Could there be a game, such that neither player has a way to force a win? Must every computable game have a computable winning strategy? I will present several game paradoxes and example infinitary games, including an infinitary version of the game of Nim, and several examples from infinite chess.

Set theory seminar, Friday, March 1, 2013, 12:00 am, GC 5383

# The omega one of chess

The City University of New York

This talk will be based on my recent paper with C. D. A. Evans, Transfinite game values in infinite chess.

Infinite chess is chess played on an infinite chessboard. Since checkmate, when it occurs, does so after finitely many moves, this is technically what is known as an open game, and is therefore subject to the theory of open games, including the theory of ordinal game values. In this talk, I will give a general introduction to the theory of ordinal game values for ordinal games, before diving into several examples illustrating high transfinite game values in infinite chess. The supremum of these values is the omega one of chess, denoted by $\omega_1^{\mathfrak{Ch}}$ in the context of finite positions and by $\omega_1^{\mathfrak{Ch}_{\hskip-2ex \atop \sim}}$ in the context of all positions, including those with infinitely many pieces.
For lower bounds, we have specific positions with transfinite game values of $\omega$, $\omega^2$, $\omega^2\cdot k$ and $\omega^3$. By embedding trees into chess, we show that there is a computable infinite chess position that is a win for white if the players are required to play according to a deterministic computable strategy, but which is a draw without that restriction. Finally, we prove that every countable ordinal arises as the game value of a position in infinite three-dimensional chess, and consequently the omega one of infinite three-dimensional chess is as large as it can be, namely, true $\omega_1$.

Computational Logic Seminar, Tuesday, January 29, 2013, 2:00 pm, GC 3209
lilypond-devel
[Top][All Lists]

## Re: Does \hspace need a vertical extent?

From: Neil Puttock
Subject: Re: Does \hspace need a vertical extent?
Date: Sat, 15 Aug 2009 15:29:28 +0100

2009/8/10 Thomas Morgan <address@hidden>:
> I'm not aware of this (though I don't doubt that you're right).
> Could you give me an example?

See the attached image, which shows the change in output for the regression test `chord-names-languages.ly`.

It's not a particularly serious issue, but there are likely to be other cases (uncovered by the regression tests) where the current behaviour of \hspace is taken into consideration.

> It works in the context of a `concat`, so I think the problem is some
> kind of interaction with `word-space`.

Indeed; there doesn't seem to be a simple relationship between `word-space` and the amount of hspace which performs a shift of (- word-space).

Regards,
Neil

Attachment: lily-f8282e88.compare.jpeg
Description: JPEG image
# How to find the limit of this sequence $u_n$? (defined by recurrence)

$\left(u_n\right)$ is a sequence defined by recurrence as follows:

$\begin{cases} u_1=\displaystyle\frac{8}{3}\\ u_{n+1}=\displaystyle\frac{16}{8-u_n}, \forall n\in \mathbb{N} \end{cases}$

The first part of this question is to show that $u_n<4, \forall n\in \mathbb{N}$, which I have done by induction. The second part is to show that the sequence is monotonically increasing, and I have done that too. The third part is to show that $\left(u_n\right)$ converges, which is easy with the previous two parts done, but it also asks to determine the limit, and I'm not sure it's immediate that the limit is 4. I've verified it computationally, but I don't see why, just because we have shown $$u_n<4, \forall n\in \mathbb{N},$$ this value should be considered the limit. Why not $3.9$? Is there an analytic way of determining the value of this limit?

• The limit must satisfy $L=\frac{16}{8-L}$ – kingW3 Mar 30 '17 at 20:18

The limit of this sequence is $4$. Since the sequence converges, if $l$ is its limit we may pass to the limit in the recurrence:

$\lim\limits_{n \rightarrow +\infty} u_n = \lim\limits_{n \rightarrow +\infty} u_{n+1} \Leftrightarrow l = \frac{16}{8-l}$

Hence $l^2 - 8l + 16 = (l-4)^2 = 0$, so we finally get $l = 4$: the limit of the sequence is $4$.

As $n \rightarrow \infty$, $u_n \approx u_{n+1}$:

$$\lim_{n \to \infty} u_{n}=\lim_{n \to \infty}\displaystyle\frac{16}{8-u_n} \implies \lim_{n \to \infty}u_n(8-u_n)=16 \implies \lim_{n \to \infty}u_n=4$$
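A numerical check of the fixed-point argument, using exact rational arithmetic (my own snippet; the closed form in the comment is not from the question, but is easy to verify by induction from the recurrence):

```python
# Iterate u_{n+1} = 16/(8 - u_n) from u_1 = 8/3 with exact fractions.
# The fixed-point equation L = 16/(8 - L) gives L^2 - 8L + 16 = (L - 4)^2 = 0,
# so L = 4 is the only candidate limit.  In fact u_n = 4(n+1)/(n+2), which
# makes both u_n < 4 and u_n -> 4 transparent.
from fractions import Fraction

u = Fraction(8, 3)          # u_1
for n in range(1, 61):      # compute up to u_61
    u = 16 / (8 - u)

# u is now u_61 = 4*62/63 = 248/63: still strictly below 4, creeping toward it
```

Note the convergence is slow (the error is $4/(n+2)$, order $1/n$) because $4$ is a double root of the fixed-point equation; that is why a naive float experiment can make the limit look like something shy of 4.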
# zbMATH — the first resource for mathematics

Vortex pinning with bounded fields for the Ginzburg-Landau equation. (English) Zbl 1040.35108

From the authors' abstract: The vortex pinning in solutions to the Ginzburg-Landau equation is investigated. The coefficient, $$a(x)$$, in the Ginzburg-Landau free energy modelling non-uniform superconductivity is nonnegative and is allowed to vanish at a finite number of points. For a sufficiently large applied magnetic field and for all sufficiently large values of the Ginzburg-Landau parameter $$k=1/\varepsilon$$, it is shown that minimizers have nontrivial vortex structures. The existence of local minimizers exhibiting arbitrary vortex patterns pinned near the zeros of $$a(x)$$ is also proved.

##### MSC:

35Q55 NLS equations (nonlinear Schrödinger equations)
82D55 Statistical mechanical studies of superconductors
Now showing items 1-3 of 3

• Measurement of the electroweak production of dijets in association with a Z-boson and distributions sensitive to vector boson fusion in proton-proton collisions at √s = 8 TeV using the ATLAS detector  (Peer reviewed; Journal article, 2014-04-07) Measurements of fiducial cross sections for the electroweak production of two jets in association with a Z-boson are presented. The measurements are performed using 20.3 fb−1 of proton-proton collision data collected at a ...
• Measurement of the low-mass Drell-Yan differential cross section at √s = 7 TeV using the ATLAS detector  (Peer reviewed; Journal article, 2014-06) The differential cross section for the process Z/γ ∗ → ℓℓ (ℓ = e, μ) as a function of dilepton invariant mass is measured in pp collisions at √s = 7 TeV at the LHC using the ATLAS detector. The measurement is performed in ...
• Search for pair-produced third-generation squarks decaying via charm quarks or in compressed supersymmetric scenarios in $pp$ collisions at $\sqrt{s}=8$ TeV with the ATLAS detector  (Peer reviewed; Journal article, 2014-09) Results of a search for supersymmetry via direct production of third-generation squarks are reported, using 20.3 fb$^{-1}$ of proton-proton collision data at $\sqrt{s}=8$ TeV recorded by the ATLAS experiment at the ...
Yesterday, I was helping another colleague get set up with markdown for his dissertation and realized that I did not have a convenient way of giving him the CSL file that I use to automagically format my footnotes according to the Chicago Manual of Style. So here is a link to this file posted to Gist. CSL is an open standard that defines how bibliographic elements are put together (e.g. parentheses versus footnotes). You can use this with many tools, but I use it with Pandoc. To get it to work, you need to define two files when you run Pandoc: 1. You need the --bibliography flag to point to a BibTeX file with your bibliographic information so that Pandoc knows which author wrote which book. (This is the format that BibDesk and JabRef save in automatically.) 2. You need the --csl flag to point to the CSL file so that Pandoc knows how you want things to look. An example command might look like this: pandoc --bibliography=~/Dropbox/mybib.bib --csl=~/Dropbox/chicago.csl -o test.html test.md You can have multiple CSL files for different formats, say one for author–date and one for footnotes. Then, on a project-by-project basis you can easily switch between them without having to change your source document. The source will just contain a Pandoc citation that looks like this [@gerson03 67] and it will get formatted differently based on which CSL you use.
#### Archived

This topic is now archived and is closed to further replies.

# OpenGL [OPENGL] How do Light and materials interact?

## Recommended Posts

I've been reading the Redbook and something that I haven't picked up on is the interaction between light and materials. For example, say I have a light with specular of X: how does that relate to a surface with a material specular of Y? Actually, I don't even 'get' how a light can be specular... Is the specular component of a light source the portion that 'lights up' the specular portion of the material, e.g. if a light with no specular component shines on a shiny glass, does the glass remain dull?

Many thanks for any direction

Chris

##### Share on other sites

check the 'specular term' section of the red book (pg 208, 2nd edit); it is very well explained there how the specular adds to the colour. Also, Computer Graphics: Principles and Practice has quite a lot of info on it as well.

http://members.xoom.com/myBollux

##### Share on other sites

Thanks again Zed. I took your advice and... as a result reread almost the entire chapter ...

From what I understand, the surface's material properties specify the maximum properties that the material is capable of being lit to. The light components then individually add to the 'black' components on the surface material to produce the relative lighting levels for each component.

Did I get it right?
Chris

##### Share on other sites

>>The light components then individually add to the 'black' components on the surface material to produce the relative lighting levels for each component<<

they add to the current colour, eg for diffuse lighting (other forms are similar):

how much diffuse light the point receives * colour of diffuse light * colour of diffuse material

1 (totally lit) * red * 1.0 (white diffuse material) = bright red
1 (totally lit) * red * 0.5 (grey diffuse material) = crimson
1 (totally lit) * red * 0.0 (black diffuse material) = black

http://members.xoom.com/myBollux
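The per-channel combination described in the reply above can be sketched in a few lines. This is an illustration of the fixed-function lighting idea only (not OpenGL API code); the function name and RGB values are made up for the example:

```python
# Per-channel diffuse term, as described in the post above:
#   received diffuse light * colour of diffuse light * colour of diffuse material

def diffuse(lit, light_rgb, material_rgb):
    """Combine one lighting component channel by channel."""
    return tuple(lit * l * m for l, m in zip(light_rgb, material_rgb))

red_light = (1.0, 0.0, 0.0)

bright_red = diffuse(1.0, red_light, (1.0, 1.0, 1.0))  # white material
half_red   = diffuse(1.0, red_light, (0.5, 0.5, 0.5))  # grey material
black      = diffuse(1.0, red_light, (0.0, 0.0, 0.0))  # black material
```

The same per-channel multiply applies to the ambient and specular components, each with its own light and material colours.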
• ### The SIMPLE Phase II Dark Matter Search(1404.4309) April 16, 2014 hep-ph, physics.ins-det Phase II of SIMPLE (Superheated Instrument for Massive ParticLe Experiments) searched for astroparticle dark matter using superheated liquid C$_{2}$ClF$_{5}$ droplet detectors. Each droplet generally requires an energy deposition with linear energy transfer (LET) $\gtrsim$ 150 keV/$\mu$m for a liquid-to-gas phase transition, providing an intrinsic rejection against minimum ionizing particles of order 10$^{-10}$, and reducing the backgrounds to primarily $\alpha$ and neutron-induced recoil events. The droplet phase transition generates a millimetric-sized gas bubble which is recorded by acoustic means. We describe the SIMPLE detectors, their acoustic instrumentation, and the characterizations, signal analysis and data selection which yield a particle-induced, "true nucleation" event detection efficiency of better than 97% at a 95% C.L. The recoil-$\alpha$ event discrimination, determined using detectors first irradiated with neutrons and then doped with alpha emitters, provides a recoil identification of better than 99%; it differs from those of COUPP and PICASSO primarily as a result of their different liquids with lower critical LETs. The science measurements, comprising two shielded arrays of fifteen detectors each and a total exposure of 27.77 kgd, are detailed. Removal of the 1.94 kgd Stage 1 installation period data, which had previously been mistakenly included in the data, reduces the science exposure from 20.18 to 18.24 kgd and provides new contour minima of $\sigma_{p}$ = 4.3 $\times$ 10$^{-3}$ pb at 35 GeV/c$^{2}$ in the spin-dependent sector of WIMP-proton interactions and $\sigma_{N}$ = 3.6 $\times$ 10$^{-6}$ pb at 35 GeV/c$^{2}$ in the spin-independent sector. These results are examined with respect to the fluorine spin and halo parameters used in the previous data analysis. 
• ### Fabrication and Response of High Concentration SIMPLE Superheated Droplet Detectors with Different Liquids(1309.4908) Sept. 19, 2013 physics.ins-det The combined measurement of dark matter interactions with different superheated liquids has recently been suggested as a cross-correlation technique in identifying WIMP candidates. We describe the fabrication of high concentration superheated droplet detectors based on the light nuclei liquids C3F8, C4F8, C4F10 and CCl2F2, and investigation of their irradiation response with respect to C2ClF5. The results are discussed in terms of the basic physics of superheated liquid response to particle interactions, as well as the necessary detector qualifications for application in dark matter search investigations. The possibility of heavier nuclei SDDs is explored using the light nuclei results as a basis, with CF3I provided as an example. • ### Final Analysis and Results of the Phase II SIMPLE Dark Matter Search(1106.3014) April 9, 2012 hep-ph, hep-ex, astro-ph.CO We report the final results of the Phase II SIMPLE measurements, comprising two run stages of 15 superheated droplet detectors each, the second stage including an improved neutron shielding. The analyses includes a refined signal analysis, and revised nucleation efficiency based on reanalysis of previously-reported monochromatic neutron irradiations. The combined results yield a contour minimum of \sigma_{p} = 4.2 x 10^-3 pb at 35 GeV/c^2 on the spin-dependent sector of WIMP-proton interactions, the most restrictive to date from a direct search experiment and overlapping for the first time results previously obtained only indirectly. In the spin-independent sector, a minimum of 3.6 x 10^-6 pb at 35 GeV/c^2 is achieved, with the exclusion contour challenging the recent CoGeNT region of current interest. • ### First Results of the Phase II SIMPLE Dark Matter Search(1003.2987) Oct. 
20, 2010 hep-ex, astro-ph.CO We report results of a 14.1 kgd measurement with 15 superheated droplet detectors of total active mass 0.208 kg, comprising the first stage of a 30 kgd Phase II experiment. In combination with the results of the neutron-spin sensitive XENON10 experiment, these results yield a limit of |a_p| < 0.32 for M_W = 50 GeV/c2 on the spin-dependent sector of weakly interacting massive particle-nucleus interactions with a 50% reduction in the previously allowed region of the phase space formerly defined by XENON, KIMS and PICASSO. In the spin-independent sector, a limit of 2.3x10-5 pb at M_W = 45 GeV/c2 is obtained. • ### Discrimination of nuclear recoils from alpha particles with superheated liquids(0807.1536) Sept. 23, 2008 hep-ex, physics.ins-det The PICASSO collaboration observed for the first time a significant difference between the acoustic signals induced by neutrons and alpha particles in a detector based on superheated liquids. This new discovery offers the possibility of improved background suppression and could be especially useful for dark matter experiments. This new effect may be attributed to the formation of multiple bubbles on alpha tracks, compared to single nucleations created by neutron induced recoils. • ### Can Light-nuclei Search Experiments Constrain the Spin-independent Dark Matter Phase Space?(astro-ph/0703543) March 20, 2007 astro-ph At present, restrictions on the spin-independent parameter space of WIMP dark matter searches have been limited to the results provided by relatively heavy nuclei experiments, based on the conventional wisdom that only such experiments can provide significant spin-independent limits. We examine this wisdom, showing that light nuclei experiments can in fact provide comparable limits given comparable exposures, and indicating the potential of light nuclei detectors to simultaneously and competitively contribute to the search for both spin-independent and -dependent WIMP dark matter. 
• ### Heavy Superheated Droplet Detectors as a Probe of Spin-independent WIMP Dark Matter Existence(physics/0511158) Feb. 22, 2007 physics.ins-det At present, application of Superheated Droplet Detectors (SDDs) in WIMP dark matter searches has been limited to the spin-dependent sector, owing to the general use of fluorinated refrigerants which have high spin sensitivity. Given their recent demonstration of a significant constraint capability with relatively small exposures and the relative economy of the technique, we consider the potential impact of heavy versions of such devices on the spin-independent sector. Limits obtainable from a $\mathrm{CF_{3}I}$-loaded SDD are estimated on the basis of the radiopurity levels and backgrounds already achieved by the SIMPLE and PICASSO experiments. With 34 kgd exposure, equivalent to the current CDMS, such a device may already probe to below 10$^{-6}$ pb in the spin-independent cross section. • ### SIMPLE Dark Matter Search Results(hep-ex/0505053) May 17, 2005 hep-ex We report an improved SIMPLE experiment comprising four superheated droplet detectors with a total exposure of 0.42 kgd. The result yields ~ factor 10 improvement in the previously-reported results, and -- despite the low exposure -- is seen to provide restrictions on the allowed phase space of spin-dependent coupling strengths almost equivalent to those from the significantly larger exposure NAIAD/CDMS/ZEPLIN searches. • ### WIMP searches with superheated droplet detectors: Status and Prospects(astro-ph/0101176) SIMPLE (Superheated Instrument for Massive ParticLE searches) employs superheated droplet detectors (SDDs) to search for Weakly Interacting Massive Particle (WIMP) dark matter. As a result of the intrinsic SDD insensitivity to minimum ionizing particles and high fluorine content of target liquids, competitive WIMP limits were already obtained at the early prototype stage.
We comment here on the expected immediate increase in sensitivity of the program and on future plans to exploit this promising technique.
## College Algebra (11th Edition) $a^{2}$ - 12ab + 36$b^{2}$ 1. Square of a binomial $(a - 6b)^{2}$ = $a^{2}$ + 2(a)(-6b) + $(-6b)^{2}$ 2. Solve exponents and multiply $a^{2}$ - 12ab + 36$b^{2}$
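The expansion above can be spot-checked numerically; this is an illustration only, with arbitrary sample values, not part of the textbook solution:

```python
# Verify (a - 6b)^2 == a^2 - 12ab + 36b^2 at a few sample points.

def lhs(a, b):
    return (a - 6 * b) ** 2

def rhs(a, b):
    return a ** 2 - 12 * a * b + 36 * b ** 2

samples = [(1, 1), (2, -3), (0.5, 0.25), (7, 0)]
ok = all(lhs(a, b) == rhs(a, b) for a, b in samples)  # True for these points
```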
# Einstein field equations for infinite cylinder What does the exterior metric look like for an infinitely long cylindrical mass distribution? I'm assuming the stress energy tensor, $$T_{\mu\nu}=0$$ outside the cylinder and that the cylinder has no angular momentum. • A mass distribution can’t have a zero energy-momentum tensor. Do you mean zero outside the cylinder? – G. Smith Jan 17 at 13:58 • @G.Smith Yes, sorry. I was in a rush when I made this post. – Ryan Parikh Jan 17 at 14:07 • I don't think this is enough information to specify the problem. Even the analogous problem for a sphere is not straightforward and requires for self-consistency that you set up some kind of equation of state that allows the sphere to be in hydrodynamic equilibrium. – user4552 Jan 17 at 14:26 • @BenCrowell I'm not sure what you mean. You can assume the simplest case, for example, that the mass is uniformly distributed within the cylinder. – Ryan Parikh Jan 17 at 14:37 • The axisymmetric metric in the static case was investigated by Weyl and Levi-Civita. In the case of gravitational waves, the problem was studied by Rosen and Einstein. What is your case? – Alex Trounev Jan 17 at 14:39 The most general static vacuum solution of Einstein equations with a cylindrical symmetry is the Levi–Civita metric: $$ds^2=r^{8σ^2−4σ}(dr^2+dz^2) +D^2r^{2−4σ}dφ^2−r^{4σ}dt^2$$ where $$σ$$ and $$D$$ are constants and the coordinate $$φ$$ is assumed to be periodic with a period of $$2\pi$$ (if we drop the periodicity requirement the solution could be interpreted as a metric outside of an infinite wall). The metric generally has a curvature singularity at $$r=0$$ and is flat in the limit $$r\to \infty$$.
# Laplace plane meaning in control engineering ? 1. Apr 17, 2012 ### Femme_physics We use this plane called the "Laplace plane" to solve problems in control engineering, second order systems. Can anyone help explain the Laplace plane in simple words? I find the wiki article too fancy for me... it says that it "is a mean or reference plane about whose axis the instantaneous orbital plane of a satellite precesses".. But I'm doing control engineering. I'm confused how these two are related if I solve problems that look like an electronics circuit and I have to use the Laplace plane. For instance, A capacitor in electronics is "C", in the Laplace plane, it's 1/CS An inductor in electronics is L, in the Laplace plane, it's LS Resistance, Battery and Voltage seem to be the same in both planes... I'm just trying to understand what it means, this plane. I do know what second order system means. 2. Apr 17, 2012 ### AlephZero The "Laplace plane" described in the Wiki article is something different. The usual name for what you want is the "s-plane" (there is also a similar thing called the z-plane which is used in digital signal processing). The "s" in a Laplace transform is a complex number, which represents a frequency and an amount of damping. The imaginary part is the frequency, and the real part is the amount of damping (negative represents a response that decays to nothing, positive represents a response that grows exponentially). It turns out that the behaviour of many linear systems is represented in the s plane by a ratio of two polynomials like $$\frac{a_0 + a_1s + a_2s^2 + \cdots}{b_0 + b_1s + b_2s^2 + \cdots}$$. This can be converted into partial fractions like $$\frac{A_1}{s-B_1} + \frac{A_2}{s-B_2} + \cdots$$ A plot showing the position of the B's on the s-plane contains most of the important information about the response of the system, and (with practice!)
you can interpret what it tells you about the steady state response, impulse response, and step response of the system, without having to grind through the math. 3. Apr 17, 2012 ### DragonPetter I think it helps to understand that, as mentioned above, the s-plane is a graphical plot of a transfer function's pole and zero values, which is a function of the variable s = sigma + jw. The transfer function is the function you get when you take the Laplace transform of a differential equation. It is mapping the time domain information (variable t) into frequency domain information (variable s), and the s-plane is a graphical representation of this mapping. I think Laplace transforms were originally invented to solve differential equations, but now they also provide a lot of insight into the behavior of systems described by differential equations, and we can quickly look at this information graphically by plotting the poles and zeroes of the function. The Laplace plane is simply plotting the real and imaginary poles/zeroes of a transfer function. The horizontal axis is the real part, which is called sigma, and the vertical axis is the imaginary part, represented by jw. We use complex numbers as a consequence of Euler's identity that lets us describe single frequency sine waves in a more convenient way, rather than as a trig function. For the signal to be real, any complex number pole or zero has to have a complex conjugate, and so you will always see zeros or poles as pairs if they are not plotted on the horizontal real axis. The distance that the points are from the origin as well as their distance from each other and the angles between them is related to the magnitude and phase information you see in Bode plots, which are another graphical representation of the transfer function.
I wish I could give you more insightful and intuitive information, but I mostly understand the math mechanically, and I'm still trying to get to the bottom of it myself and that would take a concentrated effort of studying. Studying Fourier series and transforms, and knowing how they are different from Laplace transforms, is also very beneficial for intuitively understanding what it all means. Last edited: Apr 17, 2012 4. Apr 17, 2012 ### I like Serena It's the plane of the Laplace Transform, or s-plane, which you can find here in wiki. (In circuit analysis it is the Laplace Transform of the voltage across the component divided by the one of the current through the component.) In circuit analysis such a transform is called the impedance, with which you can calculate as you would with Ohm's law. Usually the symbol Z is used to represent the impedance. You already know that Ohm's law says: $V = I \cdot R$. The same applies for impedances. Meaning: $V = I \cdot R$ for resistors $V = I \cdot \frac{1}{C \cdot s}$ for capacitors $V = I \cdot (L \cdot s)$ for inductors We say for instance that the impedance Z of an inductor is: $Z = L \cdot s$ This also means that you can apply KVL and KCL with inductors and capacitors. You would treat an inductor as a resistor with resistance $L \cdot s$. Last edited: Apr 17, 2012 5. Apr 17, 2012 ### DragonPetter Good point. The complex impedance of energy elements being represented in the s domain is most obvious when you consider that s can be considered an operator that means to differentiate when you multiply with it, or integrate when you divide by it.
If you consider the time domain relations for C and L: capacitor: $i(t) = C\frac{dV(t)}{dt}$ inductor: $V(t) = L\frac{di(t)}{dt}$ and then, if you exchange the derivative with the s as a differentiation operator you get: capacitor: $i(s) = CV(s)s$, so arrange for the definition of impedance $\frac{V}{i}$, you get $\frac{V(s)}{i(s)} = \frac{1}{Cs}$ inductor: $V(s) = Li(s)s$, so arrange for the definition of impedance $\frac{V}{i}$, you get $\frac{V(s)}{i(s)} = Ls$ 6. Apr 18, 2012 ### Femme_physics Thanks, I even made sure to ask my teacher about it and that confirms it :) Reading all the replies I get a bigger picture. He told me what DragonPetter said in his last reply. I appreciate the replies. 7. Apr 20, 2012 ### Ouabache I understand you are taking a practical engineering program which may have omitted this math. Typically in EE (elec engr), & CE (comp engr) programs, they require 'signals & systems theory' as a prerequisite to control systems. In signals, they teach everything you ever wanted to know about s-planes, z-planes, $j \omega$-planes and then some. So when it is presented again in control systems, it is already clear. As you have seen, using the Laplace operator makes the math, working with complex impedances (Z) of capacitance & inductance, a whole lot easier. (The alternative is solving KVL equations with integrals and differential operators in them). :yuck:
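Putting the impedance idea from the thread to work: treating the capacitor as a "resistor" of value $\frac{1}{Cs}$ lets plain voltage-divider algebra give the transfer function of an RC low-pass filter, and evaluating it at $s = j\omega$ recovers the familiar frequency response. The component values below are made up for illustration:

```python
import cmath

R = 1e3    # 1 kOhm
C = 1e-6   # 1 uF  -> corner frequency 1/(RC) = 1000 rad/s

def gain(omega):
    """Vout/Vin of the RC divider, evaluated on the imaginary axis s = jw."""
    s = 1j * omega
    Zc = 1 / (C * s)        # capacitor treated as a "resistor" of value 1/(Cs)
    return Zc / (R + Zc)    # ordinary voltage-divider algebra (KVL)

# Magnitude is ~1 well below the corner, 1/sqrt(2) at the corner (with a
# -45 degree phase shift), and rolls off above it.
print(round(abs(gain(1000)), 3), round(cmath.phase(gain(1000)), 3))  # 0.707 -0.785
```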
Literals (Entity SQL)

This topic describes Entity SQL support for literals.

Null

The null literal is used to represent the value null for any type. A null literal is compatible with any type. Typed nulls can be created by a cast over a null literal. For more information, see CAST. For rules about where free floating null literals can be used, see Null Literals and Type Inference.

Boolean

Boolean literals are represented by the keywords true and false.

Integer

Integer literals can be of type Int32 or Int64. An Int32 literal is a series of numeric characters. An Int64 literal is a series of numeric characters followed by an uppercase L.

Decimal

A fixed-point number (decimal) is a series of numeric characters, a dot (.) and another series of numeric characters followed by an uppercase "M".

Float, Double

A double-precision floating point number is a series of numeric characters, a dot (.) and another series of numeric characters, possibly followed by an exponent. A single-precision floating point number (or float) uses the double-precision floating point syntax followed by a lowercase f.

String

A string is a series of characters enclosed in quote marks. The quotes can be either a pair of single quotes (') or a pair of double quotes ("). Character string literals can be either Unicode or non-Unicode. To declare a character string literal as Unicode, prefix the literal with an uppercase "N". The default is non-Unicode character string literals. There can be no spaces between the N and the string literal payload, and the N must be uppercase.

'hello' -- non-Unicode character string literal
N'hello' -- Unicode character string literal
"x"
N"This is a string!"
'so is THIS'

DateTime

A datetime literal is independent of locale and is composed of a date part and a time part. Both date and time parts are mandatory and there are no default values.
The date part must have the format: YYYY-MM-DD, where YYYY is a four digit year value between 0001 and 9999, MM is the month between 1 and 12 and DD is the day value that is valid for the given month MM. The time part must have the format: HH:MM[:SS[.fffffff]], where HH is the hour value between 0 and 23, MM is the minute value between 0 and 59, SS is the second value between 0 and 59 and fffffff is the fractional second value between 0 and 9999999. All value ranges are inclusive. Fractional seconds are optional. Seconds are optional unless fractional seconds are specified; in this case, seconds are required. When seconds or fractional seconds are not specified, the default value of zero will be used instead. There can be any number of spaces between the DATETIME symbol and the literal payload, but no new lines.

DATETIME'2006-10-1 23:11'
DATETIME'2006-12-25 01:01:00.0000000' -- same as DATETIME'2006-12-25 01:01'

Time

A time literal is independent of locale and composed of a time part only. The time part is mandatory and there is no default value. It must have the format HH:MM[:SS[.fffffff]], where HH is the hour value between 0 and 23, MM is the minute value between 0 and 59, SS is the second value between 0 and 59, and fffffff is the second fraction value between 0 and 9999999. All value ranges are inclusive. Fractional seconds are optional. Seconds are optional unless fractional seconds are specified; in this case, seconds are required. When seconds or fractions are not specified, the default value of zero will be used instead. There can be any number of spaces between the TIME symbol and the literal payload, but no new lines.

TIME'23:11'
TIME'01:01:00.1234567'

DateTimeOffset

A datetimeoffset literal is independent of locale and composed of a date part, a time part, and an offset part. All date, time, and offset parts are mandatory and there are no default values.
The date part must have the format YYYY-MM-DD, where YYYY is a four digit year value between 0001 and 9999, MM is the month between 1 and 12, and DD is the day value that is valid for the given month. The time part must have the format HH:MM[:SS[.fffffff]], where HH is the hour value between 0 and 23, MM is the minute value between 0 and 59, SS is the second value between 0 and 59, and fffffff is the fractional second value between 0 and 9999999. All value ranges are inclusive. Fractional seconds are optional. Seconds are optional unless fractional seconds are specified; in this case, seconds are required. When seconds or fractions are not specified, the default value of zero will be used instead. The offset part must have the format {+|-}HH:MM, where HH and MM have the same meaning as in the time part. The range of the offset, however, must be between -14:00 and +14:00. There can be any number of spaces between the DATETIMEOFFSET symbol and the literal payload, but no new lines.

DATETIMEOFFSET'2006-10-1 23:11 +02:00'
DATETIMEOFFSET'2006-12-25 01:01:00.0000000 -08:30'

Note A valid Entity SQL literal value can fall outside the supported ranges for CLR or the data source. This might result in an exception.

Binary

A binary string literal is a sequence of hexadecimal digits delimited by single quotes following the keyword binary or the shortcut symbol X or x. The shortcut symbol X is case insensitive. Zero or more spaces are allowed between the keyword binary and the binary string value. Hexadecimal characters are also case insensitive. If the literal is composed of an odd number of hexadecimal digits, the literal will be aligned to the next even hexadecimal digit by prefixing the literal with a hexadecimal zero digit. There is no formal limit on the size of the binary string.

Binary'00ffaabb'
X'ABCabc'
BINARY '0f0f0f0F0F0F0F0F0F0F'
X'' -- empty binary string

Guid

A GUID literal represents a globally unique identifier.
It is a sequence formed by the keyword GUID followed by hexadecimal digits in the form known as registry format: 8-4-4-4-12 enclosed in single quotes. Hexadecimal digits are case insensitive. There can be any number of spaces between the GUID symbol and the literal payload, but no new lines. Guid'1afc7f5c-ffa0-4741-81cf-f12eAAb822bf' GUID '1AFC7F5C-FFA0-4741-81CF-F12EAAB822BF'
# [.net] System.DllNotFoundException

## Recommended Posts

zorlack    122

I am working on an OpenGL project using the Tao Framework's dll on Delphi 8 for .NET. When I try to run the application a System.DllNotFoundException occurs that says "Unable to load DLL (ilu.dll)" when I use any of the Tao.DevIl classes. I don't know why this error occurs because I copied that particular dll to the folder of the project and to the system32 folder. What can I do to fix this problem?

##### Share on other sites

Ridge    188

You're missing the native devil library.

##### Share on other sites

zorlack    122

Thanks, I downloaded the native devil libraries. But now I get a System.NullReferenceException with message "Object reference not set to an instance of an object". I have never come across this error; what should I do to fix it?

##### Share on other sites

capn_midnight    1707

Quote: Original post by zorlack: Thanks, I downloaded the native devil libraries. But now I get a System.NullReferenceException with message "Object reference not set to an instance of an object". I have never come across this error; what should I do to fix it?

this happens when you don't instantiate an object. example:

MyClass me;        // currently, set to null
me.doSomething();  // bingo, NullReferenceException

/*** remedy ***/
MyClass me = new MyClass();  // constructed and ready to go
me.doSomething();            // No null reference!

[Edited by - capn_midnight on February 15, 2005 8:11:56 AM]
Natural Hazards and Earth System Sciences An interactive open-access journal of the European Geosciences Union Nat. Hazards Earth Syst. Sci., 18, 759-764, 2018 https://doi.org/10.5194/nhess-18-759-2018 07 Mar 2018 # Dynamic magnification factors for tree blow-down by powder snow avalanche air blasts Tree blow-down by powder avalanches Perry Bartelt1, Peter Bebi1, Thomas Feistl2, Othmar Buser2, and Andrin Caviezel1 • 1WSL Institute for Snow and Avalanche Research SLF, Flüelastrasse 11, 7260 Davos Dorf, Switzerland • 2Lawinenwarnzentrale im bayerischen Landesamt für Umwelt, Hessstrasse 128, 80797 Munich, Germany Abstract We study how short duration powder avalanche blasts can break and overturn tall trees. Tree blow-down is often used to back-calculate avalanche pressure and therefore constrain avalanche flow velocity and motion. We find that tall trees are susceptible to avalanche air blasts because the duration of the air blast is near to the period of vibration of tall trees, both in bending and root-plate overturning. Dynamic magnification factors for bending and overturning failures should therefore be considered when back-calculating avalanche impact pressures. 1 Introduction In this paper we develop a simple method to determine the dynamic response of trees to impulsive loads. This is an important problem in natural hazards engineering where historical evidence of forest destruction or tree breakage is often used to evaluate the potential avalanche hazard. Any indication of forest damage is particularly valuable to avalanche engineers because it helps define the destructive reach of an extreme and infrequent event. Fallen tree stems delineate the spatial extent of an avalanche and create a natural vector field indicating the primary flow direction of the movement (Fig. 1).
The age of the destroyed trees can additionally be used to link historical observations to the avalanche return period. In many cases observations of forest destruction are the only data the engineer has to quantify avalanche danger. The problem with using evidence of tree destruction for avalanche mitigation planning is that a simple relationship between avalanche impact pressure and tree failure is difficult to establish. Tree breaking depends on both the avalanche loading and tree strength. Trees fall if the bending stress exerted by the avalanche exceeds the bending strength of the tree stem or if the applied torque overcomes the strength of the root-soil plate, leading to uprooting and overturning. Both mechanisms depend on the local flow height of the avalanche. Recent observations suggest that the magnitude of the avalanche impact pressure is strongly related to the avalanche flow regime. Although it has long been recognised that dense flowing avalanches can easily break, overturn and uproot trees, tree destruction by powder avalanche air blasts has received less attention. A mechanical understanding of how trees are blown down by powder avalanche blasts would allow engineers to quantify powder avalanche pressures from case studies and historical records. Here we develop a mechanical model to predict the natural frequency of trees subject to full-height air-blasts of powder snow or ice avalanches. We assume two deformation modes: stem bending and root-plate overturning, see Figs. 2 and 3. The ratio of the natural tree frequency to the frequency of the avalanche air-blast defines the dynamic magnification factor D. This factor (D > 1) magnifies the non-impulsive (static) loading to account for the increase in stress under an impulsive load. The eigenfrequency of the tree is a function of the tree height, stiffness and mass distribution between the stem and branches. It therefore depends on forest age and tree species.
We show that dynamic magnification factors for fully grown trees are large, indicating that mature forests are especially vulnerable to powder snow avalanches. As we shall see, an error of up to 25 % can be made when back-calculating avalanche velocities. For example, an avalanche travelling at 35 m s−1 can load a tree as severely as a steadily applied pressure from an avalanche travelling at 50 m s−1 once the impulsive nature of the loading is considered. These are significant differences in hazard mitigation studies. Figure 1. Tree breakage caused by the air blast of a powder avalanche, Zernez, Switzerland, 1999. The trees failed through a combination of bending and root-plate overturning. Photograph: Peter Bebi, SLF. Measurements on real avalanches reveal that the air-blast is intermittent and of short duration, lasting only a few seconds. When a powder avalanche hits a forest the ice-dust cloud is typically moving at velocities in excess of 50 m s−1 (similar to extreme wind gusts). The height of the cloud is equal to, if not larger than, the height of the tree, i.e. H > 20 m. The pressure blast thus acts over the entire width and height of the tree, producing large bending moments in the stem and straining the root base plate. The impulsive character of the powder avalanche air-blast, however, magnifies the static stress state. The fallen tree stems often point in the direction of the flow, indicating that the trees had little time to sway and react to the blast and that inertial effects are of considerable importance. To calculate the dynamic magnification factor D we first make three simplifying assumptions. Firstly, the air blast can be expressed as a sine-wave impulse of duration t0: $\begin{array}{}\text{(1)}& F\left(t\right)={F}_{\mathrm{0}}\mathrm{sin}\stackrel{\mathrm{‾}}{\mathit{\omega }}t,\end{array}$ where $\stackrel{\mathrm{‾}}{\mathit{\omega }}$ is the circular frequency of the loading, $\stackrel{\mathrm{‾}}{\mathit{\omega }}=\mathit{\pi }/{t}_{\mathrm{0}}$.
The magnitude of the force F0 is as follows: $\begin{array}{}\text{(2)}& {F}_{\mathrm{0}}={p}_{\mathrm{0}}A=\frac{\mathrm{1}}{\mathrm{2}}{c}_{\mathrm{d}}\mathit{\rho }{U}_{\mathrm{max}}^{\mathrm{2}}A,\end{array}$ where p0 is the amplitude of the avalanche pressure given by the density of the powder cloud ρ, the form drag coefficient of the tree cd and the maximum velocity of the blast Umax. The tree area over which the blast acts is denoted A, typically given by the tree height H and effective tree width W. Thus, if the cloud density and velocity are known, as well as the tree geometry, the magnitude of the applied blast force F0 is given. After the loading time t0, the tree vibrates freely with natural frequency ω. The natural frequency is found using the Rayleigh quotient method, which assumes the deflected form is known (but not the magnitude of deformation). The assumption of a deflected shape reduces the tree to a single-degree-of-freedom system. The frequency is found by equating the maximum strain energy Vmax to the maximum kinetic energy Tmax developed during the tree response. By calculating the strain and kinetic energy produced by the avalanche blast, we find the generalised stiffness K and generalised mass M of the tree: $\begin{array}{}\text{(3)}& {\mathit{\omega }}^{\mathrm{2}}=\frac{K}{M}.\end{array}$ The natural frequency for two different deformation modes, stem bending ωsb and root-plate overturning ωro, will be determined in the next sections. In both cases the total tree height is H. Tree mass is divided into two parts: the stem mass ms (a mass per unit length of the tree, kg m−1) and the total mass of the branches Mb (kg). The branch mass, including the mass of needles, is lumped at the tree centre-of-mass. The mass Mb can include the mass of snow held by the branches and thus, like the tree elasticity, have some seasonal variation.
As we assume a constant stem diameter d, the stem mass per unit length is $\begin{array}{}\text{(4)}& {m}_{\mathrm{s}}={\mathit{\rho }}_{\mathrm{t}}{A}_{\mathrm{t}}\end{array}$ with $\begin{array}{}\text{(5)}& {A}_{\mathrm{t}}=\frac{\mathit{\pi }}{\mathrm{4}}{d}^{\mathrm{2}}.\end{array}$ The density of the stem wood is ρt. For both the bending and overturning cases, the concentrated load F0 acts at the tree centre-of-mass, which is located a distance a from the ground (see Figs. 2 and 3). Finally, the third assumption is that the maximum response of the tree is reached before the damping forces can absorb the energy of the air blast. Only the undamped response to a short duration blast is considered. Figure 2. A tree of height H breaks in bending. The avalanche exerts a loading p(t) of known (but short) duration. The load acts at the centre-of-mass of the tree, located a distance a from the ground. The linearly distributed mass of the tree stem is ms and the lumped mass of the branches is Mb. Tree deformation is given by the non-linear distribution x(z). ## 2.1 Eigenfrequency: tree bending mode For the case of tree bending, the deformation x(z) at height z is given by (see Fig.
2): $\begin{array}{ll}{x}_{\mathrm{1}}\left(z\right)=& {X}_{\mathrm{0}}{\mathit{\psi }}_{\mathrm{1}}\left(z\right)=\frac{F{a}^{\mathrm{2}}\left(\mathrm{3}H-a\right)}{\mathrm{3}EI}\left[\frac{\mathrm{3}a{z}^{\mathrm{2}}-{z}^{\mathrm{3}}}{\mathrm{2}{a}^{\mathrm{2}}\left(\mathrm{3}H-a\right)}\right]\\ \text{(6)}& & \mathrm{for}\phantom{\rule{0.25em}{0ex}}z\phantom{\rule{0.125em}{0ex}}\le \phantom{\rule{0.125em}{0ex}}a\end{array}$ and $\begin{array}{ll}{x}_{\mathrm{2}}\left(z\right)=& {X}_{\mathrm{0}}{\mathit{\psi }}_{\mathrm{2}}\left(z\right)=\frac{F{a}^{\mathrm{2}}\left(\mathrm{3}H-a\right)}{\mathrm{3}EI}\left[\frac{\mathrm{3}z{a}^{\mathrm{2}}-{a}^{\mathrm{3}}}{\mathrm{2}{a}^{\mathrm{2}}\left(\mathrm{3}H-a\right)}\right]\\ \text{(7)}& & \mathrm{for}\phantom{\rule{0.25em}{0ex}}z\phantom{\rule{0.125em}{0ex}}>\phantom{\rule{0.125em}{0ex}}a,\end{array}$ where E is the modulus of elasticity of the tree stem and I is the moment of inertia. The functions ψ1(z) and ψ2(z) represent interpolation functions for the deformation field. These equations for lateral tree deformation are found by assuming the tree is a statically determinate cantilever-type structure fixed at the base to the ground (see Fig. 2 and ). The largest bending moment in the tree is found at the tree base, z= 0. 
The quantity X0 is the static deformation under the blast load F, $\begin{array}{}\text{(8)}& {X}_{\mathrm{0}}=\frac{F{a}^{\mathrm{2}}\left(\mathrm{3}H-a\right)}{\mathrm{3}EI}.\end{array}$ The moment of inertia is taken for circular stem sections, $\begin{array}{}\text{(9)}& I=\frac{\mathit{\pi }{d}^{\mathrm{4}}}{\mathrm{64}}.\end{array}$ The maximum potential strain energy in bending is as follows $\begin{array}{}\text{(10)}& {V}_{\mathrm{max}}=\frac{\mathrm{1}}{\mathrm{2}}{X}_{\mathrm{0}}^{\mathrm{2}}\underset{\mathrm{0}}{\overset{a}{\int }}EI\left(z\right){x}_{\mathrm{1}}^{\mathrm{2}}\left(z\right)\mathrm{d}z=\frac{\mathrm{1}}{\mathrm{2}}\frac{\mathrm{3}EI}{{a}^{\mathrm{2}}\left(\mathrm{3}H-a\right)}{X}_{\mathrm{0}}^{\mathrm{2}}.\end{array}$ In the bending case, the tree is firmly rooted in the ground and strain energy is stored in the tree stem between the ground and the point of load application z=a. The tree stem above z>a is stress free, swaying back and forth as a rigid body. The maximum kinetic energy Tmax is composed of two parts containing the stem energy ${T}_{\mathrm{max}}^{\mathrm{s}}$ and the branch energy ${T}_{\mathrm{max}}^{\mathrm{b}}$ of the tree, Tmax=${T}_{\mathrm{max}}^{\mathrm{s}}$+${T}_{\mathrm{max}}^{\mathrm{b}}$ : $\begin{array}{ll}{T}_{\mathrm{max}}^{\mathrm{s}}& =\frac{{\mathit{\omega }}_{\mathrm{sb}}^{\mathrm{2}}}{\mathrm{2}}\underset{\mathrm{0}}{\overset{a}{\int }}{m}_{\mathrm{s}}{x}_{\mathrm{1}}^{\mathrm{2}}\left(z\right)\mathrm{d}z+\frac{{\mathit{\omega }}_{\mathrm{sb}}^{\mathrm{2}}}{\mathrm{2}}\underset{a}{\overset{H}{\int }}{m}_{\mathrm{s}}{x}_{\mathrm{2}}^{\mathrm{2}}\left(z\right)\mathrm{d}z\\ \text{(11)}& & =\frac{\mathrm{1}}{\mathrm{280}}{m}_{\mathrm{s}}\frac{\left[\mathrm{105}{H}^{\mathrm{3}}-\mathrm{105}a{H}^{\mathrm{2}}+\mathrm{35}H{a}^{\mathrm{2}}-\mathrm{2}{a}^{\mathrm{3}}\right]}{\left(\mathrm{3}H-a{\right)}^{\mathrm{2}}}{X}_{\mathrm{0}}^{\mathrm{2}},\end{array}$ and $\begin{array}{}\text{(12)}& 
{T}_{\mathrm{max}}^{\mathrm{b}}=\frac{{M}_{\mathrm{b}}{\mathit{\omega }}_{\mathrm{sb}}^{\mathrm{2}}}{\mathrm{2}}{x}_{\mathrm{1}}^{\mathrm{2}}\left(z=a\right)=\frac{{M}_{\mathrm{b}}{\mathit{\omega }}_{\mathrm{sb}}^{\mathrm{2}}}{\mathrm{2}}{X}_{\mathrm{0}}^{\mathrm{2}}\frac{{a}^{\mathrm{2}}}{\left(\mathrm{3}H-a{\right)}^{\mathrm{2}}}.\end{array}$ The eigenfrequency ${\mathit{\omega }}_{\mathrm{sb}}^{\mathrm{2}}$ is found by equating Tmax=Vmax: $\begin{array}{}\text{(13)}& {\mathit{\omega }}_{\mathrm{sb}}^{\mathrm{2}}=\frac{\mathrm{420}EI\left(\mathrm{3}H-a\right)}{{a}^{\mathrm{2}}{m}_{\mathrm{s}}\left[\mathrm{105}{H}^{\mathrm{3}}-\mathrm{105}a{H}^{\mathrm{2}}+\mathrm{35}H{a}^{\mathrm{2}}-\mathrm{2}{a}^{\mathrm{3}}+\frac{\mathrm{140}{a}^{\mathrm{2}}{M}_{\mathrm{b}}}{{m}_{\mathrm{s}}}\right]}.\end{array}$ Figure 3. A tree of height H breaks by overturning at the root-plate. The avalanche exerts a loading p(t) of known (but short) duration. The load acts at the centre-of-mass of the tree, located a distance a from the ground. The linearly distributed mass of the tree stem is ms and the lumped mass of the branches is Mb. Tree deformation is given by the linear distribution x(z). ## 2.2 Eigenfrequency: tree overturning mode For the tree overturning case, $\begin{array}{}\text{(14)}& x\left(z\right)={X}_{\mathrm{0}}\mathit{\psi }\left(z\right)=\frac{FaH}{k}\left[\frac{z}{H}\right],\end{array}$ where k is the overturning stiffness of the root-plate. This equation is found by assuming the lateral tree deformation is governed by a torsional spring, representing the stiffness of the root-plate (see Fig. 3).
The maximum potential strain energy (overturning) is then $\begin{array}{}\text{(15)}& {V}_{\mathrm{max}}=\frac{\mathrm{1}}{\mathrm{2}}F{X}_{\mathrm{0}}=\frac{\mathrm{1}}{\mathrm{2}}\frac{k}{aH}{X}_{\mathrm{0}}^{\mathrm{2}}.\end{array}$ Similar to the bending case, the maximum kinetic energy is found by considering the stem and branch energies separately: $\begin{array}{}\text{(16)}& {T}_{\mathrm{max}}^{\mathrm{s}}=\frac{{\mathit{\omega }}_{\mathrm{ro}}^{\mathrm{2}}}{\mathrm{2}}\underset{\mathrm{0}}{\overset{a}{\int }}{m}_{\mathrm{s}}{x}^{\mathrm{2}}\left(z\right)\mathrm{d}z=\frac{\mathrm{1}}{\mathrm{6}}{m}_{\mathrm{s}}\frac{{a}^{\mathrm{3}}}{{H}^{\mathrm{2}}}{X}_{\mathrm{0}}^{\mathrm{2}}\end{array}$ and $\begin{array}{}\text{(17)}& {T}_{\mathrm{max}}^{\mathrm{b}}=\frac{{M}_{\mathrm{b}}{\mathit{\omega }}_{\mathrm{ro}}^{\mathrm{2}}}{\mathrm{2}}{x}^{\mathrm{2}}\left(z=a\right)=\frac{{M}_{\mathrm{b}}{\mathit{\omega }}_{\mathrm{ro}}^{\mathrm{2}}}{\mathrm{2}}{X}_{\mathrm{0}}^{\mathrm{2}}\frac{{a}^{\mathrm{2}}}{{H}^{\mathrm{2}}}.\end{array}$ The eigenfrequency ${\mathit{\omega }}_{\mathrm{ro}}^{\mathrm{2}}$ is found by equating Tmax=Vmax: $\begin{array}{}\text{(18)}& {\mathit{\omega }}_{\mathrm{ro}}^{\mathrm{2}}=\frac{\mathrm{3}}{\left[{m}_{\mathrm{s}}a+\mathrm{3}{M}_{\mathrm{b}}\right]}\frac{Hk}{{a}^{\mathrm{3}}}.\end{array}$ 3 Dynamic magnification of avalanche blast The equation of motion for an undamped system subjected to a harmonic loading is as follows: $\begin{array}{}\text{(19)}& M\stackrel{\mathrm{¨}}{x}\left(t\right)+Kx\left(t\right)=F\left(t\right)={F}_{\mathrm{0}}\mathrm{sin}\stackrel{\mathrm{‾}}{\mathit{\omega }}t\end{array}$ which has the general solution for 0 ≤ t ≤ t0, $\begin{array}{}\text{(20)}& x\left(t\right)=\frac{{F}_{\mathrm{0}}}{K}\frac{\mathrm{1}}{\mathrm{1}-{\mathit{\beta }}^{\mathrm{2}}}\left(\mathrm{sin}\stackrel{\mathrm{‾}}{\mathit{\omega }}t-\mathit{\beta }\mathrm{sin}\mathit{\omega }t\right)\end{array}$ and for t>t0: $\begin{array}{}\text{(21)}&
x\left(t\right)=\frac{\stackrel{\mathrm{˙}}{x}\left({t}_{\mathrm{0}}\right)}{\mathit{\omega }}\mathrm{sin}\mathit{\omega }\left(t-{t}_{\mathrm{0}}\right)+x\left({t}_{\mathrm{0}}\right)\mathrm{cos}\mathit{\omega }\left(t-{t}_{\mathrm{0}}\right),\end{array}$ where β=$\frac{\stackrel{\mathrm{‾}}{\mathit{\omega }}}{\mathit{\omega }}$ is the ratio between the frequency of the avalanche blast and the eigenfrequency of the tree. The magnitude of the dynamic response therefore depends on the ratio of the load duration to the period of vibration of the tree. For the case when β < 1 the maximum deformation occurs while the impulsive load is active. It can be shown (see ) that the time to this peak response tmax is: $\begin{array}{}\text{(22)}& \stackrel{\mathrm{‾}}{\mathit{\omega }}{t}_{\mathrm{max}}=\frac{\mathrm{2}\mathit{\pi }\mathit{\beta }}{\mathit{\beta }+\mathrm{1}},\end{array}$ which can be substituted into the general solution to find the dynamic magnification factor for a long duration impulse: $\begin{array}{}\text{(23)}& D=\frac{\mathrm{1}}{\mathrm{1}-{\mathit{\beta }}^{\mathrm{2}}}\left[\mathrm{sin}\stackrel{\mathrm{‾}}{\mathit{\omega }}{t}_{\mathrm{max}}-\mathit{\beta }\mathrm{sin}\frac{\stackrel{\mathrm{‾}}{\mathit{\omega }}{t}_{\mathrm{max}}}{\mathit{\beta }}\right].\end{array}$ It can likewise be shown that the maximum response occurs in free vibration (t>t0) when β > 1. For this case, the dynamic magnification factor for a short duration impulse is: $\begin{array}{}\text{(24)}& D=\frac{\mathrm{2}\mathit{\beta }}{{\mathit{\beta }}^{\mathrm{2}}-\mathrm{1}}\mathrm{cos}\frac{\mathit{\pi }}{\mathrm{2}\mathit{\beta }}.\end{array}$ For the resonance case β = 1, $\begin{array}{}\text{(25)}& D=\frac{\mathit{\pi }}{\mathrm{2}}.\end{array}$ Table 1 Numerical values for the mass distribution of spruce for different tree heights. Table is constructed from data contained in , and .
The stated values represent average values for spruce trees in alpine environments. Values are approximate and will change depending on location in the forest, slope aspect, etc. Branch mass includes needle mass, which is given in parentheses. Intercepted snow mass is not included in the calculations. 4 Application To demonstrate how the dynamic magnification factor D can be found, we consider the following problem: a powder snow avalanche enters a spruce forest with considerable speed (> 50 m s−1) and exerts a short duration air-blast with frequency $\stackrel{\mathrm{‾}}{\mathit{\omega }}$. The duration of the blast is on the order of a few seconds. The height of the trees is between 25 and 30 m, which is also the height of the powder cloud. The cloud has decoupled from the avalanche core, which has stopped before reaching the forest; the only loading on the trees is the air-blast. Table 2 Natural frequencies in bending and overturning for spruce trees of different heights. E = 10 GPa. A reduced stem diameter d = 0.5dDBH produces good agreement with measured frequencies. Mass distribution taken from Table 1. Using the measured mass values tabulated in Table 1, we set the total branch and needle mass of a single tree to be Mb = 540 kg. The stem mass per length is approximately 60 kg m−1 (wood density 480 kg m−3). The total force of the avalanche impact acts at the tree's centre-of-mass, which is located a = 16.5 m above ground. This allows us to define the natural frequency in bending of the tree by Eq. (13), ωsb = 1.48 rad s−1 (0.24 Hz), see Table 2. This value is in very good agreement with the measurements (see ). The modulus of elasticity was set to E = 10 GPa based on experimental measurements. For the calculations, a tree diameter somewhat smaller than the diameter at breast height (DBH) is selected. In this case d = 0.2 m, which is 1/2 of the DBH diameter (this provides the best match to the experimental frequencies).
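The bending eigenfrequency of Eq. (13) and the magnification factors of Eqs. (23)-(25) can be evaluated numerically. The sketch below uses the parameter values quoted in the text; the tree height H = 25 m is our assumption (the text gives a 25-30 m range), so the computed frequency is only of the same order as the tabulated 1.48 rad s−1, and the β values for the D examples are therefore passed in directly:

```python
import math

# Parameters quoted in the text (H = 25 m is an assumption: 25-30 m range)
E   = 10e9    # modulus of elasticity, Pa
d   = 0.2     # reduced stem diameter (0.5 * DBH), m
H   = 25.0    # tree height, m
a   = 16.5    # height of the centre-of-mass above ground, m
m_s = 60.0    # stem mass per unit length, kg/m
M_b = 540.0   # lumped branch + needle mass, kg

I = math.pi * d**4 / 64.0                       # moment of inertia, Eq. (9)

# Bending eigenfrequency, Eq. (13)
bracket = 105*H**3 - 105*a*H**2 + 35*H*a**2 - 2*a**3 + 140*a**2*M_b/m_s
omega_sb = math.sqrt(420.0*E*I*(3*H - a) / (a**2 * m_s * bracket))
print(f"omega_sb = {omega_sb:.2f} rad/s")       # of order 1 rad/s

def magnification(beta):
    """Dynamic magnification factor D, Eqs. (23)-(25).
    beta = (blast frequency) / (tree eigenfrequency); the beta > 1
    branch is written with beta**2 - 1 so that D comes out positive."""
    if abs(beta - 1.0) < 1e-9:
        return math.pi / 2.0                    # resonance, Eq. (25)
    if beta < 1.0:                              # peak while the load acts
        wt = 2.0*math.pi*beta / (beta + 1.0)    # Eq. (22)
        return (math.sin(wt) - beta*math.sin(wt/beta)) / (1.0 - beta**2)
    return 2.0*beta / (beta**2 - 1.0) * math.cos(math.pi / (2.0*beta))

print(round(magnification(0.699), 2))           # long impulse  -> 1.76
print(round(magnification(1.27), 2))            # short impulse -> 1.36
```

The two β values reproduce the D = 1.76 and D = 1.36 magnification factors discussed in the application example.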
Consider first a sine impulse lasting t0 = 2.50 s ($\stackrel{\mathrm{‾}}{\mathit{\omega }}=\mathit{\pi }/{t}_{\mathrm{0}}$). In this case β = 0.699; that is, the maximum deformation occurs during the time the load is acting. Applying Eq. (23), we find D = 1.76, a rather large magnification factor. For a shorter impulse lasting 1.66 s, β = 1.27 and, from Eq. (24), we find D = 1.36. The primary conclusion to draw from this analysis is that the natural frequency in bending of tall trees is close to the frequency of the applied avalanche air-blast. Measurements of air-blast duration times reported by Russian researchers are within this range, lasting only a few seconds (see ). Measurements of root plate stiffness are rare; however, reported values for 10–14 m high spruce vary between k = 80 kN m (H = 10 m) and k = 1200 kN m (H = 14 m). These values suggest a large variation in k depending on growth conditions. The application of these k stiffness values for spruce trees predicts natural frequencies for root-plate overturning of ωro > 2 Hz (Eq. 18), see Table 2. The calculated β factors for overturning are typically β < 1. This result suggests that large dynamic magnification factors can only be generated by very short duration impulses (t0 < 0.5 s). Tall trees (H > 20 m) with low root plate stiffness (k < 100 kN m) are vulnerable to powder avalanche air-blasts. 5 Conclusions We draw several conclusions from our analysis. Firstly, the natural frequency of tall trees, in bending and overturning, is close to the loading frequency of powder avalanches, ω ≈ $\stackrel{\mathrm{‾}}{\mathit{\omega }}$. Thus, tall trees are susceptible to powder avalanche blow-down. When using tree blow-down to estimate avalanche impact pressures (and therefore the speed and density of the powder cloud) a dynamic magnification factor should be applied in the analysis. Moreover, powder avalanches can knock down trees at lower velocities than presently assumed.
This result is also valid for other types of tall structures, including power pylons or buildings with long overhanging roofs. Secondly, both tree bending and root-plate overturning are possible tree failure modes when hit by a powder avalanche. Interestingly, the natural frequencies of tree bending and root-plate overturning are similar when the root-plate stiffness is low (k < 100 kN m) and the tree is tall (H > 20 m). Although there is considerable data available to constrain the value of the modulus of elasticity of wood E, there is less information available to constrain the root-plate stiffness. In the future, field investigations that document forest destruction should clearly separate bending and overturning failures. This would help us understand the variability of tree anchorage on mountain slopes. The field examinations should also quantify the stem diameter d at more than one location, as this is necessary to accurately determine the bending eigenfrequency. Finally, the fact that tall trees can be broken in bending and overturning indicates the nature of the avalanche air blast. It appears to be a high velocity, short duration pulse of flowing material (ice-dust), similar to a high-density gust of wind. It is not a compression wave travelling at the speed of sound. Data availability. Competing interests. The authors declare that they have no conflict of interest. Acknowledgements. This work was performed within the framework of the joint Austrian-Swiss project bDFA, a study of avalanche motion beyond the dense flow avalanche regime. We thank the Austrian Academy of Science (ÖAW) for their financial support as well as the Austrian research partners (Austrian Research Centre for Forests, Torrent and Avalanche Control and the University of Innsbruck). Edited by: Oded Katz Reviewed by: two anonymous referees References Bartelt, P.
and Stöckli, V.: The influence of tree and branch fracture, overturning and debris entrainment on snow avalanche flow, Ann. Glaciol., 32, 209–216, 2001. Bozhinskiy, A. N. and Losev, K. S.: The fundamentals of avalanche science, Mitt. Eidgenöss. Inst. Schnee- Lawinenforsch., Davos, p. 280, 1998. Chajes, A.: Principles of Structural Stability Theory, Prentice Hall Inc, Englewood Cliffs, p. 336, 1974. Clough, R. W. and Penzien, J.: Dynamics of Structures, McGraw-Hill Inc, New York, p. 634, 1975. Coutts, M.: Root architecture and tree stability, Plant Soil, 71, 171–188, 1983. Feistl, T., Bebi, P., Christen, M., Margreth, S., Diefenbach, L., and Bartelt, P.: Forest damage and snow avalanche flow regime, Nat. Hazards Earth Syst. Sci., 15, 1275–1288, https://doi.org/10.5194/nhess-15-1275-2015, 2015a. Feistl, T., Bebi, P., Teich, M., Bühler, Y., Christen, M., Thuro, K., and Bartelt, P.: Observations and modeling of the braking effect of forests on small and medium avalanches, J. Glaciol., 60, 124–138, https://doi.org/10.3189/2014JoG13J055, 2015b. Gadek, B., Kaczka, R. J., Raczkowska, Z., Rojan, E., Casteller, A., and Bebi, P.: Snow avalanche activity in Zleb Zandarmerii in a time of climate change (Tatra Mts., Poland), Catena, 158, 201–212, 2017. Grigoryan, S., Urubayev, N., and Nekrasov, I.: Experimental investigation of an avalanche air blast, Data Glaciol. Stud., 44, 87–93, 1982. Haines, D. W., Leban, J. M., and Herbe, C.: Determination of Young's modulus for spruce, fir and isotropic materials by the resonance flexure method with comparisons to static flexure and other dynamic methods, Wood Sci. Technol., 30, 253–263, 1996. Indermühle, M. P.: Struktur, Alters- und Zuwachsuntersuchungen in einem Fichten-Plenterwald der subalpinen Stufe, in: Beiheft Nr. 60 zur Schweiz. Z. Forstwesen, Dissertation, ETH Zürich, Zürich, p. 98, 1978. Johnson, E.
A.: The relative importance of snow avalanche disturbance and thinning on canopy plant populations, Ecology, 68, 43–53, 1987. Jonsson, M. J., Foetzki, A., Kalberer, M., Lundström, T., Ammann, W., and Stöckli, V.: Root-soil rotation stiffness of Norway spruce (Picea abies (L.) Karst) growing on subalpine forested slopes, Plant Soil, 285, 267–277, 2006. Jonsson, M. J., Foetzki, A., Kalberer, M., Lundström, T., Ammann, W., and Stöckli, V.: Natural frequencies and damping ratios of Norway spruce (Picea abies (L.) Karst) growing on subalpine forested slopes, Trees, 21, 541–548, https://doi.org/10.1007/s00468-007-0147-x, 2007. Kalberer, M.: Quantifizierung und Optimierung der Schutzwaldleistung gegenüber Steinschlag, Dissertation, Albert-Ludwigs-Universität, Freiburg, 2006. Kramer, H.: Waldwachstumslehre, Parey, Hamburg, Berlin, p. 374, 1988. Mattheck, C. and Breloer, H.: Handbuch der Schadenskunde von Bäumen: Der Baumbruch in Mechanik und Rechtsprechung, Rombach, Freiburg im Breisgau, 1994. Neild, S. A. and Wood, C. J.: Estimating stem and root-anchorage flexibility in trees, Tree Physiol., 19, 141–151, 1998. Peltola, H., Nykänen, M. L., and Kellomäki, S.: Model computations on the critical combination of snow loading and windspeed for snow damage of Scots pine, Norway spruce and birch at stand edge, Forest Ecol. Manage., 95, 229–241, 1997. Peltola, H., Kellomäki, S., Väisänen, H., and Ikonen, V.: A mechanistic model for assessing the risk of wind and snow damage to single trees and stands of Scots pine, Norway spruce, and birch, Can. J. Forest Res., 29, 647–661, 1999. Reardon, B. A., Pederson, G. T., Caruso, C. J., and Fagre, D. B.: Spatial Reconstructions and Comparisons of Historic Snow Avalanche Frequency and Extent Using Tree Rings in Glacier National Park, Montana, U.S.A., Arct. Antarct. Alp. Res., 40, 148–160, 2008.
Schläppy, R., Eckert, N., Jomelli, C., Stoffel, M., Grancher, D., Brunstein, D., Naaim, M., and Deschatres, M.: Validation of extreme snow avalanches and related return periods derived from a statistical-dynamical model using tree-ring techniques, Cold Reg. Sci. Technol., 99, 12–26, 2004. Sukhanov, G.: The mechanism of avalanche air blast formation as derived from field measurements, Data Glaciol. Stud., 44, 94–98, 1982. Sukhanov, G. and Kholobaeva, P.: Variability of avalanche air blast in time and space, Data Glaciol. Stud., 44, 98–105, 1982.
## Wednesday, December 29, 2010 To solve this question, we observe that $\left ( 5+2\sqrt{6} \right )\left ( 5-2\sqrt{6} \right )=25-24 = 1$ So we can write $5-2\sqrt{6}=\frac{1}{5+2\sqrt{6}}$ The given equation can now be written as: $\left ( 5+2\sqrt{6} \right )^{x^{2}-3}+\frac{1}{\left ( 5+2\sqrt{6} \right )^{x^{2}-3}}=10$ Now let's put $\left ( 5+2\sqrt{6} \right )^{x^{2}-3}=y$ The equation then becomes $y+\frac{1}{y}=10$ or $y^{2}-10y+1=0$ Solving this we get $y=\frac{10\pm \sqrt{100-4}}{2}=\frac{10\pm 4\sqrt{6}}{2}=5 \pm 2\sqrt{6}$ But $y=\left ( 5+2\sqrt{6} \right )^{x^{2}-3}$ Therefore, $\left ( 5+2\sqrt{6} \right )^{x^{2}-3} = 5 + 2\sqrt{6}$ or $5 - 2\sqrt{6}$ $\therefore x^{2}-3= +1 \mbox{ or} -1$ $\Rightarrow x^{2}=4 \mbox{ or } x^{2}=2$ ## Thursday, July 15, 2010 1. Solve the following equation for x: $\left ( 5+2\sqrt{6} \right )^{x^2-3}+\left ( 5-2\sqrt{6} \right )^{x^2-3}=10$ Hi All, In this forum I would like to post math questions at the senior school level. The solutions will be discussed subsequently. Many of these questions have been asked in engineering entrance exams in India. Lok
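The two families of roots found above can be verified numerically; a quick sketch:

```python
from math import sqrt, isclose

def lhs(x):
    # note (5 + 2*sqrt(6)) * (5 - 2*sqrt(6)) = 25 - 24 = 1,
    # so the two terms are reciprocals of each other
    return (5 + 2*sqrt(6))**(x*x - 3) + (5 - 2*sqrt(6))**(x*x - 3)

# x^2 = 4 and x^2 = 2 give the four real solutions
for x in (2, -2, sqrt(2), -sqrt(2)):
    assert isclose(lhs(x), 10.0), x
print("all four roots satisfy the equation")
```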
• Rudra P Sarkar Articles written in Proceedings – Mathematical Sciences • A complete analogue of Hardy’s theorem on SL2(ℝ) and characterization of the heat kernel A theorem of Hardy characterizes the Gauss kernel (heat kernel of the Laplacian) on ℝ from estimates on the function and its Fourier transform. In this article we establish a full group version of the theorem for SL2(ℝ) which can accommodate functions with arbitrary K-types. We also consider the ‘heat equation’ of the Casimir operator, which plays the role of the Laplacian for the group. We show that despite the structural difference of the Casimir with the Laplacian on ℝⁿ or the Laplace-Beltrami operator on the Riemannian symmetric spaces, it is possible to have a heat kernel. This heat kernel for the full group can also be characterized by Hardy-like estimates. • Cowling-Price theorem and characterization of heat kernel on symmetric spaces We extend the uncertainty principle, the Cowling-Price theorem, to noncompact Riemannian symmetric spaces X. We establish a characterization of the heat kernel of the Laplace-Beltrami operator on X from integral estimates of the Cowling-Price type. • On the Schwartz Space Isomorphism Theorem for Rank One Symmetric Space In this paper we give a simpler proof of the $L^p$-Schwartz space isomorphism $(0 < p\leq 2)$ under the Fourier transform for the class of functions of left 𝛿-type on a Riemannian symmetric space of rank one. Our treatment rests on Anker’s [2] proof of the corresponding result in the case of left 𝐾-invariant functions on 𝑋. Thus we give a proof which relies only on the Paley–Wiener theorem. • Abel Transform on $PSL(2, \mathbb{R})$ and some of its Applications We shall investigate the use of the Abel transform on $PSL_2(\mathbb{R})$ as a tool beyond the 𝐾-biinvariant setup, discuss its properties and show some applications.
A Group Preserving Scheme for Burgers Equation with Very Large Reynolds Number Chein-Shan Liu doi:10.3970/cmes.2006.012.197 Source CMES: Computer Modeling in Engineering & Sciences, Vol. 12, No. 3, pp. 197-212, 2006 Keywords Burgers equation, Lie algebra, Lorentz group, group preserving scheme, spatial rescale. Abstract In this paper we numerically solve the Burgers equation by semi-discretizing it at the $n$ interior spatial grid points into a set of ordinary differential equations: $\dot{\mathbf{u}}=\mathbf{f}(\mathbf{u},t)$, $\mathbf{u} \in \mathbb{R}^n$. Then, we take the dissipative behavior of the Burgers equation into account by considering the magnitude $\|\mathbf{u}\|$ as another component; hence, an augmented quasilinear differential equations system $\dot{\mathbf{X}}=\mathbf{A}\mathbf{X}$ with $\mathbf{X}:=(\mathbf{u}^{\mathrm{T}}, \|\mathbf{u}\|)^{\mathrm{T}} \in \mathbb{M}^{n+1}$ is derived. According to a Lie algebra property of $\mathbf{A} \in so(n,1)$ we thus develop a new numerical scheme with the transformation matrix $\mathbf{G} \in SO_o(n,1)$ being an element of the proper orthochronous Lorentz group. The numerical results were in good agreement with exact solutions, and it can be seen that the group preserving scheme is better than other numerical methods. Even for very large Reynolds numbers the group preserving scheme supplemented with a spatial rescaling technique also provides a reliable result without inducing numerical instability.
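The group structure behind the abstract can be illustrated in the smallest nontrivial case. The sketch below is our own toy illustration, not the paper's algorithm: for a matrix in so(1,1) the exponential map gives a proper orthochronous Lorentz boost, and the boost leaves the Minkowski quadratic form X1^2 - X2^2 invariant, which is exactly the group property such a scheme preserves for the augmented state (u, ||u||):

```python
import math

def boost(theta):
    # exp(theta * A) for A = [[0, 1], [1, 0]] in so(1,1):
    # a proper orthochronous Lorentz boost, an element of SO_o(1,1)
    return [[math.cosh(theta), math.sinh(theta)],
            [math.sinh(theta), math.cosh(theta)]]

def apply(G, X):
    # 2x2 matrix times 2-vector
    return [G[0][0]*X[0] + G[0][1]*X[1],
            G[1][0]*X[0] + G[1][1]*X[1]]

def minkowski(X):
    # the quadratic form preserved by the Lorentz group
    return X[0]**2 - X[1]**2

X = [0.3, 1.2]                       # toy augmented state
Y = apply(boost(0.7), X)
print(minkowski(X), minkowski(Y))    # the form is invariant under the boost
```

Since cosh^2 - sinh^2 = 1, the invariance holds exactly; in the scheme itself this is what keeps the discrete update on the cone defined by the augmentation.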
# R – L Circuit : Growth & Decay of Current

### Growth of Current :

A series combination of an inductor L and a resistor R is connected across a cell of e.m.f. ξ through a switch S as shown. When the switch is closed, the current in the inductor starts increasing. This induces an e.m.f. in the inductor, and the induced e.m.f. opposes the growth of current in the circuit. Let the current in the circuit at any time t be I. From the loop rule we obtain,

$\displaystyle \xi = L\frac{dI}{dt} + IR$

$\displaystyle -L\frac{dI}{dt} = IR - \xi$

$\displaystyle \frac{dI}{IR - \xi} = -\frac{1}{L} dt$

Integrating from 0 to I on the left and 0 to t on the right,

$\displaystyle I = \frac{\xi}{R} (1 - e^{-R t/L})$

Here, I represents the instantaneous current in the circuit.

#### Decay of Current :

In this case the source of e.m.f. is disconnected from the circuit, so

$\displaystyle -L\frac{dI}{dt} - IR = 0$

$\displaystyle \int_{I_0}^{I} \frac{dI}{I} = -\frac{R}{L}\int_{0}^{t} dt$

$\displaystyle I = I_0 e^{-R t/L}$

(L/R) is called the time constant, as its dimension is the same as that of time.

Example : A current of I = 10 A is passed through the part of a circuit shown in the figure. What will be the potential difference between A and B when I is decreased at a constant rate of 10² A/s, at the beginning?

Solution : Applying the law of potential between the points A and B we obtain,

VB − VA = −IR + ξ − L dI/dt

Since the current is decreasing, dI/dt = −10² A/s, so

=> VB − VA = −10 × 2 + 12 − 5 × 10⁻³ × (−10²)

=> VB − VA = −20 + 12 + 0.5

=> VB − VA = −7.5 volt.
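The growth and decay formulas, together with the worked example, can be checked with a short script. This is only a sketch; note that dI/dt is taken negative in the example because the current is decreasing:

```python
# Sketch of the R-L growth/decay formulas derived above, with the
# worked example's numbers taken from the text.
import math

def growth_current(emf, R, L, t):
    """I(t) = (emf/R) * (1 - exp(-R t / L)) after the switch closes."""
    return (emf / R) * (1.0 - math.exp(-R * t / L))

def decay_current(I0, R, L, t):
    """I(t) = I0 * exp(-R t / L) after the source is removed."""
    return I0 * math.exp(-R * t / L)

# Time constant tau = L/R: after one tau the growing current reaches
# about 63% of its final value emf/R.
emf, R, L = 12.0, 2.0, 5e-3
tau = L / R
print(growth_current(emf, R, L, tau) / (emf / R))   # ~0.632

# Worked example: I = 10 A decreasing at 100 A/s through R = 2 ohm,
# with battery emf 12 V and L = 5 mH.  V_B - V_A = -I*R + emf - L*dI/dt,
# with dI/dt negative since the current is decreasing.
I, dI_dt = 10.0, -100.0
V_BA = -I * R + emf - L * dI_dt
print(V_BA)   # -7.5 V
```

The same two functions cover both regimes, since the decay case is just the growth case with the source term removed.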
# A circular field that measures 100π meters in circumference has six mo

Math Expert
Joined: 02 Sep 2009
Posts: 47168

### Show Tags

23 Jan 2015, 07:29

A circular field that measures 100π meters in circumference has six monuments evenly spaced around that circumference. If a hexagonal path has been installed to allow patrons to walk straight from monument to monument, as pictured, what is the length, in meters, of that path?

A. 240
B. 270
C. 300
D. 330
E. 360

Attachment: monuments.png

Math Expert
Joined: 02 Aug 2009
Posts: 6278

### Show Tags

23 Jan 2015, 07:33

Answer C, 300. Each sector is 60 degrees, which makes each triangle equilateral, so each side is 50, the radius found from the circumference.
So 50 × 6 = 300.

Manager
Joined: 03 Oct 2014
Posts: 137
Location: India

### Show Tags

23 Jan 2015, 07:55

Six equilateral triangles with side 50 units (since 2πr = 100π gives r = 50), so the path length is 50 × 6 = 300.

Math Expert
Joined: 02 Sep 2009
Posts: 47168

### Show Tags

26 Jan 2015, 04:26

VERITAS PREP OFFICIAL SOLUTION:

The key to solving this question is recognizing that, with six equally spaced items around a circle, the lines connecting the center of the circle to each item will form 60-degree angles (each pair of adjacent lines bounds a 1/6 sector of the circle). Since the circumference of the circle is 100π, the radius is 50. And what you're really looking to calculate, the distance from each monument to the next, will form the third side of an isosceles triangle (the two radii are equal) with a 60-degree vertex angle, so each triangle must be equilateral with all sides 50.
Therefore, the straight-line distance from each monument to the next will be 50, and with six such distances forming the path, the path will total 300 meters in length.

SVP
Joined: 27 Dec 2012
Posts: 1837
Location: India

### Show Tags

03 Feb 2015, 00:53

Attachment: monuments.png

Six equilateral triangles are formed, each with side equal to the radius of the circle $$= \frac{100\pi}{2\pi} = 50$$. Total length of path = 6 × 50 = 300.
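The reasoning above can be verified numerically; a quick sketch using the general chord-length formula:

```python
# Sketch: recompute the hexagonal path length from the circumference.
import math

circumference = 100 * math.pi
radius = circumference / (2 * math.pi)        # = 50

# Six evenly spaced monuments => six central angles of 60 degrees.
# Each radius-radius-chord triangle is isosceles with a 60-degree
# vertex angle, hence equilateral, so each chord equals the radius.
chord = 2 * radius * math.sin(math.pi / 6)    # general chord formula, = 50
path_length = 6 * chord
print(path_length)                             # 300.0 (answer C)
```

The chord formula $2r\sin(\theta/2)$ makes the equilateral-triangle shortcut explicit: with $\theta = 60°$, the chord is exactly $r$.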
# derivative meaning math

12 December 2020

In mathematics (particularly in differential calculus), the derivative is a way to show an instantaneous rate of change: the amount by which a function is changing at one given point. For functions that act on the real numbers, it is the slope of the tangent line at a point on the graph. The concept of the derivative is at the core of calculus and modern mathematics, and it can be approached in two different ways: one is geometrical (as the slope of a curve) and the other is physical (as a rate of change). Simply put, the derivative tells you how quickly the relationship between your input (x) and output (y) is changing at any exact point.

The word has other senses too: as an adjective, "derivative" means resulting from or employing derivation (a derivative word, a derivative process), and in finance a derivative is a securitized contract between two or more parties whose value is dependent upon or derived from one or more underlying assets. Here we are concerned with the mathematical meaning.

In the first section of the Limits chapter we saw that the computation of the slope of a tangent line, the instantaneous rate of change of a function, and the instantaneous velocity of an object at $x = a$ all required us to compute the following limit:

$$f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h}$$

This is such an important limit, and it arises in so many places, that we give it a name: we call it the derivative. Without the limit, the fraction $\frac{f(x + h) - f(x)}{h}$ computes the slope of the line connecting two points on the graph of the function. As the distance $h$ between the two x-values becomes closer to zero, the slope of the line between them comes closer to resembling the tangent line at the point.

**Definition of the derivative.** The derivative of $f(x)$ with respect to $x$ is the function

$$f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}$$

Note that we replaced all the a's in the limit above with x's to acknowledge the fact that the derivative is really a function as well. The typical derivative notation is the "prime" notation: we read $f'(x)$ as "f prime of x". Another common notation is $\frac{dy}{dx}$ ("dy over dx", meaning the difference in y divided by the difference in x); the d is not a variable, and therefore cannot be cancelled out.

Let's compute a couple of derivatives using the definition. While, admittedly, the algebra will get somewhat unpleasant at times, it's just algebra, so don't get excited about the fact that we're now computing derivatives. For example, let $f(x) = x^2$. Then

$$f'(x) = \lim_{h \to 0} \frac{(x + h)^2 - x^2}{h} = \lim_{h \to 0} \frac{2xh + h^2}{h} = \lim_{h \to 0} (2x + h) = 2x$$

Notice that every term in the numerator that didn't have an $h$ in it canceled out, and we could then factor an $h$ out of the numerator to cancel against the $h$ in the denominator. The result means that with every unit change in $x$, the value of the function changes by about $2x$. Be careful and make sure that you properly deal with parentheses when doing the subtracting.

Derivatives of linear functions (functions of the form $ax + b$) are constant: the slope is $a$ throughout the entire graph, regardless of position, and it does not change if the graph is shifted up or down. Power functions (in the form of $x^a$) behave differently from linear functions, because their exponent and slope vary. The cosine function is the derivative of the sine function, while the derivative of cosine is negative sine (provided that x is measured in radians). For functions involving roots we will often have to rationalize the numerator; in an Algebra class you probably only rationalized the denominator, but you can also rationalize numerators, and doing so lets the $h$ cancel just as before.

A function $f(x)$ is called differentiable at $x = a$ if $f'(a)$ exists, and $f(x)$ is called differentiable on an interval if the derivative exists for each point in that interval. Derivatives will not always exist. Consider $f(x) = |x|$ and take a look at the limit at $x = 0$: the two one-sided limits of the difference quotient are $1$ (from the right) and $-1$ (from the left). The two one-sided limits are different, so the limit, and hence the derivative, doesn't exist at that point. There are no derivative formulas that apply at points around which a function's definition changes; at such points we have to use the definition of the derivative itself.

The next theorem shows us a very nice relationship between functions that are continuous and those that are differentiable: if $f(x)$ is differentiable at $x = a$, then it is continuous at $x = a$. Note that this theorem does not work in reverse: $f(x) = |x|$ is continuous at $x = 0$, but we have just shown that it is not differentiable there. Also note that the theorem doesn't say anything about whether or not the derivative exists anywhere else. This is a fact of life that we've got to be aware of, but it does not mean that it isn't important to know the definition of the derivative. It is an important definition that we should always know and keep in the back of our minds.

The derivative can also be used to search for the maxima and minima of a function, by looking for places where its slope is zero. The operation of finding a derivative is called differentiation, and its inverse operation is integration: while differential calculus focuses on the curve itself, integral calculus concerns itself with the space or area under the curve, and is used to figure a total size or value, such as lengths, areas, and volumes.

Related topics: Introduction to Derivatives; Slope of a Function at a Point (Interactive); Derivatives as dy/dx; Derivative Rules; Second Derivative; Partial Derivatives; Differentiable; Finding Maxima and Minima using Derivatives; Taylor Series (uses derivatives).
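The limit definition can also be explored numerically. A minimal sketch (the shrinking values of h are arbitrary choices) shows the difference quotients of $x^2$ approaching $2x$, and the one-sided quotients of $|x|$ disagreeing at 0:

```python
# Sketch: approximate f'(x) = lim_{h->0} (f(x+h) - f(x))/h numerically,
# and watch the one-sided limits disagree for |x| at x = 0.

def difference_quotient(f, x, h):
    """Slope of the secant line through (x, f(x)) and (x+h, f(x+h))."""
    return (f(x + h) - f(x)) / h

f = lambda x: x * x
# Difference quotients approach 2x as h shrinks (here x = 3, so 6).
for h in (0.1, 0.01, 0.001):
    print(difference_quotient(f, 3.0, h))   # -> approximately 6.1, 6.01, 6.001

g = abs
# At x = 0 the right- and left-hand quotients are 1 and -1 for every h,
# so the two one-sided limits differ and g'(0) does not exist.
print(difference_quotient(g, 0.0, 1e-6))    # 1.0
print(difference_quotient(g, 0.0, -1e-6))   # -1.0
```

This mirrors the argument in the text: for $f(x) = x^2$ the quotient simplifies to $2x + h$, which tends to $2x$, while for $|x|$ the quotient is the sign of $h$ and has no single limit at 0.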
# American Institute of Mathematical Sciences

March 2018, 7(1): 53-60. doi: 10.3934/eect.2018003

## Self-similar solutions to nonlinear Dirac equations and an application to nonuniqueness

Department of Mathematics, Chung-Ang University, Seoul, 156-756, Korea

Received August 2017. Revised November 2017. Published January 2018.

Self-similar solutions to the nonlinear Dirac systems (1) and (2) are constructed. As an application, we obtain nonuniqueness of strong solutions in the super-critical space $C([0, T]; H^{s}(\mathbb{R}))$ $(s<0)$ for system (1), which is $L^2(\mathbb{R})$-scaling critical. Therefore the well-posedness theory breaks down in Sobolev spaces of negative order.

Citation: Hyungjin Huh. Self-similar solutions to nonlinear Dirac equations and an application to nonuniqueness. Evolution Equations & Control Theory, 2018, 7 (1) : 53-60. doi: 10.3934/eect.2018003
# zbMATH — the first resource for mathematics

Stochastic allocation and scheduling for conditional task graphs in multi-processor systems-on-chip. (English) Zbl 1232.68017

Summary: Embedded systems designers are turning to multicore architectures to satisfy the ever-growing computational needs of applications within a reasonable power envelope. One of the most daunting challenges for multiprocessor system-on-chip (MPSoC) platforms is the development of tools for efficiently mapping multi-task applications onto hardware platforms. Software mapping can be formulated as an optimal allocation and scheduling problem, where the application is modeled as a task graph, the target hardware is modeled as a set of heterogeneous resources, and the objective function represents a design goal (e.g. minimum execution time, minimum usage of communication resources, etc.). Conditional task graphs, where inter-task edges represent data as well as control dependencies, are a well-known computational model for describing complex real-life applications in which alternative execution paths, guarded by conditionals, can be specified. Each condition has a probability associated with each possible outcome. Mapping conditional task graphs is significantly more challenging than mapping pure data-flow graphs (where edges only represent data dependencies). Approaches based on general-purpose complete solvers (e.g. integer linear programming solvers) are limited both by computational blowup and by the fact that the objective is a stochastic functional. The main contribution of our work is an efficient and complete approach to allocation and scheduling of conditional task graphs, based on (i) an exact analytic formulation of the stochastic objective function exploiting task graph analysis and (ii) an extension of the timetable constraint for conditional activities.
Moreover, our solver is integrated in a complete application development environment which produces executable code for target multicore platforms. This integrated framework allows us to validate modeling assumptions and to assess constraint satisfaction and objective function optimization. Extensive validation results demonstrate not only that our approach can handle non-trivial instances efficiently, but also that our models are accurate and lead to optimal and highly predictable execution.

##### MSC:
68M07 Mathematical problems of computer architecture
68M20 Performance evaluation, queueing, and scheduling in the context of computer systems
05C90 Applications of graph theory

##### Software:
JaCoP
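The exact stochastic objective mentioned in the summary can be illustrated with a toy sketch: for a conditional task graph, the expected value of a design metric is a probability-weighted sum over condition-outcome scenarios. This is only an illustration of the idea — the condition probabilities and per-scenario costs below are made up, and the paper's solver embeds this computation analytically inside a constraint model rather than enumerating scenarios as done here:

```python
from itertools import product

def expected_objective(conditions, scenario_cost):
    """Exact expectation of a scenario cost over independent conditions.

    conditions: list of dicts mapping outcome -> probability.
    scenario_cost: function taking a tuple of outcomes -> cost (e.g. the
    schedule length of the tasks activated under those outcomes).
    """
    total = 0.0
    for outcomes in product(*(c.keys() for c in conditions)):
        p = 1.0
        for c, o in zip(conditions, outcomes):
            p *= c[o]  # independence: multiply outcome probabilities
        total += p * scenario_cost(outcomes)
    return total

# Two independent binary conditions; costs are toy schedule lengths.
conds = [{'true': 0.7, 'false': 0.3}, {'true': 0.5, 'false': 0.5}]
cost = {('true', 'true'): 10, ('true', 'false'): 12,
        ('false', 'true'): 8, ('false', 'false'): 9}
print(expected_objective(conds, lambda o: cost[o]))  # 10.25
```

Enumerating scenarios is exponential in the number of conditions, which is exactly why the paper's analytic formulation over the task graph structure matters for non-trivial instances.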
# In the given figure, DB ⊥ BC, DE ⊥ AB and AC ⊥ BC. Prove that BE/DE = AC/BC.

Since AC ⊥ BC and DB ⊥ BC, both AC and DB are perpendicular to the same line, so AC ∥ DB.

In ΔBDE and ΔABC:
∠DBE = ∠BAC (alternate interior angles, AC ∥ DB with transversal AB)
∠DEB = ∠ACB (each 90°)

Therefore ΔBDE ~ ΔABC (AA similarity), and corresponding sides are proportional:

BD/AB = DE/BC = BE/AC

From DE/BC = BE/AC we get BE/DE = AC/BC. Hence proved.
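A quick numeric sanity check of the result, with assumed coordinates chosen so that all three perpendicularity hypotheses hold: BC lies on the x-axis, D sits vertically above B (DB ⊥ BC), A sits vertically above C (AC ⊥ BC), and E is computed as the foot of the perpendicular from D onto line AB (DE ⊥ AB). The specific numbers are arbitrary:

```python
import math

B = (0.0, 0.0)
C = (6.0, 0.0)
A = (6.0, 4.0)   # AC vertical, hence AC perpendicular to BC
D = (0.0, 3.0)   # DB vertical, hence DB perpendicular to BC

# Foot of the perpendicular from D onto the line through B and A:
# project D onto the direction of A (since B is the origin).
ax, ay = A
dx, dy = D
t = (dx * ax + dy * ay) / (ax**2 + ay**2)
E = (t * ax, t * ay)

BE = math.hypot(E[0], E[1])
DE = math.hypot(E[0] - dx, E[1] - dy)
AC = abs(A[1] - C[1])
BC = abs(C[0] - B[0])
print(BE / DE, AC / BC)  # both equal 4/6 ≈ 0.6667
```

Changing the coordinates (while keeping the three perpendicularity conditions) leaves the two ratios equal, as the similarity argument predicts.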
AUTO 07p in the IPython Notebook

Introduction

I have used both the fantastic XPP/XPPAUT software package and the matcont MATLAB toolbox for numerical continuation and bifurcation analysis of ordinary differential equations. As a user, both of these packages have you do mostly GUI operations: clicking buttons and typing keyboard shortcuts -- although there may be ways of automating usage of both packages. The software AUTO, which XPPAUT relies on (the "auto" part of XPP used for continuation and bifurcation analysis), is a bit trickier to use than either of the aforementioned packages -- although, in my very limited experience, using AUTO directly seems to be a more stable experience than calling it through XPPAUT. While both XPP and matcont provide their own user-friendly notation for defining ODEs and relatively straightforward interfaces, AUTO requires the user to dig into some technical aspects of defining the ODE and setting AUTO's numerous parameters and variables. Numerous examples provided with the latest version of AUTO give newcomers a reasonably fast and pain-free entry point, and you'll be up and running relatively quickly. Using AUTO with its current AUTO CLUI (a Python-based command-line tool) and the various methods defined in AUTO's Python interface has been a pleasurable (whatever I mean by that) experience but left me confused about some aspects: I generally found it hard to grasp what data structure the AUTO output had been saved in and how to plot the resultant continuation branches and bifurcation points in precisely the way I wanted. I started this notebook to just play around with the Python interface that ships with the latest AUTO version. For now, my goal is to parse and plot (using matplotlib) AUTO output. At some later stage I would love to find a way of integrating an ODE numerically and then passing the discovered stable steady state into AUTO (zeal abound).
In the unlikely event that the person reading this is not me and you have any words of advice or criticism, I would be delighted to hear from you. This notebook lives here and is part of a repository of all things auto.

This Notebook

The structure of this notebook:

• First I'll load a few Python modules (most of which I won't use for now) and then invoke AUTO for a sample system,
• then I'll parse and plot some of AUTO's output manually (using pandas), and
• at last I'll use some of the Python modules written by the AUTO developers that can be used to parse AUTO's output.

Again, everything I'm doing here can be done with the AUTO CLUI with just a few keystrokes -- here I'm just trying to start digging into parsing AUTO output and, hopefully, eventually how AUTO works.

Load AUTO 07p Python Modules

Add $AUTO_DIR/python to sys.path to import the Python modules defined by AUTO 07p.

In [ ]:
import sys
auto_directory = !echo $AUTO_DIR
if auto_directory == ['']:
    home = !echo $HOME
    auto_directory = home[0] + '/auto/07p/'
    sys.path.append(auto_directory + '/python')
else:
    sys.path.append(auto_directory[0] + '/python')

Not all of these are necessary -- in fact we'll use just one of these modules for now -- however it's good to see what Python modules have been written by the authors of AUTO.

In [ ]:
import AUTOCommands as ac
import AUTOclui as acl
import interactiveBindings as ib
import runAUTO as ra

Start AUTO, Load System, and Run

Start AUTO and catch the returned runner object.

In [ ]:
runner = ra.runAUTO()

AUTOCommands defines multiple methods that return solution objects. Method load is one of them.

In [ ]:
lpa = ac.load('lpa', runner=runner);

The above is shorthand for ac.load(e='lpa', c='lpa', runner=runner). Going through the library code, it is hard to understand what the following does exactly. For now: it compiles our system defined in lpa.f90 into an object lpa.o and then links lpa.o and AUTO's FORTRAN library objects in auto/07p/lib/ into an executable lpa.exe.
After executing the next command, you'll notice this lpa.exe in your directory and you will be able to rerun AUTO just by doing this in your shell:

$ ./lpa.exe

In [ ]:
lpa.run()

Going through the output of the above lpa.run() command you'll notice at least two things:

1. It would probably suffice to os.system() run the following commands (delete lpa.o in the current directory if you don't notice these command invocations at the top):

gfortran -fopenmp -O -c lpa.f90 -o lpa.o
gfortran -fopenmp -O lpa.o -o lpa.exe $HOME/auto/07p/lib/*.o
./lpa.exe

By that I mean, you could probably just write a Python script that does

os.system('gfortran -fopenmp -O -c lpa.f90 -o lpa.o')
os.system('gfortran -fopenmp -O lpa.o -o lpa.exe $HOME/auto/07p/lib/*.o')
os.system('./lpa.exe')

This should generate the same output files in your directory.

2. Some part of the auto toolchain seems broken by our use of the IPython Notebook:

596 raise AUTOExceptions.AUTORuntimeError("Error running AUTO")
597
598 def test():

AUTORuntimeError: Error running AUTO

Despite the fact that something seems broken, we notice that auto still ran and created three output files.

Read Output Files and Plot Solution Branches

Parse and Plot By Hand

In [ ]:
!ls

The b.lpa file (see comment on naming schemes below) seems to hold all branches and some auto-specific info at the top of the file. Let's import a few tools that will come in handy for reading b.lpa and plotting the branches described therein.

In [ ]:
import pandas as pd
from matplotlib import pylab as pl

Parse b.lpa

In [ ]:
content = None
with open('b.lpa', 'r') as f:
    content = f.readlines()

Line 15 (zero-indexed) onwards are the branches.

In [ ]:
content[15:20]

In [ ]:
content_csv = [[el for el in content[15].split(' ') if len(el) > 0 and el != '\n']]
content_csv[0][0] = 'branch'
column_names = content_csv[0]

These are our column names found in content[15].
In [ ]:
column_names

In [ ]:
for line in content:
    dummy = line.split(' ')
    dummy = [el for el in dummy if len(el) > 0 and el != '\n']
    if dummy[0] == '0':
        continue
    for el_i, el in enumerate(dummy):
        if el_i < 4:
            dummy[el_i] = int(el)
        else:
            dummy[el_i] = float(el)
    if len(dummy) > 1:
        content_csv.append(dummy)

Load the resulting list of lists into a pandas DataFrame.

In [ ]:
df = pd.DataFrame(content_csv, columns=column_names)

In [ ]:
df

Plot branches in b.lpa

Plotting branches is easy now.

In [ ]:
pl.scatter(df[df['branch'] == 1].tot, df[df['branch'] == 1].aL)
pl.scatter(df[df['branch'] == 2].tot, df[df['branch'] == 2].aL)

Parse and Plot with Built-In AUTO CLUI methods

Parse b.lpa

The manual says that files fort.7, fort.8 and fort.9 are equivalent to b.*, s.*, and d.* (in that order). The fort.* naming scheme may be older, as the latest version of AUTO appears to produce files in the latter naming scheme. parseB.py in auto/07p/python/ is probably a good start for what we want to do here.

In [ ]:
import parseB

The class parseB.parseBMixin provides a method called readFilename, so let's hope this allows us to read and parse our b.lpa file.

In [ ]:
pb_obj = parseB.parseBMixin()

In [ ]:
pb_obj.readFilename('b.lpa')

Hmmm ... nope.

In [ ]:
pb_obj = parseB.parseB()

In [ ]:
b_lpa = open('b.lpa', 'r')
pb_obj.read(b_lpa)

The object pb_obj seems to store an array of the branches found in b.lpa.

In [ ]:
pb_obj.branches[:2]

Branches are saved as instances of Points.Pointset which inherits from class Points (Points.py is found in auto/07p/python). Let's see what members the class Pointset defines.

In [ ]:
pb_obj.branches[0].name

In [ ]:
pb_obj.branches[0].keys()

In [ ]:
pb_obj.branches[0]['tot']

In [ ]:
pb_obj.branches[0].todict()

The object pb_obj.branches[0] holds the coordinates for the first branch in b.lpa in an accessible format -- perfect for plotting. All we need now are stability properties and bifurcation points on this branch!
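As an aside: since b.* files are essentially whitespace-delimited tables, the hand-rolled splitting and type conversion above can also be sketched with pandas alone. The fragment below is synthetic -- the column names and values merely mimic the b.lpa layout used in this notebook, not a real AUTO run:

```python
import io
import pandas as pd

# Synthetic b.*-style fragment: header row followed by whitespace-separated
# numeric columns, as in the b.lpa file parsed by hand above.
fragment = io.StringIO(
    "branch PT TY LAB tot aL\n"
    "1 1 9 0 0.10 0.50\n"
    "1 2 9 0 0.20 0.45\n"
    "2 1 9 0 0.10 0.80\n"
)
# sep=r'\s+' splits on runs of whitespace and infers numeric dtypes for us.
df = pd.read_csv(fragment, sep=r'\s+')
print(df[df['branch'] == 1]['tot'].tolist())  # [0.1, 0.2]
```

For a real b.lpa one would still need to skip the AUTO header block at the top of the file (e.g. via the skiprows argument), which is why the manual parse keys off the leading '0' marker.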
In [ ]:
pb_obj.branches[0].labels

This branch is not very eventful: all we get are one endpoint at index 0 and another endpoint at index 46. Let's check out the next branch.

In [ ]:
pb_obj.branches[1].labels

In [ ]:
pb_obj.branches[1]['tot'][37]

This branch point BP (in this case, a transcritical bifurcation point) is one of four bifurcation points described in our publication -- note that the tot parameter used in this notebook is defined to be an order of magnitude smaller than the same parameter (tot, or T as we called it) in that publication. Hence, in the publication we reported T = 23.0 as a bifurcation point whereas here we observe tot = 2.3 -- these bifurcation points are equivalent.

We now have the coordinates of all branches in b.lpa and special points (bifurcation points) along these branches. What we still want is the stability of these branches. Inspecting our current directory, we notice a file d.lpa (this file may be called fort.9 in the older naming scheme) which contains the eigenvalues at some points along these branches. A Python module parseD.py exists in auto/07p/python, so let's see what this offers. (Note: importing parseD as pd would shadow the pandas import from above, so we import it under its own name.)

In [ ]:
import parseD

In [ ]:
pd_obj = parseD.parseD()
pd_obj.read(open('d.lpa', 'r'))

The object pd_obj appears to be a list of dictionaries and, while the overall parsing of d.lpa does not appear to be implemented to perfection (yet), we are given access to the eigenvalues and branch number (let's hope that branch numbers are consistent throughout!).

In [ ]:
pd_obj[0]

In [ ]:
for i in range(60, 100):
    print 'Branch number', pd_obj[i]['Branch number'], 'eigenvalues', pd_obj[i]['Eigenvalues']

In [ ]:
pd_obj[71]

It is not entirely clear to me how the diagnostic output in d.lpa and the branches in b.lpa can be combined into one bifurcation diagram.
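One way the eigenvalues in d.lpa could eventually feed into a bifurcation diagram is a simple per-point stability flag: a steady state is asymptotically stable when every eigenvalue has negative real part. This is only a sketch of that criterion, not part of the AUTO API, and it assumes the eigenvalues have already been extracted as (re, im) pairs:

```python
def is_stable(eigenvalues):
    """eigenvalues: iterable of (re, im) pairs for one point on a branch.

    A steady state is asymptotically stable iff all real parts are negative.
    """
    return all(re < 0 for re, _im in eigenvalues)

# Toy data: a stable point (all real parts negative) and an unstable one.
stable_point = [(-0.5, 0.0), (-1.2, 0.3), (-1.2, -0.3)]
unstable_point = [(0.1, 0.0), (-2.0, 0.0)]
print(is_stable(stable_point), is_stable(unstable_point))  # True False
```

With such a flag per diagnostic point, stable and unstable segments of each branch could be drawn with different line styles, which is the usual convention in bifurcation diagrams.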
In [ ]:
import parseS as ps

In [ ]:
ps_obj = ps.parseS()

In [ ]:
ps_obj.read(open('s.lpa', 'r'))

In [ ]:
ps_obj[0]

In [ ]:
ps_obj[0].__str__()

In [ ]:
ps_obj[0].data

Stability Data and Bifurcation Points

The above bifurcation plot is close to the one shown in Figure 3 of a previous article where the same analysis was done on the same system. What's still missing are stability data (eigenvalues along the solution branches) and bifurcation points. The file d.lpa generated by AUTO seems to contain that information.

In [ ]:
dlpa = None
with open('d.lpa', 'r') as f:
    dlpa = f.readlines()

In [ ]:
dlpa[:20]
# Part 2. Additional questions

## Question

(a) One of the two functions below has a Fourier series that is much easier to find than the other. Determine which one is easier to find (sketching a few periods of each would help), and then find its Fourier series:

f(x) = 37 + 5cos 4x,  f(x + 2π) = f(x)
f(x) = 4cos x,  f(x + 2) = f(x).

(b) Find a Fourier series (period 2π) representation for

f(x) = 3cos 2x + 4sec² 3x − 4tan² 3x + 8cos 5x sin 5x.

(Avoid using integration to find the answer: use trig identities.) Will the period π Fourier series representation of f(x) be the same? Why or why not?
##### (Fill in the blanks; few words; 5 pts) Systematics is the branch of biology that deals with ______ and uses genes to understand relationships...

##### Prove that if A and B are sets, C is nonempty, and A × C = B × C, then A = B.

##### Write an equation for and graph a parabola with the given focus F and vertex V. (Lesson 7-1) F(1, 5), V(1, 3); F(5, -7), V(1, -7). MULTIPLE CHOICE: In each of the following, a parabola and its directrix are shown; in which parabola is the focus farthest from the vertex? (Lesson 7-1) DESIGN: The cross-section of the mirror in the flashlight design below is a parabola. Write an equation that models the parabola. (Lesson 7-1)

##### Find the domain of the vector-valued function. (Enter your answer using interval notation.) r(t) = F(t) × G(t), where F(t) = t³ i − t j + t k and G(t) = √t i + 1/(t + 6) j + (t + 4) k.

##### A diffraction grating is designed to spread out the 1st-order spectrum of visible light such that the red (700 nm) end of the spectrum is at an angular location of 70.0 degrees. (a) What is the angular location of the blue (400 nm) end of the spectrum? (b) Calculate the number of lines per centimeter of the diffraction grating needed to produce this spread. (c) For the 2nd-order spectrum, what wavelengths of this spread will be included for this diffraction grating? (d) What will be the angular range...

##### 32 22 23 22 28 38 38 35 13 1 22 29 A53 18 13 M ~ = 22 27 86 28 21 8L 28 28 94 19 10 84 42 33 66 26 12 86 29 31 80 14 96 30 29. Determine the number of classes for this data. Establish the class widths. Establish the class boundaries. Establish the frequency distribution for this data. Where necessary for class widths, round to the closest whole number.

##### Assume that white light is provided by a single source in a double-slit experiment. Describe the interference pattern if one slit is covered with a red filter and the other slit is covered with a blue filter.

##### Find the flux of $\mathbf{F}$ across $\sigma$ by expressing $\sigma$ parametrically. $\mathbf{F}(x, y, z)=\mathbf{i}+\mathbf{j}+\mathbf{k}$; the surface $\sigma$ is the portion of the cone $z=\sqrt{x^{2}+y^{2}}$ below the plane $z=1$, oriented by downward unit normals.

##### Find the derivative of the vector function r(t) = ta × (b + tc), where a = ⟨4, 4, 3⟩, b = ..., and c = ⟨2, 3, 5⟩.

##### If both the x and y components of a vector A are positive, then the range of its angle will be...

##### W1) Describe how a plant population differs from an individual plant. Specifically address what limits growth between the two. (Recommend 2-3 sentences to answer.) W2) Explain the ecological purpose for the great variety of life history strategies among different plant species. Provide an example to demonstrate you understand this concept. (Recommend 2-3 sentences to answer; 6 points.) W3) Discuss how changes in soil fertility will alter the density-dependent factors controlling population growth...

##### Compare and contrast viral proteins used in the life cycles of HIV and SARS-CoV-2. Choose 4 different proteins with similar or analogous functions in the two different viruses and explain ways in which they are similar as well as different.

##### Let C1 and C2 be two smooth parameterized curves that start at P0 and end at Q0 ≠ P0, but do not otherwise intersect. If the line integral of the function f(x, y, z) along C1 is equal to 43.8 and the line integral of f(x, y, z) along C2 is 18.8, what is the line integral around the closed loop formed by first following C1 from P0 to Q0, followed by the curve from Q0 to P0 along C2, but moving in the opposite direction?
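The arithmetic behind this last closed-loop question can be sketched directly: reversing a curve's orientation negates its line integral, so the loop integral is the C1 value minus the C2 value. A minimal sketch; the function name is ours, not from the problem:

```python
def loop_integral(int_c1: float, int_c2: float) -> float:
    """Line integral around the loop: follow C1 forward from P0 to Q0,
    then C2 backward from Q0 to P0. Reversing orientation flips the sign."""
    return int_c1 - int_c2

# With the values quoted in the problem:
print(loop_integral(43.8, 18.8))  # 25.0
```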
# $N_{2}$ AND Ar BROADENING OF THE P AND R BRANCHES OF THE $\nu_{3}$ BAND OF METHANE

Title: $N_{2}$ AND Ar BROADENING OF THE P AND R BRANCHES OF THE $\nu_{3}$ BAND OF METHANE
Creators: Pine, A. S.
Issue Date: 1995
Publisher: Ohio State University
Abstract: We have recorded ${N_{2}}$- and Ar-broadened spectra of the allowed P- and R-branch manifolds for $J \leq 10$ in the $\nu_{3}$ band of $CH_{4}$ from the Doppler limit to $\sim 67$ kPa at $T = 295$ K using a tunable difference-frequency laser spectrometer. The broadening coefficients and shifts and Dicke narrowing will be compared with prior laser measurements in the Q branch of this band$^{1}$ as well as previous FTIR measurements in the P and R branches$^{2}$. We will also address the question of line mixing between the overlapping tetrahedral components in a given blended J manifold as reported by the NASA Langley group$^{3}$.
Description:
1. A.S. Pine, J. Chem. Phys. 97, 773 (1992).
2. D.C. Benner, V.M. Devi, M.A.H. Smith and C.P. Rinsland, JQSRT 50, 65 (1993).
3. D.C. Benner et al., Columbus Symposium, June 1994, paper ME 12.
Author Institution: National Institute of Standards and Technology, Gaithersburg, MD 20899.
URI: http://hdl.handle.net/1811/29725
Other Identifiers: 1995-RL-12
# Wave interference pattern

## Homework Statement

Using the two-dimensional wave interference pattern shown and the two equations involving path difference, complete the following:
a) Measure the wavelength of the waves, the distance between the sources, and the path distance from each of the sources to point P.
b) Choose a point on any antinodal line and show the complete calculation for wavelength.
c) What effect would an increase in frequency have on the interference pattern?
d) What effect would decreasing the distance between the wave sources have on the interference pattern?
e) If the phase of the vibrating sources were changed so they were vibrating completely out of phase, what effect would this have on the interference pattern?

## Homework Equations

|PnS1-PnS2|=(n-1/2)W
|PmS1-PmS2|=mW
where W=wavelength

## The Attempt at a Solution

All approximated (measured with a ruler):
W = 7mm
dS1S2 = 23mm
PnS1 = 62mm
PnS2 = 81mm

|PnS1-PnS2|=(n-1/2)W
|62-81|=(1-1/2)W
-19=(1/2)W
W=-38 ???

All I need is a bit of direction here on parts a and b. The textbook is confusing with the examples it provides, and I find this question confusing as well. After I measured the wavelength, it wants me to find the wavelength using the measured wavelength? What am I trying to find? My answer is obviously way off, and substituting n=2 only gives me -12.6 for W. After I measure the wavelength, what then? Any help on this question is appreciated, thank you.

BvU (Homework Helper)
The similar threads mentioned at the bottom aren't very useful. Perhaps this one is.

> After I measured the wavelength, it wants me to find the wavelength using the measured wavelength?

I can't find that in your problem formulation? W = wavelength is difficult to get used to. Most folks use $\lambda$ (type \lambda in math mode) or λ from the list you get when clicking the $\Sigma$ on the toolbar.
> The similar threads mentioned at the bottom aren't very useful. Perhaps this one is. I can't find that in your problem formulation? W = wavelength is difficult to get used to. Most folks use $\lambda$ (type \lambda in math mode) or λ from the list you get when clicking the $\Sigma$ on the toolbar.

λ = 7mm
dS1S2 = 23mm (distance between the two sources)
PnS1 = 62mm (source 1 to the node point)
PnS2 = 81mm (source 2 to the node point)

I was only given these two equations:
|PnS1-PnS2|=(n-1/2)λ
|PmS1-PmS2|=mλ
where m = the antinode

Maybe I just need help understanding what the question is asking. I am not sure what is meant by "Measure the wavelength of the waves, the distance between the sources, and the path distance from each of the sources to point P." and then it asks me to "Choose a point on any antinodal line and show the complete calculation for wavelength." If it asks me to "measure the wavelength of the waves"... then asks me to "show the complete calculation for wavelength"... how are these two dissimilar, and how would I go about finding the complete calculation for wavelength? I thought I measured the wavelength already. I must be missing something fundamental...

BvU (Homework Helper)
You attached a picture to post #1 from which nothing at all can be measured. So it's difficult to comment on what you measured. Is this really where you had to make your measurements? I see an S1 and an S2. No point P. No Pn and no Pm either. Did you check the pictures in the link I mentioned?

If the wavelength is 7 mm and the slits are 23 mm apart, I expect more than four node lines to point to the zone between the slits. There is one antinode line where m = 0, so you know what n is on each of the node lines in the picture.

> I am not sure what is meant by "Measure 1) the wavelength of the waves, 2) the distance between the sources, and 3) the path distance from each of the sources to point P."

Well, it can't be 2) or 3), can it?
You'll need to explain the cause of this uncertainty for a potential helper.
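For the thread above, the nodal-line relation can be inverted to recover the wavelength from a measured path difference. A quick sketch (the function and loop are ours, and the right node index n must be read off the actual pattern) shows why the poster's n = 1 attempt fails while a higher node index reproduces the measured value of roughly 7 mm:

```python
def wavelength_from_node(ps1_mm: float, ps2_mm: float, n: int) -> float:
    """Nodal-line condition |PnS1 - PnS2| = (n - 1/2) * wavelength,
    solved for the wavelength. The path difference is always non-negative."""
    return abs(ps1_mm - ps2_mm) / (n - 0.5)

# Using the measurements quoted in the thread (62 mm and 81 mm):
for n in (1, 2, 3):
    print(n, wavelength_from_node(62, 81, n))
# n = 3 gives 7.6 mm, close to the 7 mm read off with a ruler,
# suggesting the chosen point lies on the third nodal line.
```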
# Zorn's Lemma Implies Axiom of Choice ## Theorem If Zorn's Lemma is true, then so must the Axiom of Choice be. ## Proof Let $X$ be a set. Let $\mathcal F$ be the set of mappings defined as: $f \in \mathcal F \iff \begin{cases} \operatorname{Dom} \left({f}\right) \subseteq \mathcal P \left({X}\right) & \ \\ \operatorname{Im} \left({f}\right) \subseteq X & \ \\ \forall A \in \operatorname{Dom} \left({f}\right): f \left({A}\right) \in A & \ \end{cases}$ Let $\preceq$ be the relation defined on $\mathcal F$ as: $\forall f_1, f_2 \in \mathcal F: f_1 \preceq f_2 \iff f_2$ is an extension of $f_1$. Straightforwardly, $\preceq$ is a partial ordering on $\mathcal F$. Suppose Zorn's Lemma holds. Then there exists a maximal element of $\mathcal F$. We then show that if $g$ is such a maximal element, then: $\operatorname{Dom} \left({g}\right) = \mathcal P \left({X}\right) \setminus \varnothing$
IN THIS ISSUE

Research Papers

J. Eng. Mater. Technol. 2015;137(4):041001-041001-11. doi:10.1115/1.4030480.

Friction stir welding (FSW) technique has been successfully applied to butt joining of aluminum alloy 6061-T6 to one type of advanced high strength steel (AHSS), transformation induced plasticity (TRIP) 780/800, with the highest weld strength reaching 85% of the base aluminum alloy. Mechanical welding forces and temperature were measured under various sets of process parameters and their relationships were investigated, which also helped explain the observed macrostructure of the weld cross section. Compared with FSW of similar aluminum alloys, only one peak of axial force occurred during the plunge stage. Three failure modes were identified during tensile tests of weld specimens, which were further analyzed based on the microstructure of joint cross sections. Intermetallic compound (IMC) layer with appropriate thickness and morphology was shown to be beneficial for enhancing the strength of the Al–Fe interface. Commentary by Dr. Valentin Fuster

J. Eng. Mater. Technol. 2015;137(4):041002-041002-8. doi:10.1115/1.4030481.

Fiber-reinforced polymer (FRP) composites used in the construction of composite-based civil and military marine crafts are often exposed to aggressive elements that include ultraviolet radiation, moisture, and cyclic loadings. With time, these elements can individually, and more so cooperatively, degrade the mechanical properties and structural integrity of FRP composites. To assist in increasing the long-term reliability of composite marine crafts, this work experimentally investigates the cooperative damaging effects of ultraviolet (UV), moisture, and cyclic loading on the structural integrity of carbon fiber reinforced vinyl-ester marine composite. Results demonstrate that UV and moisture can synergistically interact with fatigue damage mechanisms and accelerate fatigue damage accumulation.
For the considered composite, damage and S–N curve models with minimal fitting constants are proposed. The new models are derived by adapting well-known cumulative fatigue damage models to account for the ability of UV and moisture to accelerate fatigue damaging effects.

J. Eng. Mater. Technol. 2015;137(4):041003-041003-9. doi:10.1115/1.4030687.

The aim of this study is to investigate the influence of yield strength of the filler material and weld metal penetration on the load carrying capacity of butt welded joints in high-strength steels (HSS) (i.e., grade S700 and S960). These joints are manufactured with three different filler materials (under-matching, matching, and over-matching) and full and partial weld metal penetrations. The load carrying capacities of these joints are evaluated with experiments and compared with the estimations by finite element analysis (FEA) and the design rules in Eurocode3 and American Welding Society Code AWS D1.1. The results show that load carrying estimations by FEA, Eurocode3, and AWS D1.1 are in good agreement with the experiments. It is observed that the global load carrying capacity and ductility of the joints are affected by weld metal penetration and the yield strengths of the base and filler materials. This influence is more pronounced in joints in S960 steel welded with under-matched filler material. Furthermore, the base plate material strength can be utilized in under-matched butt welded joints provided appropriate weld metal penetration and width is assured. Moreover, it is also found that the design rules in Eurocode3 (valid for design of welded joints in steels of grade up to S700) can be extended to designing of welds in S960 steels by the use of a correlation factor of one.

J. Eng. Mater. Technol. 2015;137(4):041004-041004-9. doi:10.1115/1.4030759.
The fatigue properties of two variants of AISI 1018 steel samples were measured in a series of 33 experiments using new kinds of magnetic diagnostics. An MTS-810 servohydraulic test machine applied sinusoidal fully reversed (R = −1) loads under strain control in the range $0.0008 \le \epsilon \le 0.0020$. In 28 experiments, the number of cycles to fatigue failure Nf varied between 36,000 < Nf < 3,661,000. By contrast, in five runs extending over $10^7$ cycles, the specimens showed no detectable signs of weakening or damage. The corresponding “S-N” or classical Wöhler plots indicated that the transitions from fatigue failure to nominally infinite life (i.e., the fatigue limit) occurred at strains of about $\epsilon = 0.0009$ and $\epsilon = 0.0010$, respectively, for the two types of steel. Every loading cycle of each test was instrumented to record continual values of stress and strain. Flux gate magnetometers measured the variations of the piezomagnetic fields near the specimens. A 1000-turn coil surrounding the test pieces detected the piezo-Barkhausen pulses generated by abrupt rearrangements of their internal ferromagnetic domain structures. Analyses of the magnetic data yielded four independent indices, each of which located the fatigue limits in complete agreement with the values derived from the Wöhler curves.

J. Eng. Mater. Technol. 2015;137(4):041005-041005-10. doi:10.1115/1.4030804.

This paper presents a methodology to define and verify the dynamic behavior of materials based on Taylor's test. A brass alloy with a microstructure composed mainly of two pure metals that have two different crystal structures, copper (face-centered cubic (fcc)) and zinc (hexagonal close-packed (hcp)), is used in this study.
A combined approach of different principal mechanisms controlled by the emergence and evolution of mobile dislocations, as well as the long-range intersections between forest dislocations, is therefore adopted to develop an accurate definition for its flow stress. The constitutive relation is verified against experimental results conducted at low and high strain rates and temperatures using a compression screw machine and a split Hopkinson pressure bar (SHPB), respectively. The present model predicted results that compare well with experiments and was capable of simulating the low strain rate sensitivity that was observed during the several static and dynamic tests. The verified constitutive relations are further integrated and implemented in a commercial finite element (FE) code for three-dimensional (3D) Taylor's test simulations. A Taylor's test enables the definition of only one point on the stress–strain curve for a given strain rate, using the initial and final geometry of the specimen after impact into a rigid surface. Thus, it is necessary to perform several tests with different geometries to define the complete material behavior under dynamic loadings. The advantage of using strain-rate-independent brass in this study is the possibility to rebuild the complete process of strain hardening during Taylor's tests by using the same specimen geometry. Experimental results using the Taylor test technique at a range of impact velocities between 70 m/s and 200 m/s are utilized in this study to validate the constitutive model of predicting the dynamic behavior of brass at extreme conditions.

J. Eng. Mater. Technol. 2015;137(4):041006-041006-12. doi:10.1115/1.4030786.

Accurate prediction of the formability in a multistage forming process is very challenging due to the dynamic shift of limiting strain during the different stages, depending on the tooling geometry and the selection of the process parameters.
Hence, in the present work, a mathematical framework is proposed for the estimation of stress-based and polar effective plastic strain forming limit diagrams (σ-FLD and PEPS-FLD) using the Barlat-89 anisotropic plasticity theory in conjunction with three different hardening laws, namely the Hollomon, Swift, and modified Voce equations. A two-stage stretch forming setup had been designed and fabricated to first prestrain in an in-plane stretch forming setup, and, subsequently, limiting dome height (LDH) testing was carried out on the prestrained blanks in the second stage to evaluate the formability. The finite element (FE) analysis of this two-stage forming process was carried out in ls-dyna for automotive grade dual-phase (DP) and interstitial-free (IF) steels, and the σ-FLD and PEPS-FLD were used as the damage model to predict failure. The predicted forming behaviors, such as LDH, thinning development, and the load progression, were validated with the experimental results. It was found that the LDH in the second stage decreased with increase in the prestrain amount, and both the σ-FLD and PEPS-FLD could predict the formability considering the deformation histories in the present multistage forming process with complex strain path.

Technical Brief

J. Eng. Mater. Technol. 2015;137(4):044501-044501-7. doi:10.1115/1.4030338.
Version 1.5

Section 6.4: Creating a Solute Table | Up: Chapter 6: Materials | Section 6.6: Adding Chemical Reactions

## 6.5 Creating a Solid-Bound Molecule Table

Solid-bound molecules (SBM) may be included in multiphasic analysis. A global table of SBMs is created by accessing the Physics/Solid-Bound Molecule Table menu. When adding materials that include SBMs, these may be selected from the SBM table.
# Does matrix A belong to span(A_1,A_2, ) 1. Mar 17, 2005 Please excuse the lack of latex, I'm in somewhat of a hurry and I do not have the time to lookup the latex syntax for matrices. With that out of the way, here is my question? For an upcoming test question we were given example questions that may or may not appear on the test. One question was given as: Does matrix A belong to span(A_1,A_2,A_3) A_1 = (1 2) (3 4) A_2 = .... A_3 = .... or for which values of $$\alpha$$ -------- NOTE: The syntax of A_1 is just what I'm using it to make it readable, thus: a_11 = 1 a_12 = 2 a_21 = 3 a_22 = 4 -------- So, as you can see the question is not really detailed. It's just to give us an idea of the question. If the question was something like: does $$\vec{V}$$ belong to span(A_1,A_2) where A_1=(1,4,3)^T , A_2=(-2,-1,0)^T Then I definitely understand how to solve the problem. I'm just confused with the comment of, does this matrix belong, and where A_1 will equal something like: (a b) (c d) Would it make sense to solve the problem like this: * If we use the given problem above. ---- Does matrix A belong to span(A_1,A_2,A_3) A_1 = (1 2) (3 4) A_2 = .... A_3 = .... ---- * Then write A_n as a vector like: Does the matrix A (where A=something like (a1,a2,a3,a4)^T ) belong to span(A_1,A_2,A_3) where A_n = something like (a,b,c,d)^T Ok, sorry if this is confusing... I just feel like I don't understand well enough what the question is asking. Also, one last thing... when someone says span(A_1,A_2) where A_1 = (a b) (c d) does that mean that A_1 is in the vector space R^4 and is that the same as R^(2x2)? thank you. and again I apologize for not using tex. (i'm just in a hurry here) :) thanks guys 2. Mar 17, 2005 ### HallsofIvy Actually, any 2 by 2 matrix can be considered to be in R^4 which is, of course, the same as R^(2x2) because 2x2= 4! To say that A belongs to the span of A1, A2, A3 means that A is a linear combination of A1, A2, and A3. 
that is, that A = aA1 + bA2 + cA3 for some numbers a, b, c. Since, in this example, every matrix has 4 components, this gives 4 equations for the 3 numbers a, b, c. The fact that 4 "independent" equations in 3 unknowns does not necessarily have a solution is what leads to the question about whether that is possible at all.

3. Mar 17, 2005

### AKG

A matrix is a vector in the sense that the set of (m x n) matrices forms a vector space. Just check for yourself (check the rules for addition and scalar multiplication). So, to see if a vector v is in Span{v1, v2, ..., vk}, see if you can find a linear combination of the vi such that v equals it, i.e.

$$v = a_1v_1 + \dots + a_kv_k$$

In your case, the $a_i$ will probably be real numbers, and v and the $v_i$ will be 2x2 matrices. This will give you 4 equations (one equation for the 1,1 element of v = the 1,1 element of the linear combination, one equation for the 1,2 element of v = the 1,2 element of the lin. comb., etc.) with k unknowns (a1, ..., ak). If a solution for the ai exists, it is in the span. And yes, you could treat the 2x2 matrices as elements of R^4. They are NOT the same, but expressing the matrix:

(a b)
(c d)

with respect to the standard basis will give you the co-ordinates (a, b, c, d), which looks just like what you're used to seeing when you talk about vectors of R^4. Note that really, (a, b, c, d) is not a vector of R^4, but an expression of a vector (in some 4-dim vector space) with respect to some basis. And, of course, R^4 is the same as R^(2x2) since 2x2 = 4 ;).

4. Mar 18, 2005

Ok awesome, I definitely understand the problem now. Yeah, one would think that $$\Re^{2 \times 2}$$ would be equal to $$\Re^{4}$$ since 2x2=4 :) But sometimes I think math notation can be confusing, such as: $$sin^{-1}(x) \neq \frac{1}{sin(x)}$$ - or - $$y=x^2$$ $$y'=2x=2x^1$$ yet then someone could use $$x'$$ as a variable, and it has no relation to the derivative.
- or - $$\frac{\partial^2}{\partial x^2} \neq \frac{\partial}{\partial x} \times \frac{ \partial}{\partial x}$$ etc... I mean yes you can definitely get used to it, and it works. But I'd just rather ask a question instead of assuming something. :) so anyways, thank you... I appreciate it.
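The procedure described in this thread (flatten each 2x2 matrix to a vector in R^4 and solve four equations for the coefficients) can be checked numerically. The matrices A2 and A3 below are illustrative stand-ins, since the original question elided them:

```python
import numpy as np

def in_span(A, basis, tol=1e-9):
    """Return (is_in_span, coefficients) for A vs. span(basis),
    treating each m x n matrix as a flattened vector."""
    M = np.column_stack([B.reshape(-1) for B in basis])
    coeffs, *_ = np.linalg.lstsq(M, A.reshape(-1), rcond=None)
    return bool(np.allclose(M @ coeffs, A.reshape(-1), atol=tol)), coeffs

A1 = np.array([[1.0, 2.0], [3.0, 4.0]])   # the matrix given in the thread
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])   # illustrative
A3 = np.array([[1.0, 0.0], [0.0, 1.0]])   # illustrative
A = 2 * A1 - A2 + 3 * A3                  # in the span by construction

ok, coeffs = in_span(A, [A1, A2, A3])
print(ok, np.round(coeffs, 6))
```

Least squares finds the best-fitting coefficients even when the 4x3 system is inconsistent; the residual check then decides membership.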
# How did J.J. Thomson learn that what he discovered was different than an atom or a molecule?

Wikipedia says he discovered that the electron was different than an atom or molecule, but his line of reasoning is not shown. I additionally searched Stack Exchange and, unless I missed it, found nothing. I am lost as to what tag to use on this or whether it is "off topic". Perhaps there is a good resource on the J. J. Thomson discovery. It is interesting that he was quoted as saying the electron was too small to ever be useful for anything. But since he lived until 1940, he was probably a very happy camper to know how useful his discovery turned out to be.

• You will find the answer here: en.m.wikipedia.org/wiki/… – HiterDean Sep 19 at 3:10
• @HiterDean ....This is a very nice explanation too! The Wikipedia I found was different than this. This article seems to have much more detail in it. Thank you. – Sedumjoy Sep 19 at 3:25
• you should always read related posts about your topic on Wikipedia and other references, in this case Electrons. – HiterDean Sep 19 at 3:34

This constant value, when we measure $$e/m$$ in the c.g.s. system of magnetic units, is equal to about $$1.7\times10^7$$. If we compare this with the value of the ratio of the mass to the charge of electricity carried by any system previously known, we find that it is of quite a different order of magnitude. Before the cathode rays were investigated, the charged atom of hydrogen met with in the electrolysis of liquids was the system which had the greatest known value of $$e/m$$, and in this case the value is only $$10^4$$; hence for the corpuscle in the cathode rays the value of $$e/m$$ is $$1700$$ times the value of the corresponding quantity for the charged hydrogen atom.
This discrepancy must arise in one or other of two ways; either the mass of the corpuscle must be very small compared with that of the atom of hydrogen, which until quite recently was the smallest mass recognized in physics, or else the charge on the corpuscle must be very much greater than that on the hydrogen atom. Now it has been shown by a method which I shall shortly describe, that the electric charge is practically the same in the two cases; hence we are driven to the conclusion that the mass of the corpuscle is only about $$1/1700$$ of that of the hydrogen atom. Thus the atom is not the ultimate limit to the subdivision of matter; we may go further and get to the corpuscle, and at this stage the corpuscle is the same from whatever source it may be derived.
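Thomson's numerical comparison, the heart of the argument quoted above, is just a ratio of the two charge-to-mass values:

```python
em_corpuscle = 1.7e7  # e/m for cathode-ray corpuscles, c.g.s. magnetic units
em_hydrogen = 1.0e4   # e/m for the hydrogen ion in electrolysis

ratio = em_corpuscle / em_hydrogen
print(ratio)  # 1700.0
# With the charges shown to be practically equal, the corpuscle's mass
# must be about 1/1700 of the hydrogen atom's.
```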
# Standard Error Same As Standard Deviation?

## Contents

The standard error is the standard deviation of the sampling distribution of a statistic, most commonly of the mean. The standard error of the mean (SEM) is represented by the symbol $\sigma_{\bar{x}}$. The term may also be used to refer to an estimate of that standard deviation, derived from a particular sample used to compute the estimate. Naturally, the value of a statistic varies from sample to sample: had you drawn another sample $\tilde{\mathbf{x}}$, you would have ended up with another estimate, $\hat{\theta}(\tilde{\mathbf{x}})$.

## When to use standard deviation vs. standard error

When we calculate the standard deviation of a sample, we are using it as an estimate of the variability of the population from which the sample was drawn. In the examples so far, the population standard deviation $\sigma$ was assumed to be known; when $\sigma$ must instead be estimated from the sample, the standard deviation of the Student t-distribution is used to calculate confidence intervals. The sample mean will very rarely be equal to the population mean. Gurland and Tripathi (1971) provide a correction for the bias of the estimated standard deviation in small samples.

## Standard error of sample estimates

Sadly, the values of population parameters are often unknown, so standard errors are estimated from the data; in fact, data organizations often set reliability standards that their data must reach before publication. For example, researchers may report that candidate A is expected to receive 52% of the final vote, with a margin of error of 2%; here the polled voters are a sample from all the actual voters. That notation gives no indication whether the second figure is a standard deviation or a standard error; the term "standard error" on its own is a bit ambiguous, which is one source of the common misuse of the standard error of the mean.
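The distinction drawn here can be seen in a few lines; the sample values below are invented for illustration:

```python
import numpy as np

x = np.array([10.1, 9.8, 10.4, 10.0, 9.7, 10.2])  # illustrative sample

sd = x.std(ddof=1)          # sample standard deviation: spread of the data
sem = sd / np.sqrt(x.size)  # standard error of the mean: SD / sqrt(n)

# As n grows, the SD stays roughly constant while the SEM shrinks
# like 1/sqrt(n), reflecting the increasing precision of the mean.
print(round(float(sd), 4), round(float(sem), 4))
```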
# Net Ionic Equation 1. ### leebongemail 7 Problem: When a solution of sodium hydroxide is added to a solution of ammonium carbonate, and the solution is heated, ammonia gas, (NH3) is released. Write a net ionic equation for this reaction. Hint: both NaOH and (NH4)2CO3 exist as dissociated ions in aqueous solution. Can someone help me through the steps to find the net ionic equation? I need help to begin writing the balanced equation for this system. 2. ### bubbles 97 First, you need to write a balanced molecular equation. Then, write an ionic equation that includes all the ions in the reaction. Third, cross out the spectator ions on both sides of your equation and you get the net ionic equation. Remember to check solubility rules. Hint: It is a double displacement reaction. Last edited: Jul 18, 2006 3. ### leebongemail 7 Hm, well for the first step i got NaOH+(NH4)2CO3 <---->NaCO3+NH3+H20 is that correct? 4. ### bubbles 97 The sodium ion has a 1+ charge whereas the carbonate ion has a 2- charge, so your equation is wrong. (It's Na2CO3) Last edited: Jul 18, 2006 5. ### leebongemail 7 so the equation is NaOH+(NH4)2CO3 <---> Na2CO3+NH3+H2O? 6. ### bubbles 97 Yes, but you have to balance the equation. 7. ### leebongemail 7 2NaOH+(NH4)2CO3 <---> 2Na2CO3+2NH3+2H2O is this balanced equation correct? Last edited: Jul 18, 2006 8. ### PPonte 0 Hint: Count the number of $$Na^+$$ that you have in both members. :tongue: If you have difficulties balancing chemical equations, you may want to see- I always use this method when I can't figure out the stoichiometric coefficients by trial-error. http://en.wikipedia.org/wiki/Chemical_equations http://www.studyworksonline.com/cda/content/article/0,,EXP1315_NAV2-100_SAR1316,00.shtml Last edited by a moderator: Jul 18, 2006 9. ### leebongemail 7 where did K come from? 10. ### sdekivit 92 First you need to know what ions your solution contains. 
Ammonium carbonate solution contains: $$NH_{4} ^{+}$$ and $$CO_{3} ^{2-}$$

Sodium hydroxide solution contains: $$Na^{+}$$ and $$OH^{-}$$

We already know that $$NH_{3}(g)$$ will be formed, and so you know that the ammonium ions will react with the hydroxide ions. The sodium carbonate that will be formed is soluble in water and thus doesn't need to be taken up in the ionic equation. Thus the net ionic reaction occurring here is:

$$NH_{4} ^{+} + OH^{-} \rightarrow NH_{3} + H_{2}O$$

Remember that you added 2 ion-containing solutions to each other, thus the ions already exist. When solid ammonium carbonate is added to a sodium hydroxide solution, you refer to $$(NH_{4})_{2}CO_{3}$$ in the equation, because it needs to be dissolved first.

Last edited: Jul 19, 2006

11. ### leebongemail 7

is there a website that has the common or many ions for an element? I'm taking a summer introduction to chem class at this community college, and it goes by really fast. Does someone recommend a site? I can sure use it. I have a test tomorrow on 'Acids and Bases' and 'Reaction Rates and Chemical Equilibrium'. I'm sure to have questions later in the day! :tongue:
Dopex-Essentials

# The Silverback Odyssey

## Here is what may be going through his mind:

• He knows he carries unlimited risk and limited reward potential
• He also knows that, by virtue of time, there is a chance for the option he is selling to transition into an ITM option, which means he will not get to retain the premium received for writing the option

## When you pay a premium for options, you are paying towards:

1. Time Induced Risk
2. The option’s intrinsic value

## Example

###### Assuming that Pump Token is trading at $1,423 — let us calculate the value of the following options:

1. $1,350 Call Option
2. $1,450 Call Option
3. $1,400 Put Option
4. $1,450 Put Option

## First, remember these points

1. The intrinsic value can never go below 0. If the value is negative, then the intrinsic value is considered to be zero.
2. Intrinsic value for Call options is calculated in the following way: “Spot Price — Strike Price”
3. Intrinsic value for Put options is calculated in the following way: “Strike Price — Spot Price”

## With that in mind, we can calculate the intrinsic values of the aforementioned options

1. $1,350 Call Option = $1,423 — $1,350 = +73
2. $1,450 Call Option = $1,423 — $1,450 = 0 (negative value)
3. $1,400 Put Option = $1,400 — $1,423 = 0 (negative value)
4. $1,450 Put Option = $1,450 — $1,423 = +27

## Let us take another example

• Underlying Asset Price = $1,514
• Strike Price = $1,450
• Option Type: Call Option
• Option Moneyness = ITM
• Date of purchase = 6th July 2021
• Date of Expiry = 30th July 2021

# Glossary of terms

###### In the money (ITM):

• For a call — this term is used when the strike price is lower than the current price of the underlying asset.
• For a put — this term is used when the strike is higher than the current price.

###### At the money (ATM):

• For both a call and a put — this term is used when the strike is equal to the current price.

###### Out of the money (OTM):

• For a call — this term is used when the strike price is higher than the current price of the underlying asset.
• For a put, this term is used when the strike is lower than the current price.
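The intrinsic-value and moneyness rules above are mechanical enough to express in a few lines of code. The sketch below is illustrative only (the function names are mine, not part of any Dopex API):

```python
def intrinsic_value(option_type, spot, strike):
    # Call: max(0, spot - strike); put: max(0, strike - spot).
    # A negative result is clamped to zero, as noted above.
    if option_type == "call":
        return max(0.0, spot - strike)
    return max(0.0, strike - spot)

def moneyness(option_type, spot, strike):
    # Classify an option as ITM / ATM / OTM per the glossary.
    if strike == spot:
        return "ATM"
    if option_type == "call":
        return "ITM" if strike < spot else "OTM"
    return "ITM" if strike > spot else "OTM"

spot = 1423
print(intrinsic_value("call", spot, 1350))   # 73.0
print(intrinsic_value("call", spot, 1450))   # 0.0 (negative, clamped)
print(intrinsic_value("put", spot, 1400))    # 0.0 (negative, clamped)
print(intrinsic_value("put", spot, 1450))    # 27.0
```

Running the same rules on the second example ($1,514 spot, $1,450 strike, call) classifies it as ITM, matching the listing above.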
# RoboTurk Datasets

RoboTurk is a crowdsourcing platform developed to enable the collection of large-scale manipulation datasets. Below, we describe RoboTurk datasets that are compatible with robosuite.

## Updated Datasets compatible with v1.0

We are currently in the process of reformatting the demonstrations. These datasets will be made available soon.

## Original Datasets compatible with v0.3

We collected a large-scale dataset on the SawyerPickPlace and SawyerNutAssembly tasks using the RoboTurk platform. Crowdsourced workers collected these task demonstrations remotely. It consists of 1070 successful SawyerPickPlace demonstrations and 1147 successful SawyerNutAssembly demonstrations. We are providing the dataset in the hope that it will be beneficial to researchers working on imitation learning. Large-scale imitation learning has not been explored much in the community; it will be exciting to see how this data is used.

After unzipping the dataset, the following subdirectories can be found within the RoboTurkPilot directory.

• bins-full
• The set of complete demonstrations on the full SawyerPickPlace task. Every demonstration consists of the Sawyer arm placing one of each object into its corresponding bin.
• bins-Milk
• A postprocessed, segmented set of demonstrations that corresponds to the SawyerPickPlaceMilk task. Every demonstration consists of the Sawyer arm placing a milk carton into its corresponding bin.
• bins-Bread
• A postprocessed, segmented set of demonstrations that corresponds to the SawyerPickPlaceBread task. Every demonstration consists of the Sawyer arm placing a loaf of bread into its corresponding bin.
• bins-Cereal
• A postprocessed, segmented set of demonstrations that corresponds to the SawyerPickPlaceCereal task. Every demonstration consists of the Sawyer arm placing a cereal box into its corresponding bin.
• bins-Can
• A postprocessed, segmented set of demonstrations that corresponds to the SawyerPickPlaceCan task.
Every demonstration consists of the Sawyer arm placing a can into its corresponding bin. • pegs-full • The set of complete demonstrations on the full SawyerNutAssembly task. Every demonstration consists of the Sawyer arm fitting a square nut and a round nut onto their corresponding pegs. • pegs-SquareNut • A postprocessed, segmented set of demonstrations that corresponds to the SawyerNutAssemblySquare task. Every demonstration consists of the Sawyer arm fitting a square nut onto its corresponding peg. • pegs-RoundNut • A postprocessed, segmented set of demonstrations that corresponds to the SawyerNutAssemblyRound task. Every demonstration consists of the Sawyer arm fitting a round nut onto its corresponding peg.
# Visualizing solids of revolution

Can someone shed a little bit of light on the problem of volumes of revolution about the $x$-axis and the $y$-axis of the same shapes? Take for example $f(x)=x^2$. If we want to find the volume bounded by the parabola and the $x$-axis with axis of revolution at $y=0$, we would use the standard method of disks and get the volume $\pi/5$ for $x\in[0,1]$. Now if we want to find the volume by rotating the curve around the $y$-axis instead, using the cylindrical shells method, we get the volume to be $\pi/2$. Now, maybe it's just my flawed intuition, but since we are basically rotating the same shape/area by $360^\circ$ (but in different "directions"), I would guess the two volumes to be the same. Does anyone know of some good graphics/animation/resources to help me visualize this problem?

Note also that for rotating about $y$, you could delete the whole area to the left of the axis and not change the volume. Rotating about $x$ you cannot. –  Ross Millikan May 16 '12 at 15:36

@RossMillikan the area to the left that you speak of is as if it was "deleted" when integrating for $x\in[0,1]$, isn't it? Or am I missing your point? –  Milosz Wielondek May 16 '12 at 15:45

My point was that it is another way of seeing the disconnect between area and rotated volume. You could have it or not when rotating about $y$ and get the same volume, but the area changes by a factor of 2. –  Ross Millikan May 16 '12 at 16:00

When you rotate about $y$, most of the area is farther from the axis than if you rotate around $x$. If you think about a small area of size $dx\,dy$, when rotated around $x$ it sweeps out a volume element $y\,dx\,dy$, and when rotated around $y$ it sweeps out a volume $x\,dx\,dy$. The methods of discs and shells just do one dimension of the integral for you. You could think of a rectangle $[0,10] \times [0,0.1]$. If you rotate it around $x$ you get a cylinder of radius $0.1$ and height $10$, for volume $0.1\pi$.
If you rotate it around $y$ the radius is $10$ and the height is $0.1$, for a volume of $10\pi$.
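A quick numeric check (my addition, using a simple midpoint rule rather than any symbolic machinery) confirms the two volumes quoted in the question:

```python
import math

def midpoint_integral(g, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of g over [a, b].
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# Disks about the x-axis: radius f(x) = x^2, volume element pi * x^4 dx.
vol_about_x = midpoint_integral(lambda x: math.pi * x**4, 0.0, 1.0)

# Shells about the y-axis: radius x, height x^2, volume element 2*pi*x * x^2 dx.
vol_about_y = midpoint_integral(lambda x: 2 * math.pi * x * x**2, 0.0, 1.0)

print(vol_about_x)  # ~0.6283, i.e. pi/5
print(vol_about_y)  # ~1.5708, i.e. pi/2
```

The two integrands differ (distance to the axis enters the volume element), which is exactly the point of the answer above: equal areas do not sweep out equal volumes.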
# nLab Higher Groups in Homotopy Type Theory

Abstract. We present a development of the theory of higher groups, including infinity-groups and connective spectra, in homotopy type theory. An infinity-group is simply the loops in a pointed, connected type, where the group structure comes from the structure inherent in the identity types of Martin-Löf type theory. We investigate ordinary groups from this viewpoint, as well as higher-dimensional groups and groups that can be delooped more than once. A major result is the stabilization theorem, which states that if an n-type can be delooped $n+2$ times, then it is an infinite loop type. Most of the results have been formalized in the Lean proof assistant.

category: reference

Last revised on June 9, 2022 at 13:17:09. See the history of this page for a list of all contributions to it.
# Numbering tables sequentially [duplicate]

This question already has an answer here:

I would like to number my table so that I have, e.g.

Theorem 1.3
Table 1.4
Lemma 1.5

How can I achieve this result? I am also using cleveref, so ideally \cref{ThatTable} would produce a linked "Table 1.4" when I am done as well.

## marked as duplicate by Mico Dec 9 '14 at 8:56

This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

• This requires either a common counter for theorems, lemmas and tables or another, more sophisticated approach. Please provide a MWE in order to get some quick help – user31729 Dec 9 '14 at 8:24

## 2 Answers

You don't have to do anything particular: \newtheorem{theorem}[table]{Theorem} will do.

\documentclass{book}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{cleveref}

\newtheorem{theorem}[table]{Theorem}

\begin{document}

\chapter{My content}

\begin{theorem}\label{A}
\end{theorem}

\begin{table}
\caption{A dummy table}\label{B}
\end{table}

\begin{theorem}\label{C}
\end{theorem}

\begin{theorem}
\end{theorem}

See \cref{A}, \cref{B}, \cref{C}.

\begin{table}
\caption{Another dummy table}
\end{table}

\end{document}

Note, however, that the floating nature of table may make the output appear "out of sync".

• Unfortunately, I get this. LaTeX Error: Command \theorem already defined. Or name \end... illegal, see p.192 of the manual.
It seems to be in part because I have this: \newtheorem{theorem}[thm]{Theorem}. But if I remove that, the document doesn't seem to know what a Theorem is anymore. – jdc Dec 9 '14 at 10:11

• @jdc Not with my example. – egreg Dec 9 '14 at 10:30

• Yes, your example, all alone, does the right thing, but sadly, I am unable to integrate it into my existing code in a way that doesn't give an error. I was wondering if you had any advice on how I can fix this problem. If not, I understand; in any event, I didn't mean to cast aspersions on your example, which works beautifully. – jdc Dec 9 '14 at 23:09

• @jdc Sorry, but without seeing the code and the setup it's impossible to say anything sensible. Open a new question, with a MWE. – egreg Dec 9 '14 at 23:24

This could be achieved with the mutual assignment of associated counters (package assoccnt or xassoccnt). Each time Theorem is increased, the table counter should be increased as well and vice versa; the lemma environment uses the Theorem counter, so this will be increased too. The usage of this continuous counting should prevent floating tables, as those might 'interrupt' the counting.

\documentclass{book}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{assoccnt}

\newtheorem{Theorem}{Theorem}
\newtheorem{lemma}[Theorem]{lemma}

\DeclareAssociatedCounters{Theorem}{table}%
\DeclareAssociatedCounters{table}{Theorem}%

\begin{document}

\chapter{My content}

\begin{Theorem}
\end{Theorem}

\begin{table}
\caption{A dummy table}
\end{table}

\begin{Theorem}
\end{Theorem}

\begin{Theorem}
\end{Theorem}

\begin{table}
\caption{Another dummy table}
\end{table}

\begin{lemma}
First lemma
\end{lemma}

\begin{table}
\caption{Another dummy table}
\end{table}

\begin{Theorem}
\end{Theorem}

\end{document}

• The \cref works as expected – user31729 Dec 9 '14 at 8:46
# Cyclic Quadrilateral from the USAMO

### Xiaoxue Li

Am Math Monthly, V 123, N 1, Jan 2016, p. 96

Here is problem 2, Part I, of the 28th United States of America Mathematical Olympiad.

Let $ABCD\;$ be a cyclic quadrilateral. Prove that

$|AB-CD|+|AD-BC|\ge 2|AC-BD|.$

### Solution

It may be assumed without loss of generality that $AB\ge CD,\;$ $AD\ge BC,\;$ and $AC\ge BD.\;$ It then suffices to show that $(AB-CD)+(AD-BC)\ge 2(AC-BD).\;$ We introduce auxiliary points $A',\;$ $C',\;$ and $D'\;$ such that $EC'=EC,\;$ $ED'=ED,\;$ and $AA'=CD.\;$ Observe that triangles $CDE\;$ and $C'D'E\;$ are congruent. $\angle 1=\angle 3=\angle 2,\;$ implying that $C'D'\parallel AB.\;$ By the construction $\Delta EC'D'=\Delta ECD,\;$ in particular, $C'D'=CD=AA',\;$ making $AD'C'A'\;$ a parallelogram. Thus, $AB-CD=A'B.\;$ Also

\begin{align} AC-BD &= AC-(a+b+BC')\\ &=AD'-BC'\\ &=A'C'-BC'\\ &\le A'B. \end{align}

It follows that $AB-CD\ge AC-BD.\;$ Similarly, $AD-BC\ge AC-BD.\;$ Adding these two inequalities yields the desired inequality. Equality holds when $ABCD\;$ is a rectangle.
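As a sanity check, separate from the proof above, one can sample random cyclic quadrilaterals on the unit circle and verify $|AB-CD|+|AD-BC|\ge 2|AC-BD|$ numerically (the helper below is my own, not part of the Monthly solution):

```python
import math
import random

def chord(t1, t2):
    # Length of the chord between unit-circle points at angles t1 and t2.
    return math.hypot(math.cos(t1) - math.cos(t2), math.sin(t1) - math.sin(t2))

random.seed(1)
for _ in range(10_000):
    # Sorting the angles puts A, B, C, D in cyclic order around the circle.
    a, b, c, d = sorted(random.uniform(0, 2 * math.pi) for _ in range(4))
    AB, BC, CD, DA = chord(a, b), chord(b, c), chord(c, d), chord(d, a)
    AC, BD = chord(a, c), chord(b, d)
    assert abs(AB - CD) + abs(DA - BC) >= 2 * abs(AC - BD) - 1e-9
print("inequality held for 10,000 random cyclic quadrilaterals")
```

For a rectangle both sides vanish, consistent with the equality case noted at the end of the solution.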
# Multiple justified lines/paragraphs in subfloat caption with centering line if short

Having the following MWE:

\documentclass{scrartcl}
\usepackage{graphicx}
\usepackage{subcaption}

\begin{document}
\begin{figure}
% \captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{0.31\textwidth}
\includegraphics[width=\textwidth]{example-image-a}
\caption{\texttt{image 1}}%
\label{fig:image1}
\end{subfigure}
~%
\begin{subfigure}[t]{0.31\textwidth}
\includegraphics[width=\textwidth]{example-image-b}
\caption[\texttt{image 2}]{\texttt{image 2}\\*
Longer text containing description of the image contents.}
\label{fig:image2}
\end{subfigure}
~%
\begin{subfigure}[t]{0.31\textwidth}
\includegraphics[width=\textwidth]{example-image-c}
\caption[\texttt{image 3}]{\texttt{image 3 having a name longer than the other two images}\\*
Longer text containing description of the image contents.}
\label{fig:image3}
\end{subfigure}
\end{figure}
\end{document}

I wonder if it is possible to change the formatting of subfloat captions so that the first line of (b) will be centered (as in (a)). In general, if there are multiple lines defined in a caption, each of them should be centered if it occupies less than one \linewidth and justified if it is longer – like the default behavior for a caption, but applied separately to different lines. Optionally, the second and further lines need not be indented for the label, so they can take up the full width of their subfloat.
Maybe the following gives you the desired result:

\documentclass{scrartcl}
\usepackage{graphicx}
\usepackage{subcaption}

\begin{document}
\begin{figure}
\captionsetup[subfigure]{justification=raggedright,font=tt}
\centering
\begin{subfigure}[t]{0.31\textwidth}
\includegraphics[width=\textwidth]{example-image-a}
\caption{image 1}%
\label{fig:image1}
\end{subfigure}
~%
\begin{subfigure}[t]{0.31\textwidth}
\includegraphics[width=\textwidth]{example-image-b}
\caption{image 2}
Longer text containing description of the image contents.
\label{fig:image2}
\end{subfigure}
~%
\begin{subfigure}[t]{0.31\textwidth}
\includegraphics[width=\textwidth]{example-image-c}
\caption{image 3 having a name longer than the other two images}
Longer text containing description of the image contents.
\label{fig:image3}
\end{subfigure}
\end{figure}
\end{document}

• Part of the captions is not inside the caption macro, but as long as it is not a problem for hyperref links etc., this solves my question. Nevertheless, as I use a class that changes the font size of captions to small, to maintain consistency it may be better to use \captionfont, i.e., a declaration like: \parbox{\linewidth}{\captionfont Longer text containing description of the image contents.}. Then, also, \captionsetup[subfigure]{justification=raggedright} is unnecessary and \centering may be used inside subfigure. – Peter Jul 25 at 18:59

• Or, for even better handling of the second line, i.e., to center it if too short: \usepackage{varwidth} and \begin{varwidth}[t]{\linewidth}\captionfont Longer text containing description of the image contents.\end{varwidth} % https://stackoverflow.com/a/12541369. – Peter Jul 25 at 19:17
# Characterizing the development of relational reasoning in India Proceedings of the 43rd Annual Meeting of the Cognitive Science Society, 2021 Carstensen, A., Dhaliwal, T., and Frank, M.C. (2021). Characterizing the development of relational reasoning in India. In Proceedings of the 43rd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. [PDF]
# SCIP: Solving Constraint Integer Programs

SCIP_Cons Struct Reference

## Detailed Description

constraint data structure

Definition at line 37 of file struct_cons.h.

#include <struct_cons.h>

## Data Fields

SCIP_Real age
char * name
SCIP_CONSHDLR * conshdlr
SCIP_CONSDATA * consdata
SCIP_CONS * transorigcons
SCIP_CONSSETCHG * conssetchg
int addconsspos
int consspos
int initconsspos
int sepaconsspos
int enfoconsspos
int checkconsspos
int propconsspos
int nlockspos [NLOCKTYPES]
int nlocksneg [NLOCKTYPES]
int activedepth
int validdepth
int nuses
unsigned int initial:1
unsigned int separate:1
unsigned int enforce:1
unsigned int check:1
unsigned int propagate:1
unsigned int sepaenabled:1
unsigned int propenabled:1
unsigned int local:1
unsigned int modifiable:1
unsigned int dynamic:1
unsigned int removable:1
unsigned int stickingatnode:1
unsigned int original:1
unsigned int deleteconsdata:1
unsigned int active:1
unsigned int conflict:1
unsigned int enabled:1
unsigned int obsolete:1
unsigned int markpropagate:1
unsigned int deleted:1
unsigned int update:1
unsigned int updateinsert:1
unsigned int updateactivate:1
unsigned int updatedeactivate:1
unsigned int updateenable:1
unsigned int updatedisable:1
unsigned int updatesepaenable:1
unsigned int updatesepadisable:1
unsigned int updatepropenable:1
unsigned int updatepropdisable:1
unsigned int updateobsolete:1
unsigned int updatefree:1
unsigned int updateactfocus:1
unsigned int updatemarkpropagate:1
unsigned int updateunmarkpropagate:1
SCIP * scip

## ◆ age

SCIP_Real SCIP_Cons::age

age of constraint: number of successive times, the constraint was irrelevant

Definition at line 39 of file struct_cons.h.

## ◆ consdata

SCIP_CONSDATA* SCIP_Cons::consdata

data for this specific constraint

Definition at line 42 of file struct_cons.h.

## ◆ transorigcons

SCIP_CONS* SCIP_Cons::transorigcons

for original constraints: associated transformed constraint or NULL, for transformed constraints: associated original constraint or NULL

Definition at line 43 of file struct_cons.h.

Referenced by SCIPconsGetTransformed(), and SCIPconsTransform().
## ◆ conssetchg

SCIP_CONSSETCHG* SCIP_Cons::conssetchg

constraint change that added constraint to current subproblem, or NULL if constraint is from global problem

Definition at line 45 of file struct_cons.h.

## ◆ addconsspos

int SCIP_Cons::addconsspos

position of constraint in the conssetchg's/prob's addedconss/conss array

Definition at line 47 of file struct_cons.h.

## ◆ consspos

int SCIP_Cons::consspos

position of constraint in the handler's conss array

Definition at line 48 of file struct_cons.h.

## ◆ initconsspos

int SCIP_Cons::initconsspos

position of constraint in the handler's initconss array

Definition at line 49 of file struct_cons.h.

## ◆ sepaconsspos

int SCIP_Cons::sepaconsspos

position of constraint in the handler's sepaconss array

Definition at line 50 of file struct_cons.h.

## ◆ enfoconsspos

int SCIP_Cons::enfoconsspos

position of constraint in the handler's enfoconss array

Definition at line 51 of file struct_cons.h.

## ◆ checkconsspos

int SCIP_Cons::checkconsspos

position of constraint in the handler's checkconss array

Definition at line 52 of file struct_cons.h.

## ◆ propconsspos

int SCIP_Cons::propconsspos

position of constraint in the handler's propconss array

Definition at line 53 of file struct_cons.h.

## ◆ nlockspos

int SCIP_Cons::nlockspos[NLOCKTYPES]

array of times, the constraint locked rounding of its variables

Definition at line 54 of file struct_cons.h.

## ◆ nlocksneg

int SCIP_Cons::nlocksneg[NLOCKTYPES]

array of times, the constraint locked vars for the constraint's negation

Definition at line 55 of file struct_cons.h.

## ◆ activedepth

int SCIP_Cons::activedepth

depth level of constraint activation (-2: inactive, -1: problem constraint)

Definition at line 56 of file struct_cons.h.

## ◆ validdepth

int SCIP_Cons::validdepth

depth level where constraint is valid (-1: equals activedepth)

Definition at line 57 of file struct_cons.h.

## ◆ nuses

int SCIP_Cons::nuses

number of times, this constraint is referenced

Definition at line 58 of file struct_cons.h.

Referenced by conshdlrProcessUpdates(), SCIPconsCapture(), and SCIPconsGetNUses().
## ◆ initial unsigned int SCIP_Cons::initial TRUE iff LP relaxation of constraint should be in initial LP, if possible Definition at line 59 of file struct_cons.h. ## ◆ separate unsigned int SCIP_Cons::separate TRUE iff constraint should be separated during LP processing Definition at line 60 of file struct_cons.h. ## ◆ enforce unsigned int SCIP_Cons::enforce TRUE iff constraint should be enforced during node processing Definition at line 61 of file struct_cons.h. ## ◆ check unsigned int SCIP_Cons::check TRUE iff constraint should be checked for feasibility Definition at line 62 of file struct_cons.h. ## ◆ propagate unsigned int SCIP_Cons::propagate TRUE iff constraint should be propagated during node processing Definition at line 63 of file struct_cons.h. ## ◆ sepaenabled unsigned int SCIP_Cons::sepaenabled TRUE iff constraint should be separated in the next separation call Definition at line 64 of file struct_cons.h. ## ◆ propenabled unsigned int SCIP_Cons::propenabled TRUE iff constraint should be propagated in the next propagation call Definition at line 65 of file struct_cons.h. ## ◆ local unsigned int SCIP_Cons::local TRUE iff constraint is only valid locally Definition at line 66 of file struct_cons.h. ## ◆ modifiable unsigned int SCIP_Cons::modifiable TRUE iff constraint is modifiable (subject to column generation) Definition at line 67 of file struct_cons.h. Referenced by SCIPconsIsModifiable(), SCIPconsSetModifiable(), and SCIPconsTransform(). ## ◆ dynamic unsigned int SCIP_Cons::dynamic TRUE iff constraint is subject to aging Definition at line 68 of file struct_cons.h. ## ◆ removable unsigned int SCIP_Cons::removable TRUE iff relaxation should be removed from the LP due to aging or cleanup Definition at line 69 of file struct_cons.h. Referenced by SCIPconsIsRemovable(), SCIPconsSetRemovable(), and SCIPconsTransform(). 
## ◆ stickingatnode

unsigned int SCIP_Cons::stickingatnode

TRUE iff the constraint should always be kept at the node where it was added

Definition at line 70 of file struct_cons.h.

## ◆ original

unsigned int SCIP_Cons::original

TRUE iff constraint belongs to original problem

Definition at line 71 of file struct_cons.h.

## ◆ deleteconsdata

unsigned int SCIP_Cons::deleteconsdata

TRUE iff constraint data has to be deleted if constraint is freed

Definition at line 72 of file struct_cons.h.

## ◆ active

unsigned int SCIP_Cons::active

TRUE iff constraint is active in the current node; a constraint is active if it is global and was not removed during presolving or it was added locally (in that case the local flag is TRUE) and the current node belongs to the corresponding sub tree

Definition at line 73 of file struct_cons.h.

## ◆ conflict

unsigned int SCIP_Cons::conflict

TRUE iff constraint is a conflict

Definition at line 78 of file struct_cons.h.

Referenced by SCIPconsIsConflict(), and SCIPconsMarkConflict().

## ◆ enabled

unsigned int SCIP_Cons::enabled

TRUE iff constraint is enforced, separated, and propagated in current node

Definition at line 79 of file struct_cons.h.

## ◆ obsolete

unsigned int SCIP_Cons::obsolete

TRUE iff constraint is too seldomly used and therefore obsolete

Definition at line 80 of file struct_cons.h.

## ◆ markpropagate

unsigned int SCIP_Cons::markpropagate

TRUE iff constraint is marked to be propagated in the next round

Definition at line 81 of file struct_cons.h.

## ◆ deleted

unsigned int SCIP_Cons::deleted

TRUE iff constraint was globally deleted

Definition at line 82 of file struct_cons.h.

## ◆ update

unsigned int SCIP_Cons::update

TRUE iff constraint has to be updated in update phase

Definition at line 83 of file struct_cons.h.

## ◆ updateinsert

unsigned int SCIP_Cons::updateinsert

TRUE iff constraint has to be inserted in the conss array

Definition at line 84 of file struct_cons.h.

## ◆ updateactivate

unsigned int SCIP_Cons::updateactivate

TRUE iff constraint has to be activated in update phase

Definition at line 85 of file struct_cons.h.
## ◆ updatedeactivate

unsigned int SCIP_Cons::updatedeactivate

TRUE iff constraint has to be deactivated in update phase

Definition at line 86 of file struct_cons.h.

## ◆ updateenable

unsigned int SCIP_Cons::updateenable

TRUE iff constraint has to be enabled in update phase

Definition at line 87 of file struct_cons.h.

## ◆ updatedisable

unsigned int SCIP_Cons::updatedisable

TRUE iff constraint has to be disabled in update phase

Definition at line 88 of file struct_cons.h.

## ◆ updatesepaenable

unsigned int SCIP_Cons::updatesepaenable

TRUE iff constraint's separation has to be enabled in update phase

Definition at line 89 of file struct_cons.h.

## ◆ updatesepadisable

unsigned int SCIP_Cons::updatesepadisable

TRUE iff constraint's separation has to be disabled in update phase

Definition at line 90 of file struct_cons.h.

## ◆ updatepropenable

unsigned int SCIP_Cons::updatepropenable

TRUE iff constraint's propagation has to be enabled in update phase

Definition at line 91 of file struct_cons.h.

## ◆ updatepropdisable

unsigned int SCIP_Cons::updatepropdisable

TRUE iff constraint's propagation has to be disabled in update phase

Definition at line 92 of file struct_cons.h.

## ◆ updateobsolete

unsigned int SCIP_Cons::updateobsolete

TRUE iff obsolete status of constraint has to be updated in update phase

Definition at line 93 of file struct_cons.h.

## ◆ updatefree

unsigned int SCIP_Cons::updatefree

TRUE iff constraint has to be freed in update phase

Definition at line 94 of file struct_cons.h.

## ◆ updateactfocus

unsigned int SCIP_Cons::updateactfocus

TRUE iff delayed constraint activation happened at focus node

Definition at line 95 of file struct_cons.h.

## ◆ updatemarkpropagate

unsigned int SCIP_Cons::updatemarkpropagate

TRUE iff constraint has to be marked to be propagated in update phase

Definition at line 96 of file struct_cons.h.

## ◆ updateunmarkpropagate

unsigned int SCIP_Cons::updateunmarkpropagate

TRUE iff constraint has to be unmarked to be propagated in update phase

Definition at line 97 of file struct_cons.h.
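The update* bits above all serve one pattern: while constraint updates are delayed, a requested status change is only recorded as a pending flag and applied later in a single update pass. A minimal Python sketch of that pattern (illustrative only; SCIP's actual logic lives in its C sources, and the class and method names here are mine):

```python
class ToyCons:
    # Toy constraint mirroring the enabled / updateenable / updatedisable bits.
    def __init__(self):
        self.enabled = False
        self.update = False          # any change pending?
        self.updateenable = False    # pending: enable in update phase
        self.updatedisable = False   # pending: disable in update phase

    def request_enable(self, delayed):
        if delayed:
            # Defer the change, like setting one of SCIP's update* bits.
            self.update = self.updateenable = True
            self.updatedisable = False
        else:
            self.enabled = True      # apply immediately

    def process_updates(self):
        # Single pass that applies and then clears all pending bits.
        if self.updateenable:
            self.enabled = True
        if self.updatedisable:
            self.enabled = False
        self.update = self.updateenable = self.updatedisable = False

c = ToyCons()
c.request_enable(delayed=True)
print(c.enabled)   # False: only the pending bit is set
c.process_updates()
print(c.enabled)   # True: applied during the update pass
```

Packing each flag into a single bit (as the C struct does with `unsigned int ...:1` bitfields) keeps the per-constraint memory footprint small when millions of constraints are in play.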
Corpus ID: 204943191

# Thresholds versus fractional expectation-thresholds

@article{Frankston2019ThresholdsVF, title={Thresholds versus fractional expectation-thresholds}, author={Keith Frankston and J. Kahn and Bhargav P. Narayanan and Jinyoung Park}, journal={ArXiv}, year={2019}, volume={abs/1910.13433} }

Proving a conjecture of Talagrand, a fractional version of the 'expectation-threshold' conjecture of Kalai and the second author, we show for any increasing family $F$ on a finite set $X$ that $p_c (F) =O( q_f (F) \log \ell(F))$, where $p_c(F)$ and $q_f(F)$ are the threshold and 'fractional expectation-threshold' of $F$, and $\ell(F)$ is the largest size of a minimal member of $F$. This easily implies several heretofore difficult results and conjectures in probabilistic combinatorics, including…
Trudy Matematicheskogo Instituta imeni V.A. Steklova

2016, Volume 295

Modern problems of mechanics

Collected papers

Volume Editor: V. V. Kozlov
Editor in Chief: A. G. Sergeev
ISBN: 5-7846-0140-7 (978-5-7846-0140-7)

Abstract: The volume presents studies on various issues in mechanics and dynamical systems theory, including the self-similar piston problem in a Prandtl–Reuss elastoplastic medium with special properties, homogenization of acoustic equations for a heterogeneous layered medium consisting of creep materials, spectral stability of shock waves in singular limits of smooth heteroclinic solutions to an extended system of equations, and stability of periodic orbits of a planar Birkhoff billiard. The problem of Arnold diffusion, dynamics of nonholonomic systems, integrable systems in analytical mechanics, and problems of the KAM theory in infinite-dimensional Hamiltonian systems are also discussed. The volume is of interest to researchers, postgraduates, and students specializing in analytical mechanics and continuum mechanics.

Citation: Modern problems of mechanics, Collected papers, Trudy Mat. Inst. Steklova, 295, ed. V. V. Kozlov, A. G. Sergeev, MAIK Nauka/Interperiodica, Moscow, 2016, 351 pp.

Citation in format AMSBIB:
\Bibitem{1}
\book Modern problems of mechanics
\bookinfo Collected papers
\serial Trudy Mat. Inst. Steklova
\yr 2016
\vol 295
\publ MAIK Nauka/Interperiodica
\publaddr Moscow
\ed V.~V.~Kozlov, A.~G.~Sergeev
\totalpages 351
\mathnet{http://mi.mathnet.ru/book1630}
# C[omp]ute

Welcome to my blog, which was once a mailing list of the same name and is still generated by mail. Please reply via the "comment" links. Always interested in offers/projects/new ideas. Eclectic experience in fields like: numerical computing; Python web; Java enterprise; functional languages; GPGPU; SQL databases; etc. Based in Santiago, Chile; telecommute worldwide. CV; email. © 2006-2015 Andrew Cooke (site) / post authors (content).

## [Programming] GCC Sanitizer Flags

From: andrew cooke <andrew@...>
Date: Thu, 16 May 2019 15:35:19 -0400

https://lemire.me/blog/2016/04/20/no-more-leaks-with-sanitize-flags-in-gcc-and-clang/

Andrew

## Previous Entries

### [GPU, Programming] Per-Thread Program Counters

From: andrew cooke <andrew@...>
Date: Mon, 8 Apr 2019 19:37:14 -0400

https://medium.com/@farkhor/per-thread-program-counters-a-tale-of-two-registers-f2061949baf2

Andrew

### My Bike Accident - Looking Back One Year

From: andrew cooke <andrew@...>
Date: Mon, 11 Mar 2019 09:39:13 -0300

One year ago today, Sunday 11 March 2018, just after breakfast, I was looking for my favorite cycling shirt, getting ready to ride a route I hoped to share later in the week with a friend. That is all I remember.

https://www.strava.com/activities/1458282810/overview

Then, in a warm haze, I am thinking "this could be serious;" "maybe I should pay attention;" "focus." Sometime around Tuesday or Wednesday, in Intermediate Care at Clinica Alemana, with nurses and beeping machines, and Paulina explaining to me (patiently, every hour or so for the last few days - I had been "conscious but absent" since Monday morning) that a car had hit me; that I had been injured and had an operation on my leg; that I was now OK.

- Fractured right clavicle (collarbone)
- Exposed, fractured left femur (thigh)
- Fractured metacarpal (hand)
- Fractured right ribs (3rd, 4th, and 5th)

I was in hospital for 7 days. The last few in normal care. Final day I asked to have a shower.
When I saw blood running with the water I fainted. The ambulance that took me home had the same crew that had taken me in. I asked how I had been - the EMT said "in some pain." Final cost $17.894.596 (CLP - around $30k USD).

Home, Paulina had the bed raised, an extra seat on the toilet, a seat in the shower, a wheelchair. I remember my first shower - it was a huge effort to lift my foot over the (4 inch) shower wall, and I collapsed, twitching, on the seat. I was high as a kite - even back home - on opioids for a couple of weeks.

My recovery was slow but steady. A physiotherapist came to visit and taught me some exercises. After a month or two I was walking with crutches. Paulina was exhausted from caring for me while still trying to work. For a while we had someone visit, a few times a week, to clean and prepare some food.

On Sundays many roads here are closed to cars, given over to cyclists, runners, inline skaters. A week after my accident a friend returned to the intersection. He found a witness, someone who flagged when traffic could or could not pass through the ciclovia, who said I was hit by a pickup that had run a red light. I later learned that the driver stopped (to his credit). Someone called the police and an ambulance. I was on the ground, dazed, trying to stand but unable. The police asked me where I lived - apparently I replied "Europa," which is the name of our street, but also, literally, "Europe." So they assumed I was a tourist - a wealthy gringo with travel insurance - and sent me to the best hospital in town.

An investigation was opened by the police. My medical records include a blood test showing no alcohol. We informed the investigating magistrate of the witness but later, when called to the police station to give evidence, they had not received the information. We gave it again. By the time it was investigated, video records from street cameras were no longer available.

After the accident my bike was in a police compound; Paulina collected it and I started repairs.
The front wheel was tacoed, so I bought a new rim (which Paulina collected - I am so grateful for all the legwork she has done over the last year) and spokes, and laced it to the old hub. Mounting the new wheel on the bike, I realized that the thru-axle was bent, so I ordered a new axle. When I received the axle I realized the hub itself was bent, so I ordered a new hub. Given how Shimano thru-axle hubs work, I only needed to replace the inner sleeve (so I didn't need to rebuild the entire wheel). Mounting the new wheel again, I realized that the fork was bent, so I ordered a new fork. This was delivered to the UK, because mid August I felt good enough to travel home and see my parents. I also replaced the handlebars, although the (slight) damage there was caused by me over-tightening the brakes, not the accident. In addition I had to replace the rear light (stolen while in police custody) and my helmet. The weekend of September 8/9 I was feeling good enough to travel with Paulina to La Serena. We wanted to check on my old flat, where a friend had been living rent-free, to make sure it was OK for Paulina's father to move there. The flat was a mess. So bad we did not sleep there, but instead walked into town and stayed at a hotel. The next day we returned, to continue cleaning. By the end of the weekend the place wasn't too bad, but my leg was painful. That was the high point of my recovery. Post operation, my thighs were asymmetric - on the left hand side was a "bulge" which, clearly visible in the X-rays, enclosed the end of the rod that held my femur together. The rod was "too long." It appeared to be "rubbing" on the inside of the leg, placing a limit on how far I could walk. As it became more inflamed, I could walk less distance. The upper limit was around 3,000 steps a day (a few km). The day after returning from La Serena (Sept 10) I asked the doctor what could be done. The answer was: nothing, until the bone had healed, which takes a year. 
On September 11 I attended court. The police claimed that the driver had illegally run a red light. Chilean law is different to UK law - for a "light" infraction like this (running a red light and not killing me) the emphasis is on compensating the victim. In general terms, either we agree some kind of compensation, or the driver is prosecuted. The driver has to balance the amount of compensation against the inconvenience of being prosecuted, the likelihood of being convicted, and the possibility of any sanction.

To start negotiations over compensation we needed to know the amount outstanding after (the driver's) accident insurance and (my) medical insurance had paid, but we still had not been billed by the hospital. So the case was postponed and we returned home to chase up the paperwork. Once we had the bill Paulina took it to the driver's insurers, who agreed to pay $5.854.407. Then she went to my medical insurance, who eventually (December 21) agreed to pay $8.327.938, leaving a balance of $3.712.251. And this is where we stand. The case appears to be stalled pending further police investigation.

Since it was difficult to walk I tried cycling again. This was clearly better for my health, and I could manage around 20 minutes without hurting my leg too much. But, perhaps related to this exercise, a new problem surfaced. The rod appeared to get "caught" on something (tendon? muscle?). This hurt; I froze and slowly wiggled my leg to "undo" the blockage. Afraid to walk, I hobbled slowly round the house. Despite my reduced movements this repeated, more severely.

Frustrated, and now nearly a year after my operation (February 18, 2019), I returned to the doctor. He was, I think, surprised. The next day I received a call from the hospital - someone had canceled an operation, there was a free slot Friday February 22. I agreed immediately.

The operation to remove the rod went smoothly. I entered theater late in the day and was kept for observation overnight.
The leg had two dressings - one near the knee (incisions to remove screws) and another on the upper thigh (more screws and the rod itself). These were the usual clear plastic sheets, with external padding for protection, to be left in place as the wound heals.

Thursday, February 28, I was feeling good enough to be sat at the computer, working, when I felt a drop of liquid hit my leg. Removing the padding, visible through the dressing, were blisters. One had burst. Back at the hospital, the dressings were removed, the skin wiped clean. I was sent back home with basic antibiotics and anti-histamines.

Life with exposed wounds and stitches is boring and uncomfortable (although the anti-histamines meant I slept much of the time). The stitches catch clothing and the wound has to be kept clean and open to the air, so you're either lying in bed or wandering cold and naked through the house. It was uncomfortable to be seated for any length of time, making work difficult (credit to my employers for their support).

Monday March 4 I returned to hospital. Although I felt things were improving (no blood / pus stains on the bedsheets on the last night, for example) it still didn't look good (quite frankly, it looked terrifying - red, yellow and blistered - but it was not painful and did not smell). A nurse (a nice nurse - senior and smart and friendly) thought it looked more like an infection than an allergy, and the doctor agreed, changing the antibiotic to something more specific.

The next few days, although still boring and uncomfortable, showed real improvement. On Wednesday March 6 my stitches were removed. Since then, the skin has continued to heal. Importantly, the pain from the rod - at least the worst, when it got "hooked" around tendons - has gone. There is still some pain when walking, but it is difficult to know if it is the old soreness, or associated with the bruising from the operation.
A year after the accident, I still do not know if I will be able to walk, or cycle, as before.

Andrew

### [Python] Geographic heights are incredibly easy!

From: andrew cooke <andrew@...>
Date: Sun, 13 Jan 2019 20:47:20 -0300

Wanted to add heights to bike rides in Choochoo, given the GPS coordinates.

At first I considered stealing the data from Strava, but their T&C don't like it and anyway I couldn't find it in the API. Then I considered Google's Elevation Service, but it's $5 for 1,000 values. Then I considered the free OpenStreetMap equivalent, but that seemed to be broken. Then I realised it's pretty damn easy to do by hand!

Turns out that the Space Shuttle scanned the entire Earth (except the Poles) at a resolution of one arcsecond (about 30m on the equator) and the data have been publicly released. The project is called SRTM, and the 30m data are "v3" or SRTM3. More info at https://wiki.openstreetmap.org/wiki/SRTM and http://dwtkns.com/srtm30m/ and there's an excellent (although buggy) blog post on how to parse the data, with code, at https://github.com/aatishnn/srtm-python

Because the coords are in the file name there's no need for any kind of RTree or similar index - you just take your coords, infer which file to read, stick it in an array, and return the appropriate array value!

My own code to do this is at https://github.com/andrewcooke/choochoo/blob/master/ch2/sortem/__init__.py and includes bilinear interpolation (you could cut+paste that code except for the constructor, which is specific to Choochoo - just replace the messing around with Constant with a literal directory string). The tests are at https://github.com/andrewcooke/choochoo/blob/master/tests/test_sortem.py and from the contours there, which are plotted across 4 tiles, it's pretty clear that the interpolation is seamless.

So easy! I thought this would take a week of work and I did it all this afternoon....

Andrew

From: andrew cooke <andrew@...>
Date: Tue, 25 Dec 2018 17:30:46 -0300

125g butter
1 egg
60g sugar (or more?)
45g cocoa
130g flour (w raising powder)
120g chocolate

Mix butter, egg and sugar. Sieve in and mix cocoa and flour. Add a little water if necessary - want a thick, sticky mix, as solid as possible, but not powder. Break the chocolate into pieces and add. Cool in fridge.

Pre-heat oven to 180C. Place blobs of dough on lightly greased baking paper on tray. Cook for 15m. Should flatten but not spread much.

Not very sweet, except for the chocolate. Very cocoay and good texture. Good w ice-cream? Variations: more sugar? vanilla?

Andrew

### Efficient, Simple, Directed Maximisation of Noisy Function

From: andrew cooke <andrew@...>
Date: Fri, 14 Dec 2018 18:04:10 -0300

Maybe this is already known - I guess it must be - but I just dreamt it up myself.

I need to find the max (could be min, obvs) of a noisy function in 1D. Since it's noisy I can't assume it's smooth, hope it has a global maximum, and bisect. I need something more robust. At the same time, I don't want to be doing repeated evaluations of the function where they're not needed, so any iterative deepening of a search should re-use old values when possible.

So here's the solution:

* Evaluate the function at 5 equally spaced points across the range (including the two extremes).
* Throw away two end points.
* Expand the remaining 3 points back to 5 by inserting new points at the mid-points of the existing points.
* Repeat until you're happy.

The trick is to know which end points to discard. It could be one from either end, or two from one end. What I do is calculate the average of the x values at the points, weighted by the function values there. Then I compare this to the unweighted average. If it's higher, then the max is towards the high end, so discard a low point. Or vice-versa. And repeat for a second point.

Here's an example.
The lines are (x, f(x)) pairs:

    [(0.0, 0), (0.25, 5), (0.5, 10), (0.75, 1), (1.0, 1)]
    [(0.25, 5), (0.375, 12), (0.5, 10), (0.625, 2), (0.75, 1)]
    [(0.25, 5), (0.3125, 11), (0.375, 12), (0.4375, 10), (0.5, 10)]
    [(0.3125, 11), (0.34375, 12), (0.375, 12), (0.40625, 10), (0.4375, 10)]
    [(0.3125, 11), (0.328125, 11), (0.34375, 12), (0.359375, 13), (0.375, 12)]

On the first step, both ends were discarded. On the second, two from the "high" side, etc. The maximum is 13 at 0.359375, roughly.

And here's the code:

    def expand_max(lo, hi, n, f):
        data = [(x, f(x)) for x in (lo + i * (hi - lo) / 4 for i in range(5))]
        for _ in range(n):
            print(data)
            while len(data) > 3:
                w = sum(x * fx for (x, fx) in data) / sum(fx for (x, fx) in data)
                m = sum(x for (x, fx) in data) / len(data)
                if w > m:
                    del data[0]
                else:
                    del data[-1]
            x = (data[0][0] + data[1][0]) / 2
            data.insert(1, (x, f(x)))
            x = (data[-2][0] + data[-1][0]) / 2
            data.insert(-1, (x, f(x)))
        x_max, fx_max = None, None
        for (x, fx) in data:
            if x_max is None or fx > fx_max:
                x_max, fx_max = x, fx
        return x_max, fx_max

Andrew

### Bash Completion in Python

From: andrew cooke <andrew@...>
Date: Fri, 7 Dec 2018 08:02:11 -0300

https://www.reddit.com/r/Python/comments/a3vg7q/i_made_a_tutorial_for_writing_bash_completion/
https://github.com/CarvellScott/completion_utils

Andrew

### [Computing] Configuring Github Jekyll Locally

From: andrew cooke <andrew@...>
Date: Sun, 25 Nov 2018 15:02:59 -0300

If you have docs that already exist, and that github displays on Github Pages, then you may want to run jekyll locally - just to save on round-tripping time or when github (as today) starts throwing errors for no reason.

The instructions at https://help.github.com/articles/setting-up-your-github-pages-site-locally-with-jekyll/ are pretty confusing if you already have things working on github.
What you need to do is:

* Install requirements as described (may need to run as root)
* In your local repo, at the root of the jekyll pages (for me, the docs directory), create the Gemfile described in step 2 and then install jekyll.
* Run jekyll in that same directory (step 4)

In other words, SKIP steps 1 and 3 and do everything in the docs directory.

At least, that seemed to work for me.

Andrew

### [Maths, Link] The Napkin Project

From: andrew cooke <andrew@...>
Date: Tue, 20 Nov 2018 08:10:21 -0300

Introductory higher math. http://web.evanchen.cc/napkin.html

Andrew

### [Bike] Servicing Budget (Spring) Forks

From: andrew cooke <andrew@...>
Date: Tue, 6 Nov 2018 11:49:06 -0300

Lower priced mountain bikes often come with forks that contain a coil spring (rather than an "air spring"). These tend to not get a lot of love on the internet. That's partly justified - an air fork is much more adjustable and has better damping. But the spring forks do have some advantages too. Most of all, they're very low maintenance - most people don't touch them once they've bought them. Despite that, a little care can help keep your fork working well and extend its life. I just serviced mine and I thought I'd make a few notes here to help others.

First, I am not 100% sure what my fork is. I removed the stickers years ago. I think it may be a Suntour XCM (26"). Whatever, the general principles below should apply to pretty much any budget fork.

There are two things you can do. The simplest (and most important) is to remove the lowers. More complicated is to open up the "insides" too.

Removing Lowers

First, remove the two bolts holding the front brake (I'm assuming disc) so that it's no longer connected to the fork. Also remove the clip or zip tie or tape that holds the hose to the fork. Then you can ignore the brake (until the end, when we need to put it back on). (Don't press the brake lever with the caliper loose - you can push the hydraulic pistons out).
Turn the bike upside down, so it's sitting on the bars (or put it in a stand if you have one I guess) and remove the wheel. Have a look at the bottoms of the forks. On mine each has a 10mm nut. Remove the nuts and see if you can slide the lowers off. Don't use violence - just pull them up with your hands while keeping the bike on the ground.

Possibly they won't come off. This is because the rods inside the fork tend to stick in the ends of the lowers. We can loosen them by pushing the threads that the nuts were fastened on back "into" the lowers. To do this, take a piece of wood, lay it on top of the threaded rod, and strike with a hammer. You may need a few attempts (change the position of the wood since the rod makes a hole!) before the threaded rod moves noticeably. Once both sides are loose, you should be able to pull the lowers off, no problem.

With the lowers off, clean the exposed stanchions and the inside of the lowers (eg with a stick and cloth - if the cloth gets stuck inside, blow it out from the other end). The stanchions are steel, so can rust. This is why it's worth servicing them - we want them to stay smooth and clean. So rub some grease on them (I use random car grease - nothing fancy).

To reassemble, first put some more grease on the insides of the seals. Then slide the lowers back on the stanchions. Push them down and the rods will re-appear out of the ends. Replace and tighten the nuts (fairly tight - you don't want these coming undone...). Finally, replace the wheel and re-attach the brakes (once the caliper is in place, apply the brakes and then tighten the bolts - that way the caliper is clamped in the right position, avoiding rubbing).

Looking Inside

If you want to look "inside" the fork, first remove the lowers (as above). Next, use an appropriate tool from Suntour to unscrew the two caps on the "top" of the fork (one on each side). This is easier to do with the bike lying on its side (or upright in a stand).
I'd really encourage buying the right tool. Mine is just a plastic wrench (looks like a big bottle opener or measuring spoon). You can get the caps off without it, but getting the spring side back on is tricky (because the spring is compressed) and without the tool you're more likely to cross-thread the cap.

If you have a lockout you may need to remove the lockout knob before removing the cap. Mine just levers off (carefully insert a flat screwdriver underneath and lift).

It's interesting to see what's inside, but there's not much to do except clean and grease. I removed an elastomer damper from "inside" my spring so that the forks had a little more travel, but I'm not sure I noticed much difference. On the other side, my fork has a damper, but it's a sealed unit with nothing to adjust. (BTW I couldn't remove the "foot" on the spring side because the bottom-out damper held it in place.)

Assembly is the reverse process. Be careful replacing the cap on the spring side as it's easy to cross-thread. My caps are plastic so I didn't tighten them crazy-tight.

I hope that helps. With a little care you can keep these babies rust-free and they'll last forever.

Andrew

### [Crypto] CIA Internet Comms Failure

From: andrew cooke <andrew@...>
Date: Sat, 3 Nov 2018 17:37:18 -0300

It's difficult to tell exactly what happened, but my guess from reading between the lines is that the CIA had a bunch of fake sites that were used as endpoints for communication. Somehow the Iranians detected a pattern in the way these sites were generated (eg matching comments in the HTML source) that let them identify the sites and, via traffic monitoring, work out who was using them.

https://in.news.yahoo.com/cias-communications-suffered-catastrophic-compromise-started-iran-090018710.html

Andrew

### [Python] Cute Rate Limiting API

From: andrew cooke <andrew@...>
Date: Thu, 25 Oct 2018 21:13:23 -0300

It's just a decorator - https://pypi.org/project/ratelimit/

Andrew
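The linked ratelimit package packages this idea up for you. Purely to illustrate what such a decorator does (this is a hand-rolled sketch, not the package's actual API), a minimal stdlib-only version might look like:

    import time
    from functools import wraps

    def rate_limited(calls, period):
        """Allow at most `calls` invocations per `period` seconds,
        sleeping until the oldest call ages out of the window."""
        def decorator(f):
            timestamps = []  # monotonic times of recent calls
            @wraps(f)
            def wrapper(*args, **kwargs):
                now = time.monotonic()
                # drop calls that have fallen outside the window
                while timestamps and now - timestamps[0] >= period:
                    timestamps.pop(0)
                if len(timestamps) >= calls:
                    # window full: wait for the oldest call to expire
                    time.sleep(period - (now - timestamps[0]))
                    timestamps.pop(0)
                timestamps.append(time.monotonic())
                return f(*args, **kwargs)
            return wrapper
        return decorator

    @rate_limited(calls=5, period=1.0)
    def fetch(x):
        return x * 2

With this, the first five calls to fetch run immediately and the sixth blocks until a second has passed since the first - which is roughly the behaviour the package gives you with far less ceremony.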
# Tag Info

### Calculating Bollinger Band Correctly

In Pandas 0.19.2++: ...

### Theoretical justification for technical analysis

The answer to your question about the theoretical justification for technical analysis depends on the price series being analyzed. There is some evidence for a few technical indicators to have ...

### Calculating Bollinger Band Correctly

I believe that the answers given here are incorrect as they return the sample standard deviation while the population measure is the correct calculation for Bollinger Bands. The bands using the ...

### Half life of Exponential Weighted Moving Average

The Exponentially Weighted Moving Average (EWMA for short) is characterized by the size of the lookback window $N$ and the decay parameter $\lambda$. The corresponding volatility forecast is then ...

### What is the difference between squared returns and variance?

Usually the formula for the sample variance of a stock is given by: $$Var(R_{i}) = E (R_t - E(R_t))^2$$ If you are using daily data to compute the variance then the ...

### Theoretical justification for technical analysis

Really great question. Having studied finance in an academic setting, you will always be told that technical analysis is non-sense. In the world of pure academics, the efficient market ...

### Aren't Technical Indicators calculated on Adjusted Close Price?

As can be seen from this example from Yahoo!Finance this should not happen (click on "+ The adjusted close"): https://help.yahoo.com/kb/finance/SLN2311.html?impressions=true Another more complete ...

### Workflow in algorithmic strategies

"What are more appropriate way to test it with real-world data? Also, can you suggest the next steps to make it more realistic?" Pick a tradable instrument (e.g. SPY rather than S&P500), ...
### Derivation (or proof) of commonly used formula showing relationship between time and smoothing factor in exponential smoothing

The current data point is said to have age 0, the previous has age 1, and so on going backwards. For a straight N period moving average of the form $\frac{1}{N}(x_t+x_{t-1}+\cdots+x_{t-N+1})$ it is ...

### Where to get historical equity data?

Recently I came across an interesting platform: https://www.quantopian.com/ They offer exactly what you need and for free. Basically, you code your algo in python, they provide data using an api and ...

### How to get the final % return in backtesting?

Note: Assuming you're a bit of a beginner trying to learn the ropes of how this whole process works at a high level, I can definitely make a couple recommendations (if I'm interpreting that wrong then ...

### Technical Indicators reference

The TA-Lib Technical Analysis library here has open source code for numerous indicators.

### Technical Indicators reference

The Technical Analysis of Financial Markets is considered a milestone of the subject. I suggest reading it before starting to test your strategy. It explains well the use of each indicator, ...

### Technical Indicators reference

A very good reference can be found here: http://www.asiapacfinance.com/trading-strategies/technicalindicators

### Online algorithm for selecting smoothing parameter?

First of all, I do not believe the "optimal smoothing" of an estimator (like the mean or the variance) and the "regression case" are the same. The smoothing of an existing estimator (like mean or ...

### Calculating Bollinger Band Correctly

Try to plot the rolling mean against your quotes for SP and see if it makes sense. Although your line of code to compute the rolling mean is correct, there might be something wrong in the data that you ...
### How to optimize return in a moving average crossover algorithm

It is unlikely that you could beat the market in the long-term with such a simple strategy. But, since you ask about optimization (not real trading), all you have to do is run the optimization ...
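The sample-vs-population point made in the Bollinger Band answers above is easy to make concrete. Below is a small dependency-free sketch (not taken from any of the answers; the window length 20 and multiplier 2 are just the conventional defaults) that uses the population standard deviation, as the accepted answer argues one should:

    from statistics import mean, pstdev

    def bollinger(prices, n=20, k=2):
        """Return a (middle, upper, lower) band triple for each index
        once a full n-point window is available.  Uses the population
        standard deviation (pstdev), not the sample one (stdev)."""
        bands = []
        for i in range(n - 1, len(prices)):
            window = prices[i - n + 1:i + 1]
            mid = mean(window)
            sd = pstdev(window)
            bands.append((mid, mid + k * sd, mid - k * sd))
        return bands

With pandas the same distinction is one keyword: prices.rolling(n).std() uses the sample estimator (ddof=1) by default, while prices.rolling(n).std(ddof=0) gives the population estimator that matches the calculation above.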
Vanilla 1.1.10 is a product of Lussumo. More Information: Documentation, Community Support.

• CommentRowNumber1.
• CommentAuthorEric
• CommentTimeSep 25th 2009
• (edited Apr 8th 2010)

I have many questions about directed spaces, but thought I’d try starting a discussion here rather than dropping a ton of query boxes on directed space.

We should be able to define directed chains on directed spaces in such a way that the boundary of an $n$-chain should be an $(n-1)$-chain. It seems like the boundary of a directed $n$-chain can be decomposed into three pieces:

1. a directed $(n-1)$-chain with paths flowing out of it but none flowing into it (i.e. the “beginning” chain)
2. a directed $(n-1)$-chain with paths flowing into it but none flowing out (i.e. the “end” chain)
3. a directed $(n-1)$-chain with paths flowing into and out of it (i.e. the “interior” chain)

The three components have the property that

• every point of the directed $n$-chain is “causally connected” to some point on the beginning directed $(n-1)$-chain
• every point of the directed $n$-chain is “causally connected” to some point of the end directed $(n-1)$-chain

A neat image entering my mind is that of Huygens’ principle. It is as if each point of the directed $n$-cell sends out a scouting party in all forward directions. The beginning directed chain is the minimal set of points from which the scouting party will touch every point of the original directed chain. The end directed chain is the set of points where the scouting party reaches a dead end.

• CommentRowNumber2.
• CommentAuthorTim_Porter
• CommentTimeSep 30th 2010
• (edited Sep 30th 2010)

@Urs As this already exists, courtesy of Eric, let’s use it to continue the discussion. I have put references on that new entry. (They were just copied from the directed homotopy entry.)

I think your definition is more or less the same as the one that I gave in the Dagstuhl paper.
(<- listed in those references.)

I would be very interested in seeing the future and past tangent spaces of such a model from your viewpoint. (For the moment links would suffice.) I have been trying to see this for other purposes, and in more generality, relating to rewriting (joint work with Philippe Malbos and Yves Guiraud), but that does not explicitly handle only directed things so I won’t talk more about it for the moment until we are more advanced on it.

• CommentRowNumber3.
• CommentAuthorUrs
• CommentTimeSep 30th 2010
• (edited Sep 30th 2010)

more or less the same as the one that I gave in the Dagstuhl paper.

Does everybody know what “the Dagstuhl paper” refers to? I am looking at this one: Enriched categories and models for spaces of dipaths.

It seems to me that a central difference between your and Grandis’ definitions and what I proposed at fundamental infinity-category is that you require 2-cells to consist of paths of directed paths. Is that necessary?

Eventually the right definition will be determined by the properties and theorems that it induces. I have been looking at surveys by Grandis, but I have trouble identifying what the central statements of “directed homotopy theory” so far are. Can you help me?

• CommentRowNumber4.
• CommentAuthorUrs
• CommentTimeSep 30th 2010

Does your definition give a Kan-complex enriched category of paths in a directed space?

• CommentRowNumber5.
• CommentAuthorTim_Porter
• CommentTimeSep 30th 2010

Sorry I forgot to mention which paper, as it was the only one by me on that site. :(

I used directed forms of the topological n-simplex (as you noticed), and I believe that this gives the same thing up to equivalence as your version using spines. More exactly, my methods did not require paths of paths to be directed; they allow that as one possibility.

I think that central statements are not yet definitive enough in that area.
Marco’s book is interesting but apart from some very neat results on fundamental categories and the sense in which they should have minimal models, much of the rest of the book is setting up background abstract directed homotopy theory applicable in various contexts. The point he seems to be making is that you need directed models of chain complexes etc., so as to be sure which invariants are going to give you information in the various types of situation that he is considering.

• CommentRowNumber6.
• CommentAuthorUrs
• CommentTimeSep 30th 2010

I believe that this gives the same thing up to equivalence as your version using spines.

How do you see this? This would be good to write out!

Also, related to that, could you remind me: with your definition, what kind of simplicially enriched category exactly do you get from a directed space? Is it automatically Kan-complex enriched? In the case that the space is undirected, is the homotopy coherent nerve of your simplicially enriched category obtained from it equivalent to the standard singular simplicial complex? (Maybe that’s evident, I don’t have your article in front of me right now.)

• CommentRowNumber7.
• CommentAuthorUrs
• CommentTimeSep 30th 2010

I would be very interested in seeing the future and past tangent spaces of such a model from your viewpoint.

There is a general abstract answer to this: when we place ourselves in a smooth topos (such as sheaves over duals of $C^\infty$-rings), then we can restate the definition of fundamental $(\infty,1)$-category that I gave verbatim with all simplices in the directed space being infinitesimal. You then get the $(\infty,1)$-category version of Kock’s $\infty$-groupoid of infinitesimal paths. This is the tangent bundle in its incarnation as the tangent Lie algebroid, and that in turn regarded as an $\infty$-Lie algebroid.
So here we get the corresponding directed tangent $\infty$-Lie algebroid: at each point of the space it contains just the space of future-directed vectors.

• CommentRowNumber8.
• CommentAuthorTim_Porter
• CommentTimeSep 30th 2010

I have to think about what I did. I seem to remember essentially using ordered prisms $\Delta^n \times \Delta^1$ with the two ends mapped to the objects concerned. (I was working in a pospace in the more ’traditional’ sense of a space with a closed partial order on it.) I used a Moore category type construction, using concatenation, and so replaced the $\Delta^1$ by $[0,r]$, a directed line of length $r$, and then normalised at the end. This gives a $\mathrm{sSet}$-enriched category, not a more general quasicat. I think it is quasi-cat enriched.

Is this in some sense equivalent to your construction (at least for the pospace case)? My feeling is that there is a joint quasicat which contains both and for which the inclusions are deformation retractions, but these probably do not preserve order in some sense. My prisms are made up of prisms within your structure I think, and any of your simplices is homotopic (within your structure) to one which is represented by a directed n-simplex in my sense, i.e. a dimap from the directed n-simplex to the pospace (and here I am not 100% sure). There will be 1-simplices in my structure that are not directly linked in mine but for which the corresponding construct in yours will give a linkage (i.e. they are joined by a zigzag in mine).

Thinking about it a bit more makes me think that there are subtleties that may mean the two structures are not equivalent in the strongest sense possible but may be in some weaker sense. I feel that it is useful to distinguish dipaths using directed 2-cells (and higher) but once that is done, one can invert from some level onwards so as to get something more computable.
I think that there may be other structures than the one you outline that keep more of the order information for longer. Having almost finished reading Marco’s book I am sure there is still a lot of hard thinking to do on directed homotopy. He has some excellent ideas and some good counterexamples to too simplistic ideas. (The paper you link to already has some of these in it, I think.)

• CommentRowNumber9.
• CommentAuthorTim_Porter
• CommentTimeSep 30th 2010

That was a reply to an earlier question; now I see number 7. I was sort of trying to get a future simplicial cone idea and to avoid non-discrete models, at least until I got my head around that earlier idea. Marco has some very good thoughts about that in his central sections but in a slightly different context.

My thought is that in an evolving space or pospace, one wants the possibility that the group of a ’bundle’ may evolve with the space. This suggests various models, one of which is the use of the factorisation category or twisted arrow category of the underlying ’category’. I noted that Wells in his 1979 notes on extensions of categories ends up using that construction. The idea of having a bundle over the basic $(\infty,1)$-cat as being an extension in a similar way is possibly interesting, but I am still exploring the implications of Wells’ ideas. (I linked to his paper from the entry on Baues-Wirsching cohomology, but have not yet got around to writing up some entry on his ideas.)

How to go from that to an infinitesimal structure should by then be clear (or at least clearer!)

• CommentRowNumber10.
• CommentAuthorUrs
• CommentTimeSep 30th 2010

I think it is quasi-cat enriched.

Ah, it’s an $(\infty,2)$-category that you construct. That would make sense, given that you require directed paths and also directed homotopies.

How do you define the higher homotopies?

• CommentRowNumber11.
• CommentAuthorDavidRoberts
• CommentTimeOct 1st 2010

I should point out there seems to be a little bit of confusion between

1. directed 2-paths and
2. paths of dipaths.

At one point on the other thread Urs has homotopies of dipaths that were not through dipaths. This is surely a mistake. Then he says

you require 2-cells to consist of paths of directed paths.

this is 2. above. But having homotopies through dipaths does not make the homotopies themselves non-invertible, unless one uses the directed interval to form the homotopies. From this one should get an (oo,1)-category. Then Tim replied

my methods did not require paths of paths to be directed, it allows that as one possibility.

but this is 1. above, and I agree that this will give an (oo,2)-category. Just to clarify…

• CommentRowNumber12.
• CommentAuthorUrs
• CommentTimeOct 1st 2010
• (edited Oct 1st 2010)

At one point on the other thread Urs has homotopies of dipaths that were not through dipaths. This is surely a mistake.

Let’s see, what is a mistake? Is there a mistake in the statement at fundamental (infinity,1)-category?

• CommentRowNumber13.
• CommentAuthorEric
• CommentTimeOct 1st 2010

Comment #12 seems to be related to my diamonation (ericforgy) conjecture, I think. Diamonds are, almost by definition, directed (finite) spaces and you can always fill a diamond with directed simplices. So if any directed space can be diamonated, then… something… :)

Speaking of which, I just changed the wording of the conjecture. Now it reads:

Just as any smooth manifold $\mathcal{M}$ can be triangulated, any smooth directed space can be diamonated.

• CommentRowNumber14.
• CommentAuthorTim_Porter
• CommentTimeOct 1st 2010

@David. My point was that my approach allowed different models of the directed n-simplex to be used, and different models give different theories. The relationships between the models for the simplex then allow one to compare the theories (not in any deep sense but nice and naively :-))

• CommentRowNumber15.
• CommentAuthorTim_Porter
• CommentTimeOct 1st 2010

Some time ago, when in Ottawa, Rick Blute, Marc Comeau and myself started looking at AQFTs but with dagger categories as the values. There were further discussions with Samson Abramsky and the group at Oxford, but the paper is still ’draft’. Some ideas on this can be gleaned from Bob Coecke’s article, which is worth looking at anyway.

• CommentRowNumber16.
• CommentAuthorTim_Porter
• CommentTimeOct 1st 2010
• (edited Oct 1st 2010)

BTW as all of this is about ’evolution of evolution’ or ’evolving spaces’ etc., it does seem to me, and has been in the background of my thoughts for some time, that we do need (i) to remember feedback and (ii) to look for non-physical systems to model (especially Azimuthal ones :-) perhaps). In other words, our directed spaces need to be very general in their intuitive impact, and limited ’rollback’ and ’feedback’ need building in. They should include models of ’processes’ in good generality and eventually involve probabilistic aspects. ….. but that is perhaps a long way ahead, or is it. This stuff has been fascinating me for years!

• CommentRowNumber17.
• CommentAuthorDavidRoberts
• CommentTimeOct 1st 2010
• (edited Oct 1st 2010)

Let’s see, what is a mistake?

well, that’s too strong a word. I meant that letting two dipaths be homotopic even through intermediate paths which are not themselves directed seems to be the wrong spirit of things. For instance, it means that homotopies are not (ordinary) paths in the space of dipaths.

As to the page fundamental (infinity,1)-category, I am neutral.

• CommentRowNumber18.
• CommentAuthorUrs
• CommentTimeOct 1st 2010
• (edited Oct 1st 2010)

I meant that letting two dipaths be homotopic even through intermediate paths which are not themselves directed seems to be the wrong spirit of things.

I think oppositely.
And I have a reason: the canonical source of directed spaces is the geometric realization of quasi-categories, with directed paths precisely those that factor orientation-preservingly through their 1-skeleton. To have any chance to recover the quasicategory as the fundamental $(\infty,1)$-category of this space, we must not demand that 2-cells are paths of directed paths. • CommentRowNumber19. • CommentAuthorTim_Porter • CommentTimeOct 1st 2010 @Urs ’the canonical source of directed spaces’?????? There are lots of applications that have a general need for a directed space, yet in which the 2-cells are NOT as you suggest. Aren’t you reasoning in a circle? • CommentRowNumber20. • CommentAuthorEric • CommentTimeOct 1st 2010 • (edited Oct 1st 2010) Comment #18 almost seems like you are assuming an answer and trying to reverse engineer the foundations that give the answer you want. What do you end up with as the fundamental $(\infty,1)$-category by doing things in the “right spirit”? If it is not the quasicategory, it might still be interesting. It could even be that the fundamental $(\infty,1)$-category is not in the “right spirit” either. My tendency would be to give up on quasicategories before I’d give up something so holy as causality, but maybe I’m letting my biases get in the way. Edit: Oops. I was writing my comment when Tim posted his, so I didn’t see his until after posting mine. • CommentRowNumber21. • CommentAuthorTim_Porter • CommentTimeOct 1st 2010 • (edited Oct 1st 2010) @Eric :-) !!! There should be a range of models of $(\infty,r)$-categories and each will fit some parts of the processes / causality scenery; hence my model did not require using ordered simplexes, as the arguments work anyway, but they can be applied with that assumption. • CommentRowNumber22. • CommentAuthorUrs • CommentTimeOct 1st 2010 Aren’t you reasoning in a circle? No.
If there is going to be any good “directed homotopy theory” then directed paths need to be like morphisms in an $(\infty,1)$-category. And the definition of directed geometric realization $|C|$ of a quasi-category $C$ is evident. I have now added it to fundamental (infinity,1)-category. The directed geometric simplex we talked about before is the special case for $C = \Delta[k]$. Now one has to show that the canonical inclusion $C \to \Pi(|C|)$ is a Joyal weak equivalence, namely a weak equivalence on all hom-$\infty$-groupoids. The proof of that would be the first half of the directed homotopy hypothesis which we want to get to work. I don’t see any chance to get this to work with demanding the 2-cells in the directed $\Pi(X)$ to be paths of directed paths. • CommentRowNumber23. • CommentAuthorTim_Porter • CommentTimeOct 1st 2010 My point is that the models should determine the theory rather than forcing an ’ugly sister’s foot’ into a beautiful glass slipper. :-) When proper homotopy theory was first developed the theory one expected was not that which exactly fitted the models. Later on from another perspective it was found what adjustments made it fit, but that was later. Similarly my feeling is that your saying “If there is going to be any good ’directed homotopy theory’ then directed paths need to be like morphisms in an $(\infty,1)$-category” is prejudging what the answer will be. A directed homotopy theory need not be a homotopy theory. In some cases it may be, but that would have to be proved, and the whole theory might influence what we think of as being a homotopy theory. (I will come back to this later on, but have to rush now.) • CommentRowNumber24.
• CommentAuthorTim_Porter • CommentTimeOct 1st 2010 Back again: I just looked at Marco’s book and he discusses a very rich homotopy theory based on d-spaces that uses a d-space structure on $Y^I$ for $Y$ a d-space (around page 61, if anyone has a copy and is interested). I am not sure what his theory translates to in $(\infty,r)$-terms, but it has subtleties that are very pretty as well as some bits that are less so. I wonder if directed space should really speak of d-spaces rather than directed spaces, and if we might use 1-directed space as a synonym for d-space; that would leave us working with higher order ’directed spaces’, where there were not only selected paths but selected (?) singular squares, cubes, etc. Another thought is that we have a homotopy theory of directed spaces but could equally think of a ’directed homotopy theory of directed spaces’. • CommentRowNumber25. • CommentAuthorUrs • CommentTimeOct 1st 2010 • (edited Oct 1st 2010) Similarly my feeling is that your saying “If there is going to be any good ’directed homotopy theory’ then directed paths need to be like morphisms in an $(\infty,1)$-category” is prejudging what the answer will be. That’s true: it’s prejudging it to be the correct answer. I hold these truths to be self-evident 1. that the right homotopy theory is Quillen’s on Kan complexes; 2. that the right directed homotopy theory is Joyal’s on weak Kan complexes. I don’t see how we can but wander in the dark if we don’t take this seriously. In any case this is what I am after with directed spaces: formulate and prove the directed homotopy hypothesis for them, which identifies them with quasi-categories. I just looked at Marco’s book and he discusses a very rich homotopy theory based on d-spaces Homotopy theory in which sense? Does he provide a model category structure on d-spaces? • CommentRowNumber26.
• CommentAuthorTim_Porter • CommentTimeOct 1st 2010 Homotopy theory in the sense of his own conditions on the structural homotopy data, so no mention of model categories as such. I feel that you are pessimistic about the picture here. I do not think Marco, Martin Raussen, etc. feel they are wandering in the dark; rather, they are exploring a new land, knowing full well the map of the undirected landscape. Have you looked at Peter Bubenik’s papers? For instance, Peter Bubenik and Krzysztof Worytkiewicz. A model category structure for local po-spaces. Homology, Homotopy and Applications, 8 (2006), pp. 263-292. These may give you some linking ideas for a directed homotopy hypothesis. I do not have a good knowledge of their stuff but it probably needs looking at. (They may use your undirected homotopies, I don’t know.) Peter has some other papers that may be of interest, including ones with statistical ideas being brought into persistent homology, and that is another situation in which evolving spaces occur. I must see if your ideas on quasi-cats could be likely to help in that area. • CommentRowNumber27. • CommentAuthorUrs • CommentTimeOct 1st 2010 • (edited Oct 1st 2010) I feel that you are pessimistic about the picture here Maybe I am just not so interested in some of these concrete motivations from computer science etc. Instead, I observe that Joyal has given a definition of directed homotopy types in the combinatorial model, and that these support a rich theory that subsumes the classical combinatorial homotopy theory. So it is natural to ask for the geometric counterpart. That’s my motivation for directed spaces. In contrast to that, you tell me that Grandis’ work on directed spaces has not led to any substantive results so far. It’s not so much that I am pessimistic about its future, I just observe that its present does not look so interesting, while Joyal’s model looks highly interesting. Have you looked at Peter Bubenik’s papers?
For instance, Peter Bubenik and Krzysztof Worytkiewicz. A model category structure for local po-spaces. Homology, Homotopy and Applications, 8 (2006), pp. 263-292. These may give you some linking ideas for a directed homotopy hypothesis Thanks for the link. I just looked at it. Let me see, here is my impression. Up to the last page they go through some standard observations, such as that the Jardine model structure exists and that given a model structure, we get one on the undercategory. The key definition for directed homotopy theory is def 8.8 on the second-to-last page. And I find it a bit curious. It effectively says that a directed homotopy equivalence is a map that becomes an ordinary homotopy equivalence after you invert all the directed paths! That’s a somewhat unexpected definition for directed homotopy theory! :-) (But maybe I am missing something. Let me know.) Then the paper ends with saying: We claim that this model category provides a good model for studying concurrency. An analysis of this model category will be the subject of future research. • CommentRowNumber28. • CommentAuthorTim_Porter • CommentTimeOct 1st 2010 • (edited Oct 1st 2010) I think Marco makes real progress in suggesting new directions, for instance looking at cubical complexes as directed things. There are, as one would expect, lots of results, but the main results relate to finding a suitable meaning for a minimal model of a fundamental category. I found those very nice indeed. I think some of the riches and also some of the problems of directed homotopy theory as it stands at the moment lie in the fact that there are intuitions about what it might achieve, but no definite RESULTS that say that it is getting there. ’Directed’ thereby means slightly different things to different workers in the field. (By RESULTS I mean BIG results!) • CommentRowNumber29. • CommentAuthorEric • CommentTimeOct 1st 2010 I think there may be some semantic issue at play here.
I’m not sure it is generally agreed upon that “directed space” should be synonymous (assuming some homotopy hypothesis) with $(\infty,1)$-category. Is it? If you forget the word “directed space”, I think I vaguely understand everything Urs says. You want a homotopy hypothesis for $(\infty,1)$-categories. I can see why that is desirable. I’m just not sure I would call such a thing a “directed homotopy hypothesis”. I might call it a “semi-directed homotopy hypothesis” or something. By the very nature of $(\infty,1)$-categories (to the very limited extent I understand them), everything higher than dimension 1 is undirected, i.e. if it is invertible then I don’t consider it directed. I would probably reserve the word “directed homotopy hypothesis” for that associated with $(\infty,\infty)$-categories. Maybe even a subset of $(\infty,\infty)$-categories whose morphisms are all directed/causal. The directed homotopy hypothesis should apply to directed $(\infty,\infty)$-categories and it would probably reduce tension by renaming $(\infty,1)$-categories to semi-directed spaces or something. • CommentRowNumber30. • CommentAuthorTim_Porter • CommentTimeOct 1st 2010 @Eric That was more or less the feeling I was trying to express in that previous comment. I am coming from ’directed’ as distilled from examples including causality, processes, evolving spaces, persistent homology, pospaces, etc. (That process itself is fraught with confusion as to what is really appropriate.) Others are coming from other directions and the terminology does not always match. I like your idea of differentiating the homotopy hypotheses. I suggest ’$(\infty,k)$-homotopy hypothesis’ as the general one and then, Urs, your statement summarising Joyal is in the case $k=1$. The filtration / indexation of these by $k$ may be useful, as it is a bit like building things in a Postnikov tower.
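Tim’s proposal of an ’$(\infty,k)$-homotopy hypothesis’ can be laid out as a table. This is my own hedged summary of the positions sketched in #29 and #30, not a claim made in the thread; everything below the first row is conjectural.

```latex
\[
\begin{array}{lll}
k & \text{algebraic side} & \text{geometric side (conjectural for } k \ge 1\text{)} \\
\hline
0 & \text{Kan complexes } (\infty\text{-groupoids}) & \text{topological spaces (classical homotopy hypothesis)} \\
1 & \text{quasicategories } ((\infty,1)\text{-categories}) & \text{spaces with directed 1-paths, invertible higher cells} \\
k & (\infty,k)\text{-categories} & \text{spaces with directed } j\text{-paths for } j \le k \\
\infty & (\infty,\infty)\text{-categories} & \text{fully directed spaces}
\end{array}
\]
```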
• CommentRowNumber31. • CommentAuthorEric • CommentTimeOct 1st 2010 • (edited Oct 1st 2010) I like your idea of differentiating the homotopy hypotheses. I suggest ’$(\infty,k)$-homotopy hypothesis’ as the general one and then, Urs, your statement summarising Joyal is in the case $k=1$. Cool. Yeah, but the $k=\infty$ case doesn’t necessarily deserve to be called “directed” either. We could have $(\infty,k)$-homotopy hypotheses and also directed $(\infty,k)$-homotopy hypotheses, where directed is distinguished by the fact that all $j$-morphisms for $j\le k$ are directed. Edit: A few 10s of minutes later… This indicates that we should be able to speak of $(\infty,k)$-spaces and $(\infty,k)$-directed spaces, where an $(\infty,k)$-directed space is an $(\infty,k)$-space where all $j$-paths for $j\le k$ are directed. However, I would move that the only $(\infty,k)$-directed space that displays the “right spirit” would be $k=\infty$, i.e. a “directed space” is an $(\infty,\infty)$-directed space. “Space” would then be synonymous with $(\infty,0)$-directed space. • CommentRowNumber32. • CommentAuthorTim_Porter • CommentTimeOct 1st 2010 • (edited Oct 1st 2010) That would be my feeling, but there would be cases where there was truncation etc. That would be quite fun to look at. My experience with proper homotopy suggests there might be several different useful variants of homotopy ’groups’, and Marco Grandis’ ideas would seem to suggest the same thing. At any stage one can invert above any particular level and get a perhaps more calculable model of the same directed thing. Sometimes there would only be sense in looking in low dimensions. For instance, if we have an evolving space we have an evolving homotopy type.
and if we are not worried about where a point at time $t$ flows to at time $t+1$, say, then we can use simple models for the homotopy type and model the change by a 1-cell that is not invertible. My vague and perhaps silly picture is of a map $p : X \to \mathbb{R}$, and we watch the way in which the homotopy type of $p^{-1}(t)$ varies with $t$. (You can imagine a contour height function, perhaps a Morse function on a manifold or whatever.) • CommentRowNumber33. • CommentAuthorMike Shulman • CommentTimeOct 1st 2010 I don’t see any reason to believe that there is only one useful/interesting notion of “directed space” or “directed homotopy theory.” Certainly it makes sense to regard quasicategories as one sort of directed space and look for a notion of directed topological space with a homotopy theory that would be equivalent to it. But I see no reason to expect that all applications falling under the general heading of “directed homotopy theory” would use models that are equivalent to that one. • CommentRowNumber34. • CommentAuthorEric • CommentTimeOct 2nd 2010 I don’t see any reason to believe that there is only one useful/interesting notion of “directed space” or “directed homotopy theory.” Yep yep. I agree. I didn’t mean to suggest otherwise. I think all the $(\infty,k)$-directed spaces could be interesting and useful for various applications. My last comment was more about the “default” case. I was suggesting that when speaking of $(\infty,k)$-directed spaces, the case most likely referred to colloquially as merely “directed space”, i.e. the default case, should probably be $(\infty,\infty)$-directed spaces. Urs seems to want “directed space” to refer to the $(\infty,1)$ case, but that introduces a possible conflict with the way others think about it. But what we consider “default” is not really important at all. How does the concept sound, i.e.
specifying homotopy hypotheses by their position in $(n,r)$, e.g. the $(\infty,1)$-homotopy hypothesis? I like it (but my vote doesn’t count of course) :) It might be good to refer to $(\infty,1)$-spaces as “quasi-directed spaces”, so the $(\infty,1)$-homotopy hypothesis would relate quasi-categories to quasi-directed spaces. How does that sound? • CommentRowNumber35. • CommentAuthorTim_Porter • CommentTimeOct 2nd 2010 I proposed that as an initial guideline a term such as 1-directed be applied to Marco’s d-spaces, where selected paths are declared to be ’directed’; then 2-directed would mean 2-d-spaces with specified paths plus squares (with suitable axioms) being in the selection, and so on. That terminology is perhaps not optimal but might work. • CommentRowNumber36. • CommentAuthorEric • CommentTimeOct 2nd 2010 • (edited Oct 2nd 2010) @Tim #35 That sounds like $(1,1)$-directed and $(2,2)$-directed spaces, which (similar to $(n,n)$-categories) would be called 1-directed and 2-directed. It all seems consistent to me. Edit: There is an interesting old discussion on this topic at (n,r)-category. • CommentRowNumber37. • CommentAuthorTim_Porter • CommentTimeOct 2nd 2010 I’m not that enthralled by the $(n,r)$ terminology, as there are things that are very useful but don’t quite fit into it. Am I right in thinking that a groupoid-enriched category would be a (2,1)-category? (Sorry to ask such a simple question.) • CommentRowNumber38. • CommentAuthorTim_Porter • CommentTimeOct 2nd 2010 On the page (n,r)-category I found Even though they have no special name, (n,1)-categories are widely studied. But track 2-categories, i.e. groupoid-enriched categories, are often studied, and track n-categories are n-categories with all n-cells invertible, so presumably are (n,n-1)-categories. Am I right? Hence (2,1)-categories are track 2-categories as used by Baues.
The hole in the terminology comes from having models that are weak n-categories but then seem strict above that! :-) Yes, it is possible, at least partially so. A 2-crossed complex corresponds to a Kan complex which has uniquely defined ’thin’ fillers above dimension 2 (I think), so is sort of strict there, but in lower dimensions corresponds to a Gray-groupoid / 2-crossed module. (I hope the idea is vaguely clear.) • CommentRowNumber39. • CommentAuthorMike Shulman • CommentTimeOct 2nd 2010 There doesn’t seem much point to me in using (n,r) terminology for directed spaces, since an undirected space is already (∞,0); only the r needs to be notated. If you want to truncate to some n<∞, you could just say “r-directed homotopy n-type”. I would also expect the “default” notion of “directed space” to be one in which paths are directed but not 2-paths. It’s not clear to me that there is a unique notion even here, though. Probably there is one notion of a “directed space equipped with a chosen skeleton” which is equivalent to a quasicategory, but it’s not obvious to me that that is the most natural notion of “directed space” for all other applications as well. • CommentRowNumber40. • CommentAuthorTim_Porter • CommentTimeOct 3rd 2010 @Mike Various points: (i) The argument that 2-paths should be directed is based on an intuition in terms of rewriting or time. If we deform a 1-cell non-reversibly into another one, then we want to represent that deformation as a 2-cell, but the 2-cell should not then be invertible, as the target 1-cell has, in some sense, been simplified from the source one. (ii) The usual way in which a pospace leads to a ’directed space’ in fact leads to one in which 2-cells would not necessarily be reversible. (iii) Any groupoid is a category, any ’track’ 2-category is a 2-category. Your default would be analogous to saying that the default for categories should be that they are groupoids!
(iv) I do not particularly like the term ’directed space’ because a directed set is not just a set with a partial order, but has a confluence condition. (Working with people in Lyon, they are using confluence in dimension n, and in that context, are also using (strict) n-categories in which the last direction is invertible but not necessarily the earlier ones.) Higher dimensional confluence is very interesting, but nowhere in the realm of our discussions are we introducing ’directed’ with that sort of sense. • CommentRowNumber41. • CommentAuthorUrs • CommentTimeOct 3rd 2010 • (edited Oct 3rd 2010) I don’t see any reason to believe that there is only one useful/interesting notion of “directed space” or “directed homotopy theory.” Time will tell. Recall where this discussion originated, around #18. The well-understood combinatorial part of 1-directed spaces serves at least to show that it is a good idea to have a definition of the fundamental $(\infty,1)$-category where the 2-cells are not paths of directed paths. That’s because there is an evident notion of “directed geometric realization” of a quasi-category. This is certainly a canonical example of a class of directed spaces. And to recover the original $(\infty,1)$-category from its directed geometric realization, one must not impose this paths-of-directed-paths condition. I think my point is: we have a huge amount of information about the combinatorial side of the 1-directed homotopy hypothesis. It seems natural to build on that, while it seems less natural to invest lots of energy into alternative approaches that don’t have much of a guiding principle. My point is not that such alternative approaches won’t eventually exist and be useful. But that we’ll be in the dark about them without further guidance or results.
So when somebody says that a fundamental $(\infty,1)$-category certainly must have paths of directed paths as its 2-cells, I point out that, no, on the contrary, in the only approach about which we do have substantial insight, this is wrong. • CommentRowNumber42. • CommentAuthorEric • CommentTimeOct 3rd 2010 • (edited Oct 3rd 2010) Note: This comment was in response to Tim’s comment #40 above. I’m guessing the tendency to want 1-paths directed and 2-paths undirected has to do with “bigon thinking” (which is potentially biased in my opinion). If your source and target 1-paths share the same start and end points, then the 2-path connecting them CANNOT be directed. It is causally impossible. The source 1-path cannot evolve into the target 1-path. So bigons are not in the “right spirit” of directed spaces. If you think of a directed space conceptually as a river with island barriers and branches forcing you to move around them, then a directed 1-path would be the course traced out by a tiny boat that could only move sideways (transverse to the flow) as it floats downstream. It must always “move forward”, but can shift left or right (assuming downstream is “up”, like in the old video games). Now, tie a bunch of tiny boats together. A directed 2-path would be the surface traced out by the tiny boats as they tried steering around. This causal shape is quite different in spirit from a bigon. • CommentRowNumber43. • CommentAuthorTim_Porter • CommentTimeOct 3rd 2010 I think that we are, as sometimes happens, talking at crossed purposes. I do not think we are in the dark about the more general gadgets. For instance, Marco puts forward cubical sets as one example of his theory, yet is taking the directed n-cube in the usual sense. He talks about geometric realisation as a d-space.
BUT the directed n-cube can also be viewed as a higher-dimensionally directed cell, so with very little extra work I think one would have an n-category-type version with n-cells not necessarily invertible (and that is the point: NOT NECESSARILY. They may be, but need not be.) The Grandis form of d-space comes with a forgetful functor to spaces. This has left and right adjoints. The left one says the only distinguished paths are the constant ones; the right one says all paths are distinguished. In general what I am saying is that we have to take things in between as well. If the model is to have directed geometric content, then deformations / 2-cells should be paths in the space of directed paths; without that assumption, directed geometric content goes by the board. The point you make is still useful, since there would be a morphism of some sort to a well understood object, but the object that one really needs to study is sitting over it. Although I do not subscribe wholeheartedly to Marco’s picture of things, I think his results show that there is a lot of useful categorical machinery that can be brought to bear on the problem, and in the generality that you seem to want to avoid. • CommentRowNumber44. • CommentAuthorEric • CommentTimeOct 3rd 2010 So when somebody says that a fundamental $(\infty,1)$-category certainly must have paths of directed paths as its 2-cells I don’t think anyone would say this (on purpose). If someone poses the question “Should the fundamental $(\infty,1)$-category of an $(\infty,1)$-space contain directed 2-paths of directed 1-paths?” I think the answer is obviously, “No”. All 2-paths would be invertible (unless I’m horribly confused) and hence cannot be directed. If someone poses the question “Should the fundamental category of a directed space contain directed 2-paths of directed 1-paths?” I think the answer would be, “That depends on how you define ’directed space’”.
I think you have already taken it as a given definition that a directed space is an $(\infty,1)$-space, which is creating some confusion among people who think about directed spaces. That would take some convincing, but is conceivable. I’d be willing to recalibrate if shown the errors of my ways. I think a decent name for an $(\infty,1)$-space would be “quasi-directed space”. Then, as I’ve said above, the homotopy hypothesis you are after would relate quasi-categories and quasi-directed spaces. To deserve the word “space”, I think you need $n=\infty$. Then an $(\infty,0)$-space is just a space, an $(\infty,1)$-space is a quasi-directed space, and an $(\infty,\infty)$-space (or just $\infty$-space) would be a directed space. • CommentRowNumber45. • CommentAuthorTim_Porter • CommentTimeOct 3rd 2010 In all this, with a shape theorist’s hat on, I feel that a directed space should be a space, plus structure, such as a local order. I have never liked the tendency amongst some topologists of calling any simplicial set ‘a space’. This is the same problem as here. Imagine some space with a sense of flow, or with a sense of evolving (each point having a time counter, but time could be branching). There would be objects = points, 1-cells being paths consistent with the time counter, and a useful notion would be to deform one such path to another through intermediate similar paths. (I suspect some fibred categorical model might eventually be useful here, say over a locale, but for today, I will stick with a topological picture.) There would, for instance, be the possibility that two different paths, which were not linked by any such deformation, could be deformed into a third one, a sort of confluence situation. (The third path was in the ’possible future cone’ of both of the first two.)
We would not want to consider the two original paths to be directly linked by a 2-cell, although they would be by a zig-zag of 2-cells. We would have a 2-category (after a bit more fiddling around and dividing out by suitable 2-homotopies). I do not see that this situation is that strange, and it should be able to be subsumed under a notion of model for the directed homotopy of a ’directed space’. It may not be an $(\infty,1)$-category… bad luck; it is there and needs studying as a situation modelling an interesting segment of directed homotopy. (Directed homotopy theory may be an ugly sister’s foot that does not fit well into the immediately available (glass slipper) models but will be related to them.) Proper homotopy theory does not fit that neatly into the quasi-cat framework but still has rich structure that mirrors quite well the geometry of non-compact manifolds. Similar points apply to strong shape theory. They are very manageable and give us good information. They also have very clear n-cat interpretations and historically led to some of the models for weak infinity categories (look at Batanin’s work). I am not that pessimistic about the notion of directed homotopy. I think it leads to some interesting problems. (And I know that glass slipper was probably a mistranslation from the French, so we don’t need to discuss that here! :-)) Getting away from that discussion slightly, Marco has some nice things to say about future invariance and past invariance. He tries to see how the directed homotopy of the future changes along a directed path, and to use this to thin down the fundamental category of a d-space to something smaller. This looks good and I may try and explain it (once I understand it better). • CommentRowNumber46. • CommentAuthorEric • CommentTimeOct 3rd 2010 • (edited Oct 3rd 2010) Note: Posted before seeing Tim’s previous cool comment.
Here is an example… Let’s say that the direction of our directed space is “up” and we have two impenetrable islands as depicted below. Note: The arrows depict the maximum speed a boat can travel around the bottom island. Similar to a light cone. The upper left island is too far away from the bottom right arrow, so that any “boat” that passes on the right of the first island cannot get around to the left of the left island. How would the fundamental categories differ depending on what types of 2-paths are allowed? Would they be identical? • CommentRowNumber47. • CommentAuthorUrs • CommentTimeOct 3rd 2010 Tim says: puts forward cubical sets as one example of his theory But nobody has a model for directed homotopy types (aka $(\infty,1)$-categories) in terms of cubical sets. Eric says I don’t think anyone would say [that 2-cells should run through directed paths] (on purpose). David R. said this in #18 in this thread. Mike made a remark to that effect in #5 in the parallel thread “fundamental $(\infty,1)$-category” My emphasis here on the fact that we should be looking at the directed homotopy theory which does exist – namely Joyal’s – is to point out that in that context it is clearly not the right thing to demand. This is part of what I mean by “being in the dark”. If we want to find a good model of directed spaces and fundamental $(\infty,1)$-categories, we should check against existing results, not just against our intuition. Our intuition may be wrong. • CommentRowNumber48. • CommentAuthorTim_Porter • CommentTimeOct 3rd 2010 @Urs I must say that the most developed directed homotopy theory is not only André’s. (Although I have NOT seen this.) The views of Martin Raussen, Lisbeth Fajstrup, Marco, Eric Goubault, and Philippe Gaucher (and others) also need noting. That is a considerable body of hard mathematics related to the general notion of directed algebraic topology.
As a ’devil’s advocate’ let me say that even if the theory put forward by André were beautiful (and most surely it will be), the needs of the applications AND very natural situations that occur in mathematics might mean that there was no actual use for that theory in this area, as it did not fit the situations of interest. My point is to go beyond yours and say: Suppose we go for the more general model; then we can, for calculations, escape down to something weaker, such as André’s, and use this to find invariants, etc. Once that is done we have an obstruction / lifting / rectification problem somewhat similar to that when we lift back up from chain complexes to crossed complexes and then beyond that to quadratic complexes. We might not expect all models in Joyal’s sense to lift to non-trivial models in the next level up (2-directed ones); the obstructions would then tell us a lot of information about the situation being modelled. Likewise there would be ‘Joyal models’ that would lift non-uniquely to 2-directed models, and again that gives a classification problem. (Passing from 2-directed to 1-directed might be analogous to localising.) Another idea is that a useful model for some directed spatial phenomena may be presheaves (or perhaps homotopy coherent presheaves) of quasi-categories. Think of this as indexed homotopy theory. I do not necessarily mean the base is spatial or that it represents time. It represents a ’parameter space’ and could be almost anything. Would that provide a model for the situations that Eric and I hope to model, and what would the resulting theory look like?
We know three items in this table very well $\begin{array}{ccc} & \text{combinatorial} & \text{topological} \\ 0\text{-directed} & \text{Kan complex} & \text{topological space} \\ 1\text{-directed} & \text{quasicategory} & ??? \end{array}$ So the evident question is: what fills this table in the bottom right corner? Whatever it is, that deserves to be called a topological model for directed homotopy theory. The views of Martin Raussen, Lisbeth Fajstrup, Marco, Eric Goubault, and Philippe Gaucher (and others) also need noting. That is a considerable body of hard mathematics related to the general notion of directed algebraic topology. Okay. I am likely ignorant of most of this. Could you summarize some highlights? let me say that even if the theory put forward by André is beautiful (and most surely it will be), the needs of the applications AND very natural situations that occur in mathematics might show that there is no actual use for that theory in this area, as it does not fit the situations of interest. So maybe the situations of interest that you do mean are not actually that much “homotopy-theoretical”? For I find it hard to see how one can argue that quasi-categories have not proven to be the right tool for studying examples in directed homotopy theory. Another idea is that a useful model for some directed spatial phenomena may be presheaves (or perhaps homotopy coherent presheaves) of quasi-categories. That’s the next step. $\left(\infty ,2\right)$-topos theory modeled by presheaves of directed spaces. • CommentRowNumber50. • CommentAuthorTim_Porter • CommentTimeOct 3rd 2010 Sorry but I do not see quasi categories as being about directed homotopy theory. They are more important than that! :-) They do provide insights, but I am a bit allergic to the idea of a ’right tool’, and what examples of directed homotopy theory are you thinking of?
In fact although I worked on quasi categories back in the 1980s I have not really read André or Lurie in detail, although I do have a generally good idea of what they have done. • CommentRowNumber51. • CommentAuthorEric • CommentTimeOct 3rd 2010 • (edited Oct 3rd 2010) @Urs #47 Eric says I don’t think anyone would say [that 2-cells should run through directed paths] (on purpose). David R. said this in #18 in this thread. Mike made a remark to that extent in #5 in the parallel thread “fundamental $\left(\infty ,1\right)$-category” Actually, I think it is fine (but what do I know?!) to want 2-paths to pass through directed 1-paths. Sorry about that. What I meant was that when talking about $\left(\infty ,1\right)$-stuff you shouldn’t expect this to be done in a directed way, i.e. the 2-paths passing through directed 1-paths are themselves undirected because they are invertible. If you want to allow 2-paths between undirected 1-paths, then what would your fundamental category for my diagram above look like? I think it would be different from the fundamental category when 2-paths are restricted to pass between directed, a.k.a. causal, 1-paths. I think (but am likely wrong) that if 2-paths only pass through directed 1-paths, then the fundamental category in my example will consist of just three equivalence classes of paths, illustrated below. However, if 2-paths are not restricted to passing through directed 1-paths, then we pick up an additional equivalence class of paths, illustrated below, that would not be allowed if things were causal. I’m probably just completely confused. How do things work in this trivial example? • CommentRowNumber52. • CommentAuthorTim_Porter • CommentTimeOct 3rd 2010 I’m afraid I do not understand your calculation. The paths should all be feasible and the black one isn’t. I understand what you’re trying to produce, but I’m not sure that this example gives it.
My understanding of the fundamental category would be to have all points as objects and then (?) homotopy classes of directed paths between them. Is the question whether (?) should be ’directed’ or not? I will have to think a bit. (doh) • CommentRowNumber53. • CommentAuthorUrs • CommentTimeOct 3rd 2010 • (edited Oct 3rd 2010) Sorry but I do not see quasi categories as being about directed homotopy theory. Do you agree that Kan complexes provide a combinatorial model for ordinary homotopy theory? • CommentRowNumber54. • CommentAuthorTim_Porter • CommentTimeOct 3rd 2010 No. But before you despair of me, they provide a combinatorial model of ordinary homotopy types. I believe that quasi categories, as they generalise categories, do provide a good combinatorial model for 1-directed homotopy types, i.e. types in which the way directed 1-cells deform seems to be undirected. Quasi-categories are about ordinary homotopy theory as well, since, of course, comparisons of homotopy types are not necessarily invertible. That gives a locally Kan simplicially enriched category and hence a quasicategory (as you know well). I do not think that it is wise to stop there, as there are probably cases of (perhaps local) pospaces that have a difference in higher directedness, and the assumption that 2-cells need not be directed would not seem to make sense there. I think that homotopy of dipaths and dihomotopy of dipaths both work, are related, and probably the former is easier to calculate with than the latter. These both assume that the homotopy passes through dipaths, since from the theory there are no other maps around. (In my paper, I did not investigate whether the simplicially enriched categories that result are Kan enriched, or quasi-cat enriched, or not at all, something that would have been useful.) • CommentRowNumber55. • CommentAuthorUrs • CommentTimeOct 3rd 2010 • (edited Oct 3rd 2010) No. Then it’s no wonder that we cannot agree!
;-) they provide a combinatorial model of ordinary homotopy types. That’s what the homotopy 1-category of Kan complexes knows about. But the full Kan-complex enriched category of all Kan complexes does much more. It knows the full homotopy theory. I believe that quasi categories, as they generalise categories, do provide a good combinatorial model for 1-directed homotopy types, So maybe we can find agreement after all! I do not think that it is wise to stop there, as there are probably cases of (perhaps local) pospaces that have a difference in higher directedness, and the assumption that 2-cells need not be directed would not seem to make sense there Sure, but that’s then talking about 2-directed spaces, 3-directed spaces, etc. Is that maybe the whole disagreement: that you thought I meant by “directed homotopy theory” the $n$-directed version for all $n$, whereas what I did mean was the 1-directed version? I suggest: let’s get the 1-directed homotopy hypothesis under control first. Once we understand that, we can think about higher directedness. Experience from the combinatorial side suggests that after 1 it will get much harder. • CommentRowNumber56. • CommentAuthorTim_Porter • CommentTimeOct 3rd 2010 I agree! No, I did not think you meant that. I felt that without an index the term should be kept for the general case and not ’purloined’ for use on what may be 1-directed. (We should not rush into calling it that, but it will do as an interim term at least.) I can see some hope in higher dimensions but it is vaguer. • CommentRowNumber57. • CommentAuthorMike Shulman • CommentTimeOct 4th 2010 I don’t have time to follow this whole discussion, but re: whether 2-paths between directed paths should pass through only directed paths, consider the 2-disc made into a directed space in such a way that there are exactly two directed paths (up to reparametrization) from (-1,0) to (1,0): one going clockwise around the top and one going counterclockwise around the bottom.
These two paths give rise to two 1-morphisms in the fundamental (whatever)-category; are these two 1-morphisms equivalent? If 2-paths between directed paths can pass through undirected paths, then yes, they are: the disc itself provides a homotopy. But that is greatly against my intuition for what it means to call this a “directed space.” Maybe there are different notions of directed space; which is/are useful? That should be the question. Yes, quasicategories are undoubtedly useful. But I think it is not valid to call them “a notion of directed space” until we have a prior notion of directed space and have proven them equivalent to it. The only reason it’s at all sensible to call Kan complexes “a notion of space” is because of the truth of the homotopy hypothesis; the implication doesn’t go the other way. • CommentRowNumber58. • CommentAuthorUrs • CommentTimeOct 4th 2010 The only reason it’s at all sensible to call Kan complexes “a notion of space” is because of the truth of the homotopy hypothesis; the implication doesn’t go the other way. Right. But once it is true, we have that a quasi-category is evidently a Kan complex with directed edges. Concerning your disk example: I find it very useful to build intuition from the directed geometric realizations of quasi-categories: the ordinary geometric realization, with the directed paths being the order-preserving maps through the 1-skeleton that start and end at vertices. In that picture one sees that your disk example should be regarded as producing two equivalent 1-paths. • CommentRowNumber59. • CommentAuthorEric • CommentTimeOct 4th 2010 • (edited Oct 4th 2010) Regarding my diagrams in #51, Tim said in #52: I’m afraid I do not understand your calculation. The paths should all be feasible and the black one isn’t. I understand what you’re trying to produce, but I’m not sure that this example gives it. My understanding of the fundamental category would be to have all points as objects and then (?)
homotopy classes of directed paths between them. Is the question whether (?) should be ’directed’ or not? I will have to think a bit. (doh) If it helps, we should think of all my arrows as originating from the same point in the “distant past” and recombining at the same point in the “distant future”. It is clear that if the allowed 2-paths only pass through directed 1-paths, then the black path is not allowed. However, maybe I’m just confused, it seems that if 2-paths are not constrained to pass through directed 1-paths, then the black path is allowed. Is the black path allowed, i.e. is it part of the fundamental category, in what Urs is talking about? • CommentRowNumber60. • CommentAuthorMike Shulman • CommentTimeOct 4th 2010 A quasicategory is evidently a Kan complex with directed edges, but that does not necessarily mean that it is the same as a directed space. The whole point I’m making is that it’s not clear that directed geometric realizations of quasicategories are the right way to build intuition for directed spaces. • CommentRowNumber61. • CommentAuthorTim_Porter • CommentTimeOct 4th 2010 @Eric Terminology: I think of the fundamental groupoid as being on all points of the space, and then one can hope to find a nice cuddly model on a few selected base points that is equivalent. Similarly, the fundamental category here has all points in the space as its objects, and then one looks for a nice model. (Marco discusses the problem of what ’nice’ means in a full chapter, so I won’t try to here.) You are talking about the hom category within that from $-\infty$ to $\infty$. Next, a path is selected in your d-space if it is increasing (going up the page) and makes an angle of between 45 and 135 degrees to the positive x-axis. (Right?) Then I agree $\Pi \left(-\infty ,\infty \right)$ has three elements, the three blue arrows. The black arrow does not exist. Any d-path between the points is d-homotopic to one of the three. Perhaps I have misunderstood your blue discs.
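Tim’s feasibility criterion can be checked mechanically on polygonal approximations of paths. A minimal sketch (the function name and the two sample paths are hypothetical illustrations; the angle bounds are the ones Tim states, with 45 degrees taken as allowed):

```python
def is_feasible(path):
    """Tim's criterion: a polygonal path, given as a list of (x, y)
    points, is a d-path if every segment goes 'up the page' at an
    angle between 45 and 135 degrees to the positive x-axis,
    i.e. dy > 0 and |dx| <= dy."""
    return all(
        y1 - y0 > 0 and abs(x1 - x0) <= y1 - y0
        for (x0, y0), (x1, y1) in zip(path, path[1:])
    )

# A steep zig-zag "boat" path is feasible; a path whose first leg
# is shallower than 45 degrees is not.
blue = [(0.0, 0.0), (1.0, 2.0), (0.0, 4.0)]
black = [(0.0, 0.0), (2.0, 0.5), (2.0, 4.0)]
```

Under this test the shallow path is rejected, matching Tim’s reading that the black arrow simply does not exist as a d-path.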
If you interpret them as being as in Mike’s example then something interesting happens. To be more precise, make them not solid islands but ’no go areas’, ’shallows’, ’whirlpools’ or whatever imagery tickles your fancy; then no directed path in the plane can pass through them. The black path is still not there, but whilst dihomotopies still give the same answer, perhaps (and this is debatable) allowing homotopies to use paths that DO pass through them makes the three elements collapse to just one. @Mike Marco introduced d-spaces as a neat setting to do something he called directed homotopy theory. The choice was because it has nice categorical properties AND contains lots of examples relating to the concrete situations in which directed homotopy was being applied (pospaces etc.). dTop has exponentiable objects, so one can form a d-space of d-paths in a d-space, and d-homotopy corresponds to paths in that. I am not 100% convinced by d-spaces because of the sort of example that you mention, but their theory works very nicely and is quite pretty. One form of a directed homotopy hypothesis would be to find a geometric realisation from some nice category of weak infinity cats back to dTop, which gave an equivalence of homotopy categories in some shape. That is not the version that Urs mentions, of course. His would be interesting as well, as it would give additional tools for quasi-cats. Must rush. I should be writing a report on a paper, and then have to go to the dentist’s! :-) • CommentRowNumber62. • CommentAuthorEric • CommentTimeOct 4th 2010 • (edited Oct 4th 2010) @Tim #61: My visual aid was “boats in a river”, so an “impenetrable island” is definitely a “no go” area :) The intention of the blue disk is that no 1-path can pass through it and no 2-path can pass across it. • CommentRowNumber63. • CommentAuthorTim_Porter • CommentTimeOct 4th 2010 ’Nothing’? Do you mean really nothing or nothing directed? I understood what you said but not precisely, i.e.
in directed land the black path just does not exist, but neither is it reachable from one of the blue arrows by a homotopy (<-not directed). (And I would not like to drive a boat through shallows or across a whirlpool either. :-)) • CommentRowNumber64. • CommentAuthorUrs • CommentTimeOct 4th 2010 Mike, sure, one will have to formulate and prove the directed homotopy hypothesis, in order to be sure. I am talking about this in the parallel thread. I haven’t proven it yet, but it looks very plausible to me that the directed geometric realization and the definition of fundamental (oo,1)-category that I gave should work. In any case, the directed geometric realization is a plausible geometric incarnation of quasi-categories, and as such it does shed light on questions like in your disk example. It provides a background framework against which to test intuition about that disk example, I think. In the parallel thread I sketch what one still needs to prove to get the directed homotopy hypothesis. It looks like it might actually be an easy proof, using the standard homotopy hypothesis, but I don’t have it yet. • CommentRowNumber65. • CommentAuthorTim_Porter • CommentTimeOct 4th 2010 Can I suggest ’(1-)directed homotopy hypothesis’ be used, otherwise the same confusion will reign as before! • CommentRowNumber66. • CommentAuthorEric • CommentTimeOct 4th 2010 And until the connection to existing notions of “directed spaces” is better established, I would recommend using “quasi-directed space” to describe the other side of the $\left(\infty ,1\right)$-homotopy hypothesis coin opposite quasi-categories. • CommentRowNumber67. • CommentAuthorUrs • CommentTimeOct 4th 2010 Eric, as was pointed out before, it does not make sense to keep the “$\infty$” around when talking about homotopy-theoretical things. That’s implicit. • CommentRowNumber68.
• CommentAuthorEric • CommentTimeOct 4th 2010 • (edited Oct 4th 2010) as was pointed out before, it does not make sense to keep the “$\infty$” around when talking about homotopy-theoretical things. That’s implicit. I think I understand this, but there is no harm I can see in keeping it explicit. Ultimately, I “think” there might be a difference between “1-directed” (being short for $\left(1,1\right)$-directed) and $\left(\infty ,1\right)$-directed, which I’d still like to call “quasi-directed”. Getting the terminology right from the beginning could avoid some clashes down the road. Has anyone ever studied “(abstract) finitary homotopy” where we have an abstract interval rather than a continuum interval? I can imagine cases where you might have a large abstract complex with subcomplexes connected by abstract edges (and higher cells) that can be interpreted as (abstract) finitary homotopies. In this finitary environment it would seem that you could have an $\left(n,r\right)$-homotopy hypothesis for finite $n$ (even if it isn’t very exciting). After all, homotopy is about sliding spaces around, and this can be done abstractly without a continuum interval. The $\left(\infty ,-\right)$ implies a continuum. Note to self (and anyone else interested): • CommentRowNumber69. • CommentAuthorUrs • CommentTimeOct 4th 2010 Has anyone ever studied “(abstract) finitary homotopy” where we have an abstract interval rather than a continuum interval? Yes, that’s the homotopy theory modeled by Kan complexes. The role of the interval is played by the interval groupoid. The (∞,−) implies a continuum. No, it doesn’t. It’s the geometric version of homotopy theory in terms of topological spaces that does. The combinatorial version in terms of simplicial sets and Kan complexes does not. • CommentRowNumber70. • CommentAuthorTim_Porter • CommentTimeOct 4th 2010 There are very nice results by Jonathan Barmak (Buenos Aires) on finite spaces. They are well worth looking at.
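Finite spaces of the kind mentioned here can be experimented with directly. A minimal sketch, assuming as example the standard four-point “pseudocircle” (by McCord’s results it is weakly homotopy equivalent to the circle): for a finite $T_0$ space the whole topology is encoded in the specialization order, computed below with the convention that $x \le y$ iff every open set containing $x$ also contains $y$.

```python
def specialization_preorder(points, opens):
    """Specialization preorder of a finite topological space:
    x <= y iff every open set containing x also contains y
    (equivalently, x lies in the closure of {y}).  For a finite
    T0 space this is a partial order, and by McCord's theorem the
    space is weakly equivalent to the order complex of this poset."""
    return {(x, y) for x in points for y in points
            if all(y in U for U in opens if x in U)}

# The four-point "pseudocircle", a finite model of the circle S^1.
X = {'a', 'b', 'c', 'd'}
opens = [set(), {'a'}, {'b'}, {'a', 'b'},
         {'a', 'b', 'c'}, {'a', 'b', 'd'}, X]

order = specialization_preorder(X, opens)
# Any relation x <= y with x != y witnesses failure of the Hausdorff
# property (a finite Hausdorff space would be discrete).
nontrivial = {(x, y) for (x, y) in order if x != y}
```

The non-trivial relations put c and d below a and b, the face poset of a circle built from two vertices and two edges; this is exactly the unavoidable non-Hausdorff behaviour of finite models that the discussion turns to next.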
Eric’s link to Peter May’s webpage on finite topological spaces has links to his papers with Gabriel Minian. • CommentRowNumber71. • CommentAuthorTim_Porter • CommentTimeOct 4th 2010 Finite spaces are great fun to work with, and when I have written the 100 or so pages for the n-Lab that I have more or less promised to do, I might write some more on that subject. They are also very useful. Note the result of McCord from way back, that finite topological spaces model ALL weak homotopy types of compact simplicial complexes, and therefore model an awesome part of homotopy theory. Barmak’s work (for his PhD thesis!) pushed things a lot further, and Peter May seems to think very highly of that work, so this is an area to watch. • CommentRowNumber72. • CommentAuthorEric • CommentTimeOct 4th 2010 • (edited Oct 4th 2010) Barmak’s work (for his PhD thesis!) pushed things a lot further, and Peter May seems to think very highly of that work, so this is an area to watch. Yeah. I was happily surprised to see so many recent references. When I was in grad school, I independently rediscovered (although it is pretty obvious I guess) that you can get interesting finite topological models of continuum spaces in exactly the way outlined in some of those papers. The thing I thought was interesting is that any such model must necessarily be non-Hausdorff. That is why I throw in a “(non-Hausdorff)” whenever I mention it. This means that any finitary model of the physical universe (modest proposal) must necessarily be non-Hausdorff. This would likely wreak some havoc with some popular “no go” theorems. Hausdorff spaces are pretty deeply ingrained in the minds of most physicists. In my dissertation, I wrote: When it comes to constructing continuum models of space-time, the proposition that distinct events are in fact separated is usually seen as a reasonable requirement.
Hence, the underlying topological spaces on top of which continuum models of space-time are to be built are usually assumed to be Hausdorff.${}^{1}$ [snip] 1. Assuming space-time to be Hausdorff is convenient, but is by no means necessary. Note the result of McCord from way back I LOVE that “old school” algebraic topology stuff. My all time favorite mathematician is Whitney. I treasure my copy of Lefschetz. I wonder what those guys would think of this n-Stuff? • CommentRowNumber73. • CommentAuthorMike Shulman • CommentTimeOct 4th 2010 If you want a notion of directed space to give you intuitions about quasicategories, that’s great. I’m just saying there may be other inequivalent notions of directed space. In particular, the notion of directed space you’re working with in the other thread has the property that a sub-path of a directed path is not in general directed, nor is a constant path in general directed. Something like this seems to be necessary if you want an equivalence with quasicategories, but it is not part of my a priori intuition about what a directed space should be (nor, I believe, is it part of Grandis’ d-spaces). • CommentRowNumber74. • CommentAuthorEric • CommentTimeOct 5th 2010 a sub-path of a directed path is not in general directed Would this occur in only extreme pathological cases, or is it a general feature of this approach? Would the same ever happen in Grandis’ approach? • CommentRowNumber75. • CommentAuthorTim_Porter • CommentTimeOct 5th 2010 • (edited Oct 5th 2010) Marco (p. 51 of his book and in his papers) does not require subpaths of d-paths to be d-paths. The d-space structure is given exactly as in directed topological space. As this is an abstract structure, you can try to tweak an example for which a subpath ’closure’ axiom does hold and delete one subpath that does not seem to matter, so one can make it pathological.
Mike’s example of the directed circle could be made into one which was subpath closed, I think, and that would change the fundamental category, as the set of ’objects’ would then include all points on the boundary of the disc. In your river example, you looked at d-paths from -infinity to infinity and just at that part of the fundamental category, but there would be fun going on on the boundaries of the circular islands as well. Suppose you take a point on one of those which has a tangent line with too shallow a gradient (i.e. less than 45 degs); I don’t think the ’boat’ could escape! Fun. It is something like a deadlock in the Swiss flag example, but there is no ’hole’ into which the path is disappearing. (Am I right?) That example and similar ones should be looked at in a LOT of detail. • CommentRowNumber76. • CommentAuthorMike Shulman • CommentTimeOct 5th 2010 I read part (2) of the definition here as implying closure under “subpaths” in a suitable sense. E.g. if $I\to X$ is directed and $0 < a < b < 1$, then choose the natural increasing homeomorphism $I=\left[0,1\right]\to \left[a,b\right]$ and compose it with the inclusion $\left[a,b\right]↪\left[0,1\right]$ to show that the “subpath from a to b” of any directed path is also directed. The definition part (1) also explicitly includes all constant paths as directed. (These are both differences from the definition given at fundamental (infinity,1)-category, which only allows reparametrization by homeomorphisms and does not include constant maps.) Therefore, the object-set of the fundamental category must be the set of all points of the space. Unless I’m reading something wrong somewhere? • CommentRowNumber77. • CommentAuthorTim_Porter • CommentTimeOct 5th 2010 You are right. That point had escaped me as I was thinking of reparametrisation whilst Marco talks of ’partial reparametrisation’. The object set must be all the points because the constant paths are selected.
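Mike’s subpath construction is just precomposition with an increasing affine homeomorphism $\left[0,1\right]\to \left[a,b\right]$. A small sketch (the path gamma is an arbitrary illustrative choice, not anything from Marco’s book):

```python
import math

def subpath(gamma, a, b):
    """Restrict a path gamma : [0,1] -> X to [a,b] by precomposing
    with the increasing affine homeomorphism t |-> a + t*(b - a).
    If directedness is closed under such reparametrizations, the
    subpath of a directed path is again directed."""
    assert 0 <= a < b <= 1
    return lambda t: gamma(a + t * (b - a))

# An illustrative path in the plane.
gamma = lambda t: (math.cos(math.pi * t), t)
sub = subpath(gamma, 0.25, 0.75)
```

Constant paths, the other point Mike raises, are the degenerate case this construction cannot produce; they have to be included separately by part (1) of the definition.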
It is worth noting, as I have mentioned before, that Marco puts a lot of effort (Chapter 3, pp. 145-226) into modelling the fundamental category. By this he means that he is seeking small models (representative objects and arrows) that encode enough of the structure. Equivalence is not the question here, as that destroys too much of the structure. In reading the book, this struck me as being the most immediately important chapter. I should offer to summarise it, and will have to write a review of the book shortly, but I cannot do so just now. • CommentRowNumber78. • CommentAuthorUrs • CommentTimeOct 5th 2010 • (edited Oct 5th 2010) If you want a notion of directed space to give you intuitions about quasicategories, that’s great. No, the other way round: I want to use quasi-categories to give me intuition for what a notion of directed space can be that has a chance of supporting a directed homotopy hypothesis. For I think that it is clear that quasi-categories are one side of the directed homotopy hypothesis, so we can use them to deduce what the other side can be and what not. I’m just saying there may be other inequivalent notions of directed space. In particular, the notion of directed space you’re working with in the other thread has the property that a sub-path of a directed path is not in general directed, nor is a constant path in general directed. Something like this seems to be necessary if you want an equivalence with quasicategories, but it is not part of my a priori intuition about what a directed space should be (nor, I believe, is it part of Grandis’ d-spaces). Yes, so for the purposes of actually doing directed homotopy theory I find it hard to see how other definitions would do the trick. Of course there may be many sensible definitions of directed topology. But that’s a big difference from directed homotopy theory. • CommentRowNumber79.
• CommentAuthorMike Shulman • CommentTimeOct 6th 2010 Urs, it seems to me that you have a very blinkered view of what “directed homotopy theory” might mean. It sounds to me as though it should encompass the homotopy theory of directed spaces, for whatever notion(s) of directed space turn out to be useful. One notion of directed space may turn out to have a homotopy theory that is equivalent to quasicategories, but another may not, and yet it would still be a homotopy theory that (to my mind) merits inclusion under the term “directed homotopy theory”. • CommentRowNumber80. • CommentAuthorMike Shulman • CommentTimeOct 6th 2010 In particular, Grandis’ book is called “Directed Homotopy Theory,” but as I pointed out above, his notion of directed space is not likely to be equivalent to quasicategories. I think we should let the people who’ve been studying “directed homotopy theory” all this time have some say in what the term means. (-: • CommentRowNumber81. • CommentAuthorDavidRoberts • CommentTimeOct 6th 2010 One thing which may be worth considering is that for each directed space structure D on the geometric n-simplex you’ll get a different notion of fundamental (oo,1)-category, subject to the requirement that the standard retracts behave well with respect to D. For example, I was thinking of perhaps having a codimension one foliation on the n-simplex, with directed paths deemed to be those transverse to the leaves. This statement is a bit rubbery, because one has to take care of boundaries and corners, and we may have to restrict to PL or piecewise-smooth paths. • CommentRowNumber82. • CommentAuthorEric • CommentTimeOct 6th 2010 In particular, Grandis’ book is called “Directed Homotopy Theory,” but as I pointed out above, his notion of directed space is not likely to be equivalent to quasicategories. I think we should let the people who’ve been studying “directed homotopy theory” all this time have some say in what the term means. (-: Yes yes.
Please Urs, if possible could you try to avoid the naming conflict here and call your stuff (as important and useful as it may be) something else? I’ve proposed “quasi-directed” for what you’re doing, but the name doesn’t matter AS LONG AS you don’t call it “directed”. I think the stuff Grandis is working on deserves priority for that adjective. • CommentRowNumber83. • CommentAuthorUrs • CommentTimeOct 6th 2010 • (edited Oct 6th 2010) Urs, it seems to me that you have a very blinkered view of what “directed homotopy theory” might mean. That’s right! I think homotopy theory is such an important concept that it would be good to be very restrictive about what to call any supposed generalization. I think we should let the people who’ve been studying “directed homotopy theory” all this time have some say in what the term means. (-: Sure, but this is turning the way the discussion came about on its head now: I am not trying to make all people adhere to some notion (I would if I had more time and energy! ;-). No, what happened here was conversely that I found that discussions of existing notions of “directed homotopy theory” were clearly missing some aspect, because quasi-categories did not fit in. So I started to suggest that whatever directed spaces are taken to be, one canonical class of examples must come from quasi-categories, and there must be a notion of fundamental $\left(\infty ,1\right)$-category recovering this, and we must not dismiss notions of directed spaces and fundamental $\left(\infty ,1\right)$-category that achieve this. In short, a notion of directed homotopy theory maybe does not have to be restricted to one corresponding to quasi-categories, but if it does not at least include it, it is clearly missing something. I said this before, but I say it again: there is a difference between topology and homotopy theory. Much of what motivates existing work on “directed homotopy theory” to me seems to be more like directed topology.
But I don’t want to fight about terminology. What I want is to highlight that there is a glaring open problem in directed homotopy theory (by whatever interpretation of the term): find the notion of directed space that proves a directed homotopy hypothesis with quasi-categories on one side. And that looking at this glaring open problem does adjust some intuitions about what directed spaces should obviously be like. I think it is important to think about this. • CommentRowNumber84. • CommentAuthorDavidRoberts • CommentTimeOct 6th 2010 I certainly accept Urs approach. I just would like to see if there is a way to get a fundamental (oo,1)-category structure from different directed space structures on ${\Delta }^{n}$. My inspiration comes from thinking of the nerves of various ’crossed beasties’, whereby the definition of the fundamental ’crossed beastie’ of the n-simplex determined it for all the rest. There is certainly a directed space structure on ${\Delta }^{2}$ where the interior points are in the image of a directed path: If one imagines ${\Delta }^{2}$ as being the triangle with base the interval $\left[0,1\right]×\left\{0\right\}\subset {ℝ}^{2}$, then directed paths are those with increasing first coordinate. Then more generally (and I’m starting to think aloud here), take a model $↑{\Delta }^{n}$ of the n-simplex affinely embedded in ${ℝ}^{\infty }$ (using only the first $n+1$ coordinates) whereby the induced path $\left[0,n\right]\to {ℝ}^{\infty }$ given by restricting to the spine is monotonically increasing in its first coordinate. Then any path in the 1-skeleton such that the order of the vertices is always increasing has increasing first coordinate. Now define a path in $↑{\Delta }^{n}$ to be directed if the path $I\to ↑{\Delta }^{n}↪{ℝ}^{\infty }\stackrel{{\mathrm{pr}}_{1}}{\to }ℝ$ is increasing. This captures Urs’ notion of directed path. 
Then there is a homotopy inside the 2-skeleton fixing the endpoints between order-preserving paths, such that this homotopy is through directed paths. Note that this homotopy is invertible. I believe that it should be possible to define the standard retractions ${s}_{n,k}$ of an $n$-simplex onto a horn such that given a directed path $I\to ↑{\Delta }^{n}$ the map $\left\{t\right\}×I\to I×↑{\Delta }^{n}\stackrel{{s}_{n,k}}{\to }↑{\Delta }^{n}↪{ℝ}^{\infty }\stackrel{{\mathrm{pr}}_{1}}{\to }ℝ$ is increasing for all $t\in I$. Then if we define the simplicial set $n↦\mathrm{dSpace}\left(↑{\Delta }^{n},X\right)$ it should be a quasicategory with a map to Urs’ fundamental quasicategory. • CommentRowNumber85. • CommentAuthorTim_Porter • CommentTimeOct 6th 2010 To a large extent I agree with you, Urs. You say And that looking at this glaring open problem does adjust some intuitions about what directed spaces should obviously be like I think I would say ’And that looking at this glaring open problem does add some knowledge about what directed spaces are like.’ If one identified a class of directed spaces that did correspond to quasi-categories, that would provide us with very useful insights/information and probably very useful methods to attack at least some of the other types of ’directed spaces’ that seem to occur in examples. The classification of different classes of directed homotopy theories might be the aim. The interactions between them may be very enlightening. I changed ’intuition’ to ’knowledge’, because the intuition is coming from a wide range of directions (some of which I intend to discuss in some new entries) and the applications are important. The open problem you mention would at the very least provide newish examples of directed spaces which have a good directed homotopy theory but would also delimit and characterise that part of directed homotopy theory which can be faithfully modelled using quasicategories.
I deleted ’obviously’ because, although I have worked on directed homotopy theory quite a lot, I do not know exactly the ’correct’ form of input ’space plus ???’ that should be used in order to cover the desired examples in the best way.

• Comment 86: Mike Shulman, Oct 6th 2010

’What I want is to highlight that there is a glaring open problem in directed homotopy theory (by whatever interpretation of the term): find the notion of directed space that proves a directed homotopy hypothesis with quasi-categories on one side.’

I agree with that.

’And that looking at this glaring open problem does adjust some intuitions about what directed spaces should obviously be like.’

I don’t so much agree with that, because, as I’ve said, I think there are likely to be multiple useful notions of “directed space.” So finding out something about the ones which model quasicategories doesn’t mean that our prior intuitions about a different notion of directed space were wrong. But yes, let’s not fight about terminology.

• Comment 87: Tim_Porter, Nov 21st 2010 (edited Nov 21st 2010)

I have been tidying up directed topological space. I got rid of some of the older query boxes, and added a bit more comment early on.

• Comment 88: Tim_Porter, Nov 21st 2010

Further, I have added a pair of examples from Marco’s book which illustrate the problem of finding small invariants of directed homotopy type.
# ISO 4264

## Petroleum products - Calculation of cetane index of middle-distillate fuels by the four variable equation

Status: active, Most Current
Organization: ISO
Publication Date: 1 June 2018
Page Count: 14
ICS Code (Liquid fuels): 75.160.20

##### scope:

This document specifies a procedure for the calculation of the cetane index of middle-distillate fuels from petroleum-derived sources. The calculated value is termed the "cetane index by four-variable equation". Throughout the remaining text of this document, the term "cetane index" implies the cetane index by four-variable equation.

This document is applicable to fuels containing non-petroleum derivatives from tar sand and oil shale. It is not applicable to pure hydrocarbons, nor to distillate fuels derived from coal. Cetane index calculations do not take into account the effects of additives used to enhance the cetane number.

NOTE 1 This document was originally developed using a matrix of fuels, some of which contain non-petroleum derivatives from tar sands and oil shale.

NOTE 2 The cetane index is not an alternative way to express the cetane number; it is a supplementary tool, to be used with due regard for its limitations.

NOTE 3 The cetane index is used to estimate the cetane number of diesel fuel when a test engine is not available to determine this property directly, or when insufficient sample is available for an engine rating.

The most suitable range of fuel properties for application of this document is as follows:

| Fuel property | Range |
| --- | --- |
| Cetane number | 32,5 to 56,5 |
| Density at 15 °C, kg/m³ | 805,0 to 895,0 |
| 10 % (V/V) distillation recovery temperature, °C | 171 to 259 |
| 50 % (V/V) distillation recovery temperature, °C | 212 to 308 |
| 90 % (V/V) distillation recovery temperature, °C | 251 to 363 |

Within the range of cetane number (32,5 to 56,5), the expected error of the prediction via the cetane index equation will be less than ±2 cetane numbers for 65 % of the distillate fuels examined.
Errors can be greater for fuels whose properties fall outside this range of application. As a consequence of sample-specific biases observed, the expected error can be greater even when the fuel's properties fall inside the recommended range of application. Therefore, users can assess the required degree of prediction agreement to determine the fitness-for-use of the prediction.

NOTE 4 Sample-specific biases were observed for distillate fuels containing FAME (fatty acid methyl ester).

### Document History

June 1, 2018: Petroleum products - Calculation of cetane index of middle-distillate fuels by the four variable equation. This document specifies a procedure for the calculation of the cetane index of middle-distillate fuels from petroleum-derived sources. The calculated value is termed the “cetane index by...

May 1, 2013: Petroleum products - Calculation of cetane index of middle-distillate fuels by the four variable equation, AMENDMENT 1.

August 15, 2007: Petroleum Products - Calculation of Cetane Index of Middle-Distillate Fuels by the Four-Variable Equation. This International Standard describes a procedure for the calculation of the cetane index of middle-distillate fuels from petroleum-derived sources. The calculated value is termed the “cetane index...

October 1, 1995: Petroleum Products - Calculation of Cetane Index of Middle-Distillate Fuels by the Four-Variable Equation. This International Standard describes a procedure for the calculation of the cetane index of middle-distillate fuels from petroleum-derived sources. The calculated value is termed the “cetane index...
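The four-variable equation itself is defined in the standard and is not reproduced in the summary above. As an illustrative sketch only, the widely published four-variable cetane index equation from the closely related ASTM D4737 (Procedure A) can be implemented as below. The coefficients and variable normalizations here are assumptions taken from that standard, and should be verified against the current edition of ISO 4264 before any real use:

```python
import math


def cetane_index(density_15c, t10, t50, t90):
    """Four-variable calculated cetane index.

    Coefficients follow the equation published in ASTM D4737 Procedure A
    (assumed here; verify against ISO 4264:2018 before relying on them).
    density_15c: density at 15 deg C in kg/m^3.
    t10, t50, t90: 10/50/90 % (V/V) distillation recovery temperatures, deg C.
    """
    dn = density_15c / 1000.0 - 0.85   # normalized density (g/mL minus 0.85)
    b = math.exp(-3.5 * dn) - 1.0
    t10n = t10 - 215.0                 # normalized recovery temperatures
    t50n = t50 - 260.0
    t90n = t90 - 310.0
    return (45.2
            + 0.0892 * t10n
            + (0.131 + 0.901 * b) * t50n
            + (0.0523 - 0.420 * b) * t90n
            + 0.00049 * (t10n ** 2 - t90n ** 2)
            + 107.0 * b
            + 60.0 * b ** 2)


# Hypothetical mid-range diesel, with all inputs inside the stated
# application range from the table above:
print(round(cetane_index(840.0, 210.0, 260.0, 310.0), 1))  # ~48.7 with these assumed coefficients
```

Note that the result lands inside the 32,5 to 56,5 cetane number range for which the standard states its ±2 expected error; outside that range, as the text above warns, the prediction error can be larger.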
# Volumes of Revolution

1. Jul 31, 2008

LHC

I've encountered a weird problem in my text... somewhat by accident =P

My text only covers volumes of revolution through the disk method, and one of the questions was:

Find the volume of the solid obtained when the given region is rotated about the x-axis.
c) Under y = 1/x from 1 to 4

Using the disk method, I got the answer $$\frac{3\pi}{4}$$...

Ok, so I wondered... what happens if I try the shell method? So this is what I did: I took the radius of a shell to be the height "y", and the length of a shell to be the horizontal distance from the line x = 1 to the curve, so (1/y - 1)... Because of that, I ended up trying this:

$$V = \int_{0}^{1} 2\pi \, y \left(\frac{1}{y} - 1\right) dy$$

This turns out to yield $$\pi$$.

I'm so confused right now haha... could someone please tell me what I did wrong? Either my shell method was wrong, or the disk method was... or... both =S...

Last edited: Jul 31, 2008

2. Jul 31, 2008

LHC

I just found out that I get the answer if I do this:

$$V = \int_{\frac{1}{4}}^{1} 2\pi \, y \left(\frac{1}{y} - 1\right) dy + \pi \times \left(\frac{1}{4}\right)^2 \times 3$$

And that's basically taking shells from y = 1/4 to y = 1, then adding the cylinder that's left behind (from x = 1 to x = 4, and from y = 0 to y = 1/4). So... that made sense. But can anyone tell me why the shell method described in my original post was wrong?

3. Jul 31, 2008

Dick

Your shell integral needs to be broken into two parts. For y < 1/4 the length of the shell isn't (1/y - 1), it's just 3.

4. Jul 31, 2008

LHC

Ohhh... *LED above head suddenly flickers*... I get it. I had the wrong length of the shell! Thanks for explaining that to me. =D
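Dick's correction in post #3 is easy to confirm numerically: the disk integral and the shell integral split at y = 1/4 should both come out to 3π/4 ≈ 2.356. A quick sketch using a simple midpoint rule (the step count is an arbitrary choice):

```python
import math


def integrate(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h


# Disk method: V = integral from 1 to 4 of pi * (1/x)^2 dx
v_disk = integrate(lambda x: math.pi / x ** 2, 1.0, 4.0)

# Shell method, split at y = 1/4 as Dick suggests:
#   shells of length (1/y - 1) for 1/4 <= y <= 1, plus
#   shells of length 3 for 0 <= y <= 1/4 (the leftover cylinder).
v_shell = (integrate(lambda y: 2 * math.pi * y * (1 / y - 1), 0.25, 1.0)
           + integrate(lambda y: 2 * math.pi * y * 3, 0.0, 0.25))

print(v_disk, v_shell, 3 * math.pi / 4)  # all three agree to many decimal places
```

Note the second shell integral, 2π·3·∫y dy from 0 to 1/4, evaluates to 3π/16, exactly the cylinder term π·(1/4)²·3 from post #2.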
# How Long it Takes to Upgrade

In this analysis we will explore the amount of time it takes Buffer Publish users to upgrade to a paid subscription. We’ll look at all users that upgraded to a paid subscription in 2017 and 2018, and build some exploratory charts to get a feel for how long it takes these customers to upgrade.

In this analysis, we find the following facts:

• 33% of customers upgraded in 7 days or less (67% took more than 7 days)
• 44% of customers upgraded in 14 days or less (56% took more than 14 days)
• 55% of customers upgraded in 30 days or less (45% took more than 30 days)
• 79% of customers upgraded in 120 days or less (21% took more than 120 days)
• 83% of customers upgraded in 365 days or less (17% took more than 365 days)

The customers were selected with a query along these lines (the original query omitted its FROM clause, so the table name below is assumed):

```sql
select *
from customers  -- table name assumed; not given in the original query
where first_charge_at >= '2017-01-01'
  and first_charge_at < '2019-01-01'
```

There are around 82 thousand customers that upgraded in 2017 and 2018.

## Number of Days to Upgrade

Let’s begin by plotting the distribution of the number of days it took these users to upgrade to their first paid subscription. We can see that many users upgraded relatively quickly after signing up. However, there is also a very long tail of users that took a long time to upgrade.

To get a better understanding of the exact percentages, we can plot the cumulative distribution function (CDF). This is how to interpret the plot: if you hover over any point (X) on the curve, the y value represents the proportion of users that upgraded in X days or less. For example, at the point on the curve where the number of days to upgrade = 7, y = 0.325. This means that 32.5% of customers upgraded for the first time within 7 days of signing up. From this, we can infer that 67.5% of customers took more than 7 days to upgrade. The same reading applies to any point on the curve.
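The CDF readings reported in this analysis, the proportion of customers who upgraded in X days or less, are just the empirical CDF evaluated at each threshold. A minimal sketch of that computation (the sample values below are made up for illustration; the real analysis would use the days-to-upgrade values derived from the query results):

```python
def proportion_within(days_to_upgrade, x):
    """Empirical CDF at x: fraction of customers who upgraded in x days or less."""
    return sum(1 for d in days_to_upgrade if d <= x) / len(days_to_upgrade)


# Hypothetical sample of days-to-upgrade values:
sample = [1, 2, 3, 5, 7, 10, 14, 21, 30, 90]

for threshold in (7, 14, 30):
    pct = proportion_within(sample, threshold)
    print(f"{pct:.0%} of customers upgraded in {threshold} days or less")
```

Evaluating this function at 7, 14, 30, 120, and 365 days on the full customer data would reproduce the percentages in the bullet list above.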